(Economist) AI models make stuff up. How can so-called machine hallucinations be controlled?

It is an increasingly familiar experience. A request for help to a large language model (LLM) such as OpenAI's ChatGPT is promptly met by a response that is confident, coherent and just plain wrong. In an AI model, such tendencies are usually described as hallucinations. A more informal word exists, however: these are the qualities of a great bullshitter.

There are kinder ways to put it. In its instructions to users, OpenAI warns that ChatGPT “can make mistakes”. Anthropic, an American AI company, says that its LLM Claude “may display incorrect or harmful information”; Google’s Gemini warns users to “double-check its responses”. The throughline is this: no matter how fluent and confident AI-generated text sounds, it still cannot be trusted.

Hallucinations make it hard to rely on AI systems in the real world. Mistakes in news-generating algorithms can spread misinformation. Image generators can produce art that infringes on copyright, even when told not to. Customer-service chatbots can promise refunds they shouldn’t. (In 2022 Air Canada’s chatbot concocted a bereavement policy, and this February a Canadian court confirmed that the airline must foot the bill.) And hallucinations in AI systems used for diagnosis or prescription can kill.

The trouble is that the same abilities that allow models to hallucinate are also what make them so useful.

Read it all.

