Right now artificial intelligence is everywhere. When you write a document, you’ll probably be asked whether you need your “AI assistant.” Open a PDF, and you might be asked whether you want an AI to provide you with a summary. But if you have used ChatGPT or similar programs, you’re probably familiar with a certain problem—they make stuff up, causing people to view things they say with suspicion.
It has become common to describe these errors as “hallucinations.” But talking about ChatGPT this way is misleading and potentially damaging. Instead, call it bullshit.
We don’t say this lightly. Among philosophers, “bullshit” has a specialist meaning, one popularized by the late American philosopher Harry Frankfurt. When someone bullshits, they’re not telling the truth, but they’re also not really lying. What characterizes the bullshitter, Frankfurt said, is that they just don’t care whether what they say is true. ChatGPT and its peers cannot care, so they are, in a technical sense, bullshit machines.
We can easily see why this is true and why it matters. In 2023, for example, one lawyer found himself in hot water when he used ChatGPT in his research while writing a legal brief. Unfortunately, ChatGPT had included fictitious case citations. The cases it cited simply did not exist.
This outcome isn’t rare or anomalous. To understand why, it’s worth thinking a bit about how these programs work. OpenAI’s ChatGPT, Google’s Gemini chatbot and Meta’s Llama all work in structurally similar ways. At their core is a large language model, or LLM. These models all make predictions about language. Given some input, ChatGPT will make some prediction about what should come next or what is an appropriate response. It does so through an analysis of its training data, which consist of enormous amounts of text. In ChatGPT’s case, the initial training data included billions of pages of text from the Internet.
Given some text fragment or prompt, the LLM uses its training data to predict what should come next. It arrives at a list of the most likely words (technically, linguistic tokens) to come next, then selects one of the leading candidates. Letting it choose something other than the most likely word every time allows for more creative (and more human-sounding) language. The parameter that sets how much deviation is permitted is known as the “temperature.” Later in the process, human trainers refine the predictions by judging whether the outputs constitute sensible speech. Extra restrictions may also be placed on the program to avoid problems (such as ChatGPT saying racist things), but this token-by-token prediction is the idea that underlies all of this technology.
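The sketch below is a toy illustration of that sampling step, not how ChatGPT or any production model is actually implemented. The candidate tokens and their scores are invented for the example; the point is only to show how the temperature setting shifts the choice between always taking the top candidate and occasionally picking a less likely one.

import math
import random

def sample_next_token(scores, temperature=1.0):
    """Pick the next token from candidate scores using temperature sampling.

    `scores` maps candidate tokens to model scores (logits); higher means the
    model rates the token as a more likely continuation. The values passed in
    below are made up for illustration, not taken from any real model.
    """
    if temperature <= 0:
        # Treat zero temperature as "always take the single most likely token."
        return max(scores, key=scores.get)

    # Softmax with temperature: dividing by a larger temperature flattens the
    # distribution, so less likely tokens get picked more often.
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    # Draw one token at random according to those probabilities.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores for what might follow the prompt "The dagger floated".
candidates = {"toward": 3.2, "above": 1.1, "away": 0.9, "purple": -1.5}

print(sample_next_token(candidates, temperature=0.0))  # always "toward"
print(sample_next_token(candidates, temperature=1.2))  # sometimes something else

Note that nothing in this loop consults the world: the only inputs are the scores, so a fluent but false continuation is selected in exactly the same way as a true one.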
Now, we can see from this description that nothing about the modeling ensures that the outputs accurately depict anything in the world. There is not much reason to think the outputs are connected to any kind of internal representation at all. A well-trained chatbot will produce humanlike text, but nothing about the process checks whether the text is true, which is why we strongly doubt an LLM really understands what it says.
So sometimes ChatGPT says false things. In recent years, as we have grown accustomed to AI, people have started to refer to these falsehoods as “AI hallucinations.” This language is metaphorical, but we think it’s not a good metaphor.
Consider Shakespeare’s paradigmatic hallucination in which Macbeth sees a dagger floating toward him. What’s going on here? Macbeth is trying to use his perceptual capacities in his normal way, but something has gone wrong. And his perceptual capacities are almost always reliable—he doesn’t usually see daggers randomly floating about! Normally his vision is useful in representing the world, and it is good at doing so because of its connection to the world.
Now think about ChatGPT. Whenever it says anything, it is simply trying to produce humanlike text. The goal is just to make something that sounds good. This effort is never directly tied to the world. When things go wrong, it isn’t because ChatGPT hasn’t succeeded in representing the world that particular time; it never tries to represent the world! Calling its falsehoods “hallucinations” doesn’t capture this aspect.
Instead we suggest, in a 2024 paper in the journal Ethics and Information Technology, that a better term is “bullshit.” As mentioned, a bullshitter just doesn’t care whether what they say is true.
If we do regard ChatGPT as engaging in a conversation with us—although even this idea might be a bit of a pretense—then the term seems to fit the bill. To the extent that ChatGPT intends to do anything, it intends to produce convincing humanlike text. It isn’t trying to say things about the world. It’s just bullshitting. And crucially, it’s bullshitting even when it says true things.
Why is this distinction important? Isn’t “hallucination” just a nice metaphor here? Does it really matter if it’s not apt? We think it does matter for at least three reasons.
First, the terminology we use affects public understanding of technology, which is important on its own. If we use misleading terms, people are more likely to misconstrue how the technology works. We think this risk in itself is a bad thing.
Second, how we describe technology affects our relationship with that technology and how we think about it. And these conceptions can be harmful. Consider people who have been lulled into a false sense of security by “self-driving” cars. We worry that talking of AI “hallucinating”—a term typically used for human psychology—risks anthropomorphizing the chatbots. The ELIZA effect (named after a chatbot from the 1960s) occurs when people attribute human features to computer programs. We saw this effect in extremis in the case of the Google employee who came to believe that one of the company’s chatbots was sentient. Describing ChatGPT as a bullshit machine (even if it’s a very impressive one) helps to mitigate this risk.
Third, if we attribute agency to the programs, we may shift blame away from those using ChatGPT, or its programmers, when problems arise. If, as appears to be the case, this kind of technology will increasingly be used in important matters such as health care, it is crucial that we know who is responsible when things go wrong.
So next time you see someone describing an AI making something up as a “hallucination,” call bullshit!
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.