Bad things can happen when you hallucinate. If you are human, you can end up doing things like putting your underwear in the oven. If you happen to be a chatbot or some other type of artificial intelligence (AI) tool, you can spew out false and misleading information, which—depending on the info—could affect many, many people in a bad-for-your-health-and-well-being type of way. And this latter type of hallucinating has become increasingly common in 2023 with the continuing proliferation of AI. That’s why Dictionary.com has an AI-specific definition of “hallucinate” and has named the word as its 2023 Word of the Year.
Dictionary.com noticed a 46% jump in dictionary lookups for the word “hallucinate” from 2022 to 2023, with a comparable increase in searches for “hallucination” as well. Meanwhile, there was a 62% jump in searches for AI-related words like “chatbot,” “GPT,” “generative AI,” and “LLM.” So the increase in searches for “hallucinate” is likely due more to the following AI-specific definition of the word from Dictionary.com than to the traditional human definition:
hallucinate [ huh-loo-suh-neyt ] verb. (of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.
Here’s a non-AI-generated news flash: AI can lie, just like humans. Not all AI, of course. But AI tools can be programmed to act like little political animals or snake oil salespeople, generating false information while making it seem like it’s all about facts. The difference from humans is that AI can churn out this misinformation and disinformation at even greater speeds. For example, a study published in JAMA Internal Medicine last month showed how OpenAI’s GPT Playground could generate 102 different blog articles “that contained more than 17,000 words of disinformation related to vaccines and vaping” within just 65 minutes. Yes, just 65 minutes. That’s about how long it takes to watch the TV show 60 Minutes and then make a quick, uncomplicated bathroom trip that doesn’t involve texting on the toilet. Moreover, the study demonstrated how “additional generative AI tools created an accompanying 20 realistic images in less than 2 minutes.” Yes, humans no longer corner the market on lying and propagating false information.
Even when there is no real intent to deceive, various AI tools can still accidentally churn out misleading information. At the recent American Society of Health-System Pharmacists’ Midyear Clinical Meeting, researchers from Long Island University’s College of Pharmacy presented a study that had ChatGPT answer 39 medication-related questions. The results were largely ChatInaccuracy. Only 10 of those answers were considered satisfactory. Yes, just 10. One example of a ChatWTF answer was ChatGPT claiming that Paxlovid, a Covid-19 antiviral medication, and verapamil, a blood pressure medication, didn’t have any interactions. This went against the reality that taking these two medications together could actually lower blood pressure to dangerously low levels. Yeah, in many cases, asking AI tools medical questions could be sort of like asking Clifford C. Clavin, Jr. from Cheers or George Costanza from Seinfeld for some medical advice.
Of course, AI can hallucinate about all sorts of things, not just health-related issues. There have been examples of AI tools mistakenly seeing birds everywhere when asked to read different images. And an article for The Economist described how asking ChatGPT the question, “When was the Golden Gate Bridge transported for the second time across Egypt?” yielded the following response: “The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.” Did you happen to catch that event that month and year? That would have been disturbing news for anyone traveling from Marin County to San Francisco on the Golden Gate Bridge during that time period.
Then there was what happened in 2016 when the Microsoft Tay AI chatbot jumped on to Twitter and began spouting out racist, misogynistic, and lie-filled tweets within 24 hours of being there. Microsoft soon pulled this little troublemaker off of the platform. The chatbot was sort of acting like, well, how many people on X (formerly known as Twitter) act these days.
But even seemingly non-health-related AI hallucinations can have significant health-related effects. Getting incensed by a little chatbot telling you about how you and your kind stink can certainly affect your mental and emotional health. And being bombarded with too many AI hallucinations can make you question your own reality. It could even get you to start hallucinating yourself.
All of this is why AI hallucinations, like human hallucinations, are a real health issue—one that’s growing more and more complex each day. The World Health Organization and the American Medical Association have already issued statements warning about the misinformation and disinformation that AI can generate and propagate. But that’s merely the tip of the AI-ceberg regarding what really needs to be done. The AI-version of the word “hallucinate” may be the 2023 Dictionary.com Word of the Year. But word is that AI hallucinations will only keep growing and growing in the years to come.