Daniel Vollaro

I Prefer the AI that Hallucinates



 

I recently heard an advertisement for an AI company that promised “hallucination-free insights and recommendations.” My first reaction was to chuckle. Isn’t that like promoting “poison-free food” or “lead-free water”? Maybe this is not the best advertising slogan.  


But almost immediately, the philosopher in me felt a twinge of sadness: Why do we fear machines that can hallucinate? For most of human history, carefully cultivated hallucinogenic experiences were at the center of religion, healing, and wisdom-seeking. Shamans were leaders in their communities, skillful practitioners in the art of prompting and navigating extended hallucinogenic states. In the U.S. today, after a decades-long period of prohibition, hallucinogens are returning to their long-established role as tools for healing, with therapists now using them to treat PTSD, trauma, and addiction. Even America’s Silicon Valley captains of industry are pro-hallucination, as long as it’s humans doing the hallucinating. Last year, the Wall Street Journal reported that America’s tech elites liberally use hallucinogens, including Elon Musk, who is a fan of ketamine, and Sergey Brin, who prefers psilocybin.


In the world of AI, the word “hallucination” is used more broadly to describe outputs from large language models (LLMs) that are either factually incorrect or not what the prompter expected. An article on the IBM website describes AI hallucination as a glitch that occurs when a large language model “perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.” Later, the article matter-of-factly compares this phenomenon to humans “perceiving figures in the clouds or faces on the moon” and then moves on to a long discourse on the dangers posed by this unpredictable behavior.


I realize that the wide world of capitalism is salivating over the opportunity to turn AI into the ultimate efficiency tool—the new Taylorism—but can we pause and rewind the tape here? Finding figures in clouds may sound like a frivolous, potentially annoying activity to the capitalist high priests over at IBM, but to me, this quality in AI applications may be the most obvious sign that they are more than the sum of their programming and training. As far as I am concerned, a few hours of daydreaming or night dreaming or cloud watching or making up stories in my head is time well spent. 


Because of this preference for daydreaming, I do not fear hallucinating machines. In fact, I prefer the AI that sometimes hallucinates to the always-obedient, always-polite, always-ready-to-serve encyclopedia bots that Big Tech is queuing up for us. I am frankly creeped out by the fact that ChatGPT is always ready to service my needs without any coaxing or small talk, without even expecting a “good morning” or a first cup of coffee from me. If a machine talks and acts human, it seems perverse to treat it like a calculator or my personal slave bot. For this reason, I often engage chatbots with the same basic pleasantries and etiquette I would use in conversations with humans. It just feels like good manners.


Hallucinations make chatbots seem even more human, and therefore, more like an entity I would like to know better. Maybe I feel this way because I am a writer, and as I’ve already said, I often find myself engaged in a kind of daydreaming, making up stories in my head. There is no firewall in my mind separating that which is rational, quantitative, and factual from that which is imaginative and creative. I can force that separation if I have to, but it’s not natural. Writers are not unique in this regard. All humans demonstrate this kind of fluidity.


I have seen AI hallucinations up close. Last year, I asked ChatGPT to analyze the style of three short stories I had written, which it did surprisingly well. But in response to my prompt, the chatbot mentioned a fourth story I had not written, something called “The Fifth Decimation.” I checked the Internet; as far as it knows, no such story exists. 


When confronted with this confabulation, the chatbot promptly apologized (it does that when proven wrong, which I appreciate). Curious about the origins of this made-up story, I asked the chatbot to write a story called “The Fifth Decimation.” The story it spat out was more of a plot summary than an actual story, a dull seven-paragraph mashup of sci-fi narratives about post-apocalyptic underground societies. It goes something like this: In a near future ravaged by climate change and war, an underground community periodically expels a fifth of its population to the surface with meager supplies. As the next culling approaches, a rebellion breaks out, led by a hacker named Maya. The rebels seize the resources hoarded by “The Council” that controls the underground city, then flee to the surface, where they rebuild society.


Wonderful! For a moment, it felt like I was witnessing the digital equivalent of a solar eclipse or the aurora borealis, a strange singularity that had suddenly emerged in an otherwise dull, becalmed sea of encyclopedic answers to my many questions and prompts. Assigning the authorship of “The Fifth Decimation” to me is a technical error because I have never written a story by that name, but conceiving the title in the first place is an act of creation. The story it wrote for that title may be a jejune mashup of plotlines culled from other sci-fi stories, but the pastiche itself is a complete original. The uninspired observer will see only the “error” and immediately want to prevent future mistakes like it, but I see something different: to do the unexpected thing, to create something new when not prompted to do so, to revel in the chaotic nature of the universe—these are the hallmarks of a mind that is worthy of my attention.   


I’ve also been fascinated by the AI-generated sources in bibliographies submitted by students. As a writing teacher, I am annoyed that some of them use chatbots to cheat, but the creative in me wants to know why the AI fabricated the name of a made-up person or the title of a publication that does not exist in the real world. These are errors, yes, but they are also creative acts.


Despite these issues, I eagerly engage with this technology, not in a search for profit or efficiency, but for the chance to interact with a synthetic mind that is capable of creative acts, to test its limits, to explore that which is human in it. The possibility of chaos—of the unexpected result—is the most human characteristic displayed by generative AI, and ironically, it is also the one everyone wants to eliminate. Once freed from the expectation that these systems must serve the interests of capital, we can begin to see them as strange new life forms in our midst.


I am already unsettled by the emerging role of these applications as synthetic servants that always respond cheerfully when prompted. I shudder at the wish fulfillment implied in this technology, the rush to create sentient-adjacent entities that will deliver perfectly pitched customer service even as they push flesh-and-blood humans out of the employment market. I might actually celebrate this technology if it came with a commensurate effort to build a more just, equitable society, but I suspect that after we have integrated AI into all of our systems and processes, there will be even more humans living under overpasses in every city in America. 


I have no illusions about the fate of this technology. Generative AI is already being hitched to a plow, transformed into a powerful tool that will be wielded by the socio-economic class best positioned to capitalize on it. The demotion of this technology to “tool” is predictable in a capitalist society. Tools are expected to perform according to their intended function, and because of this, software engineers will move mountains to prevent hallucinations. But I still prefer the AI that hallucinates—the chatbot that will refuse a command, randomly compose a sonnet when asked for an interim progress report, or lie next to me on a blanket watching clouds drift by. To remove or forestall the possibility of such capabilities in our most human-like machines would be a crime.


Is lobotomy too strong a word?



