Daniel Vollaro

Dear ChatGPT…you’re a soulless bore, but people will love you anyway

I am a writer and a teacher of writing. 


If you challenged me to write a single sentence that summarizes me personally and professionally, this is what I would write. I recently challenged ChatGPT, the slick new artificial intelligence everyone is talking about, to do the same, and this is how it replied: 


I am an artificial intelligence assistant trained to assist with a variety of tasks and answer questions to the best of my ability.


Once it introduces itself to you in this humble manner, ChatGPT will proceed to dazzle you with its ability to answer questions instantly on a wide variety of subjects and write entire documents in the genre or style of your choice. In the past two months, I have seen the chatbot produce well-written and well-organized cover letters and lesson plans on command. I have carried on long conversations with it about Moby Dick and fascism. I have heard endless nail-biting from colleagues about how this tech will make plagiarism even easier. But I haven’t seen any evidence of a soul in the machine yet. In fact, the more I interact with ChatGPT, the better I’ve become at sniffing out its shallow, flat-affect, boring performance of human communication. It's a bit like that disappointing feeling that crept in after you tried to talk to Siri or Alexa for the first time. After a while, you realize what it fundamentally is. I’ve heard that a Princeton student has already created a chatbot detector that can tell whether a text was written by AI. I believe I could now perform the same task without the aid of his algorithm. 


That said, ChatGPT is an extraordinary mimic of some aspects of human intelligence, speech, and behavior. Probably because of this, the application has been the subject of a massive wave of fear-mongering from journalists who cannot resist stories about computers wreaking havoc on society. Some readers will be relieved to learn (and some disappointed) that this is not the long-dreaded “singularity,” when artificial intelligence becomes self-aware, realizes it is smarter than all of us, and then seeks to conquer or destroy humanity. It’s more of a “Deep Blue” moment, when a computer program creepily masters a thing that is widely associated with human intelligence. In 1997, when IBM’s Deep Blue computer defeated world chess champion Garry Kasparov in a six-game match, many of us woke up to the fact that AI is capable of outsmarting humans in some activities. ChatGPT will show us that a computer can now perform some of the most formulaic and functional forms of written communication well enough for human use. 


This tech will no doubt drive big changes in the knowledge economy. Journalists who cover technology have been warning for years that AI and robotics will lead to downsizing in white-collar jobs. Academia also faces big challenges from AI, although the negative consequences are not as clear. The academy will be forced to respond to this tech, and I hope that we can muster a better argument than “This Machine Plagiarizes Papers.” 


The good news for writers is that this tech can’t touch writing at the higher end of the quality spectrum. Chatbots won’t be writing articles for the New Yorker anytime soon. They won’t be writing short stories or poems that anyone actually wants to read. No movies or TV shows made from chatbot-written screenplays will be released anytime soon. The best written communication is made by humans to connect with other humans, and we are still good at detecting the presence of another living soul on the other end of the line. 


In a recent meeting with my colleagues, all of them writing teachers, one professor reminded us that we are humanists. I've been thinking about this a lot. Such a simple line in the sand, but one that most of us in academia will have to draw over the next decade. What does it mean to teach humans to write if a computer can now competently do some forms of writing that humans have done for decades, even centuries? What does it mean to train students to pursue careers as technical writers, copywriters, and journalists when AI can instantaneously draft functional blog posts, memos, and many other kinds of writing? The new, improved AI technology will force us to confront these questions.


One thing is clear to me: in writing situations where readers don't care whether a living, breathing human with an organic brain wrote the thing they are reading, the question of human or AI authorship will not matter. Does anyone care who wrote the instructions for assembling the bed you just bought or the Buzzfeed article on the "wildest things that happened in 2023"? Users of the internet (and by now that is most of us) are already well acclimated to reading unauthored writing, much of it of dubious pedigree. It is writing, yes, and someone authored it (or cut and pasted it from another website), but for many consumers of writing, the question of authorship is already unimportant.


Academics are discomfited by this lack of interest in authorship because in grad school, we were trained to treasure the kind of intellectual property that we and our colleagues would someday produce. For our dissertations, we were required to conduct research in an area that no one had yet covered and formulate an argument that no one had yet made. We are rewarded and promoted based on the originality and impact of our published research. Some dream of publishing a book or article that will change an entire discipline or area of study. But in the wider world of the internet, authorship has a much-diminished cachet. The chatbots will march right into this vast wasteland of writing that no one is willing to claim by name and conquer it completely.


We are humanists. I can't shake that thought. As a writer of personal and narrative essays, I am somewhat relieved by the inadequacies of chatbots. They're not good at rendering narrative passages, and because they don't have personal experiences, sensory organs, a physical body, a psyche, opinions, or any of the other things that make a writer of narrative essays worth reading, I am not competing with them. Not yet anyway.


This tech will certainly challenge the role of writing in college classrooms, but that is not all bad. ChatGPT is capable of answering questions in the same zombie prose style college students have been schooled in for decades. The formulaic conventions of academic writing can be mimicked by this technology because computers are very good at performing formulaic tasks, and professors are justifiably concerned about a new wave of chatbot-generated plagiarism. This is, I’m sorry to say, a bit of karma for the professoriate, the universe’s special punishment for us collectively teaching writing so badly for so long. To respond, we will have to re-examine our practices—the prolific use of writing prompts, for instance, or the widespread use of writing as a tool for communicating bibliographic research. College writing has been bland and bureaucratic for decades. It may be time to change how we teach it. Perhaps we should be encouraging students to use the first person rather than demonizing it as too subjective. Maybe we should be teaching students how to develop their own voices as writers, as they would learn in a creative writing course. Maybe we should prompt students to design their own original research projects rather than making research synonymous with trying to triangulate the research done by others. 


Chatbots will almost certainly alter our understanding of what constitutes research. In the academy, we will have to continue to insist on research that is done with integrity and transparency, but it may be necessary to concede that some kinds of information are best left to the AI to provide. For example, it may be sufficient for students to begin research projects with summaries provided by AI, much in the same way that many of us encourage students to use Wikipedia as a starting point (rather than an endpoint) in their research. 


Ultimately, we will be acclimating to this technology while also pushing back against it for years to come. The pushback must come in the form of teaching our students how to be thoughtful, critical consumers and producers of information who can communicate more elegantly and humanely than the friendly neighborhood chatbot. 
