Last fall, I wrote an essay for The New Yorker titled “What Kind of Writer is ChatGPT?” My goal with the piece was to better understand how undergraduate and graduate students were using AI to help with their writing.
At that time, there were worries that such tools might turn into machines for plagiarism. (“AI seems almost designed for cheating,” noted Ethan Mollick in his bestselling book, Co-Intelligence.) However, what I observed was a bit more nuanced.
Students weren’t using AI to compose their work for them; rather, they engaged it in dialogue about their writing. This approach was often slower and less efficient than simply sitting down and writing. From my interviews, it became evident that the students aimed not to lessen their overall effort but to reduce the peak cognitive strain required to produce written work.
I noted, “‘Talking’ to the chatbot about the article was more enjoyable than laboring in silence.” Regular writing demands intense bursts of concentration, while using ChatGPT “softened the experience, smoothing those spikes into the gentle curves of a sine wave.”
I thought about this essay recently because a new paper from the MIT Media Lab, titled “Your Brain on ChatGPT,” supports my hypothesis. The researchers assigned one group of participants to write an essay without any external assistance, while another group was allowed to use ChatGPT (running GPT-4o). They monitored both groups’ brain activity using EEG machines.
“The most significant difference was seen in alpha band connectivity, with the Brain-only group demonstrating considerably stronger networks for semantic processing,” the researchers state, adding, “the Brain-only group also exhibited stronger occipital-to-frontal information flow.”
What does this imply? The researchers suggest the following interpretation:
“The elevated alpha connectivity in the Brain-only group indicates that writing independently likely prompted greater internally driven processing...their brains probably engaged in more internal brainstorming and semantic retrieval. The LLM group...might have relied less on purely internal semantic creation, resulting in lower alpha connectivity, as some creative responsibility was shifted to the tool.” [emphasis mine]
In essence, writing with AI, as I observed last fall, lessens the cognitive strain on the brain. Many commentators responding to the paper consider this an inherently positive development. A tech CEO on X explained, “Cognitive offloading occurs when excellent tools allow us to work more efficiently and with less mental effort for the same output. The spreadsheet didn't eliminate math; it created billion-dollar industries. Why should we insist on our brains working the same way for the same tasks?”
My perspective on this situation is divided. On one hand, I believe there are situations where reducing the strain of writing is clearly advantageous. Professional communication, like emails and reports, comes to mind. The writing here serves the broader purpose of conveying useful information, so if a simpler method exists to achieve this aim, then why not use it?
However, in the academic context, cognitive offloading doesn’t seem as harmless. The MIT paper raises several pertinent concerns regarding AI in writing and learning [emphases mine]:
“Generative AI can create content instantly, providing students with quick drafts based on minimal input. While this can save time and inspire, it also affects students’ ability to retain and recall information, which is crucial for learning.”
“When students depend on AI to generate extensive or complex essays, they may skip the process of synthesizing information from memory, hindering their comprehension and retention of the material.”
“This indicates that while AI tools can boost productivity, they may also lead to a kind of ‘metacognitive laziness,’ causing students to offload cognitive and metacognitive responsibilities to AI, potentially undermining their ability to self-regulate and engage thoroughly with the learning material.”
“AI tools…can make it easier for students to sidestep the intellectual work necessary to internalize key concepts, vital for long-term learning and knowledge transfer.”
In an educational setting, experiencing some strain often accompanies intellectual growth. To alleviate this strain is akin to riding an electric scooter through the long marches of military boot camp; it may achieve the short-term aim of covering the distance, but it undermines the long-term conditioning those marches are meant to build.
In this specific debate, we glimpse a larger tension that partly characterizes the emerging Age of AI: to fully engage with this new technology, we must better understand both the usefulness and the dignity of human thought.
####
For a more in-depth discussion of this new study, check out today’s episode of my podcast, where I am joined by Brad Stulberg to analyze its findings and implications [ listen | watch ].