Reine Opperman
– October 15, 2025
5 min read

As artificial intelligence (AI) seeps into everyday life, few tools have proved as seductive as ChatGPT. For students and professionals alike, it has become a godsend: a tireless assistant that can turn half-formed thoughts into polished prose, and do so in seconds.
Yet many who rely on AI for writing will recognise a peculiar unease. Revisit something you produced with ChatGPT’s help, and you may find it oddly unfamiliar. That disconnection, according to researchers at the Massachusetts Institute of Technology (MIT), is more than a passing feeling. It is evidence of a growing “cognitive debt”: a mental deficit incurred when we let AI shoulder the work of thinking for us.
The MIT researchers conducted a study in which 54 university students were asked to write short essays. They were split into three groups: one used ChatGPT freely; another relied on traditional web searches; the third worked with no digital tools at all.
Brain activity was recorded via electroencephalography (EEG), allowing the researchers to track how hard the students’ brains worked to plan, interpret and stay focused.
The results were stark. Neural activity fell in direct proportion to the level of digital assistance. The ChatGPT group displayed up to 55% less overall connectivity than those writing unaided. The “brain-only” participants showed high, synchronised activity across regions tied to creativity, meaning-making and self-monitoring, evidence of deep, self-directed thought.
The search group’s brains, by contrast, lit up in visual and attentional regions, mirroring the mental effort of scanning, selecting and evaluating information online. Both groups, notably, could recall and quote from their work with near-perfect accuracy.
No comparison
The ChatGPT group, however, showed no comparable activation in either pattern. Instead, their brains shifted into what the researchers called an “automated, scaffolded cognitive mode”: a state in which much of the heavy mental lifting was outsourced to the AI, reducing genuine, self-generated thought.
The group was largely unable to quote the work they submitted, suggesting that the process of encoding what they wrote into memory had been almost entirely bypassed.
The researchers pulled a clever move: in the final session, they took ChatGPT away from the ChatGPT group. When asked to write unaided, their brains struggled to activate the neural pathways that the “brain-only” group had developed, evidence that reliance on digital assistance may weaken the very circuits needed for critical thinking, trading short-term ease for long-term cognitive cost.
The essays themselves told a parallel story. The ChatGPT submissions were strikingly uniform, echoing the stylistic and ideological biases embedded in the model’s training data. The search-engine users produced content shaped by search-engine-optimisation rankings: whatever ranked highest on Google.
Only the “brain-only” group displayed genuine diversity of thought. Yet even they were not immune to digital influence. The researchers noted traces of social-media-style discourse, language shaped by the rhythm of online debate and the craving for visibility. The “echo chamber” effect, it seems, has seeped so deeply into the cultural bloodstream that it now colours offline cognition.
Not a surprise
This should come as no surprise. The average Gen Z adult spends nine hours a day on screens, and reading rates have plummeted since smartphones became ubiquitous around 2010. We’re drifting away from traditional research methods and engaging instead with social media algorithms and opinionated language models that feed us comfortable, confirming information.
Coupled with their conversational design, these systems further narrow our exposure to diverse ideas. The digital age, it seems, may be quietly stealing our capacity to reason.
The MIT study is less a warning about machines than about ourselves. The study shows that how we introduce AI in schools and workplaces matters: first the mind, then the machine. Students, and society at large, must learn to build the muscle of reasoning before outsourcing it to code.
For all its brilliance, AI cannot yet teach us to think; that remains a human obligation.