ChatGPT use among students raises concerns over critical thinking
University lecturers say essays generated with AI often lack insight, originality and critical thought.
A university lecturer in the United States says many students are increasingly relying on ChatGPT to write essays—even about the ethics of AI—raising concerns about critical thinking in higher education.
Dr Jocelyn Leitzinger from the University of Illinois noticed that nearly half of her 180 students used the tool inappropriately last semester. Some submissions even repeated generic names like ‘Sally’ in personal anecdotes, hinting at AI-generated content.
A recent preprint study by researchers at MIT appears to back those concerns. In a small experiment involving 54 adult learners, those who used ChatGPT produced essays with weaker content and showed lower brain activity, as measured by EEG headsets.
Researchers found that 80% of the AI-assisted group could not recall anything from their essay afterwards. In contrast, the ‘brain-only’ group—those who wrote without assistance—performed better in both comprehension and neural engagement.
Although some media headlines claimed the study showed ChatGPT makes users lazy or less intelligent, the researchers themselves urged caution against such conclusions, arguing that more rigorous studies are needed to understand how AI affects learning and thinking.
Educators say the tool’s polished writing often lacks originality and depth. One student admitted using ChatGPT for ideas and lecture summaries but drew the line at letting it write his assignments.
Dr Leitzinger worries that relying too heavily on AI skips essential steps in learning. ‘Writing is thinking, thinking is writing,’ she said. ‘When we eliminate that process, what does that mean for thinking?’