
Growing use of AI chatbots for tasks such as writing, analysing data, and problem-solving has sparked concerns that relying too heavily on these tools may weaken human thinking skills. Recent research from MIT found that students who used ChatGPT to write essays showed reduced brain activity linked to learning and had more difficulty recalling what they had written.
Similar findings from Carnegie Mellon University and Microsoft suggest that the more confidence people place in AI, the less critically they engage with their work.
Studies of schoolchildren reveal similarly mixed effects. While many students say AI helps them with creativity, revision, and understanding complex topics, a significant number say it makes schoolwork too easy, reducing both effort and learning.
Researchers describe this as a nuanced picture: AI can support skill development, but without guidance it may encourage passive use rather than active thinking.
Education experts warn that this pattern mirrors 'cognitive atrophy' seen in other fields, such as medicine, where AI assistance can improve results while weakening underlying skills. The risk, they argue, is that students may produce better work with AI support but gain less real understanding in the process. As one academic put it, outputs may improve even as learning declines.
AI developers acknowledge these concerns and stress that tools like ChatGPT should not replace independent work. Instead, they promote using AI as a tutor that helps break down problems and guide learning, especially when human support is unavailable.
Researchers agree that AI can be beneficial, but only if users understand its limits, question its outputs, and remain actively engaged in their own thinking.
