How AI is influencing the way we learn – and why we should be wary of the easy way

When OpenAI launched its “Study Mode” in July 2025, the company touted ChatGPT’s educational benefits. “When ChatGPT is used for teaching or tutoring, it can significantly improve academic performance,” the company’s vice president of education told reporters at the product launch. But any teacher invested in their profession is entitled to ask: Is this just marketing hype, or are these claims actually supported by scientific research?

While generative AI tools are appearing in classrooms at breakneck speed, research on the issue is progressing at a far more measured pace. Some preliminary studies have reported benefits for certain groups, such as computer programming students and English language learners. There have also been other optimistic studies on AI in education, such as one published in the journal Nature in May 2025, which suggests that chatbots can facilitate high-level learning and thinking. But researchers in this field have pointed to significant methodological weaknesses in many of these papers.

Other studies have painted a darker picture, suggesting that AI could impair performance or cognitive abilities such as critical thinking skills. One paper showed that the more a student used ChatGPT during learning, the less successful they were later on similar tasks when ChatGPT wasn’t available.

In other words, early research is only beginning to shed light on how this technology will actually affect learning and cognition in the long run. Where can we find more clues? As a cognitive psychologist who has studied college students’ use of AI, I’ve found that my field offers valuable insights into when AI can stimulate the brain and when it risks draining it.

Talent comes from effort

Cognitive psychologists have proposed that our thoughts and decisions are the result of two modes of processing, commonly referred to as System 1 and System 2.

The first is a system based on pattern recognition, intuition, and habits. It is fast and automatic, requiring little conscious attention or cognitive effort. Many of our routine daily activities (getting dressed, making coffee, cycling to work or school) fall into this category. System 2, on the other hand, is generally slow and deliberative, requiring more conscious attention and sometimes painful cognitive effort, but it often produces more robust results.

We need both systems, but acquiring knowledge and mastering new skills relies heavily on System 2. Difficulty, friction, and mental effort are essential for learning, remembering, and strengthening connections in the brain. Every time a confident cyclist gets on their bike, they are relying on the hard-earned pattern recognition of their System 1, developed over many hours of System 2 effort spent learning to ride. You cannot achieve skill mastery, and you cannot effectively cross-reference information for higher-level processing, without first putting in cognitive effort.

I tell my students that the brain is a lot like a muscle: you have to work really hard to get results. Without stimulation, it cannot develop.

What if a machine did the work for you?

Now imagine a robot that accompanies you to the gym and lifts weights for you, without you having to exert any effort. Before long, your muscles will atrophy, and you’ll become dependent on the robot for even simple tasks like moving a heavy box.

Misusing AI, for example to answer a quiz or write an essay, prevents students from developing the knowledge and skills they need. It deprives them of mental training.

Using technology to offload cognitive exercises can have a detrimental effect on learning and memory and cause people to misjudge their own understanding or overall skills, leading to what psychologists call metacognitive errors. Research has shown that routinely relying on GPS for car navigation can impair spatial memory, and relying on an external source such as Google to answer questions causes people to overestimate their own knowledge and memory.

Are there similar risks when students assign cognitive tasks to AI? One study showed that students who researched a topic using ChatGPT rather than a traditional search engine had less cognitive load during the task (they didn’t have to think as much) and produced less relevant reasoning about the topic they had studied. Superficial use of AI may reduce cognitive load in the moment, but it’s like letting a robot do your gym exercises for you. Ultimately, it leads to a deterioration in thinking skills.

In another study, students who used AI to revise their essays performed better than those who revised without it, but they often simply copied and pasted sentences from ChatGPT. These students did not appear to acquire or retain more knowledge than their peers working without AI, and the AI group also engaged in less rigorous thinking processes. The authors warn that such “metacognitive laziness” can produce short-term gains in performance but lead to long-term stagnation of skills.

Offloading a task can be helpful once the foundation is in place. But that foundation can only be created if your brain does the initial work needed to encode, connect, and understand the issues you’re trying to master.

Using AI to support learning

Returning to the gym metaphor, it may help students to think of AI as a personal trainer that keeps them focused on their goal by tracking, coaching, and pushing them to work harder. AI has great potential as a flexible learning tool: a personalized tutor with a vast knowledge base… and one that never needs sleep.

This is precisely what AI technology companies are trying to create: the ideal tutor. In addition to OpenAI’s entry into the field, Anthropic launched its “learning” mode for Claude in April 2025. These models are supposed to engage users in Socratic dialogue, asking them questions and offering hints rather than simply giving answers.

Early research indicates that, while they can be beneficial, AI tutors also pose problems. For example, one study had high school students review math either with the basic version of ChatGPT, with a customized tutor version that provided hints without revealing answers, or without AI at all. When the students then took a test without access to AI, those who had used the basic version of ChatGPT performed significantly worse than the group that had studied without AI, yet they didn’t realize their performance was worse. Those who had studied with the tutor version performed no better than the students who reviewed without AI, but they mistakenly believed they had done better. In both cases, the AI failed to help and introduced metacognitive errors.

Although these tutoring modes still need to be refined and improved, students are best served by selecting them and playing along: accurately providing the context for their questions and avoiding unnecessary or overly simplistic queries, such as flattery.

Some of the remaining problems with AI tutors can be solved by adjusting the tools’ design and interfaces. But the temptation to fall back on generative AI by default will remain a more fundamental, and classic, problem of course design and student motivation.

As with other complex technologies such as smartphones, the internet, or even writing, it will take time for researchers to fully understand the true extent of AI’s effects on cognition and learning. Ultimately, the picture will likely be nuanced and highly dependent on context and use cases.

But what we know about learning processes tells us that deep knowledge and mastery of a skill will always require real cognitive training, with or without AI.

Author Bio: Brian W. Stone is Associate Professor of Cognitive Psychology at Boise State University