Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows


Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people’s ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.

Researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problem autonomously. When the AI helper was suddenly taken away, these people were significantly more likely to give up on the problem or flub their answers. The study suggests that widespread use of AI might boost productivity at the expense of developing foundational problem-solving skills.

“The takeaway is not that we should ban AI in education or workplaces,” says Michiel Bakker, an assistant professor at MIT involved with the study. “AI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of help AI provides, and when.”

I recently met up with Bakker, who has chaotic hair and a wide grin, on MIT’s campus. Originally from the Netherlands, he previously worked at Google DeepMind in London. He told me that a well-known essay on the way AI may disempower humans over time inspired him to think about how the technology could already be eroding people’s abilities. The essay makes for slightly bleak reading because it suggests that disempowerment is inevitable. That said, perhaps figuring out how AI can help people develop their own mental capabilities should be part of how models are aligned with human values.

“It is fundamentally a cognitive question—about persistence, learning, and how people respond to difficulty,” Bakker tells me. “We wanted to take these broader concerns about long-term human-AI interaction and study them in a controlled experimental setting.”

The resulting study seems particularly concerning, says Bakker, because a person’s willingness to persist with problem-solving is crucial to acquiring new skills and also predicts their capacity to learn over time.

Bakker says it may be necessary to rethink how AI tools work so that—like a good human teacher—models sometimes prioritize a person’s learning over solving a problem for them. “Systems that give direct answers may have very different long-term effects from systems that scaffold, coach, or challenge the user,” Bakker says. He admits, however, that balancing this kind of “paternalistic” approach could be tricky.

AI companies are already thinking about the more subtle effects that their models can have on users. The sycophancy of some models—or how likely they are to agree with and flatter users—is something that OpenAI has sought to tone down with newer releases of GPT.

Putting too much faith in AI would seem especially problematic when the tools may not behave as you expect. Agentic AI systems are particularly unpredictable because they do complex chores independently and can introduce odd errors. It makes you wonder what Claude Code and Codex are doing to the skills of coders, who may sometimes need to fix the bugs those tools introduce.

I recently got a lesson in the danger of offloading critical thinking to AI myself. I’ve been using OpenClaw (with Codex inside) as a daily helper, and I’ve found it to be remarkably good at solving configuration issues on Linux. Recently, however, after my Wi-Fi connection kept dropping, my AI assistant suggested running a series of commands in order to tweak the driver talking to the Wi-Fi card. The result was a machine that refused to boot no matter what I did.

Perhaps, instead of simply trying to solve the problem for me, OpenClaw should have paused to teach me how to fix the issue for myself. I might have a more capable computer—and brain—as a result.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

