AI Is Creating a New Way of Thinking, but Overdependence Could Harm Human Critical Thinking, Scientists Warn

Artificial intelligence (AI) is changing the way we think, but some scientists are concerned about what that shift could mean for our future. In a new article published in Nature Human Behaviour, experts warn that over-relying on AI could erode human critical thinking, leaving us dependent on machines to solve our problems.

The article introduces a concept called “System 0,” which represents a new form of thinking where we outsource certain cognitive tasks to AI. AI can handle massive amounts of data and perform complex calculations far beyond what humans can do. While this sounds impressive, the danger lies in how much we might depend on it.

"The risk is that if we lean too much on System 0, we could lose the ability to think critically on our own," the experts explained. "If we simply accept whatever solutions AI gives us without questioning them, we might lose our creativity and our ability to come up with new ideas." In a world that's becoming more automated every day, it’s more important than ever that we continue to challenge and question what AI presents to us.

Generative AI, like ChatGPT and Bard, has already become a part of everyday life for millions of people. These systems can create text, images, audio, and video based on user prompts. By 2023, some of these tools had already reached over 100 million users, according to the US Government Accountability Office. The article highlights that the growing use of these AI tools is changing the way we think and make decisions, which could have lasting effects.

System 0 works alongside two other well-known modes of human thinking: System 1, which is fast and intuitive, and System 2, which is slow and more analytical. Psychologist Daniel Kahneman, who won the Nobel Prize in 2002, popularized these two systems through his work on human decision-making. But unlike System 1 and System 2, System 0 is not something that happens inside our brains; it is an artificial system that interacts with and enhances our natural ways of thinking.

According to the article, System 0 qualifies as a distinct thinking system because it meets certain criteria, including reliability, transparency, and the capacity to process information. But while System 0 can work through data efficiently, it doesn't actually understand that information the way humans do. That's where we come in.

To truly benefit from AI, we need to interpret the data and use both our intuitive (System 1) and analytical (System 2) thinking to make sense of it. AI might be fast and powerful, but without human interpretation, it can't produce meaningful results. This means we still have an important role to play.

The scientists emphasize that transparency and accountability are key to ensuring people can use AI responsibly. They argue that it's crucial to educate the public about how to interact with AI in a smart and critical way. If we don’t, we risk becoming too dependent on these systems, which could weaken our ability to think for ourselves.

Looking ahead, the researchers suggest that we need to develop better ways of evaluating how trustworthy and transparent AI systems are. This would help establish guidelines for using AI ethically, especially when it comes to making important decisions. They also call for more efforts to promote digital literacy and critical thinking skills so that people can navigate the AI-driven world confidently.

Ultimately, they believe that we need to encourage interdisciplinary research that explores the psychological, cognitive, and social effects of integrating AI into our daily lives. This research could help us understand the long-term impacts of relying on AI and ensure that we use these powerful tools responsibly without losing what makes human thinking unique.