The relationship between AI prompts and thinking skills has become a growing topic of debate as generative AI tools like ChatGPT enter the daily lives of students and professionals. From structuring essays to analyzing data and refining job applications, AI is increasingly used to offload cognitive tasks. Researchers and educators are now questioning whether this convenience comes at the cost of human critical thinking and problem-solving abilities.
Concerns intensified earlier this year when researchers at the Massachusetts Institute of Technology published a study examining how people use AI to write essays. The findings showed that participants who relied on ChatGPT displayed reduced activity in brain networks associated with cognitive processing. Not only was their mental engagement lower, but they also struggled to recall and quote their own work compared to those who completed the task without AI assistance.
The researchers described their findings as highlighting a “pressing matter” around the possible decline in learning skills. The study involved 54 participants from MIT and nearby universities, whose brain activity was measured using electroencephalography.
Participants used AI for tasks such as summarizing essay questions, finding sources, refining grammar, and even generating ideas. While AI improved output quality, it appeared to reduce deeper cognitive involvement.
Similar concerns have emerged from workplace studies. Research conducted jointly by Carnegie Mellon University and Microsoft surveyed more than 300 white-collar workers who used AI tools weekly.
Analyzing hundreds of AI-assisted tasks, the study found that higher confidence in AI’s abilities correlated with less critical thinking effort from users. In simple terms, the more people trusted AI, the less they questioned or evaluated its output.
The researchers warned that while generative AI can boost efficiency, it may also inhibit critical engagement with work. Over time, this could lead to overreliance and weakened independent problem-solving skills. This trade-off between speed and depth is now central to the debate over AI and thinking skills in professional environments.
Education presents an equally complex picture. A survey published by Oxford University Press found that six in ten schoolchildren believed AI had negatively affected their school-related skills. Yet the same research revealed nuance.
According to Dr Alexandra Tomescu, a generative AI specialist at OUP, nine in ten students also reported that AI helped them develop at least one skill, such as creativity, problem-solving, or revision techniques.
This mixed response suggests that AI’s impact depends heavily on how it is used. Dr Tomescu notes that many students feel AI sometimes makes work “too easy,” reducing the effort required to learn.
At the same time, students are asking for clearer guidance on responsible and effective use, highlighting a gap between access to AI tools and understanding how to integrate them into learning.
Some experts argue that current safeguards are insufficient. While OpenAI has released prompt guides designed to encourage thoughtful use among students, Professor Wayne Holmes of University College London believes more independent research is needed. He points out that there is still no large-scale evidence proving AI tools are safe or effective in education, or that they consistently improve learning outcomes.
Professor Holmes also draws attention to the concept of cognitive atrophy, where skills decline due to overuse of automated tools. He cites examples from healthcare, where AI-assisted radiology improved some clinicians’ performance but reduced others’.
A Harvard Medical School study found that while AI support boosted accuracy for certain users, it negatively affected decision-making for others, underscoring the complexity of human-AI interaction.
The core concern is not whether AI improves results, but whether it improves understanding. Students may submit higher-quality essays with AI assistance, yet learn less in the process. As Professor Holmes puts it, outputs may be better, but learning can be worse.
OpenAI acknowledges these concerns. Jayna Devani, who leads international education at OpenAI, has emphasized that tools like ChatGPT should not be used to outsource thinking. Instead, she describes AI as a tutor that can guide users through problems, especially when human help is unavailable. Used this way, AI can break down complex questions and support understanding rather than replace it.
Ultimately, experts agree on one key point: AI is not just a more advanced calculator. Its reasoning, data handling, and influence on cognition demand careful use. Understanding how AI works, questioning its outputs, and maintaining active engagement are essential if users are to benefit without losing critical skills.
The debate over AI prompts and thinking skills is far from settled. What is clear is that balance matters. When used thoughtfully, AI can accelerate learning and insight. When used blindly, it risks dulling the very skills it aims to support.