Microsoft Researchers Find A.I. Tools Erode ‘Quality of Human Thought’


A new survey correlated higher confidence in A.I. tools with less critical thinking. Photo by Milad Fakurian on Unsplash

Humans have always been wary that technological advances could erode our thinking skills. The printing press, for example, led some to worry that scribes would become lazy; calculators spooked math teachers; and the internet spawned widespread anxiety over its cognitive impacts. When it comes to generative A.I., however, there may be real cause for concern, according to a new study from researchers at Microsoft (MSFT) and Carnegie Mellon University.

Like printing, calculators and the internet, A.I. tools “are the latest in a long line of technologies that raise questions about their impact on the quality of human thought,” said the survey’s authors, who found that individuals with higher confidence in generative A.I. tools rely less on critical thinking skills. According to the researchers, this correlation could lead to widespread obstacles for workers, as improperly used technologies “can and do result in the deterioration of cognitive faculties that ought to be preserved.”

The study surveyed 318 “knowledge workers,” defined as professionals who handle information, and examined 936 examples of how they use generative A.I. at work. Cited tasks included a lawyer using ChatGPT to find relevant laws for a case, a teacher using DALL-E to generate an image for a school presentation on hand washing, and a commodities trader asking ChatGPT for recommendations on improving their trading skills. Beyond asking participants to self-report these work tasks, the survey measured their confidence in generative A.I.’s abilities, in their own ability to evaluate A.I. outputs, and in their ability to complete the same tasks without the technology.

The results showed that those with less confidence in such tools used critical thinking skills to verify and improve the quality of their work, while a reliance on A.I. tools often diminished “independent problem-solving,” according to the study, which noted that knowledge workers are increasingly trading “hands-on engagement for the challenge of verifying and editing A.I. outputs.”

Participants linked upticks in critical thinking to a desire to avoid potential negative outcomes of their A.I. use, ranging from outdated information to incorrect results and faulty mathematical formulas. This was especially evident in high-stakes scenarios, such as work assignments that could affect one’s employment or situations that could cause social friction, like communications with coworkers from different cultural backgrounds.

Could A.I. dull cognitive abilities over time?

While the use of generative A.I. for seemingly low-stakes tasks, like grammar-checking, might appear less worrying, the study’s authors warn that such overreliance can make errors more likely when higher-stakes tasks eventually come along. “Without regular practice in common and/or low-stakes scenarios, cognitive abilities can deteriorate over time, and thus create risks if high-stakes scenarios are the only opportunities available for exercising such abilities,” they said.

Sure, generative A.I. tools can lighten grunt work by automating tasks for workers. But, according to the study, a “key irony” of such automation is that it deprives humans of “routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”
