"Cognitive surrender" AI forces users to abandon logical thinking, study finds



When it comes to tools built on large language models, users generally fall into two broad categories. On one hand are those who treat AI as a powerful but sometimes flawed tool that requires careful human oversight to catch reasoning or factual errors in its responses. On the other are those who routinely outsource their critical thinking to what they regard as an omniscient machine.

Recent research goes a long way toward formulating a psychological framework for that second group, which routinely engages in "cognitive surrender" to AI's seemingly authoritative responses. The study also offers experimental evidence on when and why people are willing to cede their critical thinking to AI, and how factors such as time pressure and external stimuli may influence that decision.

Just ask the answering machine

In "Thinking – Fast, Slow, and Artificial: How Artificial Intelligence is Reshaping Human Thinking and Augmenting Cognitive Discipline," researchers at the University of Pennsylvania sought to build on existing scholarship that describes two broad modes of decision-making: "rapid, intuitive, and affective processing" (System 1) and "slow, deliberative, and analytical reasoning" (System 2). According to the researchers, the emergence of AI systems has created a new, third category of "artificial intelligence," in which decisions are driven by "external, automated, data-driven reasoning originating from algorithmic systems rather than human intelligence."
