Artificial intelligence is making us faster, more productive, and worse at thinking



AI is everywhere, the pressure to adopt it is relentless, and the evidence that it’s making us smarter grows thinner every quarter.

On New Year’s Day 2026, a programmer named Steve Yegge launched an open source platform called Gas Town. It lets users control swarms of artificial intelligence coding agents simultaneously, producing software at a speed no human can match.

One of the first people to try it described the experience in terms that had nothing to do with productivity. “There really is a lot for you to understand intelligently,” he wrote. “I felt a palpable stress watching it.”

That sentence should be plastered on the wall of every boardroom, every venture capital office, and every CES main stage where the word “exploration” is thrown around like confetti. Because something strange is happening between humans and the technology we call intelligent.

The machines keep getting faster. The people who interact with them are more tired, more anxious, and, by several measures, less capable of the very thing intelligence is supposed to enhance: thinking clearly.

The pressure to adopt AI is now so pervasive that it has developed its own vocabulary of coercion.

You must have AI.

You have to use AI.

You need to buy AI.

Your competitors are already using it.

Your children will be left behind without it.

This language does not come from engineers quietly solving problems. It comes from earnings calls, product keynotes, and LinkedIn posts written with the manic energy of people who confuse selling a product with describing reality.

At the World Economic Forum in Davos in January 2026, Microsoft CEO Satya Nadella offered a statement so revealing that it deserves to be studied as a cultural artifact. He warned that artificial intelligence risks losing its “social permission” to consume large amounts of energy unless it begins to deliver tangible benefits to people’s lives.

The framing was remarkable: the question is not whether the technology works, but whether the public can be kept on board while the industry figures out whether it works. Nadella called AI a “cognitive enhancer” offering “access to infinite minds.”

A month later, a Circana survey of US consumers found that 35 percent of them do not want artificial intelligence in their devices. The main reason was not confusion or technophobia. It was simpler than that. They said they did not need it.

The gap between rhetoric and evidence has become difficult to ignore. In March 2026, Goldman Sachs released its analysis of fourth-quarter earnings data and found, in the words of chief economist Ronnie Walker, “no meaningful relationship between productivity and AI adoption at the economy level.”

The bank noted that 70 percent of S&P 500 management teams mentioned artificial intelligence on earnings calls. Only 10 percent tied it to specific use cases. One percent quantified its impact on earnings. Meanwhile, the five largest US technology companies are expected to spend a combined $667 billion on AI infrastructure in 2026, up 62 percent from the previous year.

The National Bureau of Economic Research has a name for this situation: a “productivity paradox,” in which perceived gains outrun measured ones.

There are real productivity improvements, but they are surprisingly narrow. Goldman found average gains of about 30 percent in two specific areas: customer support and software development. Outside of these areas, in the bank’s assessment, evidence for widespread improvement was essentially absent. For now, the promised revolution is taking place in two rooms of a very large house.

What’s going on in those rooms is worth a closer look, because even where AI delivers, something else breaks.

In February 2026, researchers at UC Berkeley’s Haas School of Business published the results of an eight-month study at a 200-person US technology firm. They found that AI did not reduce workloads. It intensified them. Tasks got faster, so expectations rose. Expectations rose, so scope expanded. Scope expanded, so employees took on responsibilities that previously belonged to other roles. Product managers started writing code. Researchers took over engineering work. Role boundaries dissolved because the tools made it feel possible, and then came the burnout.

“I’m just tired,” one employee wrote.

The researchers identified a pattern they call “workload creep”: the gradual accumulation of neglected tasks until cognitive fatigue degrades the quality of every decision.

The Harvard Business Review gave the phenomenon a vivid name: “AI Brain Frying.” A Boston Consulting Group study of nearly 1,500 US workers found that 14 percent of those using AI tools that demand significant oversight reported a distinct form of mental fog, characterized by difficulty concentrating, slower decision-making, and headaches after prolonged interaction with AI.

The workers most affected were not skeptics or laggards. They were eager adopters, doing exactly what every keynote speaker told them to do.

The distribution of this exhaustion is not random. According to a Harvard Business Review study, 62 percent of partners and 61 percent of entry-level employees reported AI-related burnout.

Among C-suite executives, that number dropped to 38 percent. The pattern is consistent with what anyone who has spent time inside an organization would predict: the people making strategic decisions about AI adoption are not the people wrangling its outputs, cleaning up its mistakes, and switching between its tools eight hours a day.

All of this raises a question the industry would prefer to skip: what do we mean when we use the word “intelligence”?

The term “artificial intelligence” was coined at a workshop at Dartmouth College in 1956, and it has been doing a specific ideological job ever since. By naming the field after a human quality, its creators made a move that was as much marketing as science. It invited us to see computation as cognition, pattern matching as insight, and speed as wisdom.

Whenever a product is described as “intelligent,” it borrows the emotional weight of the word, which for most of human history meant something like judgment, the ability to reason, and the ability to sit with uncertainty long enough to think clearly.

These systems do none of that. What they do, often brilliantly, is statistical prediction at extraordinary scale. They recognize patterns in data, generate plausible continuations of sequences, and optimize for objectives set by their designers.
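To make the distinction concrete, here is a deliberately toy sketch of that principle: a bigram model that “writes” by sampling whichever word tended to follow the current one in its training text. The corpus and names here are invented for illustration; production models replace these word counts with neural networks trained on vastly more data, but the family resemblance holds: continuation by statistical prediction, with no understanding anywhere in the loop.

```python
import random
from collections import Counter, defaultdict

# Toy illustrative corpus (invented for this sketch).
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_sequence(start: str, length: int = 8) -> str:
    """Generate a plausible continuation by sampling next words
    in proportion to how often they followed the current word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break  # no observed continuation; stop
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_sequence("the"))
# e.g. "the cat sat on the rug and the dog"
```

The output is fluent in miniature, and nothing in the loop knows what a cat is.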

That is genuinely useful. It is not intelligence in any sense that a philosopher, a psychologist, or a thoughtful person on the street would recognize. The slippage between the two meanings is not accidental. It is the engine of the entire commercial project.

The deepest irony is this: in our rush to surround ourselves with artificial intelligence, we are eroding the conditions under which actual human intelligence operates. Intelligence, the real kind, requires the things that the AI economy systematically destroys: sustained attention, a tolerance for uncertainty, a willingness to sit with a problem before arriving at a solution, and the cognitive space to doubt, reconsider, and change one’s mind.

In a paper published in February 2026, researchers at the London School of Economics argued that manufactured urgency about artificial intelligence narrows the space for democratic debate, collapses the future into a single inevitability, and leaves no room for the slow, uncertain, distinctly human process of deciding together what we actually want.

There is something almost comical about the situation.

We’ve built machines that can process language, create images, and write code at superhuman speeds, and the people who use them report mounting mental fog, difficulty concentrating, and a diminished capacity to think.

A senior engineering manager cited in the BCG study described juggling multiple AI tools to weigh technical decisions, generate drafts, and summarize data. The constant switching and checking created what he called “mental confusion.” His effort shifted from solving the underlying problem to managing the tools.

Not everyone is going along with it. A third of consumers have looked at the artificial intelligence arriving in their phones and laptops and plainly said no. Employees at organizations that value work-life balance reported 28 percent less AI-related fatigue, and BCG’s research suggests that the problem lies less with the technology itself than with the culture of forced adoption wrapped around it.

The question is not whether AI is useful. In certain applications, it obviously is. The question is whether the frenzy surrounding it, the relentless pressure to master, integrate, and accelerate, is making us smarter or merely more compliant.

Six hundred sixty-seven billion dollars in annual infrastructure spending. The breathless mentions on earnings calls. Entire conferences dedicated to the word “exploration.”

In the Circana survey, the most common reason people gave for wanting none of it was four words: I don’t need it. Calm and unhyped, that sentence may be the smartest thing anyone has said about artificial intelligence in years. The question now is whether we still have the attention span to hear it.


