A roadmap for AI, for whoever will listen


While Washington’s break with Anthropic shows that coherent rules governing AI remain far off, a bipartisan coalition of thinkers has assembled something the government has so far refused to produce: a framework for what responsible AI development actually looks like.

The Human-Oriented Declaration

Last week, the Pentagon-Anthropic feud was over almost before it began, but the collision of the two events was not lost on anyone.

“Just in the last four months something quite remarkable has happened in America,” said Max Tegmark, an MIT physics and artificial intelligence researcher, in conversation with this reporter. “Suddenly, polls show that 95% of all Americans oppose an unregulated race to superintelligence.”

The newly published document, signed by hundreds of experts, former officials and public figures, opens with the sobering observation that humanity stands at a crossroads. One path is what the Declaration calls the “race to replace” people, first as workers and then as decision makers, concentrating power in unaccountable institutions and their machines. The other leads to artificial intelligence that massively expands human potential.

The latter scenario hinges on five key pillars: keeping people in control, avoiding concentration of power, protecting human experience, protecting individual freedom, and holding AI companies accountable. Among its more muscular provisions are an outright prohibition on developing superintelligence until there is scientific consensus that it can be built safely and genuine democratic buy-in; mandatory shutdown mechanisms in powerful systems; and a ban on architectures capable of self-replication, autonomous self-improvement, or resisting shutdown.

The declaration arrives at a moment that makes its relevance easy to assess. Last Friday, Defense Secretary Pete Hegseth designated Anthropic, whose AI already runs on classified military platforms, a “supply chain risk” after the company denied the Pentagon unlimited use of its technology. Hours later, OpenAI terminated its contract with the Department of Defense, a move legal experts say will be difficult to enforce in any meaningful way. What all this reveals is just how costly congressional inaction on AI has been.

As Dean Ball, a senior fellow at the Foundation for American Innovation, later told The New York Times: “This isn’t just some contract dispute. This is the first conversation we’ve had as a country about controlling AI systems.”


When we speak, Tegmark offers an analogy most people can relate to. “You never have to worry that some drug company is going to release a new drug that causes a lot of harm before people figure out how to make it safe,” he said, “because the FDA won’t let them release anything until it’s safe enough.”

Washington turf wars rarely generate public pressure to change laws. Instead, Tegmark sees child safety as the most promising pressure point for breaking the current impasse. Indeed, the declaration calls for mandatory pre-deployment testing of AI products, especially chatbots and assistant apps aimed at young users, screening for risks such as increased suicidal ideation, worsening mental health conditions and emotional manipulation.

“If some creepy old guy is texting an 11-year-old boy pretending to be a young girl and trying to convince that boy to commit suicide, he could be arrested for that,” Tegmark said. “We already have laws. It’s illegal. But if a chatbot is doing it, why is it different?”

He believes that coverage will almost inevitably expand once the principle of pre-release testing for children’s products is established. “People will come along and say, let’s add a few more requirements. Maybe we should test whether it can help terrorists develop bioweapons. Maybe we should test to make sure a superintelligence doesn’t have the ability to overthrow the US government.”

It is no small thing that former Trump adviser Steve Bannon and President Obama’s national security adviser Susan Rice signed the same document along with former Joint Chiefs Chairman Mike Mullen and progressive religious leaders.

“Of course what they agree on is that they’re all human,” said Tegmark. “As for whether we want a future for people or a future for machines, of course they’re going to be on the same side.”


