An artificial intelligence company filed two federal lawsuits on Monday, alleging that the Trump administration’s “supply chain risk” designation is unconstitutional retaliation for protected speech.
There’s a statement in Anthropic’s court filing that sets the tone for everything below: “Anthropic is turning to the court system as a last resort to protect its rights and stop the Executive’s illegal campaign of retaliation.” This is the language of a company that believes it is fighting a constitutional dispute, not just a contractual one.
On Monday, the San Francisco-based AI company filed two federal lawsuits against the Trump administration. The Pentagon officially announced last week that it had designated Anthropic a “supply chain risk to national security,” a label historically reserved for companies tied to foreign adversaries such as China and Russia.
This is believed to be the first time the designation has been applied to an American company.
The first lawsuit was filed in the United States District Court for the Northern District of California; it asks the judge to vacate the designation and to stay its enforcement while the case is pending. A second, shorter lawsuit was filed in the U.S. Court of Appeals for the District of Columbia Circuit, targeting a separate rule enforced by the government that can only be challenged in that jurisdiction.
Both cases make essentially the same argument: that the administration acted illegally, without proper legislative authority, and in violation of Anthropic’s First Amendment rights.
More than a dozen federal agencies are named as defendants, including the Defense Department, the Treasury Department, the State Department and the General Services Administration.
The legal action is the culmination of an unusually fast-moving two-week standoff, one of the more notable recent confrontations between a technology company and the US government.
The dispute centers on two conditions Anthropic insisted on in its contracts with the Pentagon: that its Claude AI system not be used for mass domestic surveillance of American citizens, and that it not be used to power fully autonomous weapons systems capable of aiming and firing without human authorization.
The Pentagon, which has been using Claude on classified networks since the company became the first AI lab to achieve this authorization, has demanded that any renewed contract remove those restrictions and allow military use of Claude “for all lawful purposes.” Anthropic refused.
What followed was a sequence of events that proceeded with surprising speed. On February 27, President Trump attacked Anthropic on Truth Social, calling it a “radical left, sober company” and directing every federal agency to “immediately cease” use of its technology.
Within hours, Defense Secretary Pete Hegseth announced on X that he had designated Anthropic a supply chain risk, meaning that no contractor, supplier or partner company doing business with the US military could conduct any commercial activity with the company. The official letter confirming the designation arrived on March 3, five days after Anthropic’s deadline to agree to the Pentagon’s terms.
The practical scope of the designation turned out to be narrower than Hegseth’s original announcement. Anthropic CEO Dario Amodei said in a statement last Thursday that the relevant statute limits the designation to barring direct use of Claude in Pentagon contracts; it cannot, he said, be used to sever all commercial ties between defense contractors and the company.
Microsoft, Google and Amazon all reviewed the designation, reached the same conclusion, and issued statements confirming that Claude would remain available to clients for work unrelated to defense contracts. Hegseth’s original post had clearly stated otherwise.
The economic stakes are nevertheless significant. In declarations accompanying Monday’s filings, Anthropic executives detailed the damage. Chief Financial Officer Krishna Rao warned the court that if the designation is allowed to stand and clients read its scope broadly, it could reduce Anthropic’s 2026 revenue by “several billion dollars,” which would be “almost impossible to recover.”
Chief Commercial Officer Paul Smith gave a specific example: a partner with a multi-million dollar annual contract has already switched to a competing AI model, eliminating more than $100 million in expected revenue pipeline, and negotiations with financial institutions worth about $180 million have also collapsed.
The complaint itself makes two distinct legal arguments. The first is a First Amendment claim: the administration’s actions punish Anthropic for its public advocacy on AI safety and autonomous weapons and its position on domestic surveillance, all of which constitute protected speech.
“The Constitution does not allow the government to use its vast power to punish a company for protected speech,” the complaint states. The second argument challenges the statutory basis for the designation, citing 10 USC 3252, the procurement statute relied upon by the Pentagon. Anthropic argues the law requires the government to use the “least restrictive means” to protect its supply chain, not to wield the designation as a punitive tool against a domestic company over a policy disagreement.
The Pentagon’s position is that the dispute is about operational control rather than speech. Pentagon officials argue that a private contractor cannot insert itself into the chain of command by limiting the legitimate use of a critical capability, and that the military should have full discretion over how it deploys the technology in national security scenarios.
In what Anthropic presents as an indication that the designation was not driven by security concerns, its court filing quoted a Pentagon official as saying the government intended to “make sure they pay a price” for the company’s refusal, which Anthropic’s lawyers pointed to as evidence of ulterior motives.
The case drew an unusual display of solidarity from Anthropic’s direct competitors. A group of 37 researchers and engineers from OpenAI and Google DeepMind, including Google Chief Scientist Jeff Dean, filed an amicus brief supporting the suit on Monday, each signing in a personal capacity.
The brief argues that the designation “chills professional debate” about the risks of artificial intelligence and undermines America’s competitiveness. “By silencing a lab, the government reduces the industry’s potential to innovate solutions,” the researchers wrote. The filing is notable given that OpenAI reached a new deal with the Pentagon just hours after the Trump administration’s order, a move that drew sharp criticism from OpenAI employees and that CEO Sam Altman later admitted looked “sloppy and opportunistic.”
Legal observers are skeptical that the designation will survive judicial review. Paul Scharre, a former Army Ranger and now executive vice president of the Center for a New American Security, told Breaking Defense that Hegseth’s original description of the ban went beyond what the supply chain risk law allowed, and even a narrower formal designation would be fought in court given the law’s requirement for the least restrictive means. Procurement laws passed by Congress, Anthropic argues in its filings, do not give the Pentagon or the president the power to blacklist a company based on policy disagreements.
According to reports, the first hearing could take place this Friday in San Francisco. Anthropic sought a temporary injunction that would allow it to continue working with military contractors while the legal case unfolds. The DoD said it does not comment on litigation.
Among the contradictions the complaint points out: after the ban was announced, the military continued to use Claude during active combat operations in Iran; a six-month phase-out order was issued at the same time as the supposedly immediate ban; and the company retains an active FedRAMP authorization along with facility and personnel security clearances, which sit uneasily alongside a finding that it poses a national security risk. The government has not publicly addressed any of these discrepancies.
Regardless of the court’s decision, the case has already set a different kind of precedent: a large AI company, backed by researchers from its rivals, is publicly suing over the government’s weaponization of procurement law against a domestic company that took a public stance on how its technology should and shouldn’t be used. The outcome could determine whether any American company can “negotiate with the government” without risking its existence, Anthropic’s complaint says.