Anthropic filed two sworn statements in California federal court on Friday afternoon, disputing the Pentagon's claim that the AI company poses an "unacceptable risk to national security" and arguing that the government's case rests on technical misunderstandings and unsubstantiated allegations.
The statements accompany a response brief in Anthropic's lawsuit against the Department of Defense, filed ahead of a hearing scheduled for Tuesday, March 24, before Judge Rita Lin in San Francisco.
The controversy dates back to late February, when President Trump and Defense Secretary Pete Hegseth publicly announced they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.
The two people presenting the declarations are Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the company’s Head of Public Sector.
Heck is a former National Security Council official who worked in the White House during the Obama administration before moving on to Stripe and then to Anthropic, where she manages the company's government relations and policy affairs. On February 24, she personally participated in CEO Dario Amodei's meeting with Defense Secretary Hegseth and Pentagon adviser Emil Michael.
In her declaration, Heck calls out what she describes as a fundamental falsehood in the government's filings: the claim that Anthropic sought some kind of approval role over military operations. According to her, this claim is simply not true. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee indicate that the company wanted such a role," she wrote.
She also contends that the Pentagon's concerns about Anthropic potentially disabling or altering its technology mid-operation were never raised during the negotiations. Instead, she says, they surfaced for the first time in the government's court filings, which gave Anthropic no opportunity to respond.
Another notable detail in Heck's statement is that on March 4 — a day after the Pentagon formally finalized its supply chain risk designation against Anthropic — she emailed Pentagon adviser Emil Michael, saying the two sides were "very close" on two issues.
The email, which Heck attached as an exhibit to her declaration, is worth reading alongside what Michael said publicly in the days that followed. On March 5, Amodei publicly referred to the company's "productive conversations" with the Pentagon. The day after that, Michael posted on X that "there are no active negotiations between Anthropic and the War Department." A week after that, he told CNBC that there was "no chance" of renewed talks.
Heck's point seems to be: if Anthropic's position on these two issues is what makes it a national security threat, why did the Pentagon's own adviser say, immediately after the designation was finalized, that the two sides were on the same page on exactly those issues? (She stops short of saying the government is using the designation as a bargaining chip, but the timeline she provides leaves the question open.)
Ramasamy brings a different background to the dispute. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI offerings for government clients, including classified environments. At Anthropic, he is credited with building the team that brought the company's Claude models into national security and defense settings, including the $200 million contract the Pentagon announced last summer.
His declaration takes on the government's claim that Anthropic could theoretically interfere with military operations by disabling the technology or otherwise altering its behavior, which Ramasamy says is technically impossible. Once Claude was placed in a government-protected, "air-gapped" system operated by a third-party contractor, Anthropic had no access to it, he said: no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any "operational veto" is a fiction, he said, explaining that changing the model would require the Pentagon's express approval and action to install it.
That setup, he says, means Anthropic can't even see what government users are typing into the system, let alone retrieve that information.
Ramasamy also rejects the government's claim that Anthropic's hiring of foreign nationals poses a security risk. He notes that Anthropic employees have passed U.S. government security clearance checks — the same background check process required for access to classified information — adding in his statement that "to my knowledge," Anthropic is the only AI company with cleared personnel building AI models designed to work in truly classified environments.
Anthropic’s lawsuit alleges that the supply chain risk designation — the first ever applied to an American company — violates the First Amendment and amounts to government retaliation for the company’s publicly stated views on AI safety.
The government, in a 40-page filing earlier this week, flatly rejected this framing. It said Anthropic's refusal to authorize all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security judgment, not punishment for the company's views.