After more than a week of negotiations over the Pentagon’s use of Anthropic’s Claude technology, the Trump administration designated Anthropic a supply chain risk, and the AI company said it would fight the designation in court.
OpenAI, meanwhile, quickly announced its own deal and saw the opposite reaction: users deleted ChatGPT, pushing Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI manager resigned over concerns that the deal was rushed out without proper safeguards.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups looking to work with the federal government, particularly the Pentagon. As Kirsten wondered, “Are we going to see a little bit of a change of tune?”
Sean noted that this is an unusual situation in several ways, partly because OpenAI and Anthropic make products that “no one can shut up about.” And most importantly, it’s a debate about “how their technology was or wasn’t used to kill people,” so it’s naturally going to get more scrutiny.
Kirsten, however, argued it was a situation that “should give any startup pause.”
Read a preview of our conversation below, edited for length and clarity.
Kirsten: I wonder if other startups are looking at what’s going on with the federal government, particularly the Pentagon, and Anthropic, at this sparring and wrestling match, and whether it gives them pause about going after federal dollars. Will we see a slight change of tune?
Sean: That’s what I’m curious about too. I think, at least in the near term, if you really try to think about all the different companies, whether they’re startups or more established Fortune 500s, that work with the government, particularly the Department of Defense or the Pentagon, a lot of that work flies under the radar.
General Motors manufactures defense vehicles for the military, has been doing so for a very long time, and is working on all-electric and autonomous versions of those vehicles. There are things like that going on all the time that never enter the zeitgeist. I think the problem OpenAI and Anthropic ran into last week is that these are companies that make products a ton of people use and, more importantly, that nobody can shut up about.
So there’s a focus on them that naturally raises their involvement to a level of scrutiny that I don’t think most other companies contracting with the federal government, especially with the war-fighting elements of the federal government, have to deal with.
The only caveat I’d add is that there is a lot of heat around this dispute between Anthropic, OpenAI, and the Pentagon. It’s not just the focus on them and our familiarity with their brands; there’s also something more abstract about thinking of, say, General Motors as a defense contractor.
I don’t think we’ll see Applied Intuition or any of these other companies face the same scrutiny as they position themselves as dual-use, because the focus isn’t on them and there isn’t a shared understanding of what that impact might be.
Anthony: This story is in many ways very unique and specific to these companies and personalities. I mean, there have been a lot of really interesting think pieces about the role of technology in government, and of AI in government. I think these are all good and worthwhile questions to ask and explore.
However, I think it’s an imperfect lens through which to explore those things, because Anthropic and OpenAI aren’t really that different in many of their positions. This isn’t a case of one company saying, “Hey, I don’t want to work with the government,” and another saying, “Yes, I do.” Or one saying, “You can do what you want,” and the other saying, “No, I want limitations.” Both of them, at minimum, say clearly: “We want limits on how our artificial intelligence can be used.” It just looks like Anthropic is digging in its heels more: you can’t change the terms like that.
There’s also a personality layer here between Anthropic’s CEO and someone many TechCrunch readers will remember from the Uber days: Emil Michael, now the Department of Defense’s chief technology officer. According to The Information, the two don’t exactly love each other.
Sean: Yes, there is a huge “girl fight” element here that we shouldn’t overlook.
Kirsten: Yes, a little. There is, but the stakes are a bit bigger than that. To backtrack a bit, what we’re talking about here is the Pentagon and Anthropic getting into a dispute that Anthropic seems to be losing, even though its technology is still heavily used by the military. It’s considered an important technology, but OpenAI has now gotten involved, and the situation is evolving and will likely have changed by the time this episode comes out.
The fallout has been interesting for OpenAI: ChatGPT deletions are up 295%, I think, since OpenAI signed its contract with the Department of Defense.
To me, all of this is noise around something really critical and dangerous: the Pentagon’s attempt to change the terms of an existing contract. That’s really important, and it should give any startup pause, because the political machine right now, especially at the DoD, looks different. This is not normal. Government contracts go through a long process, and trying to change those terms midstream is a problem.