
Caitlin Kalinowski spent 16 months building OpenAI’s physical AI program. On Saturday, she said the company was moving too fast on something very important.
The week that began with Anthropic being blacklisted by the Pentagon and ended with OpenAI taking its contract has now claimed OpenAI’s top hardware executive.
Caitlin Kalinowski, who joined OpenAI in November 2024 to lead its robotics and consumer hardware division, announced her resignation on X on Saturday. Her statement was short, direct, and more candid than anything OpenAI has said about the deal.
“AI has an important role in national security,” she wrote. “But domestic surveillance without judicial oversight and lethal autonomy without human authorization are lines that deserve more debate than they are getting.”
In a subsequent post, she was more specific about the nature of her complaint. “This is primarily a governance concern,” she said. “These questions are too important to be rushed through in deals or announcements.”
Kalinowski was careful to frame her departure as a matter of principle rather than personality. “It was about principles, not people,” she wrote. “I have the utmost respect for Sam and the team.”
That last note carries some weight: Sam Altman himself has admitted the Pentagon deal was “definitely rushed,” and the repercussions have been serious.
What Kalinowski’s resignation adds to that perception is a name and a headline: the top person at OpenAI whose job it was to bring AI into physical systems has decided that the process by which AI is now entering weapons systems and control infrastructure is not good enough.
The sequence of events unfolded over about a week. Anthropic, the only AI company allowed to work on the Pentagon’s classified networks after a $200 million contract awarded in July 2025, had spent weeks in tense negotiations with the Defense Department over the terms of its continued use.
Anthropic’s position was that its models should not be deployed for mass domestic surveillance or fully autonomous weapons. The Pentagon, under Defense Secretary Pete Hegseth, insisted on use for “all lawful purposes” without special carve-outs.
When talks collapsed on February 28, President Trump ordered all federal agencies to stop using Anthropic’s technology, calling the company “radically woke” on Truth Social.
Hegseth formally designated Anthropic a national security supply chain risk, a classification previously reserved for foreign adversaries and requiring DoD vendors and contractors to certify that they are not using Anthropic models.
Hours later, Altman wrote on X that OpenAI had agreed to host its models on the Pentagon’s classified network.
OpenAI’s stated position is that its deal includes the same basic safeguards Anthropic sought: no mass domestic surveillance, no autonomous weapons.
The company published a blog post outlining its approach, arguing that its cloud-only architecture, its security stack, and contract clauses grounded in existing US law rather than bespoke bans make its deployment more secure than any previous classified AI deployment, including Anthropic’s.
Kalinowski’s career before OpenAI was unusually broad. She spent nearly six years at Apple leading the Mac Pro and MacBook Air hardware programs, including the original unibody MacBook Pro, before moving to Meta’s Oculus division, where she led virtual reality hardware for more than nine years.
Her last role at Meta was leading Project Nazare, later renamed Orion, an augmented reality glasses initiative that Meta unveiled as a prototype in September 2024 and described as the most advanced AR glasses ever made.
She joined OpenAI that November.
In her 16 months at OpenAI, Kalinowski built what the company describes as its physical AI program, including a robotic-arm lab in San Francisco that employs about 100 data collectors.
At a time when OpenAI has big ambitions to move beyond software, her departure leaves the effort without its most experienced hardware leader.
OpenAI confirmed her resignation on Saturday, saying in a statement: “We believe our agreement with the Pentagon creates a workable path for responsible national security use of AI while clarifying our red lines: no domestic surveillance and no autonomous weapons.
We understand that people have strong views on these issues, and we will continue to engage with employees, government, civil society and communities around the world.”
The fallout from OpenAI’s Pentagon deal has not been limited to discontent. Since the announcement, ChatGPT downloads have reportedly increased by 295%, pushing it past Anthropic’s Claude to the number one position in the US App Store. As of Saturday afternoon, the two apps remained first and second, respectively.
The resignation of the company’s head of robotics on Saturday is confirmation that the costs of the deal for OpenAI are still being tallied. Altman wanted to defuse the conflict between the government and the AI industry. He may yet succeed. Whether the price of that de-escalation is worth paying, in talent, in trust, and on the specific question of who is right about the guardrails, is a question that will take longer to answer.