More than 580 Google employees, including DeepMind researchers, are calling on Pichai to withdraw from the Pentagon's classified AI negotiations.



TL;DR

More than 580 Google employees, including more than 20 directors and vice presidents and top DeepMind researchers, have signed a letter urging CEO Sundar Pichai to step away from classified military artificial intelligence work for the Pentagon. The letter argues that on air-gapped secret networks, Google cannot control how its AI is used, making "trust us" the only safeguard against autonomous weapons and mass surveillance. Google's workforce won the Project Maven battle in 2018, but the company has since gutted its AI principles, won a share of the $9 billion JWCC cloud contract, deployed Gemini to three million Pentagon employees, and is now negotiating classified access under "all lawful uses" terms.

More than 580 Google employees, including more than 20 directors and vice presidents, have signed a letter urging CEO Sundar Pichai to abandon classified military artificial intelligence work for the Pentagon, Bloomberg reports. The letter, whose signatories include senior researchers at Google DeepMind, was sent to Pichai on Monday. "We at Google are deeply concerned about the ongoing negotiations between Google and the US Department of Defense," it said. "As people working on artificial intelligence, we know that these systems can centralize power, and they make mistakes." The signatories want Google to reject all classified workloads, arguing that on air-gapped secret networks isolated from the public Internet, the company would have no way to monitor or limit how its AI tools are actually used. "At this time, the only way to guarantee that Google is not involved in such harms is to reject any classified workload," the letter said. "Otherwise, such uses may occur without our knowledge or authority to stop them."

The history

Googlers have fought this battle before. In 2018, nearly 4,000 employees signed an internal petition and at least 12 resigned over the Pentagon's Project Maven, which used artificial intelligence to detect and analyze objects in drone video footage. The protest pushed Google to pledge not to pursue weapons or surveillance technology, to publish AI principles, and to let the Maven contract expire in March 2019. Palantir took it over. The contract was then worth several million dollars; under Palantir, Maven has since grown into a $13 billion program. The 2018 victory was real, but it was also the last time Google's workforce successfully limited the company's defense ambitions. Since then, Google has systematically rebuilt every bridge the protest burned.

In December 2022, Google won part of the Pentagon's $9 billion Joint Warfighting Cloud Capability contract, alongside Amazon, Microsoft and Oracle. In February 2025, Google removed language from its AI principles that had pledged not to apply the technology to "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" and to avoid "technologies that gather or use information for surveillance violating internationally accepted norms." A blog post co-authored by Google DeepMind CEO Demis Hassabis cited "the global race for leadership in artificial intelligence" as the rationale. Human Rights Watch and Amnesty International both condemned the reversal. In December 2025, the Pentagon launched the GenAI.mil platform, powered by Google's Gemini, and made it available to all defense personnel. "The future of American warfare is here, and it's artificial intelligence," Defense Secretary Pete Hegseth said. In March 2026, Google deployed Gemini AI agents at the unclassified level to the Pentagon's three million personnel, with eight agents pre-trained for tasks including summarizing meeting notes, drafting budgets and checking activities against defense strategy.

The negotiations

Classified work is the next step. Emil Michael, the undersecretary of defense for research and engineering, told Bloomberg in March that the Pentagon would "start with unclassified because that's where most of the users are, and then we'll go into classified and top secret." He confirmed that talks are already underway with Google about running Gemini agents in its private cloud infrastructure. In April, The Information reported that talks were moving toward "all lawful uses" of Google's AI tools, a phrase that erases the kind of red lines Anthropic insisted on before it was blacklisted. The Pentagon designated Anthropic a supply chain risk for refusing to lift its restrictions on autonomous weapons and domestic mass surveillance, and it strongly disputed Anthropic's characterization of the dispute, arguing that commercial companies should not be able to dictate policy on wartime use.

OpenAI signed its own Pentagon deal hours after the Anthropic blacklisting, with three disclosed red lines: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions. But how any red lines can be enforced on classified networks is precisely the question Google employees are raising. In an air-gapped system, the AI runs on a network separated from Google's infrastructure by design. Google cannot see what queries are run, what outputs are generated, or what decisions are made with those outputs. The Pentagon's "trust us" assurance becomes the only mechanism preventing uses that would violate whatever red lines the company might negotiate. Sofia Liguori, an artificial intelligence research engineer at Google DeepMind in the UK who signed the letter, told Bloomberg that the main response to employee concerns has been to encourage the workforce to trust management to sign good contracts. "But everything has remained very vague," she said. "Agentic AI is particularly worrisome because of the level of autonomy it can achieve. It's like handing over a very powerful tool while giving up all control over how it's used."

The stakes

The Pentagon's AI budget shows what a classified deal would fund. The 2026 defense budget includes $13.4 billion for artificial intelligence and autonomy. The 2027 funding request submitted in April calls for $54.6 billion for autonomous warfare programs, a 24,000% increase over the previous year, within a total defense budget of $1.5 trillion that itself represents a 42% year-over-year increase. The Pentagon is already testing humanoid robots from Foundation and other firms, and has formalized Palantir's Maven as a major military program with multi-year funding. Military AI investment has moved from the experimental stage that Project Maven represented in 2018 to an industrial structure that treats AI as a core capability of the American military. The classified workloads Google employees are protesting would sit at the heart of that structure.

Organizers of the letter said, "Maven is not over. Workers will continue to organize against the weaponization of Google's AI technology until the company draws clear, enforceable lines." The framing matters. In 2018, the fight was over one contract for one program. In 2026, the fight is over whether Google's entire AI stack (Gemini, DeepMind's research, TPU hardware) can be turned into military infrastructure on secret networks where no one outside the Pentagon can see how it is used. The administration's paradox, blacklisting Anthropic even as it urges companies to embrace military AI, reflects the political environment: companies that resist unrestricted military use are designated supply chain risks, while companies that sign contracts worth billions are rewarded. Googlers are asking Pichai to walk away from a deal the Pentagon has made clear it would punish him for reneging on, after the company spent three years rebuilding its defense credentials precisely to win it.

The gap

What distinguishes the 580 signatures is their seniority. More than 20 directors and vice presidents signed on, along with DeepMind's senior researchers. Two-thirds of the signatories agreed to be named; a third requested anonymity for fear of retaliation. In February, a cross-company letter signed by about 800 Google employees and 100 OpenAI employees expressed support for Anthropic's stand against unrestricted military AI use. More than 100 DeepMind employees separately signed an internal letter demanding that no DeepMind research or models be used for weapons development or autonomous targeting. Jeff Dean, Google's chief scientist, wrote on X that "mass surveillance violates the Fourth Amendment and has a chilling effect on free speech." Internal dissent is not marginal. It extends into the technical leadership that builds the systems the Pentagon wants to deploy.

However, the gap between internal dissent and corporate decision-making has widened since 2018. Then, 4,000 signatures and dozens of resignations were enough to kill a multi-million dollar contract. In 2026, 580 signatures confront a classified AI market worth tens of billions, a Pentagon that has shown it will retaliate against companies that reject its terms, a company that has already erased its own red lines, and a CEO who approved deploying Gemini to three million Pentagon personnel. Trump has said he is open to a Pentagon deal with Anthropic if the company lifts its restrictions, a statement suggesting the administration sees compliance as the end state for every AI company, regardless of where it starts. Googlers are asking their company to be the exception. The company's trajectory over the past three years suggests it intends to be the norm.


