
A viral post on X from veteran programmer and former Google engineer Steve Yegge has ignited a rhetorical firestorm this week, prompting sharp public rebuttals from some of Google’s most prominent AI leaders and rekindling a sensitive question for the company: how deeply are its own engineers really using the latest generation of AI coding tools?
The debate began after Yegge summarized the views of a friend, a current and longtime Googler, who argued that Google’s in-house AI adoption seemed more mundane and less advanced than outsiders expected.
Yegge said his Googler friend claimed that Google engineering reflected the “average” industry pattern of a 20%-60%-20% split: a small group that eschews AI altogether (20%), a larger middle group that still relies mostly on simpler conversational and coding-assistant workflows (60%), and a final small group of power users working with advanced agentic tools (20%).
A VentureBeat search of X using its parent company’s artificial intelligence assistant Grok confirms that Yegge’s April 13 post has gone viral: as of April 14, it had surpassed 4,500 likes, 205 quote posts, 458 replies and 1.9 million views.
We’ve reached out to Google for comment on the allegations and will update when we hear back.
A veteran, influential ex-Googler voice
Why did the secondhand account of Yegge’s unnamed Googler friend land so hard? Partly because Yegge isn’t just another commentator sniping from the sidelines.
He spent nearly 13 years at Google after previously working at Amazon and GeoWorks, then joined Grab and became head of engineering at Sourcegraph in 2022. He has long been known in software circles for his widely read essays on programming and engineering culture. An internal Google memo of his, accidentally made public in 2011, attracted wide media attention.
This history helps explain why engineers and managers still take his criticism seriously, even when they dispute it.
Yegge has built a reputation over the years as an outspoken insider-outsider voice on software culture, with enough clout in the industry that his judgments travel quickly when they hit a nerve, especially at big tech companies. A Wikipedia summary of his career notes his long tenure at Google and the close attention paid to his blog posts and his previous criticisms of the company.
Unraveling Yegge’s friend’s argument
In this case, Yegge’s argument wasn’t simply that Google uses AI too little. It was that the company’s adoption may be uneven, culturally constrained, and less transformative than its branding implies.
His friend argued that some Googlers couldn’t use Anthropic’s Claude Code because it was labeled “hostile,” and that Gemini wasn’t yet sufficient for fully agentic coding workflows. He compared Google unfavorably to a smaller set of companies that have moved faster.
Pushback from executives and current Googlers
The first big pushback came from Demis Hassabis, co-founder and CEO of Google DeepMind, who answered directly and forcefully. “Maybe tell your friend to do some actual work and definitely stop spreading nonsense. This post is completely false and pure clickbait,” Hassabis wrote.
Other Google leaders followed with longer defenses.
Addy Osmani, the director of Google Cloud AI, wrote that Yegge’s account “does not match the state of agentic coding in our company.” He added, “Over 40K SWEs use agentic coding here weekly.”
Osmani said Googlers have access to internal tools and systems, including “custom models, skills, CLIs, and MCPs,” and pushed back on the idea that Googlers are sealed off from outside models, adding that “people can even use @AnthropicAI models at Vertex” and concluding that Google is “anything but mediocre.”
Other current Googlers reinforced this message. Jaana Dogan, a software engineer at Google, posted: “Everyone I work with uses @antigravity like every second of the day.” In another post on X, she wrote: “Unpopular opinion: If you think burned tokens are an indicator of productivity, no one should take you seriously. Imagine you are a top 0.0001% writer and they only count the tokens you produce.”
Paige Bailey, the head of DevX engineering at Google DeepMind, said her teams have agents “working 24/7.”
Several other Google and DeepMind staffers also disputed Yegge’s characterization, with some challenging the factual basis of his claims and others suggesting that he lacks visibility into current internal usage.
Yegge’s rebuttal
Yegge, for his part, did not back down. In a reply to Hassabis, he wrote, “I’m not trying to mislead anyone,” but argued that, by his standard for advanced AI adoption, Google still doesn’t look particularly good.
He pointed to token consumption and the replacement of old development habits with true agentic workflows as more meaningful benchmarks, and said he would be willing to retract his criticism if Google could show its engineers were operating at that level.
AI adoption vs. AI transformation
This leaves the underlying debate unresolved, but clearer. The fight is less about whether Google engineers use AI at all than about what counts as meaningful adoption.
Googlers point to scale, weekly usage, and availability of internal and external tools. Yegge argues that these measures can capture broad exposure without proving a deeper shift in how engineering is done, an AI transformation. The clash reflects a broader industry divide between visible usage metrics and more transformative, power-user behavior.
For Google, the topic is particularly sensitive. Yegge has previously criticized the company, including in a 2018 essay explaining why he left, where he argued that Google was too risk-averse and had lost much of its ability to innovate.
If this latest criticism had come from a lesser-known poster, it might have faded away. Coming instead from a famously outspoken former Google engineer, it drew direct responses from some of the company’s top AI figures, turning a single post into a larger public debate about whether Google’s AI leadership runs as deep on the inside as it appears from the outside.





