Google Just Gave the Pentagon Full Access to Its AI — and 600 Employees Tried to Stop It
Google signed a classified deal letting the U.S. military use Gemini for 'any lawful government purpose.' Over 600 employees, including top DeepMind researchers, tried to block it. They failed.
By Troy Brown
Google just signed a deal with the Pentagon that lets the U.S. Department of Defense use its most powerful AI on classified military networks. The agreement was finalized on Monday, April 28. It covers Google's Gemini models — the same technology behind its consumer products — and extends their reach into some of the most sensitive operations in the U.S. government.
The key phrase in the contract: "any lawful government purpose." That is not a narrow scope. It is broad enough to cover mission planning, intelligence analysis, logistics, and potentially weapons targeting. Google does not get a veto over how the technology is used once it is deployed.
The deal also requires Google to adjust its AI safety settings and filters at the government's request. In plain terms, the guardrails built into Gemini for everyday users can be loosened or removed when the Pentagon asks.
This did not happen quietly. More than 600 Google employees signed an open letter to CEO Sundar Pichai urging him to reject the deal before it was finalized. At least 18 senior staff members added their names. Many of the signatories came from DeepMind, Google's elite AI research lab — the people who actually build the models.
The letter was direct. The employees wrote that their "proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses." They flagged concerns about lethal autonomous weapons, mass surveillance, and the basic fact that AI systems make mistakes — mistakes that carry a different weight when the stakes involve human lives.
Google signed it anyway.
If this story sounds familiar, it should. In 2018, Google faced a nearly identical crisis over Project Maven, a Pentagon contract that used AI to analyze drone footage. That time, roughly 4,000 employees protested, several resigned, and Google declined to renew the contract. The company then published a set of AI principles pledging not to pursue AI for weapons or for surveillance that violates internationally accepted norms.
Those principles are gone. In February 2025, Google quietly removed the language about avoiding weapons and surveillance from its AI guidelines. That change cleared the path for the deal signed this week.
There is an odd contradiction worth noticing. Just two months before signing this classified agreement, Google withdrew from a separate $100 million Pentagon competition to build autonomous drone swarm technology. The company cited an internal ethics review. So it said no to drones — then said yes to giving the military open-ended access to its most capable AI. The line being drawn is hard to follow.
Google is not alone in this space. OpenAI signed a similar deal with the Pentagon. So did Elon Musk's xAI. The U.S. military is actively building relationships with every major AI lab, and the labs are saying yes. The Pentagon's AI chief confirmed the expanded use of Google's technology and noted that relying on a single model is "never a good thing" — meaning the military wants options.
For the average person, this matters more than it might seem. The AI you use to summarize emails, plan trips, and answer questions is now the same AI being deployed in classified military operations. The models are not different. The safety settings are.
This also raises a question that does not have a clean answer: who decides what AI should and should not be used for? Google's employees tried to draw a line. Management overruled them. The contract language gives the government broad authority. And there is no public oversight mechanism for how these models are used once they enter classified environments.
The practical concern is not science fiction. AI systems hallucinate. They misinterpret context. They generate confident answers that are wrong. In consumer settings, that means a bad restaurant recommendation. In military settings, the consequences are obviously different.
None of this means AI should never be used in defense. Governments have always adopted the most capable technology available. The question is whether there should be clear, enforceable boundaries — and right now, for classified AI deployments, there are very few.
The takeaway is simple but uncomfortable. The same companies building the AI tools you use every day are also building the AI tools that militaries will use in ways you will never see. The 600 Google employees who tried to stop this deal understood something important: once the technology ships, you do not get to take it back.