Policy · 2026-05-02 · 6 min read

The Pentagon Just Signed AI Deals With 7 Big Tech Companies — One Major Lab Was Left Out

The U.S. military will deploy AI from OpenAI, Google, Microsoft, and others on classified networks. Anthropic was frozen out after refusing to allow its tech in autonomous weapons.

By Troy Brown

The U.S. Department of Defense just made one of the biggest AI moves in its history. On Friday, the Pentagon announced it has signed agreements with seven major tech companies to deploy their artificial intelligence systems on classified military networks.

The companies: OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, SpaceX, and Reflection AI. Their AI tools will be integrated into the Pentagon's most sensitive systems — the Impact Level 6 and Level 7 networks used for classified and secret operations.

In plain terms, this means large language models and AI tools will now help the U.S. military process intelligence, make faster decisions in the field, and handle tasks that would take humans far longer to complete manually.

But the biggest story here might be who was left out.

Anthropic — the company behind the Claude AI model — was not part of the deal. And that was no oversight. The Pentagon has actively excluded Anthropic, going so far as to designate it a "supply chain risk" to national security earlier this year.

The reason? Anthropic refused to let the military use its technology for fully autonomous weapons and mass surveillance of American citizens. It wanted those restrictions written into any contract. The Pentagon said no.

Defense Secretary Pete Hegseth went further, declaring that no contractor or partner doing business with the Pentagon may engage in commercial activity with Anthropic. The company is fighting the designation in court, and a federal judge blocked enforcement of the ban last month — but Anthropic still was not invited to the table for this round of deals.

OpenAI, by contrast, signed on. It published a blog post outlining three "red lines" it says it negotiated: no mass domestic surveillance, no directing autonomous weapons, and no automated high-stakes decisions like social credit scoring. CEO Sam Altman later admitted that the initial rollout looked "opportunistic and sloppy" and said the contract was amended in March after public backlash.

Whether those safeguards hold up in practice is an open question. The full contract text has not been released publicly. Critics point out that "lawful operational use" is a broad phrase, and the Pentagon's definition of it may stretch further than most people expect.

For the tech companies involved, this is enormous business. Military AI contracts are long-term, high-value, and come with guaranteed demand. For the Pentagon, the stated goal is to "prevent AI vendor lock-in" and ensure long-term flexibility — which is why it signed multiple vendors rather than going exclusive with one.

What does this mean for the rest of us? A few things worth watching.

First, the line between commercial AI and military AI is disappearing. The same models that help you write emails and summarize documents are now being deployed in classified military environments. That changes the risk profile for everyone building on these platforms.

Second, there is now a real cost to saying no. Anthropic took a principled stand, and it got blacklisted. Other AI companies will see that and think twice before drawing their own red lines on military use.

Third, this is not going away. The Pentagon has made clear that AI is central to its future strategy. The question is no longer whether the military will use frontier AI — it is how, with what limits, and who gets to decide.

For small business owners and creators using AI tools every day, this is worth paying attention to. The companies building your everyday productivity tools are now also building classified military infrastructure. That does not make their products dangerous — but it does mean the stakes around AI governance are higher than ever.

The takeaway: AI is no longer just a business tool or a creative assistant. It is now a military asset. How companies and governments handle that dual reality will shape the technology — and the trust people place in it — for years to come.


Join The AI Signal for clear weekly notes on tools, workflows, and the handful of AI developments that are actually worth your attention.