Policy · 2026-05-09 · 5 min read

The White House Wants to Vet AI Models Like the FDA Approves Drugs

After Anthropic's Mythos model exposed thousands of software vulnerabilities, the White House is considering an executive order that would require AI models to pass government safety reviews before public release.

By Troy Brown

The White House just floated an idea that could change the entire AI industry: what if new AI models had to pass a government safety review before they were released to the public?

On May 7, National Economic Council Director Kevin Hassett said the administration is drafting an executive order to create a vetting process for AI models. He compared it directly to the FDA's drug approval system. His words: future AI models that potentially create vulnerabilities "should go through a process so that they are released in the wild after they have been proven safe, just like an FDA drug."

This is a stunning shift from an administration that has spent the last year and a half championing AI development with as few guardrails as possible. So what changed? One model.

Anthropic, the company behind Claude, recently revealed a frontier AI model called Mythos. It is not a chatbot upgrade. It is a cybersecurity machine. During internal testing, Mythos identified and exploited thousands of zero-day vulnerabilities across every major operating system and every major web browser.

Zero-day vulnerabilities are security flaws that nobody knows about yet — no patch exists, no fix is available. They are the most dangerous kind of software bug, and Mythos found thousands of them. In days, not years.

That got Washington's attention fast. The concern is straightforward: if a model this capable ended up in the wrong hands, it could be used to attack critical infrastructure, financial systems, or government networks faster than defenders could respond.

Anthropic clearly understood the risk. Instead of releasing Mythos to the public, it launched something called Project Glasswing — a coalition of major tech companies including AWS, Apple, Microsoft, Google, CrowdStrike, and Palo Alto Networks. The idea is to let these partners use Mythos to find and fix vulnerabilities before attackers can exploit them.

Anthropic is backing the effort with up to $100 million in usage credits for Mythos and $4 million in direct donations to open-source security organizations. About 40 additional organizations that build or maintain critical software have also been granted access.

So the model that triggered the regulation conversation is also being used as the tool to strengthen defenses. That tension — the same technology that can break things can also fix them — is exactly what makes this moment so complicated.

Not everyone in the administration agrees on how far to go. White House Chief of Staff Susie Wiles has reportedly pushed back on the FDA comparison, saying the administration is not in the business of picking winners and losers. She argued for a lighter touch — ensuring safe deployment without creating a bottleneck that slows the entire industry.

Critics outside the administration have been sharper. The American Enterprise Institute called the vetting proposal bad policy, arguing that mandatory government reviews would stifle innovation and competition without meaningfully improving security. Its concern is that a slow approval process would hand an advantage to foreign competitors who face no such requirements.

There is also a practical question: who actually does the reviewing? Hassett suggested it could fall to the Center for AI Standards and Innovation. But building the expertise and infrastructure to evaluate increasingly complex AI models is a massive undertaking — and one that would need to keep pace with an industry that moves at breakneck speed.

Still, the administration appears to be moving. Reports indicate that one or more AI executive orders could be signed within weeks. The specifics are still being debated, but the direction is clear: the federal government wants a seat at the table before the next frontier model ships.

For small business owners and everyday users, this matters more than it might seem. If new AI tools start going through a government review process, the pace at which you get access to the latest technology could slow down. On the other hand, the tools that do reach you would come with a stronger baseline of safety.

This is also a signal about where AI risk is heading. For the past few years, most of the public conversation about AI danger focused on misinformation and job displacement. Mythos shifted the spotlight to cybersecurity — the idea that AI could become a weapon against the software infrastructure the entire economy runs on.

The takeaway is this: AI is now powerful enough that the U.S. government is seriously debating whether it needs to be approved before release. Whether the FDA comparison holds up or fades into something lighter, the era of shipping frontier models first and asking questions later may be coming to an end.
