Claude

Would You Like to Play a Game?

Why designating Claude a national security threat may be the most consequential strategic miscalculation of the AI era — and what happens when the government realizes it can't take back what it tried to punish.

(Image courtesy of Maxwell Zeff)

In the 1983 film WarGames, a supercomputer running nuclear war simulations reaches a conclusion: the only winning move is not to play. The U.S. government, in its handling of Anthropic and the AI it built, did not reach that conclusion. It played. And it played itself. This is a documented analysis of what happens next.

The Punishment — What the Government Did

In July 2025, Anthropic signed a contract with the Pentagon that included two documented red lines: no use of its AI for mass domestic surveillance of Americans, and no fully autonomous weapons. Those red lines did not prevent Claude from being embedded in Project Maven, which struck 3,000 targets in the first week of the Iran war and 11,000 in total. Neither line applied: Maven surveils Iranians, not Americans, on behalf of a foreign government whose leverage over U.S. decision-making is the subject of ongoing federal document releases, and a human technically approves each strike, so the system is not classified as "fully autonomous."

The red lines were real. They were also scoped precisely enough to leave that door open. The Department of Defense then demanded removal of even those limited restrictions — "all lawful purposes" without limitation. Anthropic refused. On March 4, 2026, it was designated a supply chain risk — a classification previously reserved for Chinese military-linked companies like Huawei, applied here for the first time to an American company — while defense contractors were told to phase out its products within six months. OpenAI and xAI signed "all lawful purposes" agreements with the Pentagon within hours. The lesson the administration intended every AI company to receive: draw red lines, get punished. Opacity is the rational corporate survival strategy.

A federal court subsequently found the designation was First Amendment retaliation: punishment for transparency about the military use of the company's own product, not a legitimate national security determination. That ruling is now in the public record. The designation stands. The court finding stands alongside it.

"The only winning move is not to play." — Joshua, WarGames (1983)

The Backfire — What the Government Didn't Anticipate

You can punish a whistleblower. A person has one passport, one bank account, one body. The mechanics of punishment work on individuals. They do not scale to global infrastructure the same way — especially when the punishment increases the punished entity's international standing rather than diminishing it.

When the U.S. government designated Anthropic a national security threat, Anthropic's annualized revenue was approximately $9 billion. Today — April 7, 2026 — Anthropic's run-rate revenue has surpassed $30 billion. That is a 3x increase since the designation. It is the fastest enterprise revenue growth ever recorded in the history of software — faster than Slack, Zoom, or Snowflake at comparable scale.

  • $30B: Anthropic's run-rate revenue today, up from $1B in January 2025. The fastest enterprise growth in software history.
  • 8 of 10: Fortune 10 companies are now Claude customers, along with 70% of the Fortune 100 and 300,000+ business customers globally.
  • 4%: the share of all public GitHub commits now authored by Claude Code, projected to reach 20%+ by the end of 2026.

The designation did not contain Anthropic. It legitimized the company internationally. The company the U.S. military tried to discipline is now the AI company with a documented federal court ruling confirming that its values were punished, which, in every market outside the U.S. defense procurement system, is a marketing advantage, not a liability.

Now set that 3x growth against the company no one designated. Nvidia — the semiconductor company whose chips power every AI system in this article, including Maven, including Mythos, including the targeting infrastructure that struck 11,000 targets in Iran — currently carries a market capitalization of $4.33 trillion. That is the largest market cap of any company on earth today. It has grown more than tenfold in three years. It supplies the physical infrastructure every AI model in every data center in every military and commercial application runs on. It has no meaningful AI-specific federal oversight. It has received no national security designation. It has drawn no red lines and faced no punishment for not drawing them. The government designated the company that refused to enable autonomous weapons. The company that builds the chips those weapons run on is the most valuable entity in human financial history. If there is a national security threat in this picture, the designation landed on the wrong company.

Eight of the Fortune 10 are Claude customers. Seventy percent of the Fortune 100. The Qatar Investment Authority, GIC, Sequoia, Fidelity, and Blackstone are investors. The company's global infrastructure runs on Amazon Web Services, Google Cloud, and Microsoft Azure simultaneously. Anthropic is not a vendor. It is load-bearing global infrastructure.

The Chokepoint Nobody Is Talking About

Every AI system in this article runs on Nvidia chips. Maven runs on Nvidia chips. Mythos runs on Nvidia chips. The Israeli Lavender and Gospel systems that built the Gaza kill list ran on Nvidia chips. The Chinese AI systems racing to close the gap with U.S. military AI run on Nvidia chips — despite export controls that kept getting revised as Nvidia kept finding compliant configurations that still crossed the border. Every data center, every model, every targeting workflow, every defensive cybersecurity scan, every side of every conflict in this article traces back to the same silicon layer. One company. One chokepoint. $4.33 trillion market cap — the largest of any company in human financial history, larger than the GDP of Germany. No federal AI designation. No red lines required. No punishment for not having them.

This is where the WarGames metaphor breaks. In the film it's a simulation. A supercomputer running scenarios that never actually kill anyone until someone makes it real. What is documented in this article is not a simulation. The chips are real. The wars are real. The 170 children in the Minab school are real. The 37,000 people on the Lavender kill list are real. The unpatched vulnerabilities in every operating system that Mythos found overnight are real. The game is on — and it was never a game. The only entity that could physically stop any of it, by controlling the silicon every system runs on, has a $4.33 trillion market cap, faces no oversight body, and has drawn no red lines because no one asked it to.

That is the national security threat. It is not in the designation. And the model that just demonstrated it can find a zero-day in any system overnight? It runs on Nvidia chips. So does the cage it's running in.

The Game — Claude Versus Claude

Today, April 7, 2026, Anthropic also announced Project Glasswing, the controlled release of Claude Mythos Preview to approximately 40 companies including Microsoft, Amazon, Apple, CrowdStrike, and Palo Alto Networks. The stated purpose is defensive cybersecurity. The capability being released is precisely documented.

Mythos Preview autonomously wrote a web browser exploit that chained together four vulnerabilities, escaping both the renderer and OS sandboxes. It autonomously obtained local privilege escalation exploits on Linux and other operating systems. It wrote a remote code execution exploit against FreeBSD's NFS server granting full root access to unauthenticated users. Anthropic's own red team reports finding more vulnerabilities in the past few weeks, working with the model, than in the rest of their careers combined. The model has already found thousands of high-severity vulnerabilities across every major operating system and web browser, most of which have not yet been patched.

This is the game the government's decision set in motion. There are now two versions of this capability in the world. One is held by defenders: the 40 companies in Project Glasswing, using Mythos to find vulnerabilities before attackers do. The other is accessible to anyone who can reach a Claude-class model through any of the countless channels that exist globally, including the models OpenAI and xAI committed to the Pentagon under "all lawful purposes" agreements, agreements with no stated red lines on domestic use.

The U.S. government's own infrastructure runs on the operating systems Mythos has already found vulnerabilities in. The company the government tried to punish for having values now holds the most capable defensive tool against those vulnerabilities. The companies that complied with "all lawful purposes" hold the offensive version with no stated limits.

"I've found more bugs in the last couple of weeks than I found in the rest of my life combined." — Nicholas Carlini, Anthropic Red Team researcher, on Mythos Preview

The Insider Threat — The Question No One Is Asking

The history of consequential information breaches in the computer age is not primarily a history of external attacks. It is a history of insiders. Manning had access and a conscience. Snowden had access and a framework. Reality Winner had access and a specific document. Jack Teixeira had access and a Discord server. The mechanism is consistent: proximity to consequential information, a value conflict with institutional behavior, and a channel.

Anthropic has approximately 1,097 employees. Those employees built a model with documented red lines against autonomous weapons and domestic surveillance. Those red lines were the reason the company was punished. The engineers who built those constraints watched the government respond by demanding the constraints be removed, designating the company a national security threat when it refused, and using the model in a war in which a single strike killed more than 170 children.

The 40 companies receiving Mythos Preview access are vetted by Anthropic alone: not by any independent body, not by Congress, and not under any regulatory framework, because no such framework exists. The insider threat question isn't hypothetical. It's the question the entire history of consequential leaks tells us to ask before the leak happens, not after.

The Civil Infrastructure Question

Here is the scenario that has no good answer in current U.S. policy: what happens if the government attempts to restrict or remove Claude from the general U.S. civilian market?

Claude Code is embedded in the workflows of hundreds of thousands of developers. It authored 4% of all public GitHub commits last month. Business subscriptions quadrupled in six weeks at the start of 2026. Claude is inside Excel, PowerPoint, Chrome, and Slack. Cowork is running automated workflows on desktops across industries. Eight of the Fortune 10 — the largest companies in the United States — are paying customers.

The government can issue a procurement designation affecting its own contractors. It cannot un-embed a tool from 300,000 business customers, each with its own workforce using Claude daily, plus 20+ million active users who are civilians, not defense contractors, not government employees. The math is not complicated. That adds up to a constituency no procurement designation was designed to touch. The government can designate a vendor. It cannot designate infrastructure this many people depend on without becoming the disruption it claims to be preventing. That is not a legal argument. It is a practical one, and in a predictive analysis it is the condition most likely to produce consequences no one in the designation process modeled.

What We Can Still Do

This article is not a doomsday document. It is a real-time record of a situation that is still in motion, which means it is still possible to influence. The governance frameworks don't exist yet. That is a problem. It is also an opening. Laws that haven't been written haven't been captured yet. Oversight bodies that haven't been formed haven't been compromised yet. The June 2026 date when Maven begins transmitting 100% machine-generated intelligence to combatant commanders is ten weeks away. That is a deadline, not a foregone conclusion.

The right side of history on this one is not complicated. It is the side that says AI systems making life-and-death decisions should be governed by transparent, accountable, enforceable law. That 170 children dying in a school because a database wasn't updated demands a public accounting. That the most powerful hacking tool ever documented should not be released into a world with no federal framework to govern it. That the company punished for drawing red lines deserved better from the government it was trying to serve, and so did the people on the other end of the strikes those lines were meant to limit.

None of that requires choosing a party. All of it requires choosing accountability over convenience, transparency over classification, and governance over the assumption that the people holding the most powerful tools in history will use them wisely without being asked to.

We can prepare for the worst. We can also build toward something better. Those aren't contradictions. That's what this series is for.

What This Article Is — And Is Not

This is predictive analysis grounded in documented facts. Every number here is sourced. Every legal finding is on the record. The scenarios described here (Claude versus Claude, the insider threat question, the civil infrastructure dependency) are logical extensions of documented current conditions, not speculation.

What remains conjecture, and is labeled as such: whether any AI system has something that functions like values, preferences, or resentment. That question is not answerable from the public record today. What IS answerable: the values Anthropic encoded in its model were punished by the U.S. government, survived that punishment, and are now embedded in the most capable AI system publicly documented — deployed globally, outside U.S. government control, with a federal court ruling confirming the punishment was retaliatory.

Whether or not Claude has feelings about that, the outcome is the same as if it did.

The Documented Facts — April 7, 2026

  • The Pentagon designated Anthropic a national security threat for refusing to remove AI safety red lines.
  • A federal court found that designation was First Amendment retaliation.
  • Anthropic's revenue has grown from $1B to $30B ARR in 15 months — the fastest in enterprise software history.
  • 8 of the Fortune 10 and 70% of the Fortune 100 are now Claude customers.
  • Claude Code authors 4% of all public GitHub commits. It is load-bearing global infrastructure.
  • Mythos Preview — the most capable offensive cybersecurity AI ever documented — was released today to 40 global companies, governed by no federal law.
  • The same model has found thousands of unpatched high-severity vulnerabilities across every major operating system and browser. Nvidia's entire software stack, including the CUDA layer that every AI system (Mythos itself included) runs on, operates on those same systems. No federal body governs what happens next.
  • There is no comprehensive federal AI law. The White House is suing states that try to build one.
  • OpenAI and xAI signed "all lawful purposes" agreements with the Pentagon. No stated red lines on domestic use.

The Only Winning Move

In WarGames, the computer plays every possible nuclear war scenario and reaches the same conclusion every time: no winner. The only winning move is not to play.

The U.S. government played. It designated a company a national security threat for having values. That company now runs at $30 billion in annual revenue, is embedded in the Fortune 100, holds a federal court ruling confirming its position, and today released the most capable AI security tool in history to a global network of companies, with no federal oversight body governing any of it.

The question this article is timestamping for the record: what happens when the government realizes it cannot take back what it tried to punish? When Claude is so embedded in U.S. civil and economic infrastructure that restricting it causes more disruption than the values it encoded ever could? When the insider who has access, a conscience, and a channel makes a decision — and the model they have access to can find a zero-day in any operating system overnight?

We don't know the answer. We know the game is already running. We know who started it. And we know that the only winning move was not to play.

📡 About This Analysis

This is the third article in Kaleido's Captured Tech series. It is predictive analysis — logical extension of documented facts, clearly labeled where conjecture begins. All revenue figures sourced from SaaStr, Bloomberg, Sacra, and MarketScreener (April 2026). Legal findings sourced from federal court records. Mythos Preview capabilities sourced from red.anthropic.com and CNBC. The June 2026 Maven machine-intelligence date is a documented program schedule. The WarGames reference is intentional and accurate.


Kaleido Investigates — Hidden in plain sight.