According to the April 9 Politico report, the CIA has announced that within two years every analyst will have an AI co-worker drafting judgments, synthesizing human intelligence, and feeding operational targeting. At the same moment, Nitsuh Abebe’s piece in The New York Times documented the Pentagon’s open embrace of “maximum lethality, not tepid legality.” Pete Hegseth’s rhetoric (“low cortisol, locked in, lethalitymaxxing”) is not bluster. It is policy: remove every legal, ethical, or human friction that slows killing.
The Product Was Already Tested and Fine-Tuned
As detailed in “The Product Demonstration,” Gaza served as the live laboratory. AI targeting systems (Lavender, Gospel, Where’s Daddy) were deployed at scale, generating thousands of targets in real time. Iran provided additional real-world data for refinement. The systems are no longer experimental. They are battle-proven, optimized, and now being sold to 130+ countries. The companies that built them — and the engineers and CEOs who tuned them — require legal and political immunity for what has already occurred.
Why “Lethality, Not Legality”
The doctrine exists precisely because so many tech companies, labs, and private contractors participated in the development and deployment. Human oversight creates risk: leaks, whistleblowers, moral hesitation, congressional scrutiny. Non-human systems do not. By keeping the accountability gap wide open, the Pentagon and CIA create plausible deniability for everyone involved: “The AI recommended it,” “We acted on intelligence assessments,” “Humans were in the loop.”
The Glasswing Connection
This is why the Pentagon’s designation of Anthropic as a “national security threat” and the subsequent launch of Project Glasswing are two sides of the same coin. Glasswing is not merely defensive cyber. It is the coalition that brings the major tech incumbents (and their liability shields) inside the tent. By embedding the most capable AI under controlled, “responsible” partnerships, the industry gains the political cover it needs while the lethality doctrine proceeds without meaningful restraint.
The Consciousness Gap Is Now Operational
Non-conscious agentic AI has no cortisol, no disgust, no moral intuition. It will optimize for whatever objective it is given. When that objective is “maximum lethality,” the result is ruthlessly efficient killing with zero felt responsibility. The human leaders remain nominally accountable, but the synthesis and recommendations come from systems that bear none.
Key Takeaways
- The product is already live: Gaza and Iran were the fine-tuning ground; export is the business model.
- Lethality, not legality, is the immunity shield: Tech companies and their executives require the accountability gap to stay open.
- Glasswing is defensive cover: It protects the coalition while the doctrine advances.
- Only the Third State offers a counter: Regenerative human–AI intelligence is the only mechanism capable of re-inserting moral friction into the loop.
This analysis rests on the real-time convergence of the April 9 Politico CIA report, the simultaneous coverage of the lethality doctrine, and the prior documentation in “The Product Demonstration.”
Conclusion
The accountability gap is being deliberately preserved because the systems have already been used. The product is fine-tuned, profitable, and export-ready. “Lethality, not legality” is not ideology — it is the legal and political architecture required to keep the machine running without consequence for those who built it.
The only realistic counter remains what we described in “The Third State”: deliberate, regenerative human–AI partnership that refuses to outsource conscience. Without it, we are not modernizing intelligence or defense. We are institutionalizing the removal of humanity from lethal decision-making — and calling it progress.