The accountability gap in AI-assisted warfare is not a legal oversight. It is not a gap that opened accidentally when the technology moved faster than the law. It is a feature — engineered, maintained, and actively defended by the companies, commanders, engineers, and executives who built the system and who require the gap to remain open for the product to continue shipping and for them to remain immune from what it does.
This article is a legal map. It documents where the hooks are — the specific mechanisms in existing law that could reach the humans responsible for AI-assisted killing. It documents what blocks each one. It documents what has not yet been blocked and what can still be built. It does not argue that AI is inherently wrong, or that war is inherently criminal. It argues something more specific and more verifiable: that when a machine generates a targeting recommendation, when that recommendation is approved in twenty seconds by a human who cannot interrogate the algorithm's reasoning, and when the strike kills children in a school — there is a human being responsible for that outcome. The law already says so. The question is whether anyone is positioned and willing to act on it before the architecture makes the question irrelevant.
That architecture has a deadline. The National Geospatial-Intelligence Agency director stated publicly in September 2025 that by June 2026 — ten weeks from today — Maven will begin transmitting 100 percent machine-generated intelligence to combatant commanders. At that point the human reviewer will be approving targeting recommendations produced entirely by machine, without human analysis at any stage before the decision. The window for establishing legal precedent that could govern that system is not theoretical. It is ten weeks wide.
The Architecture of Impunity — Updated for AI
The Wartime Treasure series documented six structural conditions that have protected every major war crime, financial fraud, and intelligence operation in the modern American record: the Accountability Vacuum, the Classification Shield, Proxy Deniability, Alliance Laundering, Scale as Impunity, and the God Complex. Each of those conditions is now running through the AI targeting architecture simultaneously — and AI turbocharges every one.
📋 The Six Conditions — AI Edition
- The Accountability Vacuum: No functioning oversight of AI targeting exists. The inspectors general were fired in a single night. The CI-12 Iran counterintelligence unit was gutted days before Operation Epic Fury began. The attorneys who would investigate civilian casualties were reassigned to redact Epstein documents.
- The Classification Shield: The algorithm's reasoning is classified. The targeting parameters are classified. The error rate assessment is classified. You cannot challenge a decision you cannot see. The state secrets privilege established in United States v. Reynolds (1953) and the Totten doctrine (1876) mean that if the government classifies a targeting decision, no court can hear the evidence and no contractor can be sued for what they did under the contract.
- Proxy Deniability: The machine made the recommendation. The human countersigned in twenty seconds. Palantir built the platform. Anthropic supplied the model — under a contract it was simultaneously challenging in court. The DoD paid for all of it. The criminal act and its author are distributed across enough separate legal entities that no single prosecutor has jurisdiction over the full chain simultaneously.
- Alliance Laundering: NATO acquired Maven Smart System in March 2025 — the fastest procurement in NATO history. The targeting architecture now spans alliance jurisdictions. No single court can reach the full system. The NATO Status of Forces Agreement creates the same jurisdictional gap that Operation Gladio exploited across Western Europe for thirty years.
- Scale as Impunity: The ICC has delivered ten convictions in its entire history. Zero against Western military actors. Maven struck more than 11,000 targets in Iran. At a documented 60% accuracy rate — compared to 84% for human analysts — the error volume is staggering; the arithmetic is sketched just after this list. The caseload is the shield. The scale is not incidental. It is the design.
- The God Complex: "Maximum lethality, not tepid legality" is not rhetoric. It is stated Pentagon doctrine under Pete Hegseth. The Secretary of War ended his address to troops with a prayer. Over 200 service members filed formal complaints that commanders told them the Iran war was God's plan for Armageddon. Non-conscious agentic AI has no cortisol, no disgust, no moral intuition. It optimizes for the objective it is given. When that objective is maximum lethality, the result is ruthlessly efficient killing with zero felt responsibility — and the human leaders who set the objective remain nominally accountable while the AI absorbs the moral weight.
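That error volume is not an abstraction; it follows directly from the documented figures. A minimal sketch of the arithmetic, using the numbers above — the output is an expected-value estimate, not a documented casualty count:

```python
# Expected misidentification volume under the documented accuracy figures.
# Inputs are the figures cited in this article; outputs are expected values.

TARGETS_STRUCK = 11_000   # Maven targets struck in Iran
MACHINE_ACCURACY = 0.60   # documented Maven accuracy rate
HUMAN_ACCURACY = 0.84     # documented human-analyst accuracy rate

machine_errors = TARGETS_STRUCK * (1 - MACHINE_ACCURACY)   # 4,400
human_errors = TARGETS_STRUCK * (1 - HUMAN_ACCURACY)       # 1,760

print(f"Expected misidentified targets (machine): {machine_errors:,.0f}")
print(f"Expected misidentified targets (human baseline): {human_errors:,.0f}")
print(f"Excess expected errors attributable to the system: "
      f"{machine_errors - human_errors:,.0f}")              # 2,640
```

Roughly 4,400 expected misidentifications at machine accuracy, against roughly 1,760 at the human-analyst baseline. The gap between those two numbers is the measurable cost of the substitution.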
Remove one condition and the system might be vulnerable. All six running simultaneously, through AI, at machine speed — that is the architecture as it currently stands. But architecture is not destiny. Every one of these conditions has a legal counterpart. The question is whether the counterpart is still reachable.
The Chain of Custody — Every Link Documented
Before identifying the legal hooks, the chain of enablement must be established. Because the chain is the accountability record. Every node has a name. Every contract has a signatory. Every budget line has a sponsor. The algorithm cannot answer for itself. The people at each node can — if they are asked.
The chain runs: Pentagon policy → Palantir contract → Anthropic model → AWS classified cloud → 20,000 active military users → human reviewer → twenty seconds → approval → strike. At each link: a named company, a named executive, a signed contract, a budget line on a government document. The algorithm is the invisible center. The humans are at every node surrounding it.
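Because the chain is this concrete, it can be written down as a structure. A minimal illustrative sketch — the node roles follow the chain above, while the record fields are hypothetical placeholders, not actual contract identifiers:

```python
from dataclasses import dataclass

@dataclass
class ChainNode:
    """One link in the chain of enablement: a named accountable party
    and the paper trail that ties them to the system."""
    role: str               # function of this link in the chain
    accountable_party: str  # the human or entity that can be asked
    record: str             # hypothetical placeholder for the paper trail

# The chain as documented above. Record fields are illustrative only.
CHAIN = [
    ChainNode("policy", "Pentagon / DoD", "budget line"),
    ChainNode("platform", "Palantir", "signed contract"),
    ChainNode("model", "Anthropic", "signed contract"),
    ChainNode("infrastructure", "AWS classified cloud", "signed contract"),
    ChainNode("operation", "20,000 active military users", "access logs"),
    ChainNode("review", "human reviewer", "20-second approval record"),
]

# The point of the structure: every link resolves to an accountable party.
assert all(node.accountable_party for node in CHAIN)
```

The structure has no node for the algorithm itself — which is the point. Every entry that can be subpoenaed names a person or an entity.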
What the chain also shows is something the Pentagon's own budget briefing stated plainly on war.gov in February 2020: "Regardless of who is in charge, you would want to have U.S. competitiveness in those areas." The architecture was explicitly designed to outlast administrations. To outlast elections. To outlast accountability mechanisms. It said so in public, on the official government website, six years before the war it was built for began.
That control was demonstrated in public. At Palantir's AIPCon 9 conference in March 2026, Pentagon CDAO official Cameron Stanley walked an audience through a live Maven targeting approval: "left click, right click, left click." That demonstration — made publicly, at a corporate conference, by the DoD's own AI official — is the most significant single piece of evidence in any future accountability proceeding about AI targeting. It is the Pentagon's own description of meaningful human control. Three mouse clicks. That is the firewall between algorithmic recommendation and lethal outcome. Three clicks. That is what "human in the loop" means in operational practice.
Where the Law Actually Is
The accountability gap is not the absence of law. It is the capture of the institutions that would apply it. Several bodies of law bear directly on AI-assisted targeting and the humans responsible for it. Each has viable legal hooks. Each also has a wall.
Command Responsibility — The Most Viable Existing Hook
International Humanitarian Law does not require proof that a commander personally ordered each strike. It requires something easier to prove and harder to escape: that the commander knew, or should have known, that a system under their control was likely to commit violations — and failed to take reasonable steps to prevent them.
This is the command responsibility doctrine, and it is the strongest existing legal mechanism for AI targeting accountability. Deploying an AI targeting system with a documented 60% accuracy rate — compared to 84% for human analysts — when the system is generating hundreds of targets per day, when human review averages twenty seconds per target, and when the system has already been associated with a strike that killed more than 165 people in a school: that is not an unforeseen error. That is foreseeable harm from a system with known deficiencies, deployed at known scale, by commanders who were told what the system could and could not do.
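The foreseeability claim is arithmetic, not rhetoric. A minimal sketch under stated assumptions — "hundreds of targets per day" is taken here as 300, an illustrative stand-in rather than a documented count:

```python
# Foreseeable-harm arithmetic for the command responsibility argument.
# ASSUMPTION: 300 targets/day stands in for "hundreds per day"; the true
# daily volume is not public. Accuracy and review-time figures are from
# the documented record cited above.

DAILY_TARGETS = 300      # illustrative stand-in for "hundreds per day"
MACHINE_ACCURACY = 0.60  # documented system accuracy rate
REVIEW_SECONDS = 20      # documented average human review time per target

expected_daily_errors = DAILY_TARGETS * (1 - MACHINE_ACCURACY)  # 120
daily_review_minutes = DAILY_TARGETS * REVIEW_SECONDS / 60      # 100

print(f"Expected erroneous recommendations per day: {expected_daily_errors:.0f}")
print(f"Total human review attention per day: {daily_review_minutes:.0f} minutes")
```

On those assumptions: roughly 120 expected erroneous recommendations every day, filtered through a combined 100 minutes of human attention. That is the known deficiency, at known scale, that the doctrine asks whether a commander should have foreseen.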
The wall: command responsibility requires a functioning prosecutorial institution willing to apply it. The DOJ is currently run by a former lobbyist for Qatar whose first act in office was to curtail FARA enforcement. The military justice system investigates itself. The congressional oversight mechanism requires a majority willing to use it. None of those conditions currently exist domestically. Which is why the viable pathways run international.
The ICC — Jurisdiction Without Consent
Israel is not a party to the Rome Statute. Neither is the United States. This is the first wall people encounter when the ICC is mentioned in the context of Gaza or Iran — and it is real but incomplete. Because Palestine became a State Party in 2015, and the ICC confirmed in 2021 that its territorial jurisdiction extends to Gaza and the West Bank. The Court does not need Israel's consent to investigate crimes committed on Palestinian territory. It already has jurisdiction. It already issued arrest warrants — for Benjamin Netanyahu and former Defense Minister Yoav Gallant — in November 2024.
What the Rome Statute cannot currently do is reach corporations. Article 25 limits the Court's jurisdiction to natural persons, and customary international criminal law has never clearly extended criminal liability to corporate entities. Palantir cannot be indicted. Anthropic cannot be indicted. Microsoft — which hosted millions of intercepted Palestinian phone calls used to generate bombing targets, saw its Azure machine-learning tool use increase 64-fold in the first six months of the Gaza war, and eventually admitted misuse of its technology — cannot be indicted as a corporate entity under the current framework.
But individual criminal responsibility is a different matter. Corporate executives and military officials could face individual prosecution under existing ICC frameworks. The AI targeting systems used in Gaza — Lavender, Gospel, Where's Daddy — generated target lists for strikes that killed civilians at a documented rate that multiple UN Special Rapporteurs have described as consistent with war crimes. The chain of enablement runs from those systems to the executives who built them, the companies that contracted them, and the officials who authorized their deployment. The ICC's complementarity principle — which bars it from acting if a national legal system is genuinely investigating — does not protect actors whose national systems have been captured by the very people being investigated.
The Complementarity Problem
Article 17 of the Rome Statute holds that the ICC may only prosecute when a national legal system is "unwilling or unable genuinely to carry out the investigation." In practice this means: if the country whose forces committed the alleged crimes investigates itself and finds itself innocent, the ICC cannot intervene. The United States has never found its own forces guilty of systematic war crimes. But "unwilling or unable" has a second prong — and a justice system that fires its counterintelligence investigators, purges its inspectors general, and reassigns its national security attorneys to redact documents about a sex trafficking network, while an active war produces civilian casualties at scale, may meet the threshold for genuine inability that the complementarity principle was designed to address. That argument has not yet been made in court. It should be.
Universal Jurisdiction — The European Pathway
National courts in Belgium, Germany, Spain, and several other countries can exercise universal jurisdiction over war crimes regardless of where they occurred or the nationality of the perpetrator. This is not theoretical. Hissène Habré, former dictator of Chad, was convicted in Senegal under this principle. Augusto Pinochet was arrested in London under a Spanish warrant. The mechanism exists and has been used against heads of state.
Applied to AI targeting: the executives of companies that provided cloud infrastructure, AI models, and targeting software for operations that killed civilians could be subject to prosecution in universal jurisdiction countries when they travel. Microsoft's Azure hosted the surveillance data. Google, Amazon, and Microsoft provided cloud and AI infrastructure. Palantir built and operates Maven. Anthropic supplied the model. The executives of those companies travel internationally. Universal jurisdiction courts do not require their home country's cooperation to issue a warrant.
The wall here is political rather than legal. Universal jurisdiction prosecutions of actors connected to US military operations would create an unprecedented diplomatic rupture with Washington. European governments that depend on the NATO alliance have strong incentives not to initiate proceedings. But the legal pathway exists — and its existence changes the calculation for corporate executives in ways that domestic immunity does not. You cannot pardon someone out of a Belgian war crimes prosecution.
Civil Litigation — Untested but Structurally Sound
The Rome Statute's corporate liability gap does not extend to civil courts. Technology companies enabling harm through their infrastructure can be sued in the countries where they are incorporated, under tort law, human rights law, and, in some jurisdictions, specific AI liability frameworks. The EU AI Act — now in phased implementation, with obligations for general-purpose AI models in effect since August 2025 — imposes binding transparency and risk-management duties on providers of AI systems. The revised EU Product Liability Directive, which member states must transpose by December 2026, goes further: it explicitly treats software and AI systems as products subject to strict liability if found defective.
An AI targeting system with a documented 60% accuracy rate, deployed in a context where errors result in civilian deaths, with a human review process of twenty seconds per target, generating hundreds of recommendations daily: this is a defective product under any reasonable product liability framework. The companies that built it, contracted it, and deployed it are the manufacturers. The question is jurisdiction — and here the EU framework is more advanced than anything in US law.
Air Canada unsuccessfully argued in 2024 that its AI chatbot bore separate liability for incorrect information it provided to a passenger — the British Columbia Civil Resolution Tribunal held the company responsible regardless of whether the AI acted autonomously. The principle established: the human deploying AI owns the consequences. That principle, applied to AI targeting systems, runs directly to Palantir, to Anthropic through its Palantir partnership, and to the DoD officials who approved the deployment parameters.
The Glasswing Question
On April 7, 2026 — five days ago — Anthropic announced Project Glasswing: the controlled release of Claude Mythos Preview, its most capable AI model, to a consortium of twelve major technology companies including Amazon Web Services, Apple, Microsoft, Google, Cisco, CrowdStrike, NVIDIA, and JPMorganChase, plus more than forty additional organizations. The stated purpose is defensive cybersecurity — finding and fixing vulnerabilities in critical software before adversaries can exploit them.
The timing deserves examination, not as accusation but as a structural observation. Project Glasswing was announced while Anthropic's lawsuit against the Pentagon is still active. It was announced while Claude is still inside Maven, still processing targeting data in an active war. It was announced while the legal question of who is responsible for what the model has already done remains entirely unresolved. And it creates, for the first time, a formal governance structure — shared access agreements, usage constraints, coordinated disclosure protocols — between the same companies that have been embedded in the military targeting infrastructure whose accountability is in question.
Legal scholars who study corporate liability structures recognize this pattern. When companies that are potential co-defendants or co-witnesses in future proceedings share a governance framework — with shared information environments, shared access agreements, and coordinated disclosure obligations — that structure can, intentionally or not, create the same kind of accountability diffusion that made the BCCI prosecutions so difficult. The Bank of Credit and Commerce International collapsed in 1991, revealing that the CIA, Saudi intelligence, Pakistani ISI, and international arms dealers had been sharing the same criminal financial infrastructure simultaneously. The Senate investigation produced an 800-page report. Key operators received light sentences. The successor structures were never fully mapped. The consortium structure was not the crime. It was the architecture that made the crime difficult to prosecute.
Project Glasswing may be exactly what it says it is — a genuine defensive initiative by a company that drew real red lines and paid a real price for them. The structural observation stands regardless of intent: when the most powerful AI companies on earth share a governance framework under the umbrella of the company that was punished for having values, while that company's model is still generating targeting recommendations in an active war, and while no legal framework governs any of it — that structure warrants scrutiny from anyone building the accountability case.
What the Blockade Adds — Today
The naval blockade announced this morning — effective Monday, April 13 at 10am ET — is being enforced by US Central Command: the same CENTCOM that is running Maven, the same command structure that struck 11,000 targets in Iran, the same operational architecture that the Pentagon's own AI officer described as three mouse clicks. A naval blockade that cuts off civilian access to food and medicine imports for 89 million people raises a legal question distinct from the targeting strikes — but it is being decided and executed through the same AI-assisted command infrastructure.
Under international law, a naval blockade can be legal if it is declared, effective, and not designed to deprive civilians of objects indispensable to survival. The CENTCOM statement that the blockade "will be enforced impartially against vessels of all nations entering or departing Iranian ports" — including, by implication, vessels carrying food, medicine, and humanitarian aid — raises questions under Protocol I Additional to the Geneva Conventions that have not yet been publicly examined. Luis Moreno Ocampo, the ICC's first chief prosecutor, has already concluded that the boat strikes in the Caribbean under Operation Southern Spear likely constitute crimes against humanity. The same analytical framework applies to a blockade that, if maintained, will affect civilian access to essential goods in a country of 89 million people.
The point is not that the blockade is definitely illegal. The point is that the decision to implement it ran through the same AI-assisted command structure that has been generating targeting recommendations, that the humans who authorized it cannot claim ignorance of IHL, and that the accountability gap that protects those humans from the consequences of the targeting strikes equally protects them from the consequences of the blockade. The machine runs on multiple fronts simultaneously. So does the gap.
The Legislation That Hasn't Passed
The Daniel Ellsberg Press Freedom and Whistleblower Protection Act, introduced by Rep. Rashida Tlaib, would reform the Espionage Act to create a public interest defense — allowing defendants to argue that disclosure served the public interest in cases involving war crimes, torture, mass surveillance, or other abuses of power. This matters directly for AI targeting accountability because the people most positioned to document what the system actually does — the analysts who approve targets, the engineers who built the accuracy assessments, the officers who filed the 200+ Armageddon complaints — currently have no legal protection for disclosing what they know. The Espionage Act does not permit a public interest defense. A person charged under it cannot tell the jury why they did it or whether the disclosure served the public good. The law does not care. The Ellsberg Act would change that. It has not passed.
The Algorithmic Accountability Act of 2025, introduced in the Senate in June 2025, would require FTC-mandated impact assessments for algorithms used in consequential decision-making. It does not explicitly address military targeting systems — but the framework it creates, if extended, would require exactly the kind of transparency that makes post-incident legal review possible. You cannot evaluate a targeting decision you cannot reconstruct. Mandatory algorithmic logging — "black box" algorithms required to maintain interpretable records of their decision-making — would provide the evidentiary foundation that no current legal proceeding has access to.
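What would such a record need to contain for a targeting decision to be reconstructable? A minimal illustrative schema — the field names are hypothetical, drawn neither from the bill's text nor from any actual system:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionRecord:
    """Hypothetical minimum log entry for an interpretable algorithmic
    decision record. Field names are illustrative, not statutory."""
    timestamp: datetime      # when the recommendation was generated
    model_version: str       # exact model and weights identifier
    input_digest: str        # hash of the intelligence inputs used
    recommendation: str      # what the system proposed
    confidence: float        # the system's own confidence score
    key_factors: list[str]   # interpretable basis for the output
    reviewer_id: str         # the human who approved or rejected
    review_duration_s: float # how long the human actually looked
    overridden: bool = False # did the human change the outcome?
```

A record like this is what makes the later questions answerable: how long did the reviewer have, what did they see, and did they ever say no. Without it, post-incident review reconstructs nothing.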
The EU AI Act is the most advanced existing framework. General-purpose AI model obligations took effect in August 2025. The Act's risk-based approach — requiring higher scrutiny for AI systems used in contexts where errors can cause serious harm — maps directly onto targeting systems, even if military applications are currently carved out in ways that will require legal challenge to address.
The 200 Armageddon Complaints — A Living Evidentiary Record
More than 200 active and reserve military service members filed formal complaints in early 2026 — through Inspector General channels, congressional liaisons, and JAG offices — raising concerns about the legality of orders, rules of engagement, and targeting procedures in Operation Epic Fury. Those complaints are classified. The complainants are known to their chains of command. Several have been reassigned.
This is not a failed accountability mechanism. This is a living evidentiary record. Those complaints document, from inside the targeting system, the concerns of the people closest to it about whether what they were being asked to do was legal. They cannot currently be published. They cannot currently be used in court. But they exist. They are in a filing system. They have dates and signatures and specific factual claims about specific operations. When the accountability mechanism eventually exists — when a universal jurisdiction court issues a warrant, when an ICC investigation opens, when a congressional majority changes — those complaints are the foundation of the evidentiary record that makes prosecution possible.
The lesson of the Wartime Treasure series is that accountability does not always arrive in the generation that committed the act. Klaus Barbie was convicted in 1987 for crimes committed in 1943. The BCCI investigation that produced the Kerry Committee report in 1992 drew on records that had been accumulating for a decade. The Lisette Model jazz photographs were published in 2025, 66 years after the FBI buried them. The record survives. The machine depends on the assumption that nobody is keeping count. We are keeping count.
What Can Actually Be Done — Now
The Actionable Map
- Document the chain before it's classified: Every contract, every congressional testimony, every public statement by a Pentagon official about Maven's capabilities, every corporate earnings call that mentions DoD AI revenue — this is the evidentiary record. Organizations like +972 Magazine, Bellingcat, the Electronic Frontier Foundation, and Access Now are building it. Supporting and amplifying their work is not supplementary to accountability. It is the precondition for it.
- Use command responsibility — it already exists: IHL does not need to be rewritten to address AI-assisted war crimes. Command responsibility doctrine already holds commanders liable for systems they deploy with foreseeable flaws. The ICC investigation of Gaza targeting, including the role of AI systems, is legally viable under existing jurisdiction. The barrier is political will, not legal framework. Pushing governments to invoke it — particularly European governments with universal jurisdiction capacity — is the most direct available lever.
- Civil litigation in home jurisdictions: Technology companies enabling harm through their infrastructure can be sued in the countries where they are incorporated. The EU AI Act and EU Product Liability Directive provide frameworks that US law does not. Victims' families in Gaza have already pursued civil claims. The legal pathway expands as the evidentiary record grows.
- Push the Ellsberg Act and algorithmic transparency mandates: A public interest defense for whistleblowers inside the targeting system changes the calculation for the 200+ people who filed Armageddon complaints. Mandatory algorithmic logging makes post-incident legal review possible. These are legislative interventions that do not require capturing the institutions that have already been captured — they require only a congressional majority, which changes.
- Name the Glasswing structure as a question requiring public answer: Not as conspiracy — as accountability architecture. The question of whether a shared governance framework between potential co-witnesses in future proceedings serves or undermines the accountability record is a legitimate public interest question. It should be asked openly, by journalists, by legal scholars, and by the advocacy organizations that will eventually bring the cases.
- Understand what "human in the loop" actually means — and demand better: When governments and corporations claim AI systems have human oversight, ask specifically: how much time does that human have? What information do they have access to? What is the documented accuracy rate of the system they're reviewing? Are they overriding it in practice, or countersigning it? (A minimal way to measure the difference from decision logs is sketched after this list.) "Left click, right click, left click" is on the record. Hold it there.
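Given decision records like the schema sketched earlier, the overriding-versus-countersigning question reduces to two numbers. A minimal sketch, assuming log entries carry an override flag and a review duration — hypothetical fields, as above:

```python
from statistics import median

def oversight_metrics(records):
    """Two numbers that distinguish oversight from countersigning.
    `records`: iterable of objects with `overridden` (bool) and
    `review_duration_s` (float) attributes — hypothetical fields."""
    records = list(records)
    override_rate = sum(r.overridden for r in records) / len(records)
    median_review_s = median(r.review_duration_s for r in records)
    return override_rate, median_review_s
```

An override rate near zero combined with a median review time of around twenty seconds is countersigning, whatever the policy documents call it.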
The Window
June 2026 is ten weeks away. That is when the NGA has publicly stated Maven will begin transmitting 100 percent machine-generated intelligence to combatant commanders. At that point the human reviewer will be the last node in a chain that produces no human analysis at any prior stage. The "human in the loop" will be reviewing, at whatever pace the operational tempo demands, a targeting picture produced entirely by machine — with twenty seconds or less to decide.
The accountability gap does not widen automatically. It widens because each non-consequence teaches the next actor what the actual limits are. Iraq WMD required 76 minutes of Colin Powell at the UN. The Iran war required a Truth Social post. Each time the pretext gets thinner, it means the previous use of the architecture produced no consequence sufficient to raise the cost the next time. The AI targeting system being normalized in Iran right now — three clicks, 11,000 targets, a girls' school under investigation — is being normalized for the next conflict. And the next one. And for every government that purchases the export version of the system from the companies that fine-tuned it in Gaza.
Gaza was the proof of concept. Iran is the scaled deployment. The export is the business model. Thirty countries are already in conversations about acquiring AI targeting systems modeled on what was built and tested in these two theaters. The accountability gap that protects the architects of this system from consequences for what it has already done is the same gap that will protect the architects of every version of it that follows — unless the record is complete enough, the legal arguments are clear enough, and enough people understand what they are looking at to demand otherwise.
The machine will not explain itself. The law already names the humans responsible. The chain of custody is documented. The evidentiary record is accumulating. The legal pathways exist. What is missing is not law. What is missing is not evidence. What is missing is the institutional will to use both — and that is the one thing that an informed public can still produce.
We can explain the machine. We have been. This is what it looks like when you see it whole.
Conclusion
The accountability gap in AI targeting does not exist in isolation. It sits on top of a deeper structural absence documented in the legislative record for 24 years: the United States has no federal crime of domestic terrorism. "Domestic terrorism" under 18 U.S.C. § 2331(5) is a definitional tool, not a charge — a political label that has never produced a single federal prosecution in the history of its use. The same statutory vagueness that was supposed to protect civil liberties has become the architecture through which accountability selectively operates: applied as a rhetorical weapon against protesters, journalists, and whistleblowers, but structurally unavailable against the executive conduct it was designed to describe. Before asking where AI targeting accountability lives in the law, ask where the law itself lives. The answer is the same place: a definition without a charge, a weapon without a wielder, a word without a conviction.
This article draws on the full Wartime Treasure investigative series (ten articles, March 2026), the Captured Tech series (six articles, March–April 2026), and real-time reporting on Operation Epic Fury, the US-Iran war, and the naval blockade announced April 12, 2026. Legal analysis draws on the Rome Statute, IHL command responsibility doctrine, the EU AI Act, the EU Product Liability Directive, United States v. Reynolds (1953), the Totten doctrine (1876), the NATO SOFA, and the Lieber Institute at West Point's March 2026 analysis of legal accountability for AI-driven autonomous weapons. All factual claims are sourced to primary record. The June 2026 Maven machine-intelligence date is a documented program schedule stated publicly by the NGA director in September 2025. The "left click, right click, left click" quotation is from Cameron Stanley, Pentagon CDAO, at Palantir AIPCon 9, March 2026, as reported by Business Insider.