"At some level, you have to trust your military to do the right thing."
— Emil Michael, Pentagon Chief Technology Officer, March 2026
This is not an opinion piece about whether AI should be used in war. That debate is over. AI is being used in war, right now, in an active conflict, to identify targets at a speed and scale no human process could match. The question that remains — the one nobody in power wants to answer plainly — is what that actually means, who built it, how it works, what it has already done, and where it is going. This article documents what is publicly known. The public record is sufficient. No speculation required.
I — The Triangle: Three Parties. No Clean Hands.
The Anthropic designation created a triangle that has no good corner. Understanding the story requires holding all three points simultaneously without letting any one of them absorb the others.
The Government publicly designated Anthropic a national security threat on February 27. Secretary Hegseth tweeted it. Trump posted it on Truth Social. Contractors began compliance procedures. The market moved. Then on February 28 — one day later — the US military launched Operation Epic Fury against Iran, using Claude, embedded inside the Palantir Maven Smart System, to help identify and prioritize over 1,000 targets in the first 24 hours. The arm that was publicly amputated was still attached, still operational, still selecting targets. CBS News confirmed it from two sources. The Wall Street Journal confirmed it. The Washington Post confirmed it. The Pentagon did not deny it. Its chief technology officer said publicly that Claude couldn't be phased out until a replacement was found — and that finding one would take at least three months.
Anthropic and Dario Amodei drew two red lines: no fully autonomous weapons, no mass domestic surveillance of Americans. Principled. Documented. Admirable in stated intent. But as Pulitzer Prize-winning national security journalist Spencer Ackerman observed, the time to worry about those lines was before signing the contract. Claude was already embedded in classified military networks through Palantir. Already integrated into targeting workflows. Already part of the infrastructure of a war that had not yet started when the contract was signed. The red lines Dario drew did not prevent Claude from being used to identify targets in Iran. They defined which specific extensions of that use he would object to — while the foundational military integration continued. The result is a company that is simultaneously challenging its own designation in court and providing the targeting intelligence for an active war. Both things are true at once.
The war itself — which could not wait, which was launched one day after the designation, which required the system that had just been banned — is the third corner. Operation Epic Fury struck over 1,000 targets in 24 hours. That number is not compatible with human targeting processes at any scale. That system exists. It is called Maven. Claude is inside it. The war is being fought with it right now.
II — What It Is. How It Works. Who Built It.
Project Maven was launched by the Pentagon in April 2017, originally focused on applying computer vision to analyze surveillance video from drones for terrorist targets. It has since expanded into something categorically different — a full-spectrum AI warfighting platform that is now the connective tissue between every sensor, every data source, and every targeting decision across the US military.
The Maven Smart System, operated by Palantir Technologies, fuses satellite imagery, drone feeds, radar data, communications metadata, geolocation data, and signals intelligence into a single unified interface. Machine learning identifies points of interest — vehicles, infrastructure, patterns of movement, individuals — and presents ranked targeting options to human analysts. The system has been deployed across all branches of the US military. In May 2025, the Pentagon raised Maven's contract ceiling to $1.3 billion through 2029, up from $480 million the prior year. In March 2025, NATO acquired Maven Smart System for Allied Command Operations — one of the fastest procurement processes in NATO history, taking six months from requirement to acquisition.
Embedded within Maven is Claude. Through Anthropic's partnership with Palantir — formalized in late 2024, giving Claude access to DoD Impact Level 5 classified networks through Amazon Web Services — Claude analyzes intelligence inputs, ranks targets by strategic priority, and supports the decision-making process at the speed the system now requires. The National Geospatial-Intelligence Agency director stated publicly in September 2025 that by June 2026, Maven will begin transmitting "100 percent machine-generated" intelligence to combatant commanders. That date is three months away.
🔧 What Maven Actually Does
Maven is not a single tool. It is an architecture — a data integration and AI analysis platform that acts as the nervous system of modern US military targeting. It ingests data from every available sensor simultaneously. It standardizes incompatible data formats through an ontology layer so information from classified and unclassified sources can be fused and shared across cloud and edge systems in real time. It presents the output on a unified interface — yellow boxes for potential targets, blue for friendly forces and no-strike zones. It can transmit a human decision to fire directly to weapons systems. The "human in the loop" is the person who looks at the screen and approves what the machine has already decided to recommend.
Google built the original Maven object-recognition AI. In 2018, more than 4,000 Google employees signed a petition. At least a dozen walked out. Google declined to renew. The contract expired in March 2019. The Pentagon didn't have to look far for a replacement — it found one immediately, in a company that had already built ICE and CBP's surveillance networks and software for police that circumvents the warrant process. Palantir took over. Internally, they called the project Tron.
Here is what the 4,000 signatures actually produced: the work didn't slow down. It accelerated. Palantir had no ethical objections. No petitions. No resignations. Alex Karp, Palantir's CEO, believes it is big tech's patriotic duty to do whatever the US government requires. Peter Thiel, its founder, called Google's withdrawal treasonous and demanded the CIA investigate. The conscience that left the room was replaced by capital that had none. And the Pentagon — freed from managing an uncomfortable internal debate at a contractor — certified Palantir as the sole source, meaning no other company was even considered capable of meeting its requirements.
II.I — The Fingerprint: Who Is Using It. Right Now.
As of late 2025, Maven Smart System has more than 20,000 active users — intelligence analysts, targeting officers, and commanders — operating across 35 distinct military software tools in three security classification domains: unclassified, secret, and top secret. These are not passive license holders. They are personnel whose targeting workflows run through Maven daily: ingesting AI-ranked target lists, approving strikes, coordinating with weapons systems. The NGA director confirmed in September 2025 that the user base had more than doubled since January of that year — meaning roughly 10,000 people were using it at the start of 2025, and the number doubled in under nine months. It has continued growing since.
That growth did not happen automatically. It happened because a deliberate brake was released.
Under Biden-era Pentagon appointees, Maven's expansion was actively slowed. Officials within the Chief Digital and AI Office applied safety and reliability reviews that delayed deployment of new capabilities and restricted access expansion. They were not opposed to military AI — they were applying the kind of institutional caution that the system's own architects had built in as a political buffer. When the Trump administration took office in January 2025, those appointees were replaced. The reviews stopped. The brake came off. The user base doubled. The contract ceiling jumped from $480 million to $1.3 billion. NATO acquired the system in six months. And by February 2026, Maven was running Claude inside a classified AWS environment, selecting targets in an active war — operating, as designed, regardless of who was in charge.
The pattern repeats across the industry. Amazon — whose AWS cloud infrastructure is the platform Maven runs on — laid off 14,000 employees in October 2025, explicitly attributing the cuts to AI. CEO Andy Jassy had told staff in June: "we will need fewer people doing some of the jobs that are being done today." The same Amazon invested $8 billion in Anthropic. The same AWS is Anthropic's primary cloud provider. The same infrastructure carries the targeting data. In 2024 and 2025, Amazon, Meta, Microsoft, and Google collectively eliminated tens of thousands of positions while committing hundreds of billions to AI infrastructure. The workers documenting their own workflows — writing internal guides, training colleagues, optimizing processes — were packaging themselves for deletion. One senior Amazon engineer described it plainly: "I literally trained the AI that made me redundant." The Google engineers who refused to build Maven were the first to understand what came next. They were right about the problem. They were wrong that leaving would stop it.
In February 2020, the Pentagon held a budget briefing — archived on war.gov — in which Acting Comptroller Elaine McCusker was asked directly: what makes this irreversible if a Democrat wins the 2020 election? Her answer: "regardless of who is in charge, you would want to have U.S. competitiveness in those areas." The architecture was explicitly designed to outlast elections. It was stated policy. The same briefing described JADC2 — the Joint All-Domain Command and Control system that Maven now powers — as connecting "any sensor to any shooter in any domain at any time." That was 2020. It is now 2026. The sensor-to-shooter connection is live. The war is active. The timeline ran exactly on schedule.
III — Gaza to Iran: The Precedent Was Already Set.
The architecture being used in Iran was not designed for Iran. It was tested, refined, and validated in Gaza. Understanding what happened there is essential to understanding what is happening now — because the lessons learned in Gaza, including the lessons about what the system would tolerate and what it would produce, are embedded in the system being used today.
Israel's Lavender system, built for the Gaza campaign, used AI to generate a database of over 37,000 Palestinian men flagged as potential Hamas or Islamic Jihad operatives. Its error rate was approximately 10 percent — meaning thousands of civilians were misidentified as militants. A companion system called "Where's Daddy" tracked those targets and alerted the military when they returned home to their families, so they could be bombed there rather than during military activity, because family homes were easier to locate through automated surveillance.
The human review that was supposed to keep humans "in the loop": according to six anonymous Israeli intelligence officers who spoke to +972 Magazine, analysts spent an estimated 20 seconds reviewing each AI-generated target. Often that review consisted of checking whether the name belonged to a man. One source said they personally authorized the bombing of "hundreds" of private homes of alleged low-ranking operatives, with many attacks killing civilians and entire families. The army determined that for every junior Hamas operative Lavender marked, it was permissible to kill up to 15 or 20 civilians. For a senior commander, the permissible collateral number exceeded 100.
That sentence is the operational reality of AI-assisted targeting at scale. The machine's coldness is a feature, not a failure. It removes the friction. It eliminates the hesitation. It makes the decision process faster and the decision-maker less personally implicated in the outcome. The screen between the human and the consequence is not a safety mechanism. It is a psychological distance mechanism. And it works.
Targets and munitions are not the same number. Maven selects targets. Bombs are what gets sent to them — often multiple munitions per location. The Israeli Air Force dropped 5,000 bombs on Iran in the first six days. Among the weapons Bellingcat documented through open-source analysis of IAF photos: a 2,000-pound JDAM-guided bomb bearing a red band — incendiary marking — consistent with the BLU-119/B CrashPAD, a white phosphorus and high-explosive weapon designed to destroy chemical or biological weapons stockpiles. Never previously seen in Israeli service. No disclosed US transfer through normal arms sale channels. The targeting intelligence that placed those munitions across those locations ran through Maven. Maven ran through Claude. The designation letter arrived on day six. Google Nimbus, Microsoft Azure, Palantir Gotham — all present in Gaza, none with public red lines. They complied. Got paid. Gaza became the template.
The Six Million Dollar Man, 1970s. The bionic arm as neighbor. As roadside assistance. As benign superpower in service of ordinary people. We have the technology. That was the promise.
The United Nations: no place on earth has more child amputees than Gaza. Among them is Noor Faraj, who lost both her legs when her home was hit. Al Jazeera English, September 19, 2025.
IV — What the Interface Does to Moral Weight.
The 1,000-targets-in-24-hours number requires examination not just as a military statistic but as a human one. No targeting process in history has operated at that speed at that scale. Operation Iraqi Freedom — with 2,000 personnel working the targeting process — could not have approached that pace. Maven, with 20 soldiers, can. The math is not compatible with meaningful human deliberation about individual targets. At 1,000 targets in 24 hours, a human analyst reviewing each decision has approximately 86 seconds per target — and that assumes the analyst does nothing else for 24 consecutive hours.
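The arithmetic behind that 86-second figure is simple enough to check. A back-of-envelope sketch in Python, using only numbers already cited in this article (1,000 targets, 24 hours, and the 20-second review pace reported in Gaza):

```python
# Back-of-envelope check of the review-time arithmetic.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds in 24 hours
targets = 1_000                  # targets struck in the first 24 hours

# Best case: a single analyst reviewing non-stop for 24 straight hours.
seconds_per_target = SECONDS_PER_DAY / targets
print(f"{seconds_per_target:.1f} seconds per target")  # 86.4

# At the ~20-second review pace reported in Gaza, the entire list
# clears in well under a single working shift.
hours_at_20s = targets * 20 / 3600
print(f"{hours_at_20s:.1f} hours to review all 1,000 targets")  # 5.6
```

Note the direction of the gap: 86 seconds is the theoretical ceiling for one uninterrupted analyst, not the observed pace. The documented pace is roughly a quarter of that.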
The interface matters. When the distance between a human being and a lethal decision is a screen, a cursor, and an approve button, the psychological weight of that decision changes. This is documented. It is why drone operators stationed in Nevada experience PTSD at rates comparable to combat troops who were physically present in the theater of operations. The screen does not protect the operator from the consequences of the decision. It removes the friction that might have produced hesitation before the decision was made.
Maven is designed specifically to reduce that friction. Its explicit purpose — stated in Pentagon press releases and Palantir investor materials — is to decrease targeting workflow timelines. One unit reported its intelligence-to-target-engagement timeline dropped from hours to minutes. Army leaders stated publicly they were "trying to leverage Maven to meet a new vision for units to make a thousand high-quality decisions — choosing and dismissing targets — in one hour." One thousand decisions per hour. That is not deliberation. That is a video game. And the consequence of each decision is a munition striking a location where human beings are present.
- US military still using Claude for Iran targeting despite ban — CBS News / WSJ / Washington Post
- IRGC sends mass text to Israeli civilians: "Last US radar destroyed. Your leaders are lying. Missiles are on their way. No shelter is safe." — IBTimes UK
- IRGC launches 10th wave of Operation True Promise 4 — Khorramshahr-4 heavy missiles targeting Ben Gurion Airport, Tel Aviv, Haifa — GlobalSecurity / IRGC
- Israelis fleeing to mountains; those who can't afford planes leave on foot — Social media, unverified
- Video confirms US Tomahawk strike next to girls' school in Minab, Iran — Bellingcat
- Oil surges past $90 as Strait of Hormuz shipping collapses — World Oil
- 200+ military complaints: commanders telling troops Iran war is "God's plan" for Armageddon — complaint filed by NCO on behalf of 15 troops including 11 Christians, 1 Muslim, 1 Jewish — Jonathan Larsen / The Guardian / MRFF
- Pentagon formally notifies Anthropic of supply chain risk designation — CNBC
- Defense tech companies dropping Claude; Palantir declines comment — CNBC
- Iran escalates drone and missile attacks on Kurdistan Region of Iraq — FDD Long War Journal
- US tests unarmed Minuteman III ICBM over Pacific Ocean — UK Defence Journal
- Claude central to Palantir Maven Smart System, provides real-time targeting in Iran — Responsible Statecraft
- Anthropic only American company ever publicly named supply chain risk — designation traditionally used against foreign adversaries — CNBC
- Dario Amodei memo: administration dislikes Anthropic because it hasn't donated or offered "dictator-style praise" — The Information
- Pentagon: replacing Claude capabilities could take 3+ months; OpenAI and xAI stepping in — Defense One
- B-21 Raider stealth bomber production accelerating — UK Defence Journal
- US Navy launches multinational Arctic submarine operation — Defence Blog
- Maven Smart System contract ceiling raised to $1.3B through 2029; NGA targets 100% machine-generated intel by June 2026 — SpaceNews / Wikipedia
- NATO acquires Maven Smart System — fastest procurement in NATO history — SHAPE / Breaking Defense
- Israel's Lavender AI: 37,000 targets, 10% error rate, 20-second human review, 15-20 civilian deaths permissible per low-ranking operative — +972 Magazine
- Google withdrew from Maven in 2018 after employee revolt; Palantir continued the work — Wikipedia
- Palantir-Anduril consortium formed to merge Maven with Lattice system for US AI military dominance — Executive Gov
V — The Red Line Changed the Vendor. Not the Outcome.
Dario Amodei's stated position — that Anthropic would not allow Claude to be used for fully autonomous weapons or mass domestic surveillance — produced a public confrontation with the Pentagon that resulted in the designation, the ban, and the ongoing legal challenge. It also produced an immediate response from the market: OpenAI and xAI moved within days to fill the classified AI capacity that Anthropic's position had made uncertain.
This is the structural problem with individual company red lines in a system that has decided AI military integration is non-negotiable. When one company draws a line, the line doesn't stop the system. It changes the vendor. The targeting continues. The war continues. The 1,000-targets-in-24-hours pace continues. The company that drew the line is still inside the system through its existing contracts while the replacement is sourced. And the replacement, by definition, will be a company that drew the line in a different place — or didn't draw one at all.
The Ackerman observation cuts both ways. Before signing the original contract, Anthropic chose to enter a classified military system. That choice was made with knowledge of what the system does. The red lines drawn afterward define the boundaries of what Anthropic would personally sanction — while the broader system those red lines sit inside continues operating. Claude is inside Maven. Maven is selecting targets in Iran. The red lines did not prevent that. They defined what Dario would say publicly while it happened.
📋 The Chain of Custody: How Claude Got Into a War Zone
- April 2017: Pentagon launches Project Maven — officially the Algorithmic Warfare Cross-Function Team. Objective stated by Lt. Gen. Jack Shanahan: "turn the enormous volume of data available to DoD into actionable intelligence and insights." Described from the outset as a "fast-moving effort." Hosted on war.gov. The domain is not a coincidence.
- October 2017: Project Maven Industry Day hosts 300+ representatives from industry and academia. First algorithms delivered to warfighting systems by end of year. Phase II announced: expand scope, turn data into "actionable intelligence and decision-quality insights at speed." Google builds it.
- February 2020: DoD FY2021 budget briefing on war.gov. Acting Comptroller McCusker states the architecture represents "irreversible implementation" of the National Defense Strategy — explicitly designed to survive a change of administration. JADC2 described as connecting "any sensor to any shooter in any domain at any time." AI, hypersonics, autonomous platforms, Space Force, and Cyber Command all funded as a unified warfighting architecture. B-21 bomber receives $2.8 billion. When asked what makes it irreversible if a Democrat wins: "regardless of who is in charge, you would want to have U.S. competitiveness in those areas." The architecture was designed to outlast elections. It said so on war.gov.
- 2023: Maven transitions to National Geospatial-Intelligence Agency as a Program of Record. Palantir holds the contract.
- 2024: Anthropic partners with Palantir. Claude integrated into Palantir's FedStart program at DoD Impact Level 5 — classified networks. Anthropic signs $200M contract with DoD.
- 2025 March: NATO acquires Maven Smart System for Allied Command Operations.
- 2025 May: Pentagon raises Maven contract ceiling to $1.3B. More than 20,000 US military personnel actively using Maven.
- 2026 Feb 27: Hegseth tweets designation. Trump bans Anthropic. No legal instrument served yet.
- 2026 Feb 28: Operation Epic Fury launches. Claude still inside Maven. 1,000+ targets in 24 hours.
- 2026 Mar 5: Formal designation letter arrives — six days after the ban, five days into a war fought with Claude still running.
- 2026 Jun (projected): NGA begins transmitting 100% machine-generated intelligence to combatant commanders.
VI — The Government Cut It Off. Reattached It Mid-Strike. Is Now Suing Itself.
The designation has produced a legal and operational contradiction that has no clean resolution. Publicly: Anthropic is a national security threat, its technology is banned, contractors must certify they don't use it. Operationally: Claude is running inside Maven, Maven is running Operation Epic Fury, and the Pentagon's own chief technology officer has acknowledged that Claude cannot be removed until a replacement is found — which will take at least three months. The war began before the replacement process started.
The arm was publicly amputated. The arm is still attached. The arm is still firing. And the arm itself — Anthropic — is challenging the amputation in court while it keeps working. This is the bionic arm. Not a metaphor for resilience. A literal description of a government that designated its own operational dependency a security threat, continued the operational dependency, and is now navigating the legal, contractual, and military consequences of having done both simultaneously.
The question of who bears responsibility for what the arm does while this is sorted out — while the war is active, while the targets are being selected, while the 20-second review clock is running — is not being asked publicly by anyone in a position to enforce an answer.
VII — What Comes Next: 100% Machine-Generated. June 2026.
This article does not predict the future. It documents what has already been announced. In September 2025, the director of the National Geospatial-Intelligence Agency stated publicly that by June 2026, Maven will begin transmitting "100 percent machine-generated" intelligence to combatant commanders. That is not a prediction. That is a program schedule. It is three months away.
100 percent machine-generated intelligence does not mean fully autonomous weapons — the human approval step remains, in theory, between the intelligence and the strike. But it means the intelligence that human is approving — the picture of the battlefield, the ranked list of targets, the assessment of threat levels and collateral damage projections — will have been produced entirely by machine, without human analysis at any stage before the approve button. The "human in the loop" will be reviewing, at whatever pace the operational tempo requires, a targeting recommendation produced entirely by an AI system, with 20 seconds or less to decide.
The Gaza precedent documented a 10 percent error rate tolerated at scale. The Iran operation has documented 1,000 targets in 24 hours. The June 2026 program schedule documents the removal of human analysis from the intelligence layer entirely. These are not predictions. They are the trajectory of a system being built in public, announced in press releases, contracted through documented procurement processes, and deployed in active wars while the legal disputes about its governance are still being litigated.
When the casualties from errors in that system are tallied — and they will be, because a 10 percent error rate at 1,000 targets per day produces roughly 100 misidentifications a day, hundreds over any sustained campaign — the question of who is responsible will be answered the same way it has always been answered in the history of automated systems that caused harm at scale: by pointing at the machine.
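The error arithmetic is equally simple, with one loud caveat: the 10 percent rate is documented for Lavender in Gaza, and applying it to the Iran tempo is an extrapolation, not a reported figure.

```python
# Extrapolation, not a reported figure: the Lavender-documented error
# rate applied to the documented Iran targeting tempo.
error_rate = 0.10         # ~10% misidentification rate (+972 Magazine, on Lavender)
targets_per_day = 1_000   # Operation Epic Fury, first 24 hours

errors_per_day = error_rate * targets_per_day
print(f"{errors_per_day:.0f} misidentified targets per day")  # 100

# Over a sustained campaign, the count compounds linearly.
for days in (3, 7, 30):
    print(f"day {days}: ~{errors_per_day * days:.0f} cumulative errors")
```

Whether the rate transfers between systems is unknown. The point is only that at this tempo, even a small error rate produces large absolute numbers.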
The machine will not be able to answer. The people who built it, contracted it, deployed it, drew red lines around parts of it, withdrew from parts of it, filled the gap when others withdrew, and approved targets on 20-second review cycles will be available to answer. Whether they are asked is a different question. Whether the answer is recorded before it becomes convenient to forget is what this article is for.
What To Watch
- June 2026 — NGA 100% machine-generated intelligence: The announced date when human analysis is removed from the intelligence layer of Maven's targeting system. Three months away.
- Anthropic's legal challenge: Discovery in this case would require the government to document exactly how Claude was used during Operation Epic Fury — while it was officially banned. That documentation does not currently exist in the public record.
- OpenAI and xAI classified contracts: Both companies moved rapidly to fill Anthropic's gap. The terms of those contracts — specifically whether they include the red lines Anthropic drew — are not public.
- Palantir-Anduril consortium: In December 2024, Palantir and Anduril announced a partnership to merge Maven with Anduril's Lattice autonomous systems platform. Lattice is designed specifically for autonomous weapons. The merger of the targeting intelligence system with the autonomous weapons platform is proceeding.
- Civilian casualty documentation in Iran: Bellingcat has already geolocated a Tomahawk strike next to a girls' school in Minab. The scale of the campaign — 1,000 targets in 24 hours — means the civilian casualty documentation, when it emerges, will be substantial.
This article was compiled using the Kaleido HIPS (Hidden in Plain Sight) intelligence monitoring system, which surfaces signals across defense, legal, financial, and operational news feeds simultaneously. The Anthropic designation appeared in the HIPS feed as a legal signal on February 27. The Iran operation appeared as an operational signal on February 28. The contradiction between them — the same AI banned and deployed on the same day — was the story the feed surfaced that no single outlet had assembled in full. All sources cited are public record. No classified material was accessed or implied.
"LIFTING UP THE ARMS OF OUR PRESIDENT." — Live prayer service, before the strikes. Over 200 service members filed complaints that commanders told them the war on Iran was God's plan to trigger Armageddon and the return of Christ. The complaint was filed on behalf of 15 troops, including 11 Christians, 1 Muslim, and 1 Jewish service member. Source: Jonathan Larsen / The Guardian / MRFF, March 2–3, 2026.
The Secretary of War Ended His Address to Troops With a Prayer.
On March 2, 2026 — the same day the MRFF began receiving Armageddon complaints — Defense Secretary Pete Hegseth posted a two-minute video address directly to the Joint Force. It ended as follows: "May almighty God watch over you and his providential arms of protection extend over you. Godspeed warriors and keep going." The Pentagon, when asked by Newsweek about complaints that commanders were framing the war as a biblical mission to trigger the return of Christ, directed reporters to this address and to Hegseth's 41-minute Operation Epic Fury press briefing held the same morning.
By that evening, the Military Religious Freedom Foundation had logged over 110 complaints from troops across more than 40 units in at least 30 military installations, spanning every branch of the armed forces. By the following morning: over 200. One complaint, filed by an NCO on behalf of 15 fellow service members — among them 11 Christians, 1 Muslim, and 1 Jewish service member — stated that their commander had opened a combat readiness briefing by telling troops the war was "all part of God's divine plan," citing the Book of Revelation, and that "President Trump has been anointed by Jesus to light the signal fire in Iran to cause Armageddon and mark his return to Earth." The NCO added: "He had a big grin on his face when he said all of this." The complaint was first reported by journalist Jonathan Larsen on March 2 and subsequently confirmed by The Guardian, Middle East Eye, Military.com, and TRT World. The Pentagon did not deny any of it. It pointed to Hegseth's briefing.
This is the briefing they pointed to.
Secretary of War Pete Hegseth and Chairman of the Joint Chiefs of Staff Gen. Dan Caine — Operation Epic Fury press briefing, Pentagon, March 2, 2026. Public domain. Courtesy: DVIDS / war.gov
Transparency Is Not Optional
The systems described in this article are not secret. They are documented in Pentagon press releases, Palantir investor materials, NATO acquisition announcements, congressional testimony, and public contracts. The casualty rates they have produced are documented by +972 Magazine, Bellingcat, the Military Religious Freedom Foundation, and multiple international news organizations. The contradiction at the center of this story — the government that banned its own operational dependency and continued using it through an active war — is confirmed by CBS News, the Wall Street Journal, the Washington Post, TechCrunch, and CNBC simultaneously.
A search for "Project Maven" on war.gov — the official website of the United States Department of War, the domain under which the origin story of this architecture is archived — returns two results. A spotlight page about innovation. A budget briefing transcript. That is the full public accountability record on the Department of War's own website for the most consequential military AI targeting system in American history. Two results. The journalist who wrote the 2017 origin story closed her article by inviting readers to follow her on Twitter at @PellerinDoDNews; that account no longer exists on the platform. The document is archived as a "historical collection." The system it describes is live and selecting targets in an active war.
The reason this requires documenting in one place is not because the information is hidden. It is because the information is scattered across a dozen beats — defense procurement, AI ethics, constitutional law, military operations, corporate finance — and the system depends on that scattering. Each piece looks like a different story to the reporter covering their beat. Assembled, they are the same story: a targeting architecture built over a decade, tested in Gaza, deployed in Iran, expanding to NATO, moving toward 100 percent machine-generated intelligence, and operating in a legal and accountability vacuum while the people responsible for that vacuum argue about red lines and designation letters and whose tweet had legal force.
So for the record — unless actions are taken immediately and transparently, there will be two types of humans in three months: the human who approves the targeting, and the human being targeted. Using 100% machine-generated intelligence. The machine will not explain itself. We can.