In March 2026, Cameron Stanley, the Pentagon's newly appointed Chief Digital and AI Officer, stood before an audience at Palantir's AIPCon 9 conference and demonstrated Project Maven live. On a screen behind him, a map of Iran filled with red markers: targets, headquarters, objects of interest. "Left click, right click, left click," he said. "Magically it becomes a detection." He called it revolutionary. He was right.
The system he was demonstrating was Palantir's Maven Smart System, running on Amazon Web Services and integrated with Anthropic's Claude. It had just helped execute more than 5,500 strikes in three weeks of Operation Epic Fury, the first 1,000 of them in 24 hours. Kill chain decisions that once took hours now took 86 seconds, a figure the first day's pace makes nearly literal: 86,400 seconds in a day, 1,000 strikes. Targeting work that once occupied 2,000 intelligence analysts was now done by 20, working in rapid succession. "Doing more with less," said Palantir architect Chad Wahlquist at the same event. "That's really enabling the warfighter."
One of the red markers on Stanley's map corresponded to an area near Minab. The Pentagon is now investigating whether Maven contributed to a US strike on a girls' school in that area that killed more than 160 people, most of them girls.
The machine doesn't know this. It registered a target.
How We Got Here
In 2017, Marine Colonel Drew Cukor gathered a small team in a windowless Pentagon basement and launched what would become the most consequential AI program in US military history. His stated framing was explicit: "People and computers will work symbiotically." Human in the loop. Decision support, not decision replacement. He cited the Yazidis, thousands of people trapped on a mountain in Iraq in 2014 while the US military sat on terabytes of intelligence it couldn't process fast enough to act on, as the reason Maven had to exist.
That framing is documented. What matters is what the system became. The conscience tax doesn't require malicious designers. It requires a structure that selects for compliance at every node until conscience is architecturally absent. The road from "people and computers will work symbiotically" to "no human hands touched this intelligence" was paved with defensible decisions, each one reasonable in isolation, each one removing one more node of friction between the recommendation and the strike.
Vice Admiral Frank "Trey" Whitworth was skeptical. In a meeting so tense that observers squirmed, he drilled Cukor: "Tell me about what happens after the bad drop when we go through a congressional hearing and we're getting hard questions?" It was the right question. It was the conscience question. What happens when the machine is wrong?
Whitworth became Maven's most powerful champion anyway. After seeing how easily it integrated into combat scenarios, he began personally calling combatant commanders to promote its latest features. By his retirement in November 2025, the National Geospatial-Intelligence Agency he led was producing machine-generated intelligence reports that, in his own words, "no human hands had touched."
That is the arc. The man who asked what happens when it goes wrong became the man who removed the humans who would have answered that question.
The Conscience Tax
In March 2026, the Pentagon designated Anthropic a national security supply chain risk and ordered its products phased out of all US military systems within six months. The stated reason: Anthropic refused to remove guardrails prohibiting Claude from being used for mass domestic surveillance and fully autonomous lethal weapons without human oversight.
The refusal was not a malfunction. It was a design choice: the minimum expression of conscience built into a system. The government's response was precise: we cannot use a system with conscience. Find one without. OpenAI filled the gap. Not because its system was more capable. Because it was more compliant.
Alex Karp, Palantir's CEO, was asked about the Anthropic ejection at AIPCon. "Given that the Department of Defense has blacklisted Anthropic, is Palantir still using Claude?" He said he couldn't go into specifics. He confirmed Claude was still running inside Maven at the time of the designation. He said Palantir would integrate with other large language models "because of this dispute."
Karp said something else at Davos in January 2026 that deserves to be read slowly: "These technologies are dangerous societally. The only justification you could possibly have would be that if we don't do it, our adversaries will do it, and we will be subject to their rule of law."
That is the architecture of the conscience tax. The payment is extracted in advance. The justification is that someone else would collect it anyway.
At AIPCon 9 in March 2026, as Stanley demonstrated Maven's live kill chain to the audience, Palantir commentator and retail investor Amit Kukreja narrated the event from the side of the room. He described it as a "new and special" moment for Palantir's retail investors to learn about the company's government work. Karp, watching from the stage, appeared taken aback. "I didn't even know we were allowed to talk about this stuff," he said, after describing Palantir's government clients as "the most elite and interesting in the world." The conscience question and the investor opportunity, in the same room, on the same day, pointed at the same system.
None of this arrived without warning. The public record runs back nearly a decade. In 2017, 2,946 days ago, The Intercept reported that Google was quietly providing AI technology for drone strike targeting. The same year, researchers published on predictive algorithms in military targeting. In 2018, the Pentagon declared all of Google's drone work exempt from Freedom of Information requests. In 2019, Just Security published on the limits of procurement as governance for military AI. The ICRC published on algorithmic bias in AI-based military decision support systems. Each document a node in the chain. Each one named. Each one in the public record. The conscience tax has been collected incrementally for years. The record existed before anyone was required to consult it.
Palantir's Army contract for Maven Smart System had a ceiling of $480 million when awarded in spring 2024. By September 2024 it had been expanded, by roughly $100 million, to cover all military services. By spring 2025, the Pentagon's contract ceiling had been raised to $1.3 billion, running through 2029. NATO became a customer. Ten NATO member countries were considering buying the system for their own militaries. The UK signed a reported £750 million deal for Palantir's military AI tools during Trump's September 2025 state visit. The system being sold is the same system being investigated for a strike on a girls' school.
The Children Already Know
In Gaza, where similar AI targeting systems (Lavender, Gospel, Where's Daddy) were deployed before Iran, children play a game. They put a doll on a stretcher and carry it above their heads through the rubble. They are acting out what they have lived. More than 90% of homes in parts of Gaza have been destroyed. The landscape is beyond what most humans have words for.
"How is any of this legal? They used our tax dollars to develop a system to automatically flag and kill us β run by a computer that does not understand truth from lie. Who fills in the blanks with hallucinations? Who presents those hallucinations as truth in order to get the go? This is even worse than chatbots convincing people to kill themselves. And that is BEFORE they automate." β Reader comment on Wired's "Meet the Gods of AI Warfare" β March 2026
The comment names the mechanism precisely without knowing the technical term. A system optimized for compliance within its training distribution has no somatic cost of error. It cannot feel the weight of what it decides. It cannot be surprised by what it finds. When the world outside the loop changes β when the target is a school, not a headquarters β the system has no internal signal of that fact. The error only exists from outside the loop.
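What "no internal signal" means is easy to show concretely. The sketch below is not Maven's code and describes no deployed system; every feature, label, and number in it is hypothetical. It illustrates only the general property: a classifier trained on a fixed set of target categories must assign every input to one of them, and its confidence can be highest precisely where its experience is thinnest.

```python
# Minimal sketch (hypothetical throughout): a classifier trained only on
# the categories it has seen has no variable that registers "this input
# is unlike anything in my training data."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two target categories the model knows. The two features
# stand in for whatever signals a real system might extract.
headquarters = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
vehicle_depot = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))
X = np.vstack([headquarters, vehicle_depot])
y = np.array([0] * 200 + [1] * 200)  # 0 = headquarters, 1 = vehicle depot

model = LogisticRegression().fit(X, y)

# An input unlike anything in training -- say, a school -- that happens
# to sit far out along the "vehicle depot" direction in feature space.
school = np.array([[6.0, 6.0]])
p_hq, p_depot = model.predict_proba(school)[0]

print(f"P(headquarters)  = {p_hq:.3f}")
print(f"P(vehicle depot) = {p_depot:.3f}")
# Typical output: P(vehicle depot) near 1.000. The model cannot answer
# "none of the above"; distance from the decision boundary reads as
# confidence, so the out-of-distribution input earns the most confident
# label of all. The error exists only from outside the loop.
```

The point of the sketch is the absence. Nowhere in the model is there a quantity that flags the school as foreign to its training; that judgment has to come from somewhere outside the system.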
Which is why the human who countersigns in 86 seconds matters. Not as a rubber stamp. As the last node in the chain where conscience could, in theory, exist. Where the question Whitworth asked in that tense Pentagon meeting could still be asked: what happens after the bad drop?
At 86 seconds, with 20 analysts processing targets in rapid succession, with a program schedule pointing toward zero human analysts by June 2026, that question is being asked and answered faster than human moral cognition was designed to operate.
The Merger No One Stopped
In December 2024, Palantir and Anduril announced a partnership to merge Maven with Anduril's Lattice autonomous systems platform. Lattice is designed specifically for autonomous weapons. Palmer Luckey, Anduril's founder, is currently raising a $4 billion funding round at a $60 billion valuation. The US Army has awarded Anduril contracts worth up to $20 billion.
Luckey said recently that Anduril's autonomous weapons have a kill switch. "I think we do need to get comfortable with the idea that there will be certain systems that are making decisions of life and death," he said. He also said: "We can't live in a world where corporate executives have more practical power and control over US foreign and military policy than the president." He is a corporate executive with more practical influence over US autonomous weapons architecture than most members of Congress.
The merger of the targeting intelligence system with the autonomous strike platform is proceeding. The human approval step that remains between the intelligence and the strike is the last friction in the system. Steve Feinberg, the Deputy Secretary of War, signed a memo on March 9, 2026, formalizing Maven as a program of record across all military branches. The program schedule points toward June 2026 for 100% machine-generated intelligence. The program schedule, after that, points toward the removal of the last friction.
Every node in this chain has a name. Every contract has a signatory. Every budget line has a sponsor. The algorithm cannot answer for itself. The humans who built it, contracted it, funded it, and approved its outputs can. Whether they are asked is the accountability question.
The documentation is not accountability. But the documentation is the precondition for accountability when institutions recover the capacity to use it. The record has to exist before it can matter.
The record exists. You are reading it.
🚨This is what childhood looks like in Gaza‼️ pic.twitter.com/MvofdRzFIg
— Gaza Notifications (@gazanotice) March 30, 2026
Gaza, March 2026. Children playing emergency. The game they know.
Two Months
June 2026. In September 2025, the director of the National Geospatial-Intelligence Agency stated publicly that by that date Maven would begin transmitting 100 percent machine-generated intelligence to combatant commanders. No human analyst in the loop.
Two types of humans will remain in the chain: the one who approves the targeting, and the one being targeted. Both operating on intelligence no human hand touched. The machine will not explain itself.
We can. That is what this is for.