On March 24, 2026, a federal court found that the United States government had designated an AI company a national security threat not to protect security, but to punish it for drawing public attention to how its technology was being used. The ruling established something that had not previously appeared in federal case law: that the government had retaliated against a company for transparency about its own product's military use. That finding opens this article because it names the mechanism on which everything that follows depends: the accountability gap isn't incidental to how these systems operate. It is enforced.
FOIA was written sixty years ago for a world of paper records and file cabinets. The systems now making consequential decisions about citizens — who gets surveilled, whose face gets stored, whose immigration status gets flagged, whose home gets targeted — operate inside classified digital networks that no transparency law was designed to reach. What the public record shows, assembled from federal court rulings, internal government emails, academic research, and documented journalism, is that every available accountability mechanism fails to reach these systems — and for a different reason each time.
Test Cases
The Anthropic ruling put it plainly: the government designated its own operational dependency a national security threat the day before using it to strike targets in an active war. A federal court found the designation was First Amendment retaliation: punishment for transparency, not protection of security. The ruling is the clearest evidence on record that the accountability gap isn't incidental. It's enforced.
Anthropic had drawn two documented red lines in its July 2025 contract with the Pentagon: no mass domestic surveillance of Americans, no fully autonomous weapons. The Department of War demanded removal of those restrictions: "all lawful purposes" without limitation. When Anthropic refused, it was designated a supply chain risk, a designation previously reserved for Chinese equipment like Huawei and never before applied to an American company. The court found it was retaliation. OpenAI and xAI signed "all lawful purposes" agreements with the Pentagon within hours. Every AI company watching received the lesson: stated red lines produce punishment. Opacity is the rational corporate survival strategy.
Where FOIA Fails
Classified operational use is exempt from FOIA by design. What isn't classified can be requested — but the law as written has no mechanism for what happens when individually harmless datasets are combined at machine speed to reconstruct something sensitive. This is known as the mosaic theory: data points that each seem harmless reveal something they were never meant to disclose when aggregated and cross-referenced by AI.
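A toy sketch makes the mechanism concrete. Neither dataset below would raise alarms released on its own; joined on shared quasi-identifiers, they re-attach names to a sensitive attribute. All field names and records here are hypothetical, and real mosaic reconstruction operates across far more releases, but the join is the whole trick:

```python
# Toy illustration of mosaic-style linkage. Every dataset, field name,
# and record is hypothetical.
import pandas as pd

# Release 1: an "anonymized" enforcement log. No names, so arguably harmless.
enforcement = pd.DataFrame({
    "zip": ["20001", "20001", "73301"],
    "birth_year": [1984, 1991, 1984],
    "action": ["detainer", "audit", "detainer"],
})

# Release 2: a public licensing roll. Names, but no enforcement data.
licenses = pd.DataFrame({
    "name": ["A. Rivera", "B. Chen", "C. Okafor"],
    "zip": ["20001", "20001", "73301"],
    "birth_year": [1984, 1991, 1984],
})

# The join re-attaches identity to the sensitive attribute.
linked = enforcement.merge(licenses, on=["zip", "birth_year"])
print(linked[["name", "action"]])

# Anyone whose quasi-identifier combination is unique in the roll
# is fully re-identified by the merge above.
counts = licenses.groupby(["zip", "birth_year"]).size()
print(f"{(counts == 1).sum()} of {len(counts)} quasi-identifier keys are unique")
```

Scaled up, this is the pattern Simmons describes below: the same join, performed by AI across schemas, data dictionaries, and bulk releases obtained through separate, individually unremarkable requests.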
Melanie Simmons, a statistician at US Immigration and Customs Enforcement, documented this in her graduate thesis at the Center for Homeland Defense and Security. Her research found that a single nonprofit used strategic FOIA requests over a period of years (requesting database design schemas, data dictionaries, expert testimony, and large-scale datasets) to reconstruct a law enforcement sensitive database. FOIA's blind requester principle means agencies cannot consider who is asking or why. The law designed to let citizens see what the government is doing to them is being used to build surveillance infrastructure aimed at those same citizens, and the agencies receiving the requests have no legal mechanism to stop it.
🔍 The Mosaic Problem
"FOIA was written in a paper-based era long before big data, open data portals, internet accessibility, lower cost of computing storage, and of course, artificial intelligence existed." — Melanie Simmons, ICE Statistician, Center for Homeland Defense and Security. Her research found "no clear mechanism for agencies to address aggregation and inference risks created by bulk dataset releases." The machine-readable formats Congress now requires for public records increase rather than decrease this risk. AI performs data linkages at scale that no human could match — and FOIA was not written for that world.
Leaving the FOIA Window Open: Implications for US Homeland Security in the Age of Artificial Intelligence — Melanie Simmons, Center for Homeland Defense and Security
The Inversion: When FOIA Buries What It Was Built to Surface
In December 2024, DHS ordered its Privacy Office to begin labeling completed Privacy Threshold Analyses (required compliance forms documenting how government surveillance systems collect personal data) as "drafts" exempt from public release. The order followed a FOIA officer's lawful release of a redacted privacy assessment for Mobile Fortify, a previously hidden face recognition app that captures faces and fingerprints without consent, stores every image for up to 15 years, and inevitably photographs US citizens and lawful permanent residents.
The internal email obtained by WIRED is precise: "PTAs are NOT supposed to be released at all," wrote the department's deputy FOIA chief. DHS publicly denied the policy; the internal emails contradict the denial. CBP's top privacy officer, a privacy branch chief, and the director of CBP's FOIA office were all removed after objecting to what career officials called a legally incoherent order: a completed compliance form cannot be simultaneously signed and considered a draft. DHS additionally blocked release of a PTA concerning the face recognition tool Clearview AI.
The inversion is now documented: the officials who tried to comply with FOIA were removed. The records that would tell citizens they are being surveilled are being buried. The law designed to protect citizens is being used to hide from them.
Standing Up To Invisibility
Litigation requires standing. Standing requires documented harm. Documented harm requires visibility into systems that are classified. The Anthropic lawsuit is the first case in which discovery might compel the government to document how Claude was used during Operation Epic Fury even as it was officially banned. That documentation doesn't exist in the public record yet. It may never. The accountability gap incentivizes its own perpetuation: transparency about red lines produces punishment, so the rational response is to state no red lines publicly, pursue no classified integration, and offer no visibility.
Nondisclosure
The engineer designs before knowing what the actual use will be. Once an engineer knows more than they should about how their product is being used, they become something closer to a potential whistleblower than an employee — and professional consequences follow. The Google engineers who walked out in 2018 over Project Maven didn't stop the program. They accelerated it by creating space for Palantir, which had no ethical objections and called Google's withdrawal treasonous. The rational response to the Anthropic case for every other AI company is to avoid classified network integration entirely — which produces opacity as the corporate survival strategy. The punishment for conscience is the removal of conscience from the system.
📋 The Engineer Suppression Timeline
- 2018: 4,000+ Google employees petition against Project Maven. Google declines contract renewal. Palantir takes over — no ethical objections, sole source designation.
- 2024: Google engineers protest Project Nimbus — $1.2 billion cloud contract with Israeli military. Dozens fired following "No Tech for Apartheid" walkouts.
- February 2026: DHS removes CBP's top privacy officer, privacy branch chief, and FOIA office director after they object to orders to mislabel records and block public release.
- February 27, 2026: Anthropic designated a supply chain risk after refusing to remove restrictions on mass surveillance and autonomous weapons use.
- March 24, 2026: Federal court finds designation was First Amendment retaliation. OpenAI and xAI have already filled the gap — with no stated red lines.
The Infrastructure Is Being Dismantled
The CBP privacy officer removals are not an isolated case. American Oversight, a nonpartisan nonprofit, has documented systematic dismantlement of FOIA infrastructure across the federal government throughout 2025. The CDC FOIA office was placed on administrative leave as of April 1, 2025. American Oversight submitted 15 requests after the reported closure and received acknowledgment of only one. In FOIA litigation brought by the Center for Constitutional Rights, the government confirmed that HHS's Administration for Children and Families had paused all document productions because its "entire FOIA office was placed on administrative leave." Productions only restarted after court intervention.
The scale: in Fiscal Year 2024, more than 1.5 million records requests were submitted to federal agencies. Fiscal Year 2025 is on track to surpass that. The demand for information is at its highest point. The infrastructure to respond to it is being systematically removed.
The paper trail itself is also being destroyed. USAID staff were instructed to shred critical documents — internal communications specified to "reserve the burn bags for when the shredder becomes unavailable or needs a break." The CIA failed to preserve any messages from the Houthi group chat, and someone proactively deleted those messages even after a court ordered agencies to make best efforts to preserve them. NSC members used personal Gmail accounts for discussions of military positions. DOGE teams worked out of Google Docs to avoid preserving certain communications or drafts.
What this means for algorithmic accountability specifically: if the records of how these systems operate are not preserved — if the communications about targeting decisions, surveillance approvals, and AI deployment authorizations are conducted on ephemeral platforms and deleted before they can be requested — then FOIA litigation, when it eventually succeeds, will surface an empty archive. The accountability mechanism will have been defeated not by classification but by deletion. As Just Security's Amanda Teuscher documented: FOIA's power is only as strong as the paper trail it can access.
What Modernization Actually Requires
Simmons proposes modernizing FOIA to address mosaic risks, integrate existing data protection standards, and create controlled access platforms where sensitive data can be viewed without being extracted. The Anthropic preliminary injunction creates legal ground for a distinction the current framework cannot make: how an algorithm makes decisions affecting citizens requires accountability. How that system is architecturally structured may require protection. These are different questions. Current law collapses them into one. Until it can separate them, neither FOIA nor discovery nor whistleblower accounts will reach what actually needs to be seen.
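Simmons's proposal is stated at the policy level, not the code level, but a minimal sketch can show what "viewed without being extracted" could mean in practice: a query layer that answers aggregate questions, suppresses cells small enough to single someone out, and refuses bulk export. The threshold, class, and field names below are illustrative assumptions, not a specification of her proposal:

```python
# Minimal sketch of a controlled-access query layer: aggregates go out,
# raw rows never do. Threshold and names are illustrative assumptions.
from collections import Counter

SUPPRESSION_THRESHOLD = 10  # cells smaller than this are withheld

class ControlledAccessView:
    def __init__(self, records):
        self._records = records  # raw rows stay inside this object

    def count_by(self, field):
        """Answer an aggregate question, suppressing cells small enough
        to single out individuals."""
        counts = Counter(r[field] for r in self._records)
        return {key: (n if n >= SUPPRESSION_THRESHOLD else "<suppressed>")
                for key, n in counts.items()}

    def export_rows(self):
        # The one capability mosaic reconstruction depends on is absent.
        raise PermissionError("bulk extraction is not permitted on this view")

records = [{"district": "NE"}] * 42 + [{"district": "SW"}] * 3
view = ControlledAccessView(records)
print(view.count_by("district"))  # {'NE': 42, 'SW': '<suppressed>'}
```

The design choice inverts FOIA's current posture: instead of releasing machine-readable bulk data and hoping aggregation never happens, the platform performs the aggregation itself and releases only what survives the threshold.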
Why This Isn't More Actionable — And Why That Matters
The legislative route requires congressional oversight. Oversight requires committee members willing to investigate systems their campaign contributors built or their agencies operate. Defense contractor contributions to Armed Services and Intelligence Committee members are documented in public financial disclosure records. The specific hearings that haven't happened — on Maven's expansion, on the Anthropic designation, on the June 2026 machine-generated intelligence date — are observable in the public record of what Congress has and has not chosen to examine. The absence of investigation is itself documented information.
The FOIA route is being dismantled faster than it can be used. Offices are closed. Records are deleted before they can be requested. The paper trail that FOIA depends on is being removed by the same administration whose actions are the subject of the requests.
The litigation route — the Anthropic legal challenge — is the one active accountability mechanism. It is one company, filing in two federal courts, against a national security claim the government can assert to slow or stop discovery indefinitely. It is proceeding. It is not guaranteed to surface what needs to be seen before June 2026.
Naming why accountability is structurally blocked is not the same as accepting that it is permanently blocked. It is the prerequisite to building something that works. The relationships between oversight bodies and the systems they are supposed to oversee — financial, institutional, contractual — are public record. Following them is the next layer of this investigation.
What Is Established — Not Opinion, Record
- The designation was retaliation: Northern District of California, March 24, 2026 — the supply chain risk designation against Anthropic was found to be First Amendment retaliation for bringing public scrutiny to the government's contracting position. First ruling of its kind.
- FOIA has no mosaic mechanism: Melanie Simmons, ICE statistician — FOIA provides no clear mechanism for agencies to address aggregation and inference risks from bulk dataset releases. AI performs data linkages at scale FOIA was never designed to govern.
- DHS buried its own surveillance records: Internal DHS email, obtained by WIRED — "PTAs are NOT supposed to be released at all." Privacy officers who objected were removed. Mobile Fortify and Clearview AI PTAs blocked from public release.
- The engineer is the first to lose: Google 2018, Google 2024, CBP 2026 — the pattern is documented across three separate institutions. The people who object to how the systems are used are removed. The systems continue.
- The governance window is closing: June 2026 — Maven begins transmitting 100% machine-generated intelligence to combatant commanders. That is a program schedule, not a prediction. The accountability framework does not exist yet.
This article was developed during a research conversation about algorithmic transparency and how unaccountable systems affect human behavior and decision-making. The investigation was activated by the interview "How Palantir, Google & Amazon armed Israel's genocide in Gaza" with Antony Loewenstein on The Middle East Eye Podcast, and further grounded by the Anthropic-DoD legal dispute documented in our prior investigation Bionic Arm. The Simmons research surfaced within 48 hours of publication. The WIRED CBP Privacy Officer reporting provided the domestic inversion case. All sources cited are public record.
Actual vs Assumed National Security Risks
The absence of governance is itself a national security risk. That's not a civil liberties argument. That's what the Oxford Programme for Cyber and Technology Policy said on the record about the Iran operation. The accountability framework needs to be built before the June 2026 date when Maven transmits 100% machine-generated intelligence to combatant commanders. That date is documented. It is not a prediction. The window is specific and closing.
Knowledge should be a requirement, not an afterthought. Conscience, acting "in good faith," and "just war" all remain part of the ongoing conversation: on the battlefield, in boardrooms, and in federal court. The Anthropic preliminary injunction is the first time a court has intervened in that conversation. It will not be the last. Whether what follows is governance or further capture depends on whether the public record is assembled before it becomes convenient to forget.