When three contradictory narratives emerge at once about the same company, most people assume it's noise—competing opinions in a chaotic information environment. But when you trace the ownership structure of prediction markets, the business relationships of presidential advisors, and the flow of capital from retail investors to AI infrastructure, a different picture emerges: this is systematic extraction disguised as prediction.
The Infrastructure of Capture
In June 2025, Polymarket—the largest decentralized prediction market—partnered with Elon Musk's xAI, making it the official prediction market for both xAI and X (formerly Twitter). Shortly after, Google integrated both Polymarket and Kalshi data directly into Google Search and Finance. These aren't neutral information platforms. They're AI company assets presenting themselves as objective measurement tools.
Meanwhile, Donald Trump Jr. serves as a formal adviser to both Polymarket and Kalshi. Investment in Polymarket has come from 1789 Capital, the venture firm where Trump Jr. is a partner, and from Intercontinental Exchange, which committed up to $2 billion and whose CEO is married to Trump's Small Business Administration chief. The people claiming to "predict" market movements control the infrastructure that shapes those movements.
Bad Faith as Business Model
In 1943, philosopher Jean-Paul Sartre published "Being and Nothingness," introducing the concept of bad faith—a form of self-deception where people adopt social roles as their identity, denying their fundamental freedom to choose otherwise. Sartre's famous example is the café waiter who performs his duties "a little too eagerly," moving with "the inflexible stiffness of some kind of automaton." The waiter has convinced himself he is a waiter, not a human being who is temporarily working as one.
Modern prediction markets manufacture bad faith at industrial scale. Retail traders don't just participate in markets—they become "market participants," performing their roles with algorithmic precision: chasing breakouts, following narratives, optimizing for metrics. They deny their freedom to simply not play.
Sartre argued that people exist under "the look" of others—we see ourselves as objects in another's gaze. In 2025, that gaze is algorithmic. You exist as data: your win rate, your engagement metrics, your predictability. The system doesn't want you to win; it wants you to keep playing. And it has the infrastructure to ensure you do.
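To make the algorithmic gaze concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the feature names, the weights, and the scoring function are invented, not any platform's actual model. What it demonstrates is structural: when the objective is retention rather than the trader's outcome, a losing but engaged trader is worth more to the system than a winning one who rarely plays.

```python
from dataclasses import dataclass

@dataclass
class TraderProfile:
    # "The look" reduced to a feature vector. All names here are
    # hypothetical, chosen only to illustrate the structure.
    win_rate: float        # fraction of winning trades, 0..1
    engagement: float      # sessions per day, normalized to 0..1
    predictability: float  # how well the model forecasts this trader, 0..1

def retention_score(t: TraderProfile) -> float:
    """Score a trader by how likely they are to keep playing.

    Note what is absent: the trader's profit never enters the
    objective. The weights below are invented, but their shape is
    the point: engagement dominates, winning barely registers.
    """
    return 0.1 * t.win_rate + 0.6 * t.engagement + 0.3 * t.predictability

# The platform ranks users by retention, not by their outcomes.
users = [
    TraderProfile(win_rate=0.65, engagement=0.1, predictability=0.4),  # winner, rarely plays
    TraderProfile(win_rate=0.35, engagement=0.9, predictability=0.8),  # loser, always playing
]
print(sorted(users, key=retention_score, reverse=True)[0])
# The losing-but-engaged trader is the one the system values most.
```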
The Executive Order
In December 2025, President Trump issued an executive order directing the Attorney General to establish an AI Litigation Task Force to challenge state AI laws. The order directs the Commerce Department to withhold federal broadband funding from states with conflicting AI regulations, and calls for a national AI legislative framework that would preempt all state protections.
Critics immediately noted the order would "hit a brick wall in the courts," but that's not the point. The goal is to strip away local resistance while federal protections never materialize. States that passed AI safety laws now face federal funding cuts. Citizens lose the ability to choose different technological futures.
The timing is not coincidental. At a crypto conference in Dubai, a state-backed Abu Dhabi investment firm announced it had chosen the Trump family's World Liberty Financial stablecoin, USD1, to back a $2 billion investment in Binance. Soon after, Trump granted the UAE greater access to U.S. AI chips: the same chips powering the prediction algorithms, the same infrastructure being forced on American states without safeguards.
The Complete System of Extraction
- Control prediction infrastructure: xAI partnership with Polymarket, Google integration making predictions appear as facts
- Coordinate narrative timing: Headlines drive capital into specific positions at engineered entry points
- Remove regulatory resistance: Federal preemption of state AI laws, withholding funding from non-compliant states
- Profit from all sides: Trump family crypto platform, Middle East data center deals, advisory fees from prediction markets
- Fund human replacement: Capital extracted from retail traders finances the AI systems that will automate their jobs
The Impossibility of Completion
Sartre observed that humans are haunted by a desire for ens causa sui—a self-causing being, essentially God. We want completion, fulfillment, the end of striving. But this state is impossible. "Man is a useless passion," Sartre concluded, because the very structure of consciousness prevents the completion we seek.
This explains why becoming a trillionaire is never enough. The Nvidia CEO discusses "the highs and lows of running a multi-trillion dollar company" not because managing wealth is difficult, but because the system is designed to perpetuate desire, never satisfy it. No amount of capital provides completion. The market doesn't sell satisfaction—it sells the promise of satisfaction, forever deferred.
For retail traders, this manifests as the perpetual belief that the next trade will be the one that changes everything. The system feeds on this hope while ensuring it remains permanently out of reach. Every "breakout" is followed by a correction. Every prediction is undermined by the next headline. The game continues because completion is structurally impossible.
When Safeguarding Comes Too Late
During our conversation that led to this article, a collaborator asked: "What do you do when it's too late to safeguard against capture?" It's the right question. State AI protections are being systematically dismantled. Prediction markets are owned by the interests they claim to measure. The infrastructure of information itself is compromised.
Sartre's answer would be stark: recognize the nothingness. The quality we seek—truth, fairness, authentic value—cannot be quantified by compromised measurement tools. Trying to optimize within a captured system only reinforces the capture. The authentic response is to acknowledge the void where meaning should be, and choose freely anyway—not because the choice leads to completion, but because choosing freely is what it means to exist.
This sounds abstract, but it has practical implications. Stop trying to predict within prediction markets owned by those engineering the outcomes. Stop performing the role of "rational market participant" when the market is fundamentally irrational. Stop believing that better metrics, more data, or smarter algorithms will restore fairness to a system designed for extraction.
What This Means in Practice
- Information environment is compromised: When prediction markets are owned by AI companies and integrated into search results, "predictions" become prescriptions disguised as measurements.
- Regulatory capture is complete: Federal preemption of state protections while offering no federal safeguards creates a regulatory vacuum where only corporate interests have power.
- Capital extraction funds replacement: Retail losses don't just transfer wealth—they finance the automation infrastructure that will eliminate those same retail jobs.
- Bad faith is manufactured: The system doesn't want you to win. It wants you to keep playing your assigned role, believing completion is one more trade away.
- Freedom requires recognition: The only authentic response is acknowledging the capture, refusing to identify with the roles being performed, and making choices based on that recognition rather than the system's false promises.
The Crime Stated Plainly
This is not a hypothetical concern or a philosophical thought experiment. Right now, in December 2025:
Presidential advisors profit from prediction markets while shaping federal policy to eliminate AI regulations. Trump Jr. advises Polymarket and Kalshi while his father's administration withholds federal funding from states that try to regulate AI. The Trump family's crypto platform receives billions from foreign governments in exchange for AI chip access.
AI companies control the infrastructure that claims to measure market truth. xAI is Polymarket's official partner, embedded in its prediction infrastructure. Google surfaces those predictions as objective information. The companies building AI profit from the infrastructure that "predicts" their own stock prices.
Retail capital is being systematically extracted to fund worker replacement. Every dollar lost by retail traders chasing coordinated narratives flows into AI development—the same AI systems being deployed to automate the jobs of those traders.
This isn't market manipulation in the traditional sense. It's existential capture—the systematic denial of human freedom through the weaponization of information infrastructure, regulatory power, and economic necessity. And it's happening in plain sight, wrapped in the language of innovation and prediction.
Conclusion: The Question of Questionability
Throughout our conversation, one principle kept emerging: quality is measured by the willingness to question assumptions. Not by metrics, not by predictions, but by the active examination of what we think we know. When measurement tools are compromised, when prediction markets are owned by interested parties, when regulatory protections are systematically removed, the only remaining quality is the quality of our questioning.
Are we willing to question whether "prediction markets" predict anything other than their own profit? Whether "breaking out" means opportunity or manufactured entry points for retail bag holders? Whether AI regulation is being eliminated for innovation or for extraction?
The conversation that produced this article was itself qualifiable rather than quantifiable: its quality evidenced by the fact that it continued, deepened, and ultimately demanded to be written down. Not because it achieved completion (Sartre reminds us that's impossible), but because the questioning itself had value that couldn't be quantified by any compromised metric.
We end where Sartre ended: with radical freedom and radical responsibility. You are not your portfolio. You are not your win rate. You are not a "market participant." You are a human being condemned to be free, even—especially—in a system designed to make you forget that freedom exists.
The capture is real. The extraction is happening. The only question is whether we'll keep performing our assigned roles or recognize the nothingness and choose otherwise.
The final clarity is structural, not conspiratorial. What matters is not whether individual actors intend harm, coordinate secretly, or even abuse their power overtly. What matters is that the configuration itself disables accountability. When governmental influence, economic dependence, and control over informational infrastructure converge, freedom erodes automatically: without commands, without censorship orders, without villains twirling mustaches. This is why the question is no longer "who is manipulating whom," but whether we recognize the danger of systems that no longer require manipulation at all. The only meaningful response is structural separation, epistemic humility, and refusal to mistake engineered signals for truth. Not because opting out guarantees justice or completion, but because choosing freely, without surrendering our judgment to captured metrics, is the last form of agency the system cannot automate away.

What's being described is not automation for efficiency, but automation as a structural bypass.
Stated cleanly: this is about replacing constitutional process with automated governance loops that concentrate discretion inside a small, socially and financially unified circle. Automation isn't neutral here; it's the mechanism that allows power to move faster than accountability. When regulatory review, enforcement prioritization, funding decisions, litigation strategy, and narrative amplification are all mediated through AI systems designed, funded, or advised on by the same actors who benefit from their outputs, the government no longer governs through law. It governs through infrastructure.
Crucially, this does not require explicit corruption. Automation collapses deliberation into optimization. Once objectives are encoded—“reduce friction,” “harmonize regulation,” “maximize competitiveness,” “accelerate deployment”—the system executes those goals automatically, while masking value judgments as technical necessity. That’s where enrichment happens: not through theft, but through default outcomes that systematically advantage insiders who helped define the parameters.
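A minimal sketch of that collapse, with hypothetical policy names, objective scores, and weights: the max() call is "the system deciding," but the decision was actually made when someone set the weights, and that choice never surfaces as a value judgment anywhere in the output.

```python
# A sketch of "deliberation collapsed into optimization".
# All objective names, scores, and weights are invented illustrations.

policies = {
    "strict_state_review": {"friction": 0.9, "competitiveness": 0.4, "deployment_speed": 0.2},
    "federal_preemption":  {"friction": 0.1, "competitiveness": 0.8, "deployment_speed": 0.9},
}

# The value judgment lives here, presented as a tuning parameter.
weights = {"friction": -1.0, "competitiveness": 1.0, "deployment_speed": 1.0}

def score(policy: dict[str, float]) -> float:
    # "Reduce friction, maximize competitiveness, accelerate deployment"
    # becomes a single scalar; whoever set the weights chose the winner.
    return sum(weights[k] * v for k, v in policy.items())

best = max(policies, key=lambda name: score(policies[name]))
print(best)  # -> "federal_preemption", by construction of the weights
```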
This is why Sartre matters. Bad faith at scale emerges when actors treat the system’s outputs as inevitable rather than chosen. Officials become “implementers,” investors become “market realists,” citizens become “users,” and no one claims authorship of the consequences. Automation provides moral cover: the system decided. But the system only reflects the interests of those who designed it, trained it, funded it, and exempted it from local resistance.
So the orchestration isn’t in secret meetings—it’s in architectural control. Automating the federal government allows governance to be performed without accountability, enrichment to occur without explicit policy favors, and freedom to be narrowed without a single order saying so. The state doesn’t disappear; it becomes a machine whose outputs reliably benefit the same circle that built it.
What Can Be Done Legally
- Structural litigation still matters, but only when it targets configuration, not behavior. Courts are most capable of intervening when the claim concerns role unification and structural collapse, not corrupt intent or individual moderation decisions. Remedies like forced divestiture, resignation from governmental roles, or mandated separation of powers remain legally viable because they strike at the configuration that disables accountability.
- Consumer fraud and deceptive practices are the most actionable near-term lever. Claims based on false promises, deceptive visibility metrics, and bait-and-switch models are easier for courts to adjudicate, and discovery in such cases can expose internal systems that would otherwise remain opaque.
- Antitrust combined with constitutional harm strengthens the case for structural remedies. Pure antitrust is slow, but when monopoly power over indispensable communication infrastructure is tied to First Amendment injury, courts have precedent for imposing breakups or forced separation.
What Law Can No Longer Do
- Law cannot keep pace with automated governance. Courts move slower than algorithmic feedback loops, so regulation often arrives only after systems are already embedded and normalized.
- Law cannot regulate values disguised as technical optimization. When political and economic choices are framed as efficiency improvements, legal oversight struggles to surface the value judgments encoded into systems.
- Law cannot restore legitimacy once institutional trust has eroded. Successful rulings can enforce separation, but they cannot repair the social belief that governance is accountable or participatory.
What Can Be Done Institutionally
- Reintroduce friction into automated systems. Automation concentrates power by removing delay and deliberation; procedural checks, human review, jurisdictional fragmentation, and local authority slow extraction and preserve accountability (a minimal sketch of such a check follows this list).
- Target opacity where neutrality is claimed. The system depends on appearing objective. Demands for transparency, auditability, and appeal processes undermine its legitimacy without needing to prove bias or intent.
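As a sketch of what reintroduced friction can look like in software, assuming an invented decision pipeline and invented field names: a mandatory human veto point wrapped around an otherwise instant automated decision, so the system cannot act faster than someone can object.

```python
from typing import Callable

Decision = str  # "approve" | "deny" | "escalate"

def automated_decision(application: dict) -> Decision:
    # Stand-in for any optimization pipeline: instant, opaque, unappealable.
    return "approve" if application.get("score", 0.0) > 0.5 else "deny"

def with_friction(
    application: dict,
    human_review: Callable[[dict, Decision], bool],
) -> Decision:
    """Wrap the automated decision in mandatory human sign-off.

    The point is not better math; it is that the pipeline can no
    longer execute faster than anyone can object to its output.
    """
    proposed = automated_decision(application)
    if not human_review(application, proposed):
        return "escalate"  # veto forces the case back into deliberation
    return proposed

# Usage: a reviewer who refuses to rubber-stamp denials of high-stakes cases.
decision = with_friction(
    {"score": 0.3, "high_stakes": True},
    human_review=lambda app, d: not (app.get("high_stakes") and d == "deny"),
)
print(decision)  # -> "escalate"
```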
What Can Be Done Outside the Law
- Practice material refusal, not symbolic dissent. Refusing to perform extractive roles, such as optimizing for captured metrics or paying visibility tolls, degrades the efficiency that automated systems depend on.
- Build parallel legitimacy rather than parallel scale. Smaller, slower, trust-based systems preserve human judgment and accountability without attempting to replicate the reach of captured platforms.
The Hard Truth
- There is no clean victory condition. What's underway is a phase transition from deliberative governance to optimized governance; it can be constrained and slowed, but not fully reversed.
- The most effective intervention is rejecting inevitability. The system's power depends on the belief that participation is compulsory. Legally, institutionally, and personally, the remaining leverage lies in challenging that assumption wherever it appears.