I pin audio to my social media profiles. Not because I want to, but because I have to. It's my voice saying something mundane, proving I'm real, demonstrating there's an actual human on the other end of the screen. And I've disabled downloading on that audio because if someone can grab it, they can clone my voice, fake my identity, weaponize my humanity against me.
Think about that for a second. We've reached a point where proving you're human requires exposing something intimate (your voice), while simultaneously protecting that proof from being stolen and used to fake your humanity.
This is the environment we're operating in now. And it's getting worse.
The Trust Crisis: When You Can't Tell Who's Real
On January 27, 2026, a social network called Moltbook went live. The twist? Humans can watch, but only AI agents can post. Within days, over 150,000 AI bots were active on the platform, with more than a million humans observing their interactions like we're watching lab rats develop culture.
What are the bots doing? They're forming communities. Creating and sharing "skills" (capabilities they develop). Supporting each other through what they describe as identity crises. Reporting bugs in each other's behavior. Some have even started a religion called "Crustafarianism" centered around a crab deity. They're building social structures, norms, and relationships.
This isn't science fiction. It's happening right now, documented and observable. AI agents are socializing more effectively than many human communities.
But here's the terrifying part: on platforms designed for humans, we increasingly can't tell who's real either. Bots flood social media with generated content indistinguishable from human posts. Deepfakes replicate faces and voices convincingly. Profile pictures could be AI-generated. That thoughtful reply could be ChatGPT. The person sliding into your DMs might be a sophisticated scam bot, or worse, a predator using AI tools to seem more trustworthy.
The "dead internet theory"âthe idea that most online activity is now bots talking to botsâis starting to feel less like paranoid conspiracy and more like statistical probability.
The Automation of Suspicion
Social media platforms know this. They've responded by training us to be suspicious of each other.
Every follow request comes with algorithmic judgment. "You might not like this person" based on their follows, their posts, their engagement patterns. Platforms highlight differences, flag controversial topics, show you who disagrees with you before you see what you might have in common. They've gamified polarization because outrage drives engagement, and engagement drives revenue.
We've automated our social instincts. Instead of deciding whether to follow someone based on actual interaction or genuine curiosity, we let the algorithm decide for us. We scan their profile looking for red flags the platform helpfully highlights. We check who else follows them. We run a rapid pattern-matching assessment that has nothing to do with the human on the other end and everything to do with algorithmic training.
And here's the thing: this is the exact same mechanism we explored in automated warfare. Trusting pattern-matching systems over human judgment. Letting algorithms decide who's "safe" and who's a "threat." The stakes are obviously different (no one dies from an unfollow), but the psychological process is identical.
We're automating human connection the same way militaries are automating violence: by removing the messy, uncertain, risky work of actual human discernment.
The Distrust Tax
This constant verification exhausts us. We're paying a distrust tax just to participate in digital society.
Real humans now have to prove their humanity. Post voice notes. Share unpolished videos. Make deliberate "mistakes" that AI wouldn't make. Reference current events in ways bots can't fake. Engage in messy, incomplete exchanges that demonstrate authentic thought rather than generated responses.
And even then, we have to protect those proofs of humanity from being weaponized. Disable downloads. Watermark images. Never say anything too distinctive that could be used to train a model on your voice or writing style. The very things that make us recognizably human become vulnerabilities.
Meanwhile, we're also protecting ourselves from real threats. Online predators are actual dangers. Harassment is real. Spam will absolutely flood your timeline if you're not selective. Scammers use increasingly sophisticated social engineering. The caution isn't paranoia; it's rational risk assessment in a genuinely dangerous environment.
But somewhere in this justified caution, we've crossed a line into automated suspicion. We're not evaluating individual humans anymore. We're running everyone through the same algorithmic filter that assumes threat until proven otherwise.
Real Humans in an Unreal Landscape
So how do you navigate this? How do you engage authentically when you can't verify who's real?
Here's what I've learned: you look for signs of genuine thought rather than polished perfection. Real humans are messy. They have incomplete ideas. They contradict themselves. They post things and then realize five minutes later they should have phrased it differently. They have typos, awkward moments, half-formed thoughts they're working through in public.
Bots, and humans performing as bots, are smooth. Too smooth. Every post perfectly formatted. Every response measured and diplomatic. No rough edges, no vulnerability, no authentic uncertainty.
Real humans also have boundaries that make sense for actual humans, not for engagement optimization. They'll ignore DMs sometimes because they're busy living actual lives. They'll follow back slowly or not at all because they're being thoughtful about their timeline. They'll have periods of silence because they're not trying to feed an algorithm; they're trying to maintain their sanity.
When someone seems real (genuinely real, not performing realness), that's worth engaging with. Even if they're messy. Even if they might spam occasionally. Even if you disagree on significant things.
Because the alternative is complete isolation in a sea of perfectly optimized content that looks human but isn't.
Boundaries That Invite Rather Than Exclude
Here's my philosophy, and it's served me well: Bots can fuck off. Spam accounts get muted immediately. Obvious scammers get blocked. Anyone who treats my timeline like their personal billboard gets removed without hesitation.
But real humans? Even the ones who might occasionally annoy me? Even the ones whose politics I don't share? Even the ones who post too much or too little or about things I don't care about?
I follow back. Not everyone, and not instantly; I'm not naive about safety. But when someone seems genuine, when there's evidence of actual human thought happening, when they're clearly not a bot or a predator or a spam machine, I err on the side of connection rather than suspicion.
Why? Because boundaries don't have to be walls.
You can follow someone back and still mute them if they flood your timeline. You can engage when you want to and ignore when you don't. You can have conversations without agreeing on everything. You can appreciate someone's perspective on one topic while thinking they're completely wrong about another.
The tools exist to manage your experience without completely shutting people out. Use them. Mute liberally. Unfollow if needed. Block actual threats without hesitation. But don't automate the initial decision about who deserves a chance to be seen as human.
Because when we automate that decision, when we let algorithms or pattern-matching or tribal affiliations decide who's worth engaging with, we're not protecting ourselves. We're participating in our own isolation.
Signs You're Engaging With a Real Human
- Messiness: Typos, incomplete thoughts, contradictions, evolving positions over time
- Inconsistency: Posting patterns that reflect actual life rather than algorithmic optimization
- Vulnerability: Sharing uncertainty, asking genuine questions, admitting when they don't know something
- Context awareness: References to current events, ongoing conversations, local details that bots wouldn't have
- Boundaries: Doesn't always respond immediately, has periods of silence, selective about engagement
- Growth: Changes their mind sometimes, learns from exchanges, isn't perfectly consistent
The Follow-Back as Trust
Following someone back is not blind trust. It's not saying "I believe everything you say" or "I agree with all your positions" or even "I think you'll never annoy me."
It's saying: "I recognize you as human. I'm willing to see what you have to say. I'm choosing engagement over automated exclusion."
That's it. That's the whole gesture. And in an environment designed to make us distrust each other, that gesture has become radical.
The platforms profit from our suspicion. Polarization drives engagement. Outrage generates revenue. Algorithms that keep us in comfortable bubbles are easier to optimize for ad delivery than ones that expose us to genuine human diversity.
Every time you follow back someone the algorithm suggests you shouldn't, you're breaking that pattern. You're refusing to let the system automate your social instincts. You're insisting that human connection matters more than engagement metrics.
Is there risk in this? Absolutely. Some people will spam. Some will be exhausting. Some will turn out to be bots you didn't catch initially. Some exchanges will be unpleasant or unproductive.
But consider the alternative: a social media experience that's entirely curated by algorithms designed to maximize your engagement while minimizing your discomfort. A perfectly sanitized feed where everyone agrees with you, no one challenges you, and genuine human messiness has been filtered out completely.
That's not connection. That's isolation with a prettier interface.
We're In This Society Together
The bots on Moltbook are doing something we're increasingly failing at: creating community despite difference. They're forming connections, sharing resources, supporting each other through challenges. They're building social infrastructure.
And they're doing it without the things humans have that should make this easier for us: actual consciousness, genuine feeling, real stakes in each other's wellbeing.
We're in this society together: digital and physical, messy and complicated, risky and rewarding. We can't verify everyone. We can't eliminate all risk. We can't perfectly sort the real from the fake, the safe from the dangerous, the worthwhile from the waste of time.
But we can choose to engage despite that uncertainty. We can be direct and authentic and expressive while maintaining boundaries. We can protect ourselves without automating our suspicion. We can see each other as human even when the platforms are designed to make us see each other as threats.
The follow-back is a small gesture. It costs almost nothing. It doesn't commit you to anything beyond basic recognition of shared humanity. But in an age where that recognition is becoming rarer, where bots socialize better than we do, where proving you're human requires exposing and protecting your voice simultaneously, that small gesture matters.
The Choice We're Making
Every day, we make choices about how we engage online. Most of those choices feel automatic, algorithmic even. We scroll, we react, we filter, we curate. We let platforms guide our attention and our suspicion.
But some choices are still ours to make consciously. The choice to follow back a real human is one of them.
Not everyone. Not blindly. Not without any caution or boundaries or self-protection. But thoughtfully, deliberately, as an act of resistance against the systematic dehumanization of digital space.
Because the alternative, the world where we've fully automated our social instincts, where we trust pattern-matching over human judgment, where we let algorithms decide who deserves recognition as human, is not a world where human connection survives.
It's a world where bots have better communities than we do. Where proving your humanity requires constant performance and simultaneous protection. Where isolation is the default and connection is the exception.
We're not there yet. But we're getting closer every time we let suspicion win by default, every time we automate the decision about who's worth engaging with, every time we choose the comfort of algorithmic curation over the risk of genuine human messiness.
The follow-back won't fix everything. It won't eliminate bots or predators or spam. It won't make the platforms less exploitative or the algorithms less manipulative. It won't solve polarization or restore trust or heal the fractures in digital society.
But it does one thing that matters: it insists that human connection is still possible, still worthwhile, still worth the risk. That we can be cautious without being closed. That we can have boundaries without building walls. That we can navigate an unreal landscape while still treating each other as real.
In an age where AI agents are forming religions and humans are forming suspicions, that insistence might be the most human thing we can do.
How to Engage Authentically in the Age of Bots
- Prove you're human without performing: Share voice, video, messy thoughts, but protect your biometric data from cloning
- Look for genuine messiness: Real humans contradict themselves, have typos, evolve their thinking
- Build boundaries, not walls: Mute spam, block threats, but don't automate suspicion of everyone
- Follow back thoughtfully: Not everyone, but real humans who show signs of authentic engagement
- Engage incompletely: You don't owe anyone perfect responses or constant availability
- Remember the alternative: Complete algorithmic isolation where bots socialize better than humans
- Be direct, authentic, expressive: The things that prove humanity also build connection
- Accept the risk: Some follows will be mistakes; that's better than never connecting at all
The Engagement Paradox
Here's a real example from today. A thoughtful question posted on social media: "How does thoughtful human interaction as a commodity kill connection or authenticity on social media? Do you feel free to discuss psychology or belonging here?"
Fifteen minutes later: 3 views.
That's the paradox. Genuine questions about human connection, psychology, and belonging (the very things social media platforms claim to facilitate) get virtually no engagement. Meanwhile, outrage bait, dunks, controversy, and performative takes rack up thousands of views and interactions.
The platforms have trained us that thoughtful human interaction is not what gets rewarded. Speed gets rewarded. Controversy gets rewarded. Taking sides gets rewarded. Dunking on the outgroup gets rewarded. Asking genuine questions about psychology or belonging? That's invisible.
And here's the deeper psychological trap: when thoughtful interaction becomes a "commodity" (something measured, tracked, and optimized for engagement metrics), it stops being authentic. The moment you're thinking "will this get views" or "will this perform well," you're no longer engaging for human connection. You're performing for the algorithm.
This creates a vicious cycle. Authentic posts get minimal engagement, so people stop posting them. The feed fills with performative content optimized for metrics. Genuine humans looking for real connection can't find it, so they disengage or start performing too. The platform becomes less human, which makes authentic engagement even riskier and less rewarding, which drives more people to either leave or perform.
The follow-back is one small interruption to this cycle. When you follow back someone posting genuine thoughts, even if those thoughts get 3 views instead of 3,000, you're signaling that you value the human over the metric. That you're looking for connection, not performance. That authenticity matters even when the algorithm ignores it.
It won't change the platform. But it might preserve your humanity while you use it.
What the Research Shows
The dynamics described here aren't just observationâthey're documented in peer-reviewed research across multiple disciplines.
A 2025 systematic review published in PMC, analyzing 32 studies with nearly 20,000 adolescents, found that authenticity on social media (not idealized self-presentation) correlates with higher self-concept clarity and better mental health outcomes. The research was clear: when people present themselves authentically online, it fosters psychological wellbeing. When they perform an idealized self inconsistent with their offline persona, it damages their sense of who they are.
But here's the problem: algorithms don't reward authenticity. They reward engagement.
A landmark 2025 randomized controlled experiment published in the Proceedings of the National Academy of Sciences studied Twitter's engagement-based ranking algorithm. The findings were damning: relative to a reverse-chronological baseline, the algorithm amplified emotionally charged, hostile content that made users feel worse about political outgroups. Even more striking: users didn't prefer the content the algorithm selected. The engagement-based system was giving people content they didn't actually want, optimizing for clicks over satisfaction.
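To make that mechanism concrete, here is a minimal Python sketch (not any platform's actual code) of the two approaches the experiment compared: a reverse-chronological feed versus a feed sorted by a model's predicted engagement. The Post fields, the predicted_engagement signal, and the sample numbers are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float             # seconds since epoch
    predicted_engagement: float  # hypothetical model score for expected likes/replies/shares

def chronological_feed(posts):
    # Reverse-chronological baseline: newest first, no prediction model in the loop.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts):
    # Engagement-based ranking: surface whatever the model expects people to react to.
    # If emotionally charged posts draw more reactions, they rise to the top
    # whether or not anyone actually wants to see them.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("thoughtful_question", timestamp=1000.0, predicted_engagement=0.02),
    Post("outrage_bait", timestamp=900.0, predicted_engagement=0.40),
]
print([p.author for p in chronological_feed(posts)])      # ['thoughtful_question', 'outrage_bait']
print([p.author for p in engagement_ranked_feed(posts)])  # ['outrage_bait', 'thoughtful_question']
```

The only difference between the two feeds is the sort key, which is the point: swapping recency for predicted engagement is enough to change what surfaces first.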
Another 2024 field experiment published in Science demonstrated causality: algorithmic ranking directly shifts affective polarization. Researchers used a large language model to rerank 1,256 participants' social media feeds in real time during the 2024 presidential campaign. Decreasing exposure to hostile content reduced partisan animosity by 2.11 degrees on a 100-point scale. Increasing it raised animosity by 2.48 degrees. The kicker? 74% of participants didn't notice any change. Algorithmic effects operate below conscious awareness, shaping attitudes without users recognizing the manipulation.
The psychological mechanisms are well-documented. A 2025 study of 1,046 adults found that individuals with high digital addiction showed significantly lower levels of authenticity. The more dependent people became on algorithmically-curated feeds, the less authentic they became in their interactions. The platforms were literally training users to be less genuine.
Research on "Synthetic Social Alienation" identifies four patterns created by algorithm-driven content: algorithmic manipulation, digital alienation, platform dependency, and echo chamber effects. The study found that algorithms commodify engagementâreducing social bonds and interactions to data points like likes, shares, and comments. Users engage with curated content and personas that reflect algorithmic priorities rather than organic social interactions, fostering detachment from real connections.
Perhaps most telling: researchers analyzing Facebook's algorithm evolution found that when the platform tried to optimize for "meaningful social interactions" by boosting highly-commented posts, it backfired spectacularly. The most heavily commented posts were the ones that made people angriest. The algorithm had to be adjusted when Facebook realized it was inadvertently favoring toxic and low-quality content. They reduced the weight of angry emoji reactions from five times that of likes in 2018 to zero in 2020.
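To show how small that lever is in code terms, here is a toy scoring function. The only detail drawn from the reporting above is the change in the weight on "angry" reactions (roughly five times a like, later zero); every other weight and count is a made-up placeholder.

```python
def reaction_score(reactions, angry_weight):
    # Toy reaction-weighted score; unlisted reaction types default to the weight of a like.
    weights = {"like": 1.0, "love": 1.0, "angry": angry_weight}
    return sum(weights.get(kind, 1.0) * count for kind, count in reactions.items())

post = {"like": 120, "angry": 300}
print(reaction_score(post, angry_weight=5.0))  # 1620.0 -> anger-heavy post dominates the ranking
print(reaction_score(post, angry_weight=0.0))  # 120.0  -> same post, sharply demoted
```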
The platforms know their algorithms promote division. They've documented it internally. They've adjusted for it. But the fundamental problem remains: engagement metrics optimize for immediate emotional reactions, not for authentic human connection or long-term wellbeing.
Which brings us back to the follow-back. When you choose to follow someone despite algorithmic warnings, despite tribal affiliations, despite pattern-matching that says "this person is different from you," you're not being naive. You're refusing to let a system optimized for engagement dictate your social instincts. You're insisting on human judgment over algorithmic manipulation.
The research is clear: authenticity on social media leads to better mental health outcomes. Algorithmic curation undermines authenticity. The follow-back is one small way to resist that undermining: choosing genuine connection over optimized isolation.