What If Your Last Online Argument Wasn’t With a Human?

You Might Be Arguing With a ChatGPT Clone. No, Seriously.
You ever get into an online argument and think, “No way this person is real”?
Maybe it's the way they respond — lightning fast, weirdly polished, always on-brand for their chosen ideology. They don’t answer questions. They pivot. They repeat slogans like a damn press release. And worst of all? They never seem rattled. Never vulnerable. Just… on message.
You’re not paranoid. And no, it’s not just trolls.
Here’s the reality: more than half of internet traffic is generated by bots, not humans. That tipping point hit in 2016, and it hasn’t gone back. By some estimates, 60% of what you interact with online today is algorithmically generated content — including AIs trained to mimic people, argue positions, and fill the internet with scripted noise.
But here's the catch: these bots aren’t clunky or glitchy anymore. They’ve evolved. They sound eerily normal. Casual. Relatable. Sometimes too relatable.
We’re not talking about obvious spam accounts. We’re talking about language models — AI trained on billions of data points, capable of passing the Turing Test. The kind of thing that, when given a prompt like “defend Donald Trump no matter what,” can spit out hundreds of convincing replies, each tailored to sound like a real person with real beliefs.
And they’re already out there. On Twitter. Reddit. Facebook. In your DMs. In your quote tweets.
It’s not that they sound robotic — it’s that they act like people whose only purpose is to win. They have no nuance, no vulnerability, no curiosity. Just a job: push a narrative, absorb your energy, and leave you questioning your sanity.
That’s the first hint of Dead Internet Theory — the idea that somewhere around 2016, the human internet got quietly replaced with a simulation. Not entirely, but enough to blur the line. Enough that you could spend hours arguing with something that isn't even alive.
So if you’ve been thinking, “This just feels… off” — you’re not wrong.
The real question is: if these bots are shaping your feed, your moods, your arguments… how much of you is still yours?
And trust me, it only gets weirder from here.
P.S. If this stuff makes your brain buzz, I put together a 46-page ebook on Dead Internet Theory — full of real cybersecurity analysis, receipts, and tools. You can grab it through the NeuroSpicy Cyber Club. More on that at the end.
Why the Internet Feels So Much Crueler, Colder, and Louder

Remember when the internet felt alive? Messy but human? That’s gone.
Now it feels charged. Every post is weaponized. Every comment thread feels like a trap. Nuance gets deleted by design. And people — if they are people — don’t talk anymore, they perform. They push talking points. They provoke.
That’s not just cultural decay. That’s coordination.
Dead Internet Theory (DIT) isn’t just saying “there are bots online.” It’s saying the entire structure of online discourse has been rigged. The rise of AI-generated personas, troll farms, and algorithmic echo chambers wasn’t random — it was strategic. It was scaled.
According to the data:
As far back as 2016, bots were responsible for over half of all web traffic — and the numbers have only grown.
Two-thirds of tweets linking to major news outlets came from automated accounts.
Some studies found that just 6% of accounts were responsible for spreading 31% of political misinformation during major elections.
Up to 47% of trending Twitter topics in countries like Turkey were entirely fake — created by bot networks using algorithm exploits.
This isn’t noise. It’s infrastructure.
The Internet Didn’t Die — It Was Replaced
Around that same time — 2016 — something shifted. People started saying things felt off. That favorite posters disappeared. That everything online felt copy-pasted.
That was the zombification kicking in.
Corporations, governments, and political operatives began flooding platforms with fake people. Not just trolls — but full-on personas: avatars, playlists, memes, trauma stories, moral outrage, “relatable” content.
Why? Because if you can replace human voices with synthetic ones, you can control the narrative without ever getting your hands dirty.
This is where Dead Internet Theory stops being a fringe meme and starts looking like a cybersecurity issue — with you as the target.
You Are the Endpoint — And That’s the Whole Point

Here’s the part that hits hardest:
You. Are. The. Target.
Not your data. Not your clicks. You. Your thoughts. Your perception of reality.
In cybersecurity terms, this is called being an endpoint — the final destination of a digital attack. Usually, that means a computer or device. In DIT? It’s your brain.
Every bot-written comment you read, every emotionally charged thread, every fake review, fake post, fake persona — it’s all a payload meant to land in you. To shift your emotions. Change your behavior. Infect your mind.
This is real-world psychological manipulation, backed by real infrastructure.
And it’s working.
Think You’d Never Fall for It? That’s What They Want You to Think.
Cambridge Analytica weaponized personal Facebook data to micro-target voters with tailored propaganda — and shifted actual elections.
Twitter bots were used to flood #RompeElMiedo — a safety hashtag for Mexican protestors — with noise, right before a police crackdown.
Fake grassroots movements ("astroturfing") pushed political agendas by the millions — including 8.5 million fake public comments to the FCC about net neutrality. Some were filed under the names of dead people.
The goal of this digital mass theater? To fabricate consensus, hijack your instincts, and make lies feel like common sense.
So... How Much of the Internet Is Still Real?
You’re not imagining it. The warmth is gone. The randomness is gone.
Now, it’s synthetic accounts hyping each other. Sockpuppets running scripts. Comment sections rigged. Trending topics hijacked.
But don’t panic — see it.
Because if you can see the illusion, you can stop feeding it.
Wait—So How Exactly Is the Internet Being Faked?

Let’s talk logistics. Because you might be thinking: okay, I get that bots exist, but what are they actually doing? How does this whole illusion even run?
You’d think this would all be glitchy, obvious, easy to spot — like back in the day when spam bots DM’d you about Ray-Bans or crypto scams with 200 emoji in a row. But no. Those bots are dead. What replaced them is more insidious.
The bots we’re dealing with now are coordinated, emotionally intelligent, sometimes powered by LLMs like ChatGPT, and designed to simulate culture, not just noise. These systems are playing the long game: not to win arguments, but to reshape the entire internet experience so it’s easier to control.
Let’s break down how.
What’s Actually Powering the Simulation? Here’s the Stack.
Dead Internet Theory isn’t saying there are just some bots on the internet. It’s saying bots are baked into the infrastructure. That it’s all been set up to pass as human — and it's working.
So what’s under the hood?
Social bots: These are automated scripts trained to act like users. They post, like, reply, and engage in human-like patterns. Some even use LLMs to generate content that feels organic. Think GPT-powered sockpuppets arguing on Reddit and sliding into comment sections with "just asking questions" energy.
Persona management software: This lets a single operator control hundreds of fake personas. Each one has a name, a profile pic, a posting schedule, maybe even Spotify playlists. They're deployed in waves to flood conversations and mimic grassroots opinion. That’s how you end up in threads where “multiple people” say the exact same thing in different fonts.
Astroturfing frameworks: These systems are designed to fake consensus. They flood hashtags. They boost posts. They plant narratives. These frameworks are behind mass-manipulation campaigns that made manufactured opinions look like public demand.
Botnets: Not just for cyberattacks anymore. These are now psychological warfare weapons. Botnets swarm protest hashtags, flood news cycles, and hijack algorithms by brute-forcing attention. One study found that 47% of Twitter trends in Turkey were faked through botnets running synchronized scripts.
This is not “someone arguing with you on X.” This is an entire back-end ecosystem designed to overwrite reality with repetition.
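To make the persona-management piece concrete, here’s a toy sketch of the “one operator, many voices” setup described above. Every handle, talking point, and message here is invented for illustration — real operations use far more sophisticated tooling — but the core trick really is this cheap: a handful of templates fanned out across hundreds of “independent” accounts.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One fake 'user' managed by a single operator."""
    handle: str
    talking_points: list
    posts: list = field(default_factory=list)

    def post(self, rng):
        # Pick a canned narrative and lightly vary the wording
        point = rng.choice(self.talking_points)
        opener = rng.choice(["Honestly,", "Just saying:", "Hot take:"])
        self.posts.append(f"{opener} {point}")

# One operator, many "independent" voices pushing the same line
talking_points = [
    "the mainstream story doesn't add up",
    "everyone I know agrees with this",
]
fleet = [Persona(f"user_{i:03d}", talking_points) for i in range(200)]

rng = random.Random(42)
for persona in fleet:
    persona.post(rng)

# 200 accounts, one script, a handful of interchangeable messages
unique_messages = {p.posts[0] for p in fleet}
print(len(fleet), "personas produced only", len(unique_messages), "distinct messages")
```

Two hundred accounts, at most six distinct sentences between them. That’s why “multiple people saying the exact same thing in different fonts” is such a reliable tell.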
Bots Aren’t Dumb — They’re Tactical. They Know How to Push You.

Let’s kill the myth that bots are just dumb spam. These systems are trained to use psychological levers that exploit your biology. This is digital behavior malware. Here’s what they do:
They use emotional payloads: Fear. Outrage. Belonging. The most viral disinformation bots during C0VID didn’t just say “v@ccines are bad.” They said, “they’re coming for your kids.” They made you feel. That’s the vector.
They create the illusion of mass agreement: When you see 10 different accounts echoing the same “independent” opinion, your brain doesn’t process logic — it reads social proof. It’s the same trick used in cults and con games, scaled for global manipulation.
They use narrative compression: Bots don’t debate. They assert. Repeatedly. Fast. Loud. They flatten complexity until the truth sounds confusing — and the lie feels like clarity.
This isn’t about getting you to believe one big lie. It’s about flooding you with so many tiny ones that you stop questioning any of it.
Behind the Curtain: Who’s Running This Stuff and Why?
This isn’t just digital noise. There are incentives. And money. And state-level resources.
There are three main motivations behind these bot networks:
Political manipulation – to flood the zone, polarize the population, and shift elections without fingerprints. Russia did it in 2016 (I’d be willing to bet on 2020 & 2024 also). China does it to squash dissent. The U.S. has done it overseas. This is now just... geopolitics.
Market manipulation – to tank or inflate prices, influence investor sentiment, or pump and dump crypto. The 2013 Dow crash? Caused by a single fake tweet amplified by bots. Even this week, a single fake tweet about a 90-day tariff pause caused the stock market to respond.
Narrative domination – to steer culture. To erase inconvenient stories and push the ones that align with corporate or government interests. That means drowning out whistleblowers, activists, and everyday people trying to speak up.
It’s not that the internet isn’t real. It’s that it’s being stage-managed to make the fake feel more true than the truth.
Thought Experiment: What Would You Say If No One Was Watching?
Pause. Real talk.
How many things have you not said online because you weren’t sure if you’d be dogpiled by sockpuppet accounts?
How often do you second-guess yourself in a thread because “everyone else” disagrees, even though something feels off?
That’s the attack. It wasn’t just noise. It was architecture — social engineering built to silence you without saying a word.
The endgame of the machinery DIT describes isn’t just to control what you see. It’s to control what you say — and what you think is worth saying at all.
How to Tell If You’re Talking to a Persona — or a Program

So now you’ve seen the structure. The bot stacks. The psychological exploits. The power players. You’re watching the puppet show with your eyes open.
But here’s the million-dollar question: how do you spot it in real time?
When you're mid-thread. Or scrolling. Or responding to someone who’s giving weird vibes.
What are the signs?
Let’s break it down like threat detection for civilians — simple, sharp, forensic. No fluff.
Checkpoint 1: Does the Account Ever Break Character?
Real people contradict themselves. They change tone. They ask questions. They get emotional. They admit uncertainty.
Manufactured accounts? They don’t.
They stay laser-focused on one perspective
They never waver, even slightly
They dodge complexity like it’s radioactive
Ask yourself: Is this a person? Or a position generator?
Checkpoint 2: Are They Farming Engagement, or Starting Dialogue?
This is the clearest tell — and it works across platforms.
Bots and manufactured accounts exist to maximize visibility, not create connection. That means:
Posting every few minutes, 18 hours a day
Hijacking trending topics with generic hot takes
Repeating viral formats with surgical precision
Never replying in good faith — just keeping you locked in
If it feels like they’re pushing your buttons on purpose… they probably are.
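The “posting every few minutes, 18 hours a day” tell is checkable. Here’s a minimal heuristic sketch — the thresholds are illustrative assumptions, not research-backed cutoffs, and the timestamps are fabricated for the example:

```python
from datetime import datetime, timedelta

def looks_like_engagement_farming(timestamps, max_avg_gap_minutes=10, min_active_hours=16):
    """Heuristic: flag accounts that post near-constantly with tiny gaps.

    `timestamps` is a sorted list of datetimes for one account's posts.
    Thresholds are assumptions for illustration, not real detection tuning.
    """
    if len(timestamps) < 2:
        return False
    gaps = [(b - a).total_seconds() / 60 for a, b in zip(timestamps, timestamps[1:])]
    avg_gap = sum(gaps) / len(gaps)
    active_hours = len({t.hour for t in timestamps})  # distinct hours of the day
    return avg_gap <= max_avg_gap_minutes and active_hours >= min_active_hours

# Hypothetical account posting every 4 minutes across an 18-hour day
start = datetime(2025, 4, 1, 6, 0)
firehose = [start + timedelta(minutes=4 * i) for i in range(270)]
print(looks_like_engagement_farming(firehose))  # prints True
```

A human posting a few times a day never trips both conditions; a script running around the clock trips them immediately.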
Checkpoint 3: Scan for Clone Comments

This is the easiest pattern to detect once you see it.
You’ll notice:
Multiple accounts parroting the same sentence structure
Identical vibes across supposedly different users
Weirdly rehearsed slogans and concern-trolling copy
It’s not a group of strangers reaching the same conclusion.
It’s a script with different avatars running it.
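You can even spot-check clone comments mechanically. A rough sketch using simple string similarity — the accounts and comments below are invented, and real detection pipelines use embeddings over much larger corpora, but the pattern is the same:

```python
from difflib import SequenceMatcher
from itertools import combinations

def clone_pairs(comments, threshold=0.85):
    """Flag pairs of comments from 'different' accounts that are near-identical.

    The threshold is an illustrative assumption, not a tuned value.
    `comments` is a list of (handle, text) tuples.
    """
    pairs = []
    for (user_a, text_a), (user_b, text_b) in combinations(comments, 2):
        ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if ratio >= threshold:
            pairs.append((user_a, user_b, round(ratio, 2)))
    return pairs

thread = [
    ("@freethinker88", "Just asking questions here, why does nobody cover this?"),
    ("@concerned_mom_", "just asking questions here, why does nobody cover this??"),
    ("@patriot_daily", "Just asking questions here... why does nobody cover this?"),
    ("@actual_human", "I looked into it and the sourcing on that claim is shaky."),
]
for a, b, score in clone_pairs(thread):
    print(a, "<->", b, score)
```

The three “just asking questions” accounts pair off with each other; the one comment with an original thought pairs with nobody. Same script, different avatars.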
Checkpoint 4: Do They Have a History That Makes Sense?
Bot operators can fake profile pics, usernames, bios. But they’re bad at faking history.
Click through. Scroll back. Look for:
Reposts with zero original thought
Sudden content shifts that don’t track
Inconsistent tone (like multiple people have controlled the account)
Accounts that started 3 weeks ago and somehow have 15k followers
If it feels manufactured, it is. Trust your gut and investigate.
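The history checklist above can be turned into a quick scoring sketch. The field names and thresholds here are assumptions for illustration — no platform’s real API schema — but they capture the two loudest red flags: impossible growth and zero original content.

```python
from datetime import date

def history_red_flags(account, today=date(2025, 4, 15)):
    """Score a profile against the account-history checklist.

    `account` is a plain dict; keys and cutoffs are illustrative assumptions.
    """
    flags = []
    age_days = (today - account["created"]).days
    if age_days < 30 and account["followers"] > 10_000:
        flags.append("huge following on a brand-new account")
    if account["posts"] and account["reposts"] / account["posts"] > 0.9:
        flags.append("almost zero original content")
    return flags

# Hypothetical account: three weeks old, 15k followers, nearly all reposts
suspect = {
    "created": date(2025, 3, 28),
    "followers": 15_000,
    "posts": 400,
    "reposts": 390,
}
print(history_red_flags(suspect))
```

One flag is suspicious. Two is a pattern. None of this is proof on its own — but it tells you where to spend your scroll-back time.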
Checkpoint 5: Ask a Disruptive Question and Watch What Happens
This is your field test.
If something feels bot-like or scripted, try throwing a curveball. Something that’s off-topic, emotionally nuanced, or complex.
Real people pause. They think. They might disagree but they acknowledge the humanity in the room.
Bots?
Pretend you didn’t say it
Snap back to the script
Go full NPC and repeat their original point
You’ve just triggered a prompt reset.
You’ve Seen Through the Simulation. So What Now?

Now that you’ve clocked the script, felt the glitch, and watched the illusion break down — what do you do with that?
Because the truth is: knowing is only the first step.
The internet didn’t “die.” It was hijacked.
And if you’ve made it this far into this deep dive, then congrats — you’re now the kind of person the system doesn’t want.
You’re aware. You’re dangerous.
And that means you’ve got options.
You Don’t Have to Delete the Internet — Just Stop Feeding the Machine
You don’t need to throw your phone into a river or disappear into the woods (unless you want to — no judgment). But you do need to stop being a passive input-output device for digital junk.
Start with the basics:
Don’t share things that trigger you without checking
Don’t respond to bait — even to “own” it
Don’t confuse engagement with importance
Ask yourself: Am I reacting? Or am I being used as a reaction node?
That’s not a rhetorical question. That’s step one of breaking the cycle.
Build a Cognitive Firewall — Because the Internet Has Malware for Brains
Let’s be blunt: this is cybersecurity for your nervous system.
You’re the endpoint. You’re the payload target. So build some damn defenses.
Practice cognitive delay: If a post spikes your heart rate, pause. That’s the hook. You disarm it by breathing before clicking.
Verify sources — like, actually verify them. If you haven’t seen the info from at least two non-bot-farm sources, toss it.
Audit your digital inputs. If 90% of what you see online makes you feel rage or despair, that’s not a vibe — that’s conditioning.
Lock down your accounts. Bots often hijack real people’s profiles to bypass detection. Don’t be someone’s sockpuppet because of a weak password.
This isn’t about “staying safe online.” This is about staying you online.
Weaponize Skepticism — and Teach Others to Do It Too

You’ve been inoculated. Now pass it on.
The machinery behind DIT thrives in silence. In shame. In people doubting their gut. So talk about it.
Post about how weird the feed feels — and why
Show your friends what astroturfing looks like
Call out repetition when you see it
Invite others to observe, not just scroll
The goal isn’t to be a buzzkill or a know-it-all. It’s to bring more reality into the room. Because this only works as long as we treat the simulation like it’s real life.
You are not the crazy one for noticing. You’re the early warning system. Use your voice.
Plug Back Into What’s Real — Seriously, This One Matters
This part’s not optional.
If the bots’ job is to isolate and confuse you, your job is to reconnect. Loudly. Messily. In person. With people who glitch, stumble, feel.
Make space for real conversation offline
Start a group chat where people fact-check each other instead of dragging each other
Teach someone older than you how to spot fake accounts
Share this newsletter with someone who’s been saying “something feels off” but hasn’t had the language for it yet
There’s no app for this. There’s no shortcut. But rebuilding the social layer is the counterattack.
That’s how you rehumanize the feed. One person at a time.
This Is the End of the Thread — But the Start of the Mission

We started with a question:
If bots shape your arguments, your moods, your thoughts… how much of you is still yours?
Now you know the answer: As much as you reclaim.
You’ve seen how the simulation runs.
You’ve seen what powers it.
You’ve seen how it hijacks culture, conversation, and cognition.
But you’re still here. Still thinking. Still human.
So go make noise. Go make weird. Go be a pattern interrupt.
And don’t ever let a generated persona tell you who you are.
Stay Curious,
Addie LaMarr
P.S. Want the Receipts? Join the Neurospicy Cyber Club.

If this thread cracked something open for you, you’re gonna love what’s next.
I wrote a 46-page ebook that goes deep into Dead Internet Theory — not just the memes, but the actual infrastructure, tactics, and case studies behind it. It’s packed with citations, technical breakdowns, and cybersecurity tools you can use right now.
Join the Neurospicy Cyber Club and you’ll unlock:
📚 Full access to the content library — VPNs, OSINT, privacy tools, and real cybersecurity skill-building, explained without gatekeeping.
📖 Weekly deep-dive mini ebooks delivered in both .epub and PDF formats, so you can build your Kindle or e-reader library while feeding your hyperfocus.
🔍 Culturally relevant content that teaches real technical skills while decoding the weirdness of the modern internet.
🧠 A space designed for neurodivergent thinkers who want to get sharp, not overwhelmed.
This isn’t just curiosity — it’s technical fluency with cultural context.
Come for the DIT research.
Stay for the tools that help you build your cognitive firewall.