
What Would You Do If a Deepfake Ruined Your Life?

Content Warning: discussions of suicide and self-harm.

You wake up one morning and something feels off.

Maybe it's the strange looks your co-workers give you as you walk through the office doors.

Or the half-smiles that disappear as soon as you catch someone's eye.

You brush it off.

Maybe it's just one of those days.

But then you start hearing it—the whispers. Faint, but definitely about you.

At lunch, no one wants to sit with you.

Someone snickers.

Another person mutters under their breath. “She’s so gross.” Your heart sinks, but you still don’t know why.

The day drags on, and it feels like the world has shifted, but no one will tell you what’s going on.

You catch people glancing at their phones, then looking at you with disgust.

Your stomach turns.

You feel exposed, naked somehow, even though you have no idea what’s happening.

By the time you get home, the anxiety has wrapped around your chest so tightly you can barely breathe.

And then your phone buzzes. It’s your mom.

She doesn’t even let you say hello before she’s crying.

“How could you? What is this?!” Her voice is trembling, broken.

You rush to open the link she sent, and there it is.

A video. Explicit. Humiliating.

Your face, your voice—doing things you’d never do.

You freeze. Your skin turns to ice as your mind spins.

That’s not me, you think. But it looks like you. It sounds like you. And now your family has seen it.

Your mom doesn’t believe it’s fake—how could she, when it looks so real?

She hangs up, sobbing.

You sit there, staring at the screen, your hands shaking.

Hours pass, but the dread doesn’t.

The video spreads like wildfire.

It’s all over social media. People you haven’t spoken to in years are tagging you, leaving disgusting comments, asking how you could do something so vile.

Every notification is a new reminder of your nightmare.

Friends stop responding to your texts. Even your closest circle is eerily quiet.

Your phone is full of messages, but no one is calling to check on you.

And it’s not just online.

At work, things get worse.

Your boss asks you to step into his office. You can see the disappointment on his face.

He doesn’t even give you a chance to explain before saying, “We need to protect the company’s image.”

You’re put on leave “until things calm down.” But deep down, you know what that really means.

The humiliation isn’t just digital anymore—it’s everywhere. You can’t escape it.

You try explaining to people that it’s fake, but no one listens. They’ve already made up their minds.

Then, the real terror begins.

Strangers start showing up at your house.

At first, it’s just knocks on the door. You ignore them.

But then the messages come in, and your blood runs cold.

They have your address, your phone number—details you never shared publicly.

“I know where you live.”

“I’m coming for you.”

The threats pour in, and you don’t know how to stop it.

You don’t even know how they got your info.

Turns out, the person who made the video didn’t just want to ruin your reputation. They wanted to ruin your life.

They shared your personal information with strangers. And now you’re trapped in your own home, terrified to even go outside, because you never know who’s watching.

All of this… because you rejected someone. Because you said no to a date.

The isolation suffocates you.

Every time you close your eyes, you see that fake video of yourself. It plays on loop in your nightmares.

It’s out there now, and there’s no way to take it back.

You try to convince yourself that people will forget, but they don’t.

The comments, the messages, the stares—they just keep coming.

The shame eats you alive. You avoid mirrors because you can’t bear to see your own face anymore.

You feel dirty, used, even though you did nothing wrong.

Your world gets smaller, until it’s just you, alone with the weight of humiliation and despair.

And you start to wonder… how much longer you can take this.

This is what deepfakes do. They strip you of your dignity, your identity, your life. And for so many, the pain becomes unbearable.

This is the terrifying reality of deepfakes, and it's happening right now.

Deepfakes are AI-generated videos that take someone’s face, voice, or likeness and manipulate them into doing or saying things they never did.

The AI is trained on images and video of the person from multiple angles, then synthesizes a disturbingly convincing fake.

It looks real.

It feels real. But it’s nothing more than a lie—a lie that ruins lives.

And here's the thing: deepfakes disproportionately target women, especially young women.

Explicit deepfakes—fake pornographic videos—are being weaponized as tools for revenge, harassment, and extortion.

Most of the time, these videos are created out of spite, much like the story you just read.

A rejected date turns into someone’s worst nightmare. A personal grudge becomes an all-out assault on a person’s identity and dignity.

But it doesn’t stop there.

Most of these deepfakes are shared in private groups or chat rooms where, alongside the video, the creators share personal details like addresses and phone numbers, inviting others to harass and stalk their victims.

This isn’t just harmless fun. It’s premeditated. It’s calculated sexual violation, and it’s destroying lives.

Young people are being driven to despair and even suicide because of deepfake videos they didn’t even know existed until it was too late.

These folks are trapped in a nightmare, trying to prove their innocence while the world believes a fabricated reality.

And the worst part? The technology is only getting better and easier to use.

The Skyrocketing Threat of Deepfakes: What Happens If We Don’t Stop This?

What if I told you there will come a time, sooner than you think, when you won’t be able to trust any video you see? Not the news. Not a politician’s speech. Not even your closest friend on a Zoom call.

Deepfakes are advancing so quickly that soon, every piece of digital media could be a lie.

We’re hurtling toward a future where our most basic sense of reality—what we see with our own eyes—will no longer be reliable.

And when that happens, everything changes.

Imagine watching a viral video of a world leader declaring war on another country. What if that video wasn’t real, but it ignited global panic before anyone could prove otherwise?

Picture yourself attending a court hearing where video evidence—supposedly the strongest form of proof—has been manipulated to make someone look guilty.

Or what if you receive a video message from a loved one, but it turns out they never sent it?

This is the kind of digital dystopia we’re creeping toward if deepfakes continue to evolve unchecked.

Reality itself could become as malleable as a movie script.

The danger isn’t just theoretical—this is already happening. 

Take, for example, the viral deepfake of Taylor Swift “endorsing” Donald Trump, or the pornographic deepfakes targeting her.

The endorsement fake made its way across the internet before it was finally pulled down, but not before convincing thousands of people that it was real.

That endorsement video sparked outrage, confusion, and heated arguments, all based on a total fabrication. And now imagine that happening on a global scale—elections swung by fake endorsements, reputations ruined by false confessions, or personal relationships torn apart by lies you can’t disprove.

Real Lives Are Already Being Destroyed by This Technology.

We’re not just talking about celebrities or politicians being the targets of deepfakes.

The technology is seeping into the lives of everyday people, often with devastating consequences.

High school students, for example, are becoming prime targets for explicit deepfakes created by their classmates.

Imagine being a teenager and waking up to discover that a fake, sexually explicit video of you is being circulated around your school.

The shame, the humiliation—it’s unbearable. Some kids, unable to escape the torment, have taken their own lives as a result.

In countries like South Korea, this crisis has reached epidemic levels.

Deepfakes of young girls are being shared in private online chat rooms, used to blackmail and humiliate them.

And don’t think it’s staying confined to one corner of the world.

This phenomenon is spreading fast, and soon, anyone with a grudge and an internet connection will be able to create a deepfake in a matter of minutes.

We’re not talking about a distant future—this tech is getting easier, faster, and more dangerous by the day.

What’s stopping your face from being the next one misused?

You Can’t Spot a Deepfake—And That Should Terrify You.

You probably think you could spot a deepfake if you saw one, right?

Wrong.

The reality is, deepfakes have become so advanced that even the experts who study them for a living often can’t tell what’s real and what’s fake.

People assume they’ll just “know” when something’s off, but let me tell you, that instinct is becoming useless.

Fighting Back: The Real Solutions for Combating Deepfakes

The good news?

We’re not completely defenseless. But there’s no silver bullet.

Combating deepfakes requires a multi-layered approach, from using cutting-edge technology to arming individuals with the knowledge to protect themselves.

Let me break it down for you.

MIT's Deepfake Detection Tactics
First, let’s talk about the practical steps you can take today to spot a deepfake.

This advice comes straight from the experts at MIT Media Lab, and it’s designed to help you identify the subtle flaws in even the most convincing deepfake videos.

  1. Pay Attention to Faces: Most deepfakes focus on facial manipulation. Look closely at the skin—is it too smooth or too wrinkly? Deepfakes often struggle to match the natural texture of skin with other features like hair and eyes.

  2. Check the Lighting: Does the glare on the person’s glasses shift naturally when they move? Deepfakes frequently get lighting wrong—shadows and reflections might not behave the way they should.

  3. Watch the Eyes: Are they blinking too much or too little? Deepfakes can mess up a person’s natural blink rate or fail to replicate the physics of how light reflects in their eyes. (A crude automated version of this blink check is sketched after this list.)

  4. Facial Hair and Moles: Does the mustache or beard look pasted on? Deepfakes often fail to make facial hair transformations fully natural. The same goes for moles or freckles—if they seem "off," it might be a fake.
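To make the blink heuristic from item 3 concrete, here’s a deliberately crude sketch in Python. It uses OpenCV’s bundled Haar cascades to estimate how often a video’s subject has visibly open eyes. Real forensic tools use facial landmarks and trained models, so treat this as an illustration of the idea rather than a detector to rely on; the video path is a placeholder.

```python
# Crude "watch the eyes" check: what fraction of face-bearing frames
# show two open eyes? Uses only the Haar cascades that ship with
# opencv-python; no extra model files required.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def open_eye_ratio(video_path: str) -> float:
    """Fraction of frames with a face in which open eyes are also found."""
    cap = cv2.VideoCapture(video_path)
    face_frames = eye_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces[:1]:          # analyze the first face only
            face_frames += 1
            roi = gray[y:y + h // 2, x:x + w]   # eyes sit in the upper half
            if len(eye_cascade.detectMultiScale(roi, 1.3, 5)) >= 2:
                eye_frames += 1
    cap.release()
    return eye_frames / face_frames if face_frames else 0.0

# People blink roughly 15-20 times per minute, so a ratio stuck at 1.0
# over a long clip (eyes never visibly close) is one classic deepfake tell.
# print(open_eye_ratio("suspect_clip.mp4"))
```

Sanity-check any heuristic like this on footage you know is real first; a low ratio usually means the detector is failing, not that the clip is fake.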

AI-Powered Detection Tools

But spotting a deepfake with the naked eye is becoming harder by the day, which is where technology comes into play.

Companies like ID R&D are developing advanced AI tools that can detect deepfakes by analyzing the digital fingerprints they leave behind.
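What does “analyzing digital fingerprints” mean in practice? One family of techniques from academic work on GAN-generated images inspects the frequency spectrum: many generators leave unusual high-frequency energy that natural photos lack. Here’s a minimal sketch of that idea using NumPy and OpenCV; it’s an illustration of the general approach, not ID R&D’s actual method, and the file name is a placeholder.

```python
# Compute an azimuthally averaged log power spectrum of an image.
# Spectra from known-real and known-fake images can be fed to a simple
# classifier; an abnormal high-frequency tail is a (weak) fake signal.
import numpy as np
import cv2

def radial_power_spectrum(path: str, bins: int = 64) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    # 2-D FFT, shifted so low frequencies sit at the center.
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    power = np.log1p(np.abs(f) ** 2)

    # Average the power over rings of equal distance from the center,
    # collapsing the 2-D spectrum into a 1-D radial profile.
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(y - h // 2, x - w // 2)
    r_norm = r / r.max()

    edges = np.linspace(0.0, 1.0, bins + 1)
    profile = np.zeros(bins)
    for i in range(bins):
        mask = (r_norm >= edges[i]) & (r_norm < edges[i + 1])
        if mask.any():
            profile[i] = power[mask].mean()
    return profile

# Usage: compare the tail of a suspect frame's profile against profiles
# from trusted footage of the same person.
# suspect = radial_power_spectrum("suspect_frame.png")
```

Modern generators increasingly learn to suppress this artifact, which is why production tools layer many such signals rather than trusting any single one.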

But even with this tech, we’ve got a problem—95% of current facial recognition systems can’t reliably detect deepfakes.

That’s a huge vulnerability, and it means we’re still in the early stages of building effective defense mechanisms.

The Government Is Lagging Behind
Then, there’s the issue of legislation.

If you’re waiting for the government to step in and regulate deepfakes, don’t hold your breath.

Right now, we’ve got a patchwork of state and federal laws attempting to tackle the issue, but there’s no comprehensive federal regulation in place.

Deepfakes have already caused an uproar in the public sphere, but lawmakers are moving at a snail’s pace while the technology is accelerating like a rocket.

Until governments catch up, we’re relying on tech companies and individual vigilance to fight back.

This isn’t good enough.

The more we delay, the more lives will be ruined by deepfake blackmail, harassment, and fraud.

When People Fight Back

So, what happens when someone does manage to fight back against a deepfake? The power of knowledge and awareness can be life-changing. Let me share a real story.

The Story of the Deepfaked CFO in Hong Kong
One of the most mind-blowing cases involved a finance worker in Hong Kong who was tricked into transferring $25 million after criminals used a deepfake of the company’s CFO in a video call.

The video was so convincing that not a single red flag was raised during the call.

The criminals got away with millions—and to this day, they still haven’t been caught.

But here’s what changed: After that incident, the company immediately implemented rigorous identity-verification processes, setting up security questions and safe words for critical communications.

Their employees are now trained to question everything.

They’ve even started using deepfake detection software to screen calls.

This is the kind of transformation that can prevent future disasters.
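What might those safe words and verification steps look like if you wired them into software? Below is a minimal challenge-response sketch using only Python’s standard library. The shared secret and the wire-transfer scenario are hypothetical; the point is that a deepfake can clone a face and a voice on a call, but it cannot compute a valid answer to a fresh challenge without the secret.

```python
# Minimal out-of-band verification sketch (hypothetical secret and flow).
# Both parties exchange SHARED_SECRET in person ahead of time. Before
# acting on a high-stakes request, the verifier issues a random
# challenge over a separate channel and checks the response.
import hmac
import hashlib
import secrets

SHARED_SECRET = b"exchanged-in-person-never-over-chat"  # placeholder

def make_challenge() -> str:
    """Fresh random nonce; never reuse one."""
    return secrets.token_hex(16)

def respond(challenge: str) -> str:
    """Prove knowledge of the secret without revealing it."""
    return hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str) -> bool:
    """Constant-time comparison avoids timing leaks."""
    return hmac.compare_digest(respond(challenge), response)

# Usage: the "CFO" on a video call requests a transfer. The employee
# sends a challenge to the CFO's known phone number and proceeds only
# if the reply verifies.
challenge = make_challenge()
assert verify(challenge, respond(challenge))
```

The same logic works with no code at all: a spoken safe word agreed in advance, delivered over a channel you already trust, defeats a cloned face just as effectively.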

Awareness Is the First Line of Defense
When individuals or organizations become aware of deepfakes, they start asking questions, double-checking their sources, and paying attention to the details.

You can’t fight what you don’t understand—but once people get educated about the threat, they’re better equipped to stop it in its tracks.

Companies and communities that understand the danger of deepfakes are also more resilient against other forms of misinformation.

The Time to Act Is Now
This is the critical moment—deepfakes are evolving at lightning speed, and we’re at a crossroads.

Either we get educated, spread awareness, and start implementing the right defenses, or we allow this technology to destroy lives unchecked.

Fighting back isn’t just possible—it’s necessary. And it starts with you.

We can’t stop deepfakes from existing, but we can stop them from ruining lives.

“I’ll Know a Deepfake When I See It”

Believing you’ll be able to spot a deepfake is not just overconfidence—it’s delusion.

The technology behind deepfakes is advancing at breakneck speed, outpacing even the experts’ ability to consistently detect them.

The idea that you’re somehow smarter than AI algorithms designed to deceive is rooted in epistemic arrogance—the mistaken belief that you have superior knowledge or ability.

David Hume’s theory of perception reminds us that humans rely heavily on sensory data, and if that data is manipulated convincingly enough, our ability to distinguish between truth and falsehood collapses.

Deepfakes exploit this very vulnerability by creating digital realities so close to the truth that even trained professionals struggle to detect them.

“I Didn’t Make It, I’m Just Watching—What’s the Harm?”

If you think you're off the hook for watching or sharing a deepfake just because you didn't make it, you're dead wrong.

You're not innocent—you're complicit.

By consuming or sharing deepfakes, you're exploiting someone’s humiliation for your own entertainment. You didn’t hit 'record,' but your consumption drives the demand for this harmful content.

That makes you a predator.

You're not a bystander; you're an accomplice in someone’s violation, fueling a system of objectification.

Immanuel Kant taught that we should treat people as ends in themselves, never merely as means to our own gratification.

Ask yourself: are you living up to that?

By watching deepfakes, you're not just reducing a person with their own life and dignity into an object for your consumption—you’ve already dehumanized them to justify it.

You’ve stripped away their personhood, turning them into a spectacle just so you can cope with your actions and still believe you're a good person.

But here's the hard truth that you need to hear: if you watch people’s deepfakes, you're the reason people are taking their own lives.

This should scare you.

You’ve rewired your brain to excuse this behavior, to make dehumanizing someone feel normal. That’s not just objectification—it’s a total moral breakdown.

You're not in a gray area—you’re in a moral freefall.

Socrates said the unexamined life isn’t worth living, so ask yourself: if you’re willing to exploit someone’s pain for your own entertainment, who are you becoming?

Seek help before it’s too late.

It’s Time to Take Deepfakes Seriously—Here’s How

Whether you think deepfakes impact you or not, you can take action. Here’s how:

1. Stop Being a Bystander—Educate Yourself and Spread Awareness

The more people know about deepfakes, the better we can protect ourselves. Start by learning what deepfakes look like and how they’re used. Then, spread the word. Ignorance is the enemy, and the simple act of raising awareness can go a long way in building defenses against this threat.

2. Demand Accountability from Lawmakers and Tech Companies

Laws are lagging behind the technology. Push for legislation that makes deepfake creation and distribution a serious crime, especially when used for exploitation or fraud. Tech companies must also be held accountable. They should be leading the fight by detecting and removing deepfakes, not turning a blind eye. Without public pressure, the situation will only get worse.

3. Publicly Call Out Deepfake Creators and Sharers

Since legal protection against deepfakes is still patchy and unreliable, we need to turn to the court of public opinion to protect ourselves.

If someone you know is making deepfakes of others, they’re likely also making them of the people you love—your daughters, nieces, siblings, friends, maybe even you.

They’re so hooked on extreme content that they need something more disturbing just to feel a thrill. It’s not like there’s a shortage of pornography out there—deepfakes are made specifically to violate people non-consensually.

This is sexual violence. And if you stay quiet when you know someone is doing this, you’re enabling their predatory behavior.

We can't pretend this isn’t happening. If you know someone like this, warn the people around them—spread the word. This isn’t gossip; it’s how we protect ourselves when the law won’t.

Silence only allows these violations to continue.

Those who create and share deepfakes are wrecking lives. This isn’t just “creepy”—it’s revolting and harmful.

If you know someone involved, call them out. Publicly shame them.

They aren’t “content creators”; they’re predators. Social pressure is a powerful tool when the legal system lags behind.

Speak up before more lives are ruined.

4. Protect Yourself and Your Loved Ones

Stop shaming deepfake victims—believe them.

We need to shift the shame onto those who create, consume, and share these deepfakes. That’s where the real disgrace lies.

If someone you know becomes the target of a deepfake, be there for them. Don’t abandon them like so many others might.

Support them publicly, stand up for them, and defend their dignity.

And talk to your children before something like this happens, so they know they can always come to you for help in situations that seem overwhelming.

Pay special attention to your sons and their friends, and make sure they understand just how harmful and wrong this behavior is.

Deepfakes can happen to anyone. Protect yourself by setting up safe or duress words with your loved ones, so they know when something isn’t right. If a message or video feels off, take the time to verify it.

While we can’t stop the technology, we can control how we react to it. Be vigilant, support victims, and don’t let this predatory behavior go unchecked.

The Bottom Line: Take Action, or You’re Complicit

Deepfakes are more than just a tech problem—they’re a moral crisis.

Doing nothing isn’t neutral; it’s complicit.

If you consume, share, or stand by while others are targeted, you’re enabling exploitation.

The Stoics taught that virtue is action in the face of harm. Are you going to stand up or sit back and watch?

This is your line in the sand.

The world is watching. What kind of person will you be?

Stay Curious,

Addie LaMarr