Cyborg Bytes

The Secret Operating System Beneath Your OS

Note: The video version is more beginner-friendly, while the newsletter version has more technical detail. 🙂 

Why Would Anyone Design a Machine You Can’t Fully Shut Down?

You close your laptop. The screen goes dark.
You think it’s off.

But something inside stays awake —
a second computer, with its own processor, memory, and operating system.
And it never asked for your permission.

So why was modern computing built this way?

The IT Problem That Justified a Hidden Computer

In the early 2000s, corporate IT was chaos.
Machines bricked out of nowhere.
Passwords were forgotten minutes before big meetings.
And if a system wouldn’t boot, IT had to be there — physically — to fix it.

Intel’s solution was the Management Engine:
a microcontroller baked into the chipset that stayed active even when the main CPU was off.

Not sleep mode. Not hibernation.
Totally independent.
Its own firmware. Its own networking stack. Its own CPU.

It could reinstall the OS, reset a password, or wipe a drive —
remotely, even if the machine wasn’t fully powered on.

For enterprise fleet management, it was a breakthrough.
But the tradeoff?
Users lost the ability to fully shut down or fully control their own hardware.

How Other Vendors Followed Intel’s Lead

Intel wasn’t alone for long.

In 2013, AMD launched the Platform Security Processor:
a subsystem that runs before your CPU and decides whether your firmware is trusted enough to boot.
If it says no, the CPU doesn’t even turn on.

Apple built the Secure Enclave Processor,
a chip that stores your biometric data, your encryption keys, and Apple Pay credentials —
all isolated from iOS, still in control even if the OS is compromised.

ARM added TrustZone:
a “secure world” inside nearly every phone chip that runs alongside Android.
It handles PINs, cryptographic operations, and DRM.
Android doesn’t control it — it has to ask.

These weren’t bonus features.
They were full operating systems in their own right, invisible to users — but mandatory for the device to work.

Who Do These Chips Really Serve?

They were sold as security enhancements.

Isolate the most sensitive functions. Keep them safe from malware.

But these chips don’t just protect.
They control.

They decide what firmware boots.
They enforce signed code.
They lock down the encryption keys that your entire system depends on.

And unlike your OS —
they can’t be inspected.
They can’t be disabled.
And they can’t be bypassed.

Your system runs on top of something you don’t control.
And you don’t get to opt out.

So if that hidden chip makes a decision you don’t agree with —
what are you supposed to do?

What Happens When the Most Powerful Computer in Your Device Doesn’t Answer to You?

Your operating system doesn’t boot first.
It’s not even second in line.

Before anything shows up on your screen, a hidden processor has already made decisions — about trust, access, and control — without you.

And it’s not just on one device.
This invisible gatekeeping layer is now a permanent part of the hardware you rely on every day.

So… who is it really working for?

The Rise of Uninspectable Firmware

The Intel Management Engine, AMD PSP, Apple SEP — they all run before your OS.

They load first.
They decide whether your firmware is authorized.
And if it fails their cryptographic checks, your system simply doesn’t start.

This is control enforcement, not malware prevention.
If your code isn’t vendor-signed, it doesn’t run — even if you wrote it yourself.
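The gate these subsystems enforce is conceptually simple: measure the incoming image, verify a signature against a key you do not hold, and refuse to hand off on any mismatch. Here is a minimal sketch of that logic, with one loud caveat: real boot ROMs verify asymmetric signatures (RSA or ECDSA) against a public key fused into the silicon, and the HMAC below is a standard-library stand-in so the sketch actually runs.

```python
import hashlib
import hmac

# Stand-in for the vendor's signing secret. Real boot ROMs verify an
# asymmetric signature (RSA/ECDSA) against a public key fused into the
# silicon; HMAC is used here only so the sketch runs on the stdlib.
VENDOR_KEY = b"vendor-root-of-trust"

def vendor_sign(firmware: bytes) -> bytes:
    """What the vendor's build pipeline does: sign the release image."""
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def boot_gate(firmware: bytes, signature: bytes) -> str:
    """What the hidden processor does before your CPU runs a single line."""
    expected = hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return "halt: firmware not authorized"  # your code, your machine, no boot
    return "boot"

official = b"vendor BIOS v2.1"
yours    = b"my own coreboot build"

print(boot_gate(official, vendor_sign(official)))  # boot
print(boot_gate(yours, b"\x00" * 32))              # halt: firmware not authorized
```

The point survives the simplification: whoever holds VENDOR_KEY decides what counts as firmware.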

And you can’t just delete these subsystems.
They’re part of the silicon itself, with their own power domains and execution environments.
Even if your device looks off, they may still be running.

You’re Locked Out by Design

Try to flash custom firmware on a motherboard?
You’ll need a manufacturer key.

Try to bypass TrustZone on your Android phone?
You’ll soft-brick the device.

Want to audit the Secure Enclave’s behavior?
You can’t.
Apple doesn’t publish the firmware source.
No one outside the company knows how it handles your face scan, your wallet, or your keys.

In the name of security, you’ve been made a guest on your own hardware.

What Are You Actually Using?

Your laptop, your phone, your tablet — these aren’t single computers.
They’re stacks of machines, each layer more privileged than the one above it.
And the one you interface with — the one you think you own — is the least privileged of all.

At the top of the chain is you.
But beneath you are layers that don’t answer to you — and don’t ask your permission.

So if your device now contains a second brain…
what happens when that brain starts thinking for itself?

What Happens When Security Becomes a Censorship Tool?

You don’t get to choose what your device trusts.
Your vendor already decided for you.

And once they have that power — to deny your firmware, block your OS, revoke your access — they can use it for more than just security.

So what happens when “untrusted” becomes “unauthorized”?

Secure Boot Isn’t About You

Secure Boot sounds like a feature to protect you.
It isn’t.

It protects the vendor’s vision of what your machine should run.
Windows Secure Boot requires Microsoft-signed binaries.
UEFI firmware often checks for specific vendor certificates before launching your OS.
If you dual-boot, customize, or run an experimental distro — you’re the anomaly.

If you install a Linux distro whose bootloader isn’t signed by a recognized key?
Blocked.
If you flash coreboot on a ThinkPad whose Boot Guard fuses are locked?
Bricked.
If your system is tampered with by a nation-state?
No alert.
Because these protections weren’t made for you.
They were made for the supply chain.

The Subtle Shift From “Security” to “Control”

It started with malware protection.
Then came piracy enforcement.
Then came regional firmware locks.
Now your BIOS might refuse to boot if you downgrade.

iPhones will reject third-party repair parts — in the name of safety.
Some Android phones won’t boot after rooting — in the name of integrity.
And platforms like ChromeOS ship with verified boot on by default — disabling it means wiping the device.
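Verified boot, stripped to its core, is a chain of measurements: each stage carries the expected hash of the next and halts the handoff on any mismatch. A hypothetical three-stage sketch (the stage names and images here are illustrative, not any vendor's):

```python
import hashlib

# Hypothetical three-stage chain: each stage's "golden" hash is recorded at
# build time, and boot proceeds only while every measurement matches.
stages = {
    "bootloader": b"bootloader image v7",
    "kernel":     b"kernel image 6.1",
    "system":     b"system partition",
}

def measure(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

golden = {name: measure(blob) for name, blob in stages.items()}

def verified_boot(images: dict) -> list:
    booted = []
    for name in ("bootloader", "kernel", "system"):
        if measure(images[name]) != golden[name]:
            booted.append(f"refused at {name}")  # chain stops here, by design
            break
        booted.append(name)
    return booted

print(verified_boot(stages))  # all three stages boot
# A one-byte change anywhere downstream halts the handoff:
tampered = dict(stages, kernel=b"kernel image 6.1 + your patch")
print(verified_boot(tampered))  # stops at the kernel
```

Real implementations (ChromeOS verified boot, Android Verified Boot) verify signatures over partition hash trees rather than bare hashes, but the halt-on-mismatch chain has the same shape.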

This isn’t theoretical.
A Brazilian court ruling ordered Apple to allow sideloading on iPhones in the country.
Apple fought it.
Because Secure Enclave isn’t just about storing your biometrics.
It’s about enforcing Apple’s rules — from the silicon up.

And you don’t get a say.

Whose Machine Is It, Really?

If the chip inside can deny your firmware…
If the OS won’t boot without the right signature…
If your repairs are rejected by the security stack itself…

Then what are you left with?

A machine that’s secure against you.
But not for you.

And if every device you buy enforces someone else’s policy…
how long before that policy has nothing to do with you at all?

What Happens When the Hidden Computer Gets Hacked?

You were told these chips made your device safer.

But what happens when the subsystem you can’t see gets compromised?

Intel Management Engine: SA-00086

In 2017, Intel disclosed a critical vulnerability: SA-00086.

Attackers could exploit a bug in the ME firmware to gain privileged access — below the operating system, below antivirus, and undetectable by most tools.

The vulnerability had been sitting in Intel systems for years. Millions of machines were affected. There was no way to remove ME — only patch it, if your vendor even offered an update.

And since ME runs even when your computer is “off,” some researchers began asking:
What’s stopping a remote attacker from hijacking a powered-down laptop?

AMD’s “faulTPM” Vulnerability

In 2023, researchers exposed faulTPM: a vulnerability in AMD’s fTPM (firmware-based Trusted Platform Module).

This chip handled cryptographic keys — the core of secure boot, drive encryption, and identity.

But its physical protections were flawed. With brief hands-on access, attackers could use voltage fault injection to extract its keys, bypassing the very trust anchor AMD had advertised as secure.

This wasn’t a theoretical attack. It was demonstrated on consumer-grade Ryzen CPUs.

Your master key to the system? It could be pulled out of the very chip built to guard it.
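Fault injection works by glitching the chip's supply voltage at the instant a security check executes, so the code's logic never has to be wrong. That is hard to demonstrate in pure software, but the failure mode can be modeled: correct code, unfaithful execution. A toy sketch (the names and values are illustrative):

```python
# Toy model of fault injection: the gate's logic is correct, but a well-timed
# voltage glitch makes the hardware take the wrong branch anyway. The
# `glitched` flag stands in for that physical fault.
SECRET = b"\x13\x37\xca\xfe"   # stands in for an fTPM-guarded unlock secret

def release_key(attempt: bytes, glitched: bool = False) -> str:
    authorized = attempt == SECRET
    if glitched:               # fault lands as the branch condition is used
        authorized = True
    return "key released" if authorized else "denied"

print(release_key(b"\x00\x00\x00\x00"))                 # denied
print(release_key(b"\x00\x00\x00\x00", glitched=True))  # key released
```

The defense is equally physical: glitch detectors and redundant checks of the kind dedicated secure elements carry, which is exactly the hardening a firmware-only TPM lacks.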

Apple’s SEP: Jailbroken

Apple’s Secure Enclave Processor is supposed to guard the holy grail — biometric data, payment keys, and personal identity.

But in 2020, the SEPROM (Secure Enclave ROM) was dumped and analyzed.

Hackers exploited a hardware vulnerability in the Apple A10 chip to extract and reverse-engineer SEP firmware. This didn’t mean instant pwnage — SEP still encrypts data and resists tampering — but the supposedly opaque black box was now visible and dissectable.

Apple called SEP “secure by design.” Turns out, design can be broken.

TrustZone: Trust Broken

ARM TrustZone powers the “secure world” on billions of smartphones.

But Qualcomm’s implementation — QSEE (Qualcomm Secure Execution Environment) — has been riddled with bugs.

In 2017, Google’s Project Zero exposed multiple QSEE flaws that allowed attackers to escalate from Android into TrustZone and steal sensitive data like DRM keys and fingerprint templates.

These weren’t low-severity issues. They broke the boundary between “secure” and “normal” worlds — exposing how flawed vendor code turned trusted hardware into a liability.

These processors were never meant to be user-auditable. So when they're compromised?

You might never know.

And you can’t disable what you can’t see.

What happens when the most trusted part of your system becomes the least inspectable — and the most dangerous?

Can You Ever Fully Take Back Control?

You turn off your device.
Unplug it.
Remove the battery.
Even wrap it in a Faraday bag.

But that chip still runs.
And you can't remove it — not without killing the machine entirely.

So what can you do when the leash is buried in the hardware?

Why the Usual Countermeasures Don’t Cut It

You can use a VPN.
Run Linux.
Encrypt your drive.
Switch to a de-Googled phone.

All solid moves — but they operate above the black box. Your OS can’t see into these chips. Your tools don’t reach them. Privacy isn’t just about software choices anymore. It’s about hardware you can’t audit.

When a chip like Intel ME runs below the operating system, it can see everything your OS sees. And if it’s compromised — or simply designed to obey vendor interests — it can leak data or sabotage your defenses before your tools even boot.

The “Turn It Off” Illusion

Let’s say you’re being watched. You get spooked. You turn off your phone.

You’re still traceable.

That’s because most modern phones — from Apple to Android — contain more than the main CPU: a baseband processor that runs the radios, plus secure execution environments like TrustZone. Some of these can stay active even after “shutdown.”

They handle voice, GPS, Wi-Fi, mobile broadband — and they rarely ask the main OS for permission.

And worse? In some devices, sensor access remains possible even while powered down. On MacBooks, it’s Apple’s T2 chip, not you, that controls the hardware microphone cut-off — and it engages only when the lid closes, on Apple’s terms.

You're never truly “offline.” You're just not looking at the parts that stayed on.

Real-World Abuse of “Assistive” Chips

These chips weren’t designed for abuse — but that hasn’t stopped them from being used that way.

  • Amazon Alexa stores voice snippets for “training” — but employees were caught reviewing thousands of hours of recordings, sometimes sharing them internally (as reported by Bloomberg).

  • UIDH Supercookies — Verizon's “Unique Identifier Headers” — tracked mobile users at the carrier level even after they opted out.

  • Law enforcement now relies on smartphone metadata — Bluetooth pairings, power state logs, app usage — to build timelines of events. “Turned off” devices still leak device behavior that becomes evidence.

Each of these scenarios relied on subsystems running below or beside your awareness. You thought you’d opted out. You hadn’t.

Is There Any Way to Fight Back?

Yes — but it requires changing the game entirely.

🔹 Use Hardware Designed for User Control

  • Purism’s Librem laptops neutralize and disable the Intel ME; the MNT Reform sidesteps ME and PSP entirely by using an ARM SoC.

  • Raptor’s Talos II and Blackbird are open-hardware POWER9 machines with fully open firmware — no hidden blobs. They’re expensive, and most mainstream x86 software won’t run on them, but they offer a rare thing: actual autonomy.

🔹 Minimize Trust

Don’t assume a single machine can be “trusted.”
Instead:

  • Use compartmentalization (Qubes OS, VM sandboxes).

  • Isolate sensitive workflows (journalism, legal, research) to air-gapped devices.

  • Keep communication tools and file storage physically separated.

🔹 Disable What You Can

Not all chips are removable — but some can be neutered:

  • Flash a neutralized ME image with me_cleaner (on platforms that support it).

  • Kill Wi-Fi and mic access with hardware switches, or remove the modules outright.

  • Use custom ROMs with no vendor telemetry (GrapheneOS, Calyx).
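Before neutering anything, it helps to confirm what your OS can even see. On Linux, the kernel’s MEI driver exposes the Management Engine interface as /dev/mei0 when present. A sketch of that check (the helper name is mine, and absence of the node does not mean the ME is absent, only unexposed):

```python
import tempfile
from pathlib import Path

def me_interface_visible(dev_root: str = "/dev") -> bool:
    """True if the kernel exposes a Management Engine interface node (mei*)."""
    return any(Path(dev_root).glob("mei*"))

# Simulated /dev so the sketch runs anywhere; on a real Intel Linux machine
# you would simply call me_interface_visible() against the default "/dev".
with tempfile.TemporaryDirectory() as fake_dev:
    print(me_interface_visible(fake_dev))   # False: no node, interface hidden
    (Path(fake_dev) / "mei0").touch()
    print(me_interface_visible(fake_dev))   # True: node present, ME reachable
```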

🔹 Policy, Not Just Products

  • Push for transparency laws in firmware and hardware.

  • Pressure vendors to allow audits, not NDAs.

  • Support right-to-repair and right-to-flash legislation.

This isn’t just a technical fight. It’s a power struggle.

So... Can We Ever Really Be Free?

That depends on what you mean by free.

We may never return to a world where we control every transistor. But we can build islands of autonomy. We can choose tools that bleed less. We can fight for systems that answer to us — not the vendors.

But the real question is this:

Will enough people realize how deep the rabbit hole goes — before the ability to opt out disappears entirely?

If the User Isn’t the Customer, Who Is?

Your machine serves someone.
But if it’s not you — then who’s really in charge?

The Rise of Vendor-Loyal Devices

Modern hardware doesn’t just run your software.
It runs theirs — baked into the firmware, sealed in signed blobs, locked into update pipelines you can’t refuse.

A GPU that won’t boot without Nvidia’s signature.
A CPU that won’t execute code unless Intel approves the firmware.
A bootloader that refuses to load anything “unauthorized,” even if it’s yours.

And if those systems reject your OS?
Tough. You don’t have root here.

Your computer is no longer a neutral platform.
It’s a client in a vendor-controlled network, designed to prefer their software, their services, and their rules.

Surveillance and DRM Masquerading as Security

“Secure boot” sounds like protection.
But it’s just as often used to enforce digital restrictions — not stop actual threats.

Can’t dual-boot?
Can’t jailbreak?
Can’t install custom firmware?

That’s not always about malware.
That’s about controlling what you’re allowed to do with what you own.

And when vendors control what code runs, they can also control what gets recorded and what gets reported.

If your laptop can silently wake, log, and transmit…
If your phone can scan nearby devices even when “off”…
If your TV can send watch history back to the manufacturer…

Then “security” starts to look a lot like surveillance with a better PR team.

When the Platform Acts in Its Own Interests

If the user can’t audit the system, then the vendor becomes the system’s primary beneficiary.

The Smart TV that can’t disable telemetry?
The laptop that auto-updates your BIOS without asking?
The phone that installs third-party apps remotely “for your convenience”?

These aren’t bugs.
They’re design decisions — serving stakeholders who aren’t you.

You’re not the owner.
You’re the product the platform defends itself from.

How Do You Reclaim Ownership in a System Built Against You?

When “your” device can deny your input, reject your OS, and report your activity…

What does ownership mean anymore?

Stay Curious,

Addie LaMarr