Security Theatre in Large Organisations — and Why It Persists

I’ve spent enough time in large organisations to recognise a pattern: security practices that exist primarily to satisfy audits rather than reduce risk. They’re designed to demonstrate compliance, not to make systems more secure. And because they pass audits, they survive.

This isn’t about negligence or incompetence. It’s about incentives, structure, and the gap between what security looks like on paper and how systems actually behave under pressure.

The password handoff ritual

Here’s a real example I’ve seen multiple times in regulated environments.

An engineer needs a service account credential to configure a CI/CD pipeline. In theory, this should take minutes. In practice, it works like this:

The engineer submits a ticket to a third-party vendor who manages credential provisioning. The ticket waits in a queue. Days pass. Eventually, someone picks it up and creates the service account. But the credential can’t be sent via the ticket system — that would be “insecure.”

So another ticket is raised to retrieve the credential. More days pass. Eventually, the vendor schedules a video call. During the call, the vendor remotely accesses the engineer’s machine and manually types the password into Azure DevOps or whatever tool needs it.

The credential is never written down. The engineer never sees it. The process is documented, audited, and considered compliant.

And it is, by most definitions of compliance, secure.

Except it isn’t.

Why this fails

The problem isn’t that someone typed a password over a remote session. The problem is that this process treats humans as the trust boundary, when humans are actually the weakest link.

The credential now exists in the tooling, unrotated, with no expiry, accessible to anyone with access to that pipeline. The engineer who requested it probably has no idea how to rotate it, and the third-party vendor certainly won’t do it proactively. If the engineer leaves the company, the credential stays.

The video call added friction, not security. The ticket queue added delay, not control. The manual typing added theatre, not protection.

What it did do is create a paper trail. And in regulated environments, the paper trail is often more important than the actual security posture.

Why audits reward this behaviour

Audits check whether processes exist, not whether they work. They verify that approvals were logged, that tickets were filed, that someone signed off. They don’t ask whether the resulting system is more or less vulnerable than it was before.

Manual processes are easier to audit than automated ones. A human clicking “approve” is legible. A Vault policy or IAM role is not — at least not to someone checking a box on a compliance spreadsheet.

So organisations optimise for what gets measured. They add layers of human approval, not because it reduces risk, but because it produces artifacts that auditors recognise.

And because engineers learn that fighting these processes is exhausting, they stop trying. They work around them, or they tolerate the delays. Either way, the process survives.

The hidden costs

Security theatre doesn’t just waste time. It degrades actual security.

Every manual step in a credential handoff increases the number of people who touch the secret. Every ticket creates another place where something can go wrong. Every delay incentivises workarounds — shared accounts, long-lived tokens, credentials stored in Slack threads.

Engineers stop rotating credentials because the process is too painful. They stop requesting least-privilege access because getting any access at all takes weeks. They stop asking questions because they’ve learned that the answer is always “submit a ticket.”

And when something eventually breaks — when a credential leaks, when a service account is compromised — the response is usually to add another approval step, another ticket queue, another human checkpoint.

The problem compounds.

What works instead

Real security doesn’t look like this. It looks like systems where credentials are short-lived, automatically rotated, and scoped to the minimum necessary access. It looks like Vault, cloud IAM, workload identity — tools that have existed for years.
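The shape of that pattern can be sketched in a few lines. This is a minimal simulation, not any particular tool's API: the names (`EphemeralCredential`, `issue_credential`, the `"ci:deploy"` scope) are illustrative, and in a real system issuance would hit something like Vault's dynamic secrets engine or a cloud STS endpoint. The point is that expiry is a property of the credential itself, not of a process document.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import secrets

@dataclass
class EphemeralCredential:
    """A credential that expires by construction, not by policy memo."""
    scope: str       # least-privilege scope, e.g. "ci:deploy"
    ttl: timedelta   # short lifetime; re-issuance is the normal path
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_expired(self) -> bool:
        return datetime.now(timezone.utc) >= self.issued_at + self.ttl

def issue_credential(scope: str, ttl_minutes: int = 15) -> EphemeralCredential:
    """Issue a scoped, short-lived credential (simulated locally)."""
    return EphemeralCredential(scope=scope, ttl=timedelta(minutes=ttl_minutes))

cred = issue_credential("ci:deploy")
print(cred.is_expired())  # False immediately after issuance
```

Nothing here depends on an engineer remembering to rotate anything: when the TTL lapses, the credential is simply dead, and the pipeline asks for a new one.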

It looks like engineers being able to provision infrastructure without waiting for tickets, because the guardrails are in the platform, not in a ticketing system. It looks like policy-as-code, where access rules are version-controlled, auditable, and enforced by machines, not meetings.
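Policy-as-code can be reduced to an equally small sketch: rules live in version-controlled data, and a pure function, not a meeting, decides every request. The rule shape and role names below are illustrative assumptions, not any specific engine's syntax (real systems would use something like OPA's Rego or Vault's HCL policies), but the deny-by-default evaluation is the essential idea.

```python
# Access rules as data: reviewable in a pull request, enforced by a machine.
POLICIES = [
    # role          action    resource path prefix
    {"role": "ci-pipeline", "action": "read",  "path": "secret/ci/"},
    {"role": "ci-pipeline", "action": "write", "path": "artifacts/"},
    {"role": "engineer",    "action": "read",  "path": "secret/dev/"},
]

def is_allowed(role: str, action: str, path: str) -> bool:
    """Deny by default; allow only when an explicit rule matches."""
    return any(
        p["role"] == role
        and p["action"] == action
        and path.startswith(p["path"])
        for p in POLICIES
    )

print(is_allowed("ci-pipeline", "read", "secret/ci/deploy-key"))  # True
print(is_allowed("engineer", "write", "secret/ci/deploy-key"))    # False
```

Because the rules are plain data under version control, every change has an author, a diff, and a review, which is a richer audit trail than a ticket queue ever produces.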

It looks like security being treated as infrastructure, not as a bureaucratic layer.

The tools exist. The patterns exist. What’s missing is trust — trust that engineers can operate responsibly within well-defined boundaries, trust that automation can enforce policy better than humans, trust that the most secure systems are often the least visible.

Why this is hard to fix

Changing this requires more than better tools. It requires changing incentives.

Security teams in large organisations are often measured by their ability to demonstrate compliance, not by their ability to reduce actual risk. Engineering teams are measured by delivery speed, not by their security posture. And third-party vendors are measured by SLA adherence, not by whether the systems they manage are genuinely more secure.

No one is incentivised to ask whether the video call actually helped, or whether the ticket queue made the system safer. Everyone is incentivised to make sure the audit passes.

And because regulated environments face real consequences for non-compliance, there’s little appetite for experimentation. If a manual process has been blessed by Legal and has survived three audits, changing it feels risky — even if the current process is objectively worse.

So it persists.

A different way to think about it

Security should reduce risk, not move it around. If a process requires engineers to wait weeks for a credential that will never be rotated, it hasn’t reduced risk. It’s just redistributed it in a way that’s harder to see.

If a system requires heroics to operate securely, it isn’t secure. It’s fragile. And fragility is the opposite of resilience.

The most secure systems are often the least visible. They don’t have approval workflows because the policies are already encoded. They don’t have manual credential handoffs because the credentials are ephemeral. They don’t require tickets because the boundaries are clear and the guardrails are automatic.

That’s not idealism. It’s how systems work when security is treated as infrastructure.
