How We Learned “No Secrets” Didn’t Mean What We Thought

We thought we were doing security right. The policy was clear: no secrets in plain text anywhere. Everything was encrypted and injected securely. But once we read the code closely, it was obvious—we hadn’t removed secrets, we had just distributed them.
What looks secure on paper doesn’t always survive contact with real systems.

There was a point where we were convinced we were doing security right. We had the policy. It was clear, written, approved, and repeated often:

No secrets in plain text. Anywhere.

And to be fair, people followed it.

Connection strings weren’t sitting in config anymore. Credentials weren’t just there for anyone to read. Everything important was encrypted, keys were handled “securely,” and pipelines injected what they needed. From a distance, it looked clean. Compliant. Even mature.

Then you start reading the code. Not at a high level, not through a diagram—actually reading it. And you realize something doesn’t add up.

The connection string is encrypted, which sounds good, but the application still needs it in plain text at runtime. That means it has to decrypt it somewhere. So you start asking the obvious question: where does the key come from?

You keep digging. Sometimes it’s in the code. Sometimes it’s passed through configuration. Sometimes it’s pulled from somewhere else at runtime—shared folders included. Different implementations, same pattern. The application can decrypt the secret, which means the secret is still there. Just rearranged.
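The pattern can be sketched in a few lines. This is a toy illustration, not our actual code: the XOR "cipher" stands in for whatever real encryption was used, and the key location (`APP_DECRYPT_KEY`, a shared folder) is a hypothetical stand-in for the places we actually found keys. The point is that wherever the key lives, the application can reproduce the plain-text secret.

```python
import base64
import os

# Toy reversible "cipher" standing in for the real encryption.
# The algorithm doesn't matter; the pattern does.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The key is "not in the code" -- it comes from config, an env var,
# or a shared folder. But the app still has to be able to read it.
key = os.environ.get("APP_DECRYPT_KEY", "k3y-from-shared-folder").encode()

# The "encrypted" connection string that sits in config.
ciphertext = base64.b64encode(xor(b"Server=db01;User=app;Password=hunter2", key))

# At runtime the application reverses the process, so the plain-text
# secret exists wherever the application runs.
plaintext = xor(base64.b64decode(ciphertext), key).decode()
print(plaintext)  # the secret is still there, just rearranged
```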

At some point, you stop calling it secure and start calling it what it is: we didn’t remove secrets, we distributed them.

The uncomfortable part is that none of this was accidental. Everyone thought they were doing the right thing. They were following the rule, they just weren’t solving the problem. And it went on long enough that it became normal, long enough that nobody questioned it.

What broke it wasn’t a new policy—we already had the policy. What broke it was when the environment changed around it. Repositories became easier to access, code reviews became more transparent, and people outside the immediate team could actually see what was going on. Once you can see it, you can’t unsee it.

That’s when the conversations started getting uncomfortable—not the usual “are we compliant” conversations, but the real ones.

Security and compliance had the rules. Developers had the implementation. Server admins had to run the systems and be accountable for them. For the first time, those three perspectives were forced into the same conversation.

Up until then, security mostly lived in documents—PDFs, guidelines, requirements, endless lines of do’s and don’ts. They were correct, but they were abstract. The moment those rules had to survive contact with actual code, things got messy.

To their credit, security and compliance didn’t double down. They stepped in—not as auditors, but as participants. They got into the code, asked questions that weren’t checklist-driven, and started understanding how things were actually wired together.

At the same time, one of the developers—someone who understood both sides—started walking through the implementation with them. Not defensively, but patiently, explaining what was happening, why it was happening, and where the real gaps were.

And then there were the server admins—the ones who usually only get pulled in when something breaks. They’re the ones who actually enforce compliance in practice, whether anyone admits it or not. Instead of just saying “this isn’t allowed,” they leaned in. They listened, they challenged, and they collaborated.

That’s when it finally clicked—not as a policy update, but as a shared realization: if your application can decrypt the secret, then the secret still exists. And if the secret still exists, the problem isn’t solved—it’s just hidden better.

That was the moment the illusion broke.

Now, to be fair, there’s nothing wrong with hiding secrets. It’s a good practice. But obscurity is not security. And once you see that clearly, the next question becomes obvious: what’s better than hiding secrets? Not having to deal with them at all.


On-prem, that meant shifting to identity. Not stored credentials, not encrypted blobs—actual identity. Things like Group Managed Service Accounts. No passwords to manage, no secrets to inject, just services running as real identities in the environment.
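The difference shows up directly in configuration. These connection strings are illustrative (server and account names are made up), but they capture the shift: with a gMSA, Windows authentication carries the service's identity, so there is nothing to store, encrypt, or inject.

```ini
; Before: an "encrypted" credential the app must still decrypt at runtime
ConnectionString = "Server=db01;Database=app;User Id=svc_app;Password=ENC(9f3a...)"

; After: the service runs as a gMSA, authentication rides on the identity
ConnectionString = "Server=db01;Database=app;Integrated Security=true"
```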

In the cloud, the same idea applies. Managed identity, federated identity—letting the platform handle what we were previously trying to simulate manually.

Hybrid, as expected, was messier. Sometimes you still needed a secret, but the rule became strict: if it can use a managed identity, it must use a managed identity. Otherwise, it comes from a secure store at pipeline time, and it never lives in code, never gets checked in, and never becomes “just part of the app.”
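For the cases that still needed a secret, the pipeline is the only place it appears. A hypothetical GitHub Actions-style step sketches the rule: the value lives in the platform's secure store and is injected at deploy time, never in the repository.

```yaml
# Illustrative deploy step: the secret exists only in the pipeline's
# secret store and is resolved by the platform at run time.
- name: Deploy
  run: ./deploy.sh
  env:
    DB_PASSWORD: ${{ secrets.DB_PASSWORD }}  # never checked in, never in code
```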

This wasn’t a clean transition. It broke things, forced redesigns, and made some solutions we liked no longer viable. Sometimes it even forced framework upgrades. A lot of people called it a problem. I called it an added benefit.

More importantly, it was the first time the implementation actually matched the intent of the policy.


What’s interesting is that none of this came from “doing DevSecOps right.” It came from stopping the illusion that we already were.

A lot of DevSecOps advice assumes you can just integrate security into your pipeline and move on. That only works if your constraints allow it. In government, they usually don’t.

Security isn’t something you add at the end. It’s a boundary you operate within. And if you treat it like a gate, you’ll keep running into it over and over again.

The shift happens when you stop asking, “how do we secure what we built?” and start asking, “what can we build that is secure by design under these constraints?” That’s a very different question, and it leads to very different systems.

In our case, it also led to a different dynamic. Security stopped being just the team that says no, developers stopped trying to work around it, and server admins stopped being only the enforcement layer.

For a moment, everyone was solving the same problem—from different perspectives, but aligned.

That’s when things started moving again. Not fast, but forward.

And in government, that’s the real goal. Not moving fast from the start, but learning how to move at all—without pretending you already are.
