
Security isn’t a checklist you add at the end — it’s a set of architectural constraints. This month is about threat modeling that fits real teams, and defaults that prevent whole classes of incidents.
Axel Domingues
Most teams don’t “ignore security”.
They postpone it.
And then reality shows up in a very predictable way:
Security failure is rarely one dramatic bug.
It’s usually a chain of reasonable decisions that never had a chance to be safe.
This post is about breaking that chain early — with two habits that scale: lightweight threat modeling, and secure-by-default design.
The goal is practical: reduce blast radius, remove whole classes of bugs, and make “doing the safe thing” the default path.
The goal this month: turn security into architecture constraints, not a late-stage audit.
The mindset shift: security is mostly about boundaries + defaults + blast radius, not hero debugging.
What you should walk away with: a threat model template, hardening checklists, and deploy-time guardrails.
What “done” looks like: safe defaults + least privilege + auditable changes + fast rollback.
If you remember one concept, make it this:
A system is secure when every trust boundary is explicit and defended.
A trust boundary is any place where your assumptions change: where the Internet meets your edge, where one service calls your database, where someone else’s input enters your code.
Most incidents are boundary failures.
So threat modeling is just… mapping boundaries and asking:
“what if the assumption is wrong?”
Threat modeling has a reputation for being heavyweight.
It doesn’t have to be.
For most products, a small, repeatable exercise beats a perfect model you never run.
Start by naming the assets that would hurt to lose: user PII, payment-related data, credentials and keys, identity and admin systems, and money movement.
If you protect these five, you’re already ahead of most systems.
This is the cadence I’ve seen work: small, consistent, and tied to real changes.
Create a diagram with 6 boxes max.
Mark where trust changes (Internet → edge, service → DB, etc.).
For the feature you’re shipping, pick one primary asset.
If it touches more than one, you probably need stronger controls.
Typical entry points: public APIs, webhooks, admin surfaces, file uploads, and background jobs that consume queues.
For each entry point, ask: what happens if the assumption behind it is wrong?
Use a consistent menu of controls (authn, authz, input validation, rate limits, auditing) so you don’t invent controls every time.
Write down what remains: the residual risk you’re consciously accepting, and why.
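To keep the exercise lightweight, the written-down output can be as small as one record per feature. Here’s a minimal sketch in TypeScript; every field name is an illustration of the idea, not a standard:

```ts
// A minimal, illustrative shape for one threat-model entry.
// Field names are assumptions; adapt them to your tracker or wiki.
interface ThreatModelEntry {
  feature: string;        // what you're shipping
  primaryAsset: string;   // the one asset this change touches most
  trustBoundary: string;  // where assumptions change
  entryPoint: string;     // how an attacker reaches it
  assumption: string;     // what you currently rely on
  ifWrong: string;        // impact if that assumption fails
  controls: string[];     // picked from your standard menu
  residualRisk: string;   // what you're consciously accepting
}

// Hypothetical example entry for a single feature.
const entry: ThreatModelEntry = {
  feature: "CSV export of invoices",
  primaryAsset: "customer billing data",
  trustBoundary: "internet → API edge",
  entryPoint: "GET /exports",
  assumption: "the requested account always belongs to the caller",
  ifWrong: "any authenticated user can export another account's invoices",
  controls: ["derive account from session", "authz check in the service", "audit log"],
  residualRisk: "export links stay valid for a short window; accepted",
};
```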
Threat modeling that never ships is just theater.
When it’s easy to run, it happens. When it’s “a workshop”, it won’t.
“Secure by default” means:
Deny by default
New endpoints, new permissions, new buckets: default to not accessible.
Least privilege
Every identity (human/service) gets only what it needs — and nothing more.
Fail closed
On error, prefer “request denied / action blocked” over “best effort proceed”.
Short blast radius
Partition data and permissions so one compromise doesn’t become full compromise.
These patterns show up everywhere — in code, infrastructure, and operations.
Let’s make them concrete.
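Here’s a sketch of “deny by default” and “fail closed” at the application layer; the route names and the policy shape are assumptions for illustration, not a prescription:

```ts
// Hypothetical route-level access policy: anything not explicitly listed is denied.
type Role = "anonymous" | "user" | "admin";

const allowedRoles: Record<string, Role[]> = {
  "GET /health": ["anonymous", "user", "admin"],
  "GET /invoices": ["user", "admin"],
  "POST /admin/refunds": ["admin"],
  // A new endpoint that isn't added here is simply unreachable: deny by default.
};

function isAllowed(route: string, role: Role): boolean {
  try {
    const roles = allowedRoles[route];
    if (!roles) return false;        // unknown route: no allow rule, no access
    return roles.includes(role);
  } catch {
    return false;                    // fail closed: if policy evaluation errors, block
  }
}
```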
Most “big” security failures are authorization failures.
Why? Authentication is visible (“login worked”), while authorization is subtle (“should this user access that resource?”).
If a request says accountId=123, treat that as a claim, not truth: derive the real account from the authenticated session or token, never from the accountId in the request (❌).
When in doubt, centralize the rules logically (shared library / policy module), but enforce them physically at every service boundary.
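A minimal sketch of that idea: one shared policy function, called at every boundary, with the account always taken from the authenticated subject. All names here are illustrative:

```ts
// Hypothetical shared policy module: one place answers "may this subject do this?"
interface Subject { userId: string; accountId: string; roles: string[] }
interface InvoiceRef { accountId: string }

export function canReadInvoice(subject: Subject, invoice: InvoiceRef): boolean {
  // The accountId comes from the authenticated subject, never from the request body.
  return subject.accountId === invoice.accountId || subject.roles.includes("support");
}

// Each service boundary calls the same rule before doing any work, e.g.:
// if (!canReadInvoice(subjectFromToken(req), invoice)) return deny(403);
```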
You don’t need a 200-page policy.
You need a handful of classes and consistent handling.
Public
Marketing pages, docs, public datasets. No secrets. No PII.
Internal
Normal business data. Protect with auth, backups, and controlled access.
Sensitive
PII, payment-related data, credentials, keys. Strongest controls + auditing.
Critical
Admin powers, key material, identity systems, money movement. Treat as “if compromised, company-level incident”.
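One lightweight way to make the classes actionable is to encode the handling rules next to them, so code and reviews can reference a single source of truth. The rule names in this sketch are assumptions; swap in your own controls:

```ts
// Illustrative mapping from data class to required handling.
type DataClass = "public" | "internal" | "sensitive" | "critical";

interface HandlingRules {
  encryptAtRest: boolean;
  auditAccess: boolean;
  allowInLogs: boolean;
  approvalRequired: boolean; // e.g. for exports or bulk reads
}

const handling: Record<DataClass, HandlingRules> = {
  public:    { encryptAtRest: false, auditAccess: false, allowInLogs: true,  approvalRequired: false },
  internal:  { encryptAtRest: true,  auditAccess: false, allowInLogs: true,  approvalRequired: false },
  sensitive: { encryptAtRest: true,  auditAccess: true,  allowInLogs: false, approvalRequired: false },
  critical:  { encryptAtRest: true,  auditAccess: true,  allowInLogs: false, approvalRequired: true  },
};
```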
Encryption at rest reduces blast radius (disk stolen, snapshot leaked), but authz + auditing are what stop “legitimate” access from becoming abuse.
A surprising number of vulnerabilities come from the same root:
“We interpreted input in a way the attacker controlled.”
Builder-friendly rules: validate at the boundary, prefer strict schemas and allowlists over ad-hoc cleanup, and never build queries or commands by concatenating untrusted strings.
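Here’s a hand-rolled sketch of what “validate at the boundary” can look like; the request shape and rules are assumptions for illustration, and in practice you’d likely reach for a schema library instead:

```ts
// Parse untrusted input into a narrow, typed shape at the boundary,
// and reject anything that doesn't match, instead of "cleaning it up" later.
interface TransferRequest { toAccountId: string; amountCents: number }

function parseTransferRequest(raw: unknown): TransferRequest | null {
  if (typeof raw !== "object" || raw === null) return null;
  const body = raw as Record<string, unknown>;

  const toAccountId = body.toAccountId;
  const amountCents = body.amountCents;

  // Allowlist the exact shapes you expect; everything else is rejected.
  if (typeof toAccountId !== "string" || !/^[a-z0-9-]{1,64}$/.test(toAccountId)) return null;
  if (typeof amountCents !== "number" || !Number.isInteger(amountCents) || amountCents <= 0) return null;

  // Fail closed: fields you didn't explicitly expect are simply dropped.
  return { toAccountId, amountCents };
}
```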
Modern apps are dependency graphs wearing a logo.
Supply chain failures are common because they exploit defaults: implicit trust in whatever gets pulled in at install time.
You won’t remove that risk entirely, but “we have no idea what versions we run” is not survivable at scale.
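As one small guardrail, here’s a hedged sketch of a pre-merge check that at least forces the team to know what it runs. The file names and the strictness (rejecting floating ranges outright) are assumptions, not a recommendation for every stack:

```ts
// Fails the build if there is no lockfile, or if direct dependencies use floating ranges.
import { existsSync, readFileSync } from "node:fs";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const deps: Record<string, string> = { ...pkg.dependencies, ...pkg.devDependencies };

// Flag versions like "^1.2.3", "~1.2.3", "*" or "latest".
const floating = Object.entries(deps).filter(([, version]) => /[\^~*]|latest/.test(version));
const hasLockfile =
  existsSync("package-lock.json") || existsSync("pnpm-lock.yaml") || existsSync("yarn.lock");

if (!hasLockfile || floating.length > 0) {
  console.error("Dependency check failed:", { hasLockfile, floating });
  process.exit(1); // fail closed: the merge stops until versions are pinned and locked
}
```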
Security isn’t just “more scanners” — it’s making unsafe changes harder to ship.
A few deploy-time controls have outsized impact.
A guardrail doesn’t stop you from driving.
It stops small mistakes from becoming fatal.
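As an illustration, here’s a sketch of one such control: a pre-deploy script that refuses wildcard grants in a hypothetical policy file, so widening access is always an explicit, reviewable change. The file name and policy shape are assumptions:

```ts
// Hypothetical pre-deploy check: block policies that grant wildcard actions or resources.
import { readFileSync } from "node:fs";

interface PolicyStatement { action: string; resource: string }

const policy: PolicyStatement[] = JSON.parse(readFileSync("deploy/policy.json", "utf8"));

const tooBroad = policy.filter((s) => s.action === "*" || s.resource === "*");
if (tooBroad.length > 0) {
  console.error("Refusing to deploy: wildcard grants found", tooBroad);
  process.exit(1); // fail closed: a human has to widen scope deliberately, in review
}
```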
Security incidents are debugging problems with worse consequences.
At minimum, make sure you have an audit trail for logins, authorization decisions, and admin actions, with enough context to reconstruct who did what, and when.
If it changes money, identity, or access… it must be observable.
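A minimal sketch of what that can look like as a structured audit event; the field names and the console sink are assumptions, and the point is the shape plus the “actor comes from the session, not the request” rule:

```ts
// Illustrative audit event for anything that changes money, identity, or access.
interface AuditEvent {
  at: string;                      // ISO timestamp
  actor: string;                   // authenticated subject, never a value from the request body
  action: string;                  // e.g. "role.granted", "payout.created"
  target: string;                  // the resource affected
  decision: "allowed" | "denied";
  reason?: string;
}

function audit(event: AuditEvent): void {
  // One structured line per decision; ship it somewhere append-only in real life.
  console.log(JSON.stringify(event));
}

audit({
  at: new Date().toISOString(),
  actor: "user_42",
  action: "role.granted",
  target: "user_77",
  decision: "allowed",
  reason: "admin console change",
});
```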
The checklists in this post are meant to be copy/paste-able into tickets.
OWASP Top 10
A practical map of the most common web app risk classes — great for building shared vocabulary with your team.
OWASP ASVS (Application Security Verification Standard)
A “what good looks like” standard you can use as a maturity ladder, without turning security into pure compliance.
You don’t need a dedicated security team for this; you need repeatable habits and good defaults.
A security team adds scale and depth, but most wins come from explicit boundaries, safe defaults, and least privilege.
Threat modeling only slows you down if you treat it as a workshop.
If it’s a 45-minute ritual tied to high-risk changes, it’s faster than incident response — and far cheaper.
The most common failure mode is authorization drift.
Teams get authentication “working”, then features ship quickly, and authz logic becomes inconsistent across routes and services.
Centralize the logic (policy module), enforce at boundaries, and log decisions.
Start with the highest blast radius items: admin access, credentials and key material, identity systems, and anything that moves money.
Security is mostly about constraints.
The next month is about the other place constraints matter: data and workflows across boundaries.
In May: Distributed Data.
Because once you split systems apart, correctness becomes a design choice — and security depends on correctness more than people admit.
Distributed Data: Transactions, Outbox, Sagas, and “Eventually Correct”
Once your system crosses a process boundary, “a transaction” stops being a feature and becomes a strategy. This post is a practical mental model for distributed data: what to keep strongly consistent, what to make eventually consistent, and how to do it safely with outbox + sagas.