Apr 24, 2022 - 16 MIN READ
Security for Builders: Threat Modeling and Secure-by-Default Systems

Security isn’t a checklist you add at the end — it’s a set of architectural constraints. This month is about threat modeling that fits real teams, and defaults that prevent whole classes of incidents.

Axel Domingues

Most teams don’t “ignore security”.

They postpone it.

And then reality shows up in a very predictable way:

  • a leaked API key
  • a public bucket
  • a dependency with a CVE you didn’t know you shipped
  • a prod admin endpoint that was “temporary”
  • an auth bug that turns “one user” into “everyone’s data”

Security failure is rarely one dramatic bug.

It’s usually a chain of reasonable decisions that never had a chance to be safe.

This post is about breaking that chain early — with two habits that scale:

  1. Threat modeling you can actually run (without a security team)
  2. Secure-by-default architecture (so mistakes don’t become incidents)

This is an engineering post, not a compliance post.

The goal is practical: reduce blast radius, remove whole classes of bugs, and make “doing the safe thing” the default path.

The goal this month

Turn security into architecture constraints, not a late-stage audit.

The mindset shift

Security is mostly about boundaries + defaults + blast radius, not hero debugging.

What you should walk away with

A threat model template, hardening checklists, and deploy-time guardrails.

What “done” looks like

Safe defaults + least privilege + auditable changes + fast rollback.


The Only Security Model That Matters: Trust Boundaries

If you remember one concept, make it this:

A system is secure when every trust boundary is explicit and defended.

A trust boundary is where you change assumptions:

  • Internet → your edge
  • edge → backend
  • backend → database
  • service A → service B
  • your app → third-party API
  • user input → business logic
  • CI → production

Most incidents are boundary failures:

  • “we assumed this request was internal”
  • “we assumed this token was valid forever”
  • “we assumed this bucket was private”
  • “we assumed retries wouldn’t duplicate side effects”

So threat modeling is just… mapping boundaries and asking:
“what if the assumption is wrong?”


Threat Modeling That Fits a Real Team

Threat modeling has a reputation for being heavyweight.

It doesn’t have to be.

For most products, a small, repeatable exercise beats a perfect model you never run.

A pragmatic scope: “the 5 assets”

Start by naming the assets that would hurt to lose:

  • Identity (sessions, tokens, MFA, password resets)
  • Data (PII, invoices, policy docs, trade secrets)
  • Money / side effects (payments, refunds, credits, emails, webhooks)
  • Admin powers (support tools, impersonation, feature flags, backdoors)
  • Availability (DoS, cost blow-ups, queue overload, dependency outages)

If you protect these five, you’re already ahead of most systems.


A 45-Minute Threat Model You Can Run Every Sprint

This is the cadence I’ve seen work: small, consistent, and tied to real changes.

Step 1 — Draw the “napkin diagram”

Create a diagram with 6 boxes max:

  • clients (browser / mobile)
  • edge (CDN/WAF/API gateway)
  • API/backend
  • async workers / queues
  • data stores
  • third parties

Mark where trust changes (Internet → edge, service → DB, etc.).

Step 2 — Name the asset touched by this change

For the feature you’re shipping, pick one primary asset:

  • identity, data, money/side effects, admin, availability

If it touches more than one, you probably need stronger controls.

Step 3 — List entry points (the real ones)

Typical entry points:

  • public API routes
  • webhooks
  • admin routes
  • background jobs
  • queue messages
  • file uploads
  • “internal” endpoints
  • CI/CD deploy hooks

Step 4 — Ask four questions (fast)

For each entry point, ask:

  • Can an attacker reach it? (public, internal, “accidentally public”)
  • Can they do it as someone else? (authn/authz issues)
  • Can they do it repeatedly? (retries, brute force, DoS, cost)
  • Can they change state? (money, permissions, data deletion)

Step 5 — Pick mitigations from a small menu

Use a consistent menu so you don’t invent controls every time:

  • authentication + session hardening
  • authorization (deny-by-default)
  • input validation + allowlists
  • rate limiting + quotas
  • idempotency for side effects
  • audit logs for admin actions
  • secrets management + rotation
  • dependency isolation + timeouts
  • safe defaults (private by default)
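Most of these mitigations are a few dozen lines each. As one example from the menu, here's a minimal token-bucket rate limiter sketch — the class name and parameters are illustrative, and a real deployment would share state across instances (e.g. in Redis) rather than keep it in memory:

```python
import time

class TokenBucket:
    """Minimal token bucket: each request spends one token; refuse once the budget is gone."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # fail closed: over budget means "no"

bucket = TokenBucket(capacity=3, refill_per_sec=0.5)
results = [bucket.allow() for _ in range(5)]  # first 3 pass, the rest are throttled
```

The same shape covers brute force, cost attacks, and accidental retry storms — one control, three threat classes.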

Step 6 — Record residual risk and move on

Write down what remains:

  • what you accept now
  • what you defer (with a date or milestone)

Threat modeling that never ships is just theater.

Make this a template in your repo.

When it’s easy to run, it happens. When it’s “a workshop”, it won’t.
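A sketch of what that template could look like, with the six steps as fields — all names here are illustrative, and a real repo would likely keep this as YAML or markdown next to the PR:

```python
# Hypothetical per-change threat model record, mirroring the six steps above.
threat_model = {
    "change": "export invoices to CSV",
    "primary_asset": "data",  # identity | data | money | admin | availability
    "entry_points": ["GET /api/invoices/export", "background export job"],
    "four_questions": {
        "reachable": "authenticated users only",
        "as_someone_else": "must check invoice ownership, not just login",
        "repeatable": "large exports are expensive -> rate limit",
        "changes_state": "read-only",
    },
    "mitigations": ["authorization (deny-by-default)", "rate limiting + quotas"],
    "residual_risk": {
        "accepted": "export URLs live for 15 minutes",
        "deferred": "audit log for exports (next sprint)",
    },
}
```

Ten fields, filled in 45 minutes, reviewed with the PR — that's the whole exercise.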


Secure-by-Default: The Architecture Patterns That Prevent Incidents

“Secure by default” means:

  • the safe behavior happens without remembering
  • mistakes fail closed
  • “unknown” does not become “allowed”

Deny by default

New endpoints, new permissions, new buckets: default to not accessible.

Least privilege

Every identity (human/service) gets only what it needs — and nothing more.

Fail closed

On error, prefer “request denied / action blocked” over “best effort proceed”.

Short blast radius

Partition data and permissions so one compromise doesn’t become full compromise.
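"Fail closed" in particular is easy to state and easy to get wrong in code. A minimal sketch, assuming a hypothetical `load_permissions` lookup (a stand-in for your real permission store):

```python
def load_permissions(user_id: str) -> set:
    # Hypothetical lookup; raises for unknown users or an unreachable store.
    store = {"alice": {"invoices:read"}}
    return store[user_id]  # KeyError for unknown users

def is_allowed(user_id: str, permission: str) -> bool:
    try:
        return permission in load_permissions(user_id)
    except Exception:
        # Any failure in the check denies the action.
        # "Unknown" never becomes "allowed"; no best-effort proceed.
        return False

assert is_allowed("alice", "invoices:read") is True
assert is_allowed("mallory", "invoices:read") is False  # fails closed
```

The inverse pattern — `except: return True` to "keep the product working" during an outage — is exactly how deny-by-default quietly becomes allow-by-accident.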

These patterns show up everywhere — in code, infrastructure, and operations.

Let’s make them concrete.


Identity: Authentication Is Not Authorization (And Both Need Boundaries)

Most “big” security failures are authorization failures.

Because auth is visible (“login worked”), while authz is subtle (“should this user access that resource?”).

Builder’s rules of thumb

  • Authenticate at the edge, authorize in the service.
    • Edge can verify tokens.
    • Service must enforce resource-level authorization.
  • Never trust a client-provided identifier.
    • If the client says accountId=123, treat it as a request, not truth.
  • Prefer “scoped tokens” over “god tokens”.
    • Short lifetimes, specific audiences, specific permissions.
  • Make admin different.
    • Separate domain, separate auth, separate logs, separate rate limits.

A common failure mode:

  • “Users can only see their own data” was implemented as:
    • validate token ✅
    • query by accountId from request ❌

If you don’t bind the request to the identity, you don’t have authorization.
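The fix is mechanical: derive the account from the verified token and treat the client-supplied ID as a claim to check, never as truth. A sketch with illustrative names:

```python
class Forbidden(Exception):
    pass

def get_invoices(session_account_id: str, requested_account_id: str, invoice_db: dict):
    """session_account_id comes from the verified token; requested_account_id from the client."""
    # Bind the request to the authenticated identity before touching data.
    if requested_account_id != session_account_id:
        raise Forbidden("request is not bound to the authenticated identity")
    return invoice_db[session_account_id]

db = {"acct-1": ["inv-100"], "acct-2": ["inv-200"]}
get_invoices("acct-1", "acct-1", db)    # ok: identity matches the resource
# get_invoices("acct-1", "acct-2", db)  # raises Forbidden: IDOR attempt blocked
```

This is the whole class of IDOR (insecure direct object reference) bugs, closed by one comparison that must live in the service, not the UI.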

A simple authorization shape that scales

  • global role (employee, user, partner)
  • resource ownership (this policy belongs to this account)
  • capability checks (can:view, can:edit, can:refund)
  • policy enforcement at service boundaries (not in the UI)

When in doubt, centralize the rules logically (shared library / policy module),
but enforce them physically at every service boundary.


Data Protection: Classify It, Then Protect It

You don’t need a 200-page policy.

You need four classes and consistent handling.

Public

Marketing pages, docs, public datasets. No secrets. No PII.

Internal

Normal business data. Protect with auth, backups, and controlled access.

Sensitive

PII, payment-related data, credentials, keys. Strongest controls + auditing.

Critical

Admin powers, key material, identity systems, money movement. Treat as “if compromised, company-level incident”.

Practical controls that matter

  • Encrypt in transit: TLS everywhere (service-to-service too).
  • Encrypt at rest: disk/db encryption + key management.
  • Centralize keys: use a KMS, not “a secret in an env var forever”.
  • Backups are part of security: ransomware is a security incident.
  • Audit access to sensitive data (who accessed what, when).

Encryption is not a magic shield.

It reduces blast radius (disk stolen, snapshot leaked), but authz + auditing are what stop “legitimate” access from becoming abuse.


Input Boundaries: Validate, Normalize, and Reduce Ambiguity

A surprising number of vulnerabilities come from the same root:

“We interpreted input in a way the attacker controlled.”

Builder-friendly rules:

  • Prefer allowlists over blocklists.
  • Validate at the boundary, not deep inside business logic.
  • Normalize before validating (case, unicode, whitespace).
  • Treat every parser as an attack surface (JSON, XML, CSV, image processing).
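The "normalize before validating" order matters: validating first lets `" Editor "` slip past a check that `"editor"` would fail. A small sketch (the role names are illustrative):

```python
import unicodedata

ALLOWED_ROLES = {"viewer", "editor"}  # allowlist: anything not listed is rejected

def normalize(value: str) -> str:
    # Collapse unicode lookalikes, whitespace, and case *before* any check.
    return unicodedata.normalize("NFKC", value).strip().lower()

def parse_role(raw: str) -> str:
    role = normalize(raw)
    if role not in ALLOWED_ROLES:
        raise ValueError(f"unsupported role: {role!r}")
    return role

assert parse_role("  Editor ") == "editor"
```

The allowlist also means a new role added to the system is denied until someone explicitly permits it — deny by default again, this time for input.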

Safer API design reduces your security workload

  • Use typed contracts (OpenAPI/JSON schema) and validate requests.
  • Avoid “polymorphic” endpoints that do too many things.
  • Make unsafe operations explicit (DELETE is different from UPDATE).
  • Require explicit idempotency for side effects (payments, emails, webhooks).
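Idempotency for side effects is simpler than it sounds: the caller sends a key, and a replay returns the stored result instead of repeating the side effect. A sketch with an in-memory store (a real system would persist keys with a TTL, and the payment call here is a stand-in):

```python
_results: dict = {}  # idempotency key -> stored result (in-memory for illustration)

def charge(idempotency_key: str, amount_cents: int) -> dict:
    if idempotency_key in _results:
        return _results[idempotency_key]  # replay: no second side effect
    result = {"charged": amount_cents, "status": "ok"}  # stand-in for the real payment call
    _results[idempotency_key] = result
    return result

first = charge("key-abc", 500)
again = charge("key-abc", 500)  # a retry after a timeout
assert first is again  # the retry did not create a second charge
```

This is what turns "retries wouldn't duplicate side effects" from an assumption into a property.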

Supply Chain Security: You Ship Other People’s Code (So Own It)

Modern apps are dependency graphs wearing a logo.

Supply chain failures are common because they exploit defaults:

  • dependencies float to new versions
  • build systems run with broad permissions
  • images come from “latest”
  • secrets end up in build logs

What “good” looks like (without boiling the ocean)

  • Pin dependencies and update intentionally.
  • Scan dependencies (SCA) and block known critical issues.
  • Use minimal base images and patch regularly.
  • Generate an SBOM for your releases (what did we ship?).
  • Sign artifacts or use provenance where possible (who built this? from what?).

The “perfect” approach is not required.

But “we have no idea what versions we run” is not survivable at scale.
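The smallest possible version of "pin dependencies" is a CI check that fails when a requirement floats. A toy sketch for pip-style requirement lines — real SCA tooling does far more (transitive deps, hashes, CVE matching), this only flags missing `==` pins:

```python
def unpinned(requirements: list) -> list:
    """Return requirement lines that are not pinned to an exact version."""
    bad = []
    for line in requirements:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            bad.append(line)  # floating range or bare name
    return bad

reqs = ["requests==2.28.1", "flask>=2.0", "cryptography"]
assert unpinned(reqs) == ["flask>=2.0", "cryptography"]
```

Fail the build when the list is non-empty and "what versions do we run?" always has an answer.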


CI/CD Controls That Actually Improve Security

Security isn’t just “more scanners” — it’s making unsafe changes harder to ship.

A few deploy-time controls have outsized impact:

  • Separate build and deploy identities
    • build can read source; deploy can change prod
    • ideally no single identity can do both
  • Policy-as-code for infrastructure
    • public buckets, open security groups, wildcard IAM → block at PR time
  • Secret scanning
    • prevent the leak from entering git or logs in the first place
  • Two-person rule for dangerous changes
    • identity, money movement, network exposure, production admin

A good pipeline is a “seatbelt”.

It doesn’t stop you from driving.
It stops small mistakes from becoming fatal.
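Policy-as-code doesn't require adopting a whole framework to start. A toy sketch of a PR-time check over a simplified resource list — the resource shape is illustrative, not a real Terraform or CloudFormation schema:

```python
def violations(resources: list) -> list:
    """Flag obviously dangerous infrastructure before it ships."""
    found = []
    for r in resources:
        if r.get("type") == "bucket" and r.get("public", False):
            found.append(f"{r['name']}: public bucket")
        if r.get("type") == "iam_policy" and "*" in r.get("actions", []):
            found.append(f"{r['name']}: wildcard IAM action")
    return found

plan = [
    {"type": "bucket", "name": "invoices", "public": True},
    {"type": "iam_policy", "name": "deploy", "actions": ["s3:GetObject"]},
]
assert violations(plan) == ["invoices: public bucket"]
```

In practice you'd reach for an existing engine (OPA, Checkov, and similar tools cover this ground), but the mechanism is the same: block the PR, not the postmortem.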


Observability for Security: You Can’t Defend What You Can’t See

Security incidents are debugging problems with worse consequences.

At minimum, make sure you have:

  • audit logs for admin actions (who changed what, when)
  • authentication logs (failed logins, token anomalies)
  • permission denials (so you can detect probing)
  • rate limit events (so you can detect brute force and cost attacks)
  • high-signal alerts (not 200 “CPU high” alerts)

If it changes money, identity, or access… it must be observable.
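Structured audit events are the cheapest of these to add. A minimal sketch — the field names are illustrative; the point is that every record answers who, what, when, on which resource, and whether it was allowed:

```python
import datetime
import json

def audit_event(actor: str, action: str, resource: str, allowed: bool) -> str:
    """Serialize one admin action as a machine-searchable JSON log line."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "resource": resource,
        "allowed": allowed,  # log denials too: that's how you detect probing
    })

line = audit_event("support:axel", "impersonate", "user:42", allowed=False)
assert '"allowed": false' in line
```

A denied impersonation attempt that no one can query later might as well not have been logged.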


Checklists You Can Actually Use

These are meant to be copy/paste-able into tickets.


Resources

OWASP Top 10

A practical map of the most common web app risk classes — great for building shared vocabulary with your team.

OWASP ASVS (Application Security Verification Standard)

A “what good looks like” standard you can use as a maturity ladder, without turning security into pure compliance.

Microsoft STRIDE threat modeling (overview)

A classic threat modeling approach that’s still useful — especially for structuring the “what could go wrong?” conversation.

NIST Secure Software Development Framework (SSDF)

A solid reference for building security into engineering workflow: governance, implementation, verification, and response.



What’s Next

Security is mostly about constraints.

The next month is about the other place constraints matter:

data and workflows across boundaries.

In May:

Distributed Data

Because once you split systems apart, correctness becomes a design choice — and security depends on correctness more than people admit.
