
Containers aren’t “how you deploy apps” — they’re how you make environments stop being a variable. This is the operational discipline: immutable artifacts, repeatable builds, and runtime contracts you can actually rely on.
Axel Domingues
If you’ve ever heard “but it works on my machine…”
…then you’ve met the real reason containers won.
Not because “Docker is cool.”
Because reproducibility is architecture — and containers are the first mainstream tool that makes it cheap to treat your runtime as a contract.
The goal this month
Replace “environment drift” with an immutable artifact you can promote safely.
The mindset shift
Containers are a contract: build once, run anywhere… as the same thing.
What breaks teams
Treating Docker as “packaging” instead of a reproducible supply chain.
What good looks like
The same image behaves the same in dev/CI/stage/prod — and rollbacks are boring.
A container is:
a normal OS process, isolated by the kernel, running from an immutable filesystem snapshot, with declared metadata about how it should run.
That’s it.
No magic.
And that definition matters, because it kills two common misconceptions: that containers are lightweight VMs, and that they are secure or magical by default.
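You can watch the “just a process plus metadata” part with stock tooling. A quick sketch; the image name and tag are only examples:

```bash
# Start a container from a pinned image; it's an ordinary process the kernel isolates.
docker run -d --name demo nginx:1.27

# The "container" shows up as normal processes on the host.
docker top demo

# The image carries declared metadata about how it should run:
# entrypoint, command, env, exposed ports, user.
docker image inspect --format '{{json .Config}}' nginx:1.27

# Clean up.
docker rm -f demo
```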
Before containers, “deployment” often meant reconstructing an environment by hand: install the runtime, apply the config, run the scripts, and hope nothing had drifted since last time.
That creates three failure modes you can’t outsmart with process alone: environment drift, snowflake servers, and rollbacks nobody trusts.
Containers flip the model: the environment is built once, as an immutable artifact, and every stage runs that same artifact.
If you keep mutating hosts underneath it, you’re just wearing Docker as a skin.
Your architecture is only as reproducible as the path from source code → running process.
Here’s the mental model I want teams to internalize:
source code → build → immutable image → registry → running process, with the same bits at every step.
Everything else is details.
If that path differs between environments (different base image, different build flags, different config baked in), you’re deploying two different artifacts.
Docker made containers popular by bundling a usable developer experience: the Dockerfile, one CLI to build, run, and push, and registries to share images.
But in mature systems, Docker often becomes one part of a bigger stack: BuildKit for builds, containerd and runc at runtime, an orchestrator like Kubernetes, and a registry you operate and secure.
You don’t need to memorize those tools.
You need to understand the boundary:
Docker is a toolchain. The container ecosystem is a standard.
That’s why OCI matters: it lets you swap tools without rewriting the world.
The Dockerfile is your artifact recipe.
If you treat it casually, you get a fragile supply chain.
If you treat it like a build system, you get predictable deployments. The usual failure modes are below, followed by a Dockerfile sketch that avoids them.
Unpinned base images
FROM node:latest is a time bomb. Pin versions (and ideally digests) so builds don’t drift.
Non-deterministic installs
Unpinned OS packages and floating language deps make the same build produce different results over time.
Giant images
Bigger images ship slower, cache worse, and expand the security surface area.
Secrets in layers
If a secret enters a layer, it may live forever in history. Use build secrets, not ARG hacks.
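Put together, avoiding those failure modes is mostly about defaults. A minimal sketch, assuming a Node.js service; the base tag, script names, and secret id are illustrative, and in a real pipeline you would also pin each FROM by digest:

```dockerfile
# syntax=docker/dockerfile:1

# Build stage: a pinned base tag (ideally also pinned with @sha256:<digest>).
FROM node:20.11.1-bookworm-slim AS build
WORKDIR /app

# Deterministic installs: driven by the lockfile, nothing resolved "fresh" at build time.
COPY package.json package-lock.json ./
RUN npm ci

# If the build needs a credential, mount it as a BuildKit secret so it never
# lands in a layer (ARG/ENV values persist in image history).
# RUN --mount=type=secret,id=npm_token \
#     NPM_TOKEN="$(cat /run/secrets/npm_token)" npm run fetch:private   # hypothetical script

COPY . .
RUN npm run build && npm prune --omit=dev

# Runtime stage: only what the process needs, running as a non-root user.
FROM node:20.11.1-bookworm-slim
ENV NODE_ENV=production
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist

USER node
EXPOSE 8080
ENTRYPOINT ["node", "dist/server.js"]
```

The multi-stage split is what keeps the final image small: build tooling and dev dependencies stay in the first stage, and only the runtime payload is copied forward.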
Containers reduce environment variance.
They do not remove operational concerns: processes still crash, leak memory, ship vulnerabilities, and get misconfigured.
Containers just make those failures easier to reason about because the runtime is more consistent.
The takeaway:
containers aren’t “security” by default — they’re a mechanism you can harden.
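For instance, a few of the knobs you can opt into at run time. These are standard docker run flags; the image reference is hypothetical:

```bash
# Hardening is opt-in, not a default:
#   --read-only                             root filesystem is immutable at runtime
#   --tmpfs /tmp                            scratch space that disappears with the container
#   --cap-drop ALL                          drop Linux capabilities the process doesn't need
#   --security-opt no-new-privileges:true   block privilege escalation via setuid binaries
#   --user 10001:10001                      run as a non-root uid/gid
docker run --read-only --tmpfs /tmp --cap-drop ALL \
  --security-opt no-new-privileges:true --user 10001:10001 \
  ghcr.io/acme/orders-api@sha256:<digest>
```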
Containers are great at packaging code.
They’re terrible at packaging configuration that changes per environment.
A container-friendly app keeps those differences out of the image: configuration comes from the environment (env vars, mounted files, injected secrets), logs go to stdout/stderr, and local state is treated as disposable.
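That is what lets one artifact serve every environment. A tiny sketch; the image reference and env file names are hypothetical:

```bash
# One immutable image, different runtime configuration per environment.
docker run --env-file ./staging.env -p 8080:8080 ghcr.io/acme/orders-api@sha256:<digest>
docker run --env-file ./prod.env    -p 8080:8080 ghcr.io/acme/orders-api@sha256:<digest>
```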
You will debug production issues.
The trick is: debug without turning prod into a mutable snowflake again.
Pull the exact image digest that ran in production and run it locally with the same env vars.
Check what lives outside the image: configuration, data, and the dependencies the container talks to.
Prefer fixes that land in source and the Dockerfile over hand-edits inside a running container.
If the fix isn’t in source + Dockerfile, it doesn’t exist.
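A minimal sketch of that loop; the image reference, digest placeholder, and env file are hypothetical:

```bash
# 1. Pull the exact artifact production ran — by digest, not by tag.
docker pull ghcr.io/acme/orders-api@sha256:<digest>

# 2. Run it locally with the same (sanitized) runtime configuration.
docker run --rm -p 8080:8080 --env-file ./prod-like.env \
  ghcr.io/acme/orders-api@sha256:<digest>

# 3. If you need a shell, exec into the running copy — and treat anything
#    you learn as input to source + Dockerfile, not as a hotfix.
docker exec -it <container-id> sh
```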
If you ship containers, you own a software supply chain whether you admit it or not.
You want three properties: you know exactly what went into every image, you can rebuild it deterministically, and you can trace any running container back to the commit and build that produced it.
Use .dockerignore aggressively.
Container readiness checklist
Dockerfile hygiene checklist
FROM is pinned (no floating tags)
Build-time values handled deliberately (ARG/ENV misuse avoided)
Non-root user (USER app) and explicit ports/entrypoint
.dockerignore prevents leaking local junk into builds
Docker Docs (images, Dockerfile, build best practices)
The canonical reference — especially the sections on image builds and Dockerfile best practices.
OCI Image Specification
What an “image” actually is: layers, manifests, config — the portable standard under Docker and friends.
Are containers just lightweight VMs?
No. Containers share the host kernel.
That’s why they start fast and why kernel-level isolation and policy matter so much. The “VM-like” part is the filesystem snapshot (the image), not a guest OS.
Should you always chase the smallest possible base image?
Sometimes, but not reflexively.
Small images are good, but compatibility and security practices matter more than winning a size contest. Pick a base you can patch, scan, and operate reliably — and keep the final image minimal with multi-stage builds.
Build once, promote by digest.
Tags are human-friendly; digests are truth. If you promote artifacts by digest, you eliminate an entire class of “we deployed the same version” lies.
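A sketch of promotion by digest; the registry, repository, and tag are hypothetical:

```bash
# Build and push exactly once, from CI.
docker build -t ghcr.io/acme/orders-api:1.4.2 .
docker push ghcr.io/acme/orders-api:1.4.2

# Resolve the immutable digest behind the tag (RepoDigests is populated after push/pull).
docker image inspect --format '{{index .RepoDigests 0}}' ghcr.io/acme/orders-api:1.4.2
# -> ghcr.io/acme/orders-api@sha256:...

# Staging and prod both deploy the digest, never the tag:
#   image: ghcr.io/acme/orders-api@sha256:...
```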
Do containers make deployments safe on their own?
No.
They make the runtime consistent, which makes safety possible. Rollout safety still comes from pipeline design, testing, and rollout strategy.
Now that we can ship an immutable artifact, the next question is:
How do we turn deployments into a repeatable, low-risk machine?
In February:
CI/CD as Architecture
Because in real systems, delivery is part of the architecture — and it’s the difference between “we can deploy” and “we can deploy safely.”
CI/CD as Architecture: Testing Pyramids, Pipelines, and Rollout Safety
CI/CD isn’t a DevOps checkbox — it’s the architecture that makes change safe. This month is about test economics, pipeline design, and rollout strategies that turn “deploy” into a reversible decision.