
The browser is a constrained runtime with a scheduling problem: one main thread, many responsibilities, and users who notice missed frames. This post gives you the mental model to debug “random” UX failures as deterministic timing and contention issues.
Axel Domingues
Last month we treated HTTP as what it really is: a distributed systems contract with ambiguity baked in.
This month we go one level closer to the user.
Because a painful truth hides in most production incidents:
Many “backend bugs” are actually browser scheduling bugs wearing a backend costume.
If you don’t have a browser mental model, you end up debugging by superstition.
Let’s replace superstition with a clear runtime model.
The goal is to give you a practical understanding of:
The browser is a highly-optimized runtime with one brutal constraint:
Most of your app’s work competes for the same main thread.
That thread must handle:
If you saturate it, the user experiences it as:
The architect’s reframe
The browser is a scheduler with a user-facing SLA: don’t miss frames.
Web teams often debug only one clock: backend time.
But users experience two clocks:
Network time
How long bytes take to travel and servers take to respond (TTFB, download).
Main-thread time
How long your app blocks input and delays rendering (long tasks, layout, hydration).
Many “slow API” complaints are actually:
This is the root of the “backend costume” phenomenon.
Here’s the version that stays useful under pressure:

Promises run in the microtask queue.
If you create a microtask storm (lots of chained Promise callbacks), you can delay rendering even if each callback is “small”.
Microtasks run before the browser can render, so “I’ll just chain a bunch of Promise callbacks” can accidentally block the UI.
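This ordering can be sketched in a few lines (runnable in Node too, since its event loop drains microtasks the same way):

```javascript
// All pending microtasks drain before the event loop moves on to the
// next macrotask — in the browser, that also means before rendering.
const order = [];

// A macrotask: in a browser, rendering gets a chance around here.
setTimeout(() => order.push("macrotask"), 0);

// A chain of "small" Promise callbacks: each .then enqueues another
// microtask, and the whole chain drains before the timeout fires.
let chain = Promise.resolve();
for (let i = 0; i < 3; i++) {
  chain = chain.then(() => order.push(`microtask ${i}`));
}
// Final order: microtask 0, microtask 1, microtask 2, macrotask
```

Replace the loop bound with a few thousand chained callbacks and you have a microtask storm: the render opportunity keeps getting pushed back.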
To display pixels, the browser runs something like: style → layout → paint → composite.
If you force layout repeatedly or paint too much, your UI becomes “slow” even when the server is instant.

At 60Hz, you have roughly 16ms per frame (1000ms / 60 ≈ 16.7ms).
That 16ms includes:
If you consistently exceed it, you get jank.
If you occasionally exceed it, you get “random” jank.
The performance truth
Users don’t experience “average speed.” They experience missed frames and input lag.
When a user clicks, your app needs to do at least this:
If step (1) or (2) is delayed by contention, the user perceives:
Two practical implications:
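One common mitigation is to chunk long work and yield to the event loop between chunks, so input handlers and rendering get a turn. A minimal sketch (`processInChunks` is an illustrative helper, not a standard API):

```javascript
// Process a large array without one long task: do a bounded chunk of
// work, then yield via setTimeout so the browser can handle input and
// paint before the next chunk runs.
function processInChunks(items, processOne, chunkSize = 50) {
  return new Promise((resolve) => {
    let i = 0;
    function runChunk() {
      const end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) processOne(items[i]);
      if (i < items.length) {
        setTimeout(runChunk, 0); // macrotask boundary: render can happen here
      } else {
        resolve();
      }
    }
    runChunk();
  });
}
```

Newer schedulers (`requestIdleCallback`, `scheduler.yield` where available) refine the same idea; the invariant is the same: no single task exceeds your frame budget.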
Let’s map common symptoms to the real underlying mechanism.
What’s happening
Typical causes
Fix direction
What’s happening
Typical causes
Fix direction
What’s happening
Typical causes
Fix direction
What’s happening
Typical causes
Missing finally cleanup
Fix direction
What’s happening
Typical causes
Fix direction
The pattern is consistent:
The browser is a scheduler. Your app is a workload.
Many “bugs” are simply the scheduler doing exactly what it must do.
Frameworks are great at expressing UI.
They are also great at turning “small costs” into “big aggregate costs.”
Render / reconciliation
Compute what should change (pure work). Can be expensive at scale.
Commit
Apply changes to the DOM (side effects + layout consequences). This is where jank appears.
If your UI feels “randomly slow,” what you often have is:
Because unbounded re-renders become unbounded cost.
“We treat re-renders like database queries.”
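That mindset can be sketched framework-agnostically: skip recomputation when inputs haven't changed, which is the principle behind React.memo/useMemo (`memoizeRender` below is a hypothetical helper, not a library API):

```javascript
// Skip a "render" when props are shallow-equal to the previous call —
// the same bargain React.memo makes: pay a cheap comparison to avoid
// an expensive recomputation.
function memoizeRender(renderFn) {
  let lastProps;
  let lastOutput;
  let calls = 0;
  function render(props) {
    const same =
      lastProps &&
      Object.keys(props).length === Object.keys(lastProps).length &&
      Object.keys(props).every((k) => props[k] === lastProps[k]);
    if (!same) {
      lastOutput = renderFn(props); // only pay the cost on real change
      calls++;
      lastProps = props;
    }
    return lastOutput;
  }
  render.calls = () => calls; // instrumentation: how often did we pay?
  return render;
}
```

The `calls` counter is the point: if you can't count your re-renders, you can't budget them.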
Most teams only look at the Network tab.
But users experience serial dependencies across multiple layers:
Network waterfall
Requests that depend on previous requests. API shape and page boot logic create this.
Data dependency waterfall
UI can’t render “real” content until specific data arrives (auth → profile → permissions → page).
JS execution waterfall
Parsing + executing big bundles, then hydration, then app boot logic.
Rendering waterfall
Layout/paint cost triggered by DOM size, styles, and “measure → mutate” loops.
A page can have a “fast backend” and still feel slow if any of these waterfalls are long.
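The rendering waterfall's “measure → mutate” loop deserves a concrete sketch (browser DOM code; the element collection is hypothetical). Interleaving reads and writes forces a synchronous layout on every iteration; batching all reads first keeps it to one layout pass:

```javascript
// Bad: read, write, read, write … each offsetWidth read after a style
// write forces the browser to recalculate layout synchronously.
function resizeInterleaved(boxes) {
  for (const box of boxes) {
    const w = box.offsetWidth;         // read (forces layout after prior write)
    box.style.height = w / 2 + "px";   // write (invalidates layout again)
  }
}

// Better: all reads, then all writes — one layout invalidation total.
function resizeBatched(boxes) {
  const widths = boxes.map((box) => box.offsetWidth); // measure phase
  boxes.forEach((box, i) => {
    box.style.height = widths[i] / 2 + "px";          // mutate phase
  });
}
```

Same output, very different main-thread cost at scale.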
This is how you stop guessing.
Ask two questions:
If response arrived early but UI updated late, your issue is main-thread contention or render cost.
Use performance profiling to locate:
If you see frequent >50ms tasks, you’ve found your “random freeze”.
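You can also catch these in production with the Long Tasks API (browser-only; the wiring below is guarded so the module still loads elsewhere, and the threshold logic is factored out):

```javascript
// The Long Tasks API reports main-thread tasks over 50ms via
// PerformanceObserver. 50ms is the spec's threshold; we keep the check
// in a plain function so it's testable outside a browser.
const LONG_TASK_MS = 50;

function isLongTask(durationMs) {
  return durationMs > LONG_TASK_MS;
}

// Browser-only wiring: "longtask" entries don't exist in Node.
if (typeof PerformanceObserver !== "undefined") {
  try {
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (isLongTask(entry.duration)) {
          // In production you'd ship this to your telemetry pipeline.
          console.warn(`Long task: ${entry.duration.toFixed(1)}ms`);
        }
      }
    }).observe({ type: "longtask", buffered: true });
  } catch {
    // "longtask" entry type not supported in this environment
  }
}
```

If this fires on real user sessions, you have evidence, not anecdotes.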
Most jank has a trigger:
Attach the symptom to a trigger event and you can fix it.
Options include:
Not because your team is bad—because product pressure always trades tomorrow’s speed for today’s features.
When someone says “the UI is broken,” run this checklist before escalating to backend teams.
finally/cleanup?
This checklist is not about blame.
It’s about locating the layer where reality is breaking.
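The finally/cleanup check has a simple shape: every async path must end in a terminal state. A sketch (`loadData` and `setState` are hypothetical hooks into your app):

```javascript
// Loading as a state machine: loading → success | error, never
// "in-between forever". The finally block is where per-request cleanup
// (timers, submit locks, spinners) belongs, because it runs on every path.
async function runWithLoadingState(loadData, setState) {
  setState("loading");
  try {
    const data = await loadData();
    setState("success");
    return data;
  } catch (err) {
    setState("error");
    throw err;
  } finally {
    // always runs: release the submit lock, clear timeouts, etc.
  }
}
```

The bug this prevents is the spinner that never goes away because an error path forgot to flip the flag.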
These patterns show up again and again in teams that operate web apps reliably.
“Latest wins” for user-driven queries
Search, typeahead, filters: cancel or ignore stale responses.
Idempotent submits (frontend + backend)
Disable UI + include idempotency keys for critical actions.
Work off the critical path
Defer non-essential computations until after first interaction.
Virtualize large lists
DOM size is a silent tax. Virtualization is the relief valve.
Batch DOM reads/writes
Measure first, then mutate. Avoid forcing repeated layouts.
Budgeted rendering
Treat re-renders as costs. Memoize and structure state to minimize churn.
Instrument long tasks
If you can’t see long tasks, you can’t prevent them from returning.
Make loading states a state machine
Loading → success → error → cancelled. No “in-between forever” states.
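The “latest wins” pattern from the list above can be sketched with a monotonically increasing token (`makeQuery` and `onResult` are hypothetical hooks; AbortController-based cancellation is the stronger variant, since it also frees the network):

```javascript
// "Latest wins": every new request bumps a token; when a response
// resolves, it only reaches the UI if it's still the newest request.
// Stale responses are silently dropped.
function latestWins(makeQuery, onResult) {
  let latest = 0;
  return async function run(input) {
    const token = ++latest;
    const result = await makeQuery(input);
    if (token === latest) onResult(result); // ignore out-of-order arrivals
  };
}
```

This is the fix for the classic typeahead bug where a slow response for “ca” overwrites the results for “cat”.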
Parts of the browser are multi-threaded:
But your UI-critical JS is still effectively constrained by main-thread scheduling.
Web Workers help, but they introduce:
That’s why architect-level thinking matters:
If you came from backend systems, it’s tempting to treat the browser as “just a client.”
But modern web apps put huge responsibility in the browser:
So you need a runtime mental model.
Not for trivia.
For survival.
March takeaway
Most “random UX bugs” are deterministic scheduling problems: contention, ordering, and missed frames.
MDN — Concurrency model & Event Loop
A good baseline explanation of tasks, microtasks, and how JS scheduling works.
web.dev — Rendering performance
Practical guidance on layout, paint, and avoiding expensive work on the critical path.
Because timing is variable:
The underlying mechanics are deterministic, but the conditions vary—so the symptom looks probabilistic.
Because the user experience depends on:
If JS blocks the main thread, the UI can’t update even if responses arrive instantly.
For user-perceived responsiveness, look at:
In other words: “did we miss frames and block input?”
Now that we understand:
…we’re ready to talk about data boundaries.
Because the fastest way to create complexity is to pick a boundary that doesn’t match your team and product.
AJAX → Fetch → GraphQL → tRPC: Choosing Your Data Boundary
Your data boundary is not “how we call the API.” It’s a coupling contract between teams, runtimes, and failure modes. This post gives you a practical decision framework for REST, GraphQL, and RPC-style boundaries (including tRPC) — with the tradeoffs that show up in production.