
Agents are becoming the new integration surface. This is how you go from bespoke tool wiring to an ecosystem: portable connectors, common protocols, and a practical standard called MCP.
Axel Domingues
2024 was the year I stopped treating LLMs as “an API” and started treating them as a runtime.
By December, one thing is obvious:
Agents are becoming the new integration surface.
And integration surfaces always converge on standards—because the alternative is every team reinventing the same adapters, the same auth flows, and the same safety mistakes.
This article is about the “standards layer” that makes an agent ecosystem possible. It’s an architecture post about:
- what needs to be standardized (and why)
- how to design for portability without losing control
- and how MCP fits into a pragmatic production stack
The core problem
Tool integrations don’t scale as one-off code.
The core risk
A tool call is a privilege escalation unless you design a boundary.
The core goal
Portable connectors + common protocols = an ecosystem you can operate.
The 2025 setup
Once tools are standard, workflows become the next reliability battle.
Classic software integrations have a stable shape: a known caller, fixed call sites, and deterministic control flow.
Agent integrations are messier because they include a probabilistic component that chooses when to call tools and how to interpret results.
So the integration surface shifts: from endpoints you call deterministically to capabilities a model may choose to invoke.
That’s why I keep repeating capability boundaries.
Tool use is not “a feature.”
Tool use is delegation.
And delegation always needs: scopes, consent, and an audit trail.
Here’s the mental model I use.

A connector is a packaged bridge to an external capability: the code, auth, and configuration that expose one system (a CRM, a repo host, a database) as callable tools.
A connector owns the ugly parts: auth flows, pagination, rate limits, retries, and error normalization.
A protocol standardizes how a client discovers and calls capabilities: naming, schemas, transport, and lifecycle.
Your runtime is where product truth lives: identity, policy, budgets, and user consent.
The runtime decides: which tools are visible, which calls require approval, and what gets logged.
If you treat them as “just APIs”, you will miss the security boundary.
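As a rough sketch of that separation (all interface names here are mine, not from any SDK or spec):

// Sketch only: illustrative names, not from any SDK or spec.
type ToolRef = { name: string; riskClass: "low" | "medium" | "high" };

// A connector packages one external capability behind tool contracts.
interface Connector {
  name: string;
  version: string;
  tools: ToolRef[];          // what it exposes
  requiredScopes: string[];  // what it needs from the credential store
}

// A protocol is the shared client/server surface: discover, then call.
interface CapabilityProtocol {
  listTools(): Promise<ToolRef[]>;
  callTool(name: string, args: object): Promise<unknown>;
}

// The runtime owns product truth: visibility, approval, and evidence.
interface Runtime {
  isVisible(tool: ToolRef, userId: string): boolean;
  needsApproval(tool: ToolRef): boolean;
  record(event: object): void; // audit trail
}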
Most discussions jump straight to “tool schemas.”
Schemas matter, but they’re only one layer.
The ecosystem needs agreement on a few boring but load-bearing things.
Capability discovery
Clients need to discover tools/resources dynamically, not hardcode lists.
Identity + consent
Who is calling? Who approved? What scopes? For how long?
Error + retry semantics
Retries without idempotency create duplicate money.
Observability
Trace IDs, audit logs, tool timing, cost, and outcomes must be first-class.
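Concretely, discovery-first wiring looks like this: a minimal sketch assuming a hypothetical client interface with MCP-style list/call methods. Enumerate tools at session start and build the model’s tool menu from the response instead of hardcoding it.

// Sketch: discovery-first tool wiring (hypothetical client interface).
interface ToolInfo { name: string; description: string; inputSchema: object }

interface CapabilityClient {
  listTools(): Promise<ToolInfo[]>;                        // MCP-style "tools/list"
  callTool(name: string, args: object): Promise<unknown>;  // MCP-style "tools/call"
}

async function buildToolMenu(client: CapabilityClient) {
  // Enumerate at session start -- no hardcoded tool lists.
  const tools = await client.listTools();
  // Hand the schemas to the model as its callable surface.
  return tools.map((t) => ({
    name: t.name,
    description: t.description,
    parameters: t.inputSchema,
  }));
}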
If you want portability and safety, your tool contract needs more than a name and JSON schema.
It needs operational intent.
Here’s a compact contract shape that’s practical in real systems:
{
  "name": "create_invoice",
  "description": "Create an invoice for an existing customer",
  "inputsSchema": "JSON Schema here",
  "outputsSchema": "JSON Schema here",
  "sideEffects": "writes",
  "idempotency": { "supported": true, "keyField": "idempotencyKey" },
  "riskClass": "high",
  "auth": { "type": "oauth", "scopes": ["invoices.write"] },
  "rateLimit": { "burst": 10, "perMinute": 60 },
  "costHint": { "unit": "requests", "maxPerCall": 1 }
}
That looks like “extra bureaucracy”… until you’ve operated a tool ecosystem.
Then you realize: a schema tells you how to call a tool; a contract tells you what the tool does to the world, and how to operate it safely.
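Here’s roughly what that contract looks like as a type the runtime can act on. The field names follow the JSON above (costHint trimmed for brevity); the enforcement rule at the bottom is my sketch, not a standard.

// Mirrors the contract JSON above; the enforcement policy is illustrative.
type SideEffects = "none" | "reads" | "writes" | "destructive";
type RiskClass = "low" | "medium" | "high";

interface ToolContract {
  name: string;
  description: string;
  sideEffects: SideEffects;
  idempotency: { supported: boolean; keyField?: string };
  riskClass: RiskClass;
  auth: { type: string; scopes: string[] };
  rateLimit: { burst: number; perMinute: number };
}

// The runtime reads operational intent, not just the schema:
function canAutoExecute(c: ToolContract): boolean {
  // Reads are safe to automate; risky writes need a human in the loop.
  if (c.sideEffects === "none" || c.sideEffects === "reads") return true;
  return c.riskClass === "low" && c.idempotency.supported;
}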
Let’s be blunt: most “tool integrations” today are copied snippets.
That doesn’t scale, and it’s not governable.
A connector should be a real artifact: versioned, reviewed, and testable, with a manifest that declares its tools, scopes, and configuration.
Local runs (developer laptop, desktop app) are great for: fast iteration, personal credentials, and access to local files and apps.
Remote runs (hosted) are great for: shared credentials, centralized policy enforcement, uptime, and multi-tenant scale.
The runtime should be able to answer: is this call safe to retry, is it reversible, and what will it cost?
If the connector can’t declare this, you can’t safely automate it.
Every connector should ship: a manifest, a version, its tool contracts, and an upgrade/uninstall story.
In production, “uninstall” means: revoking credentials, draining in-flight calls, and retaining the audit history.
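As an illustration, a connector manifest might look like this. The shape is my sketch of the idea, not a published format.

// Illustrative manifest shape -- not a published format.
const manifest = {
  name: "acme-invoicing",
  version: "1.4.2",
  tools: ["create_invoice", "get_invoice", "void_invoice"],
  requiredScopes: ["invoices.read", "invoices.write"],
  config: { required: ["ACME_API_KEY"], optional: ["ACME_REGION"] },
  // The uninstall story is part of the contract, not an afterthought.
  uninstall: {
    revokeCredentials: true,
    drainInFlightCalls: true,
    retainAuditLogs: true,
  },
};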
If connectors are the unit of distribution, protocols are the multiplier.
Without a protocol, every client needs a bespoke adapter per connector.
With a protocol: N clients × M connectors collapses to N + M implementations (five clients and twenty connectors is 25 protocol implementations instead of 100 bespoke adapters).
That brings us to MCP.
MCP is an attempt to standardize how AI clients connect to external context and tools.
The key idea is simple:
A model shouldn’t be hardwired to every integration.
A client should connect to servers that expose tools and context in a standard way.
MCP introduces a few primitives that map cleanly to real product needs: tools (actions the model can invoke), resources (context and data), and prompts (reusable interaction templates).
It also standardizes discovery and interaction patterns so clients can enumerate capabilities and call them consistently.
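For concreteness, here’s a minimal MCP server exposing one tool, using the official TypeScript SDK. Treat this as a sketch: the SDK’s API surface has shifted across versions, so check the current docs.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "invoices", version: "1.0.0" });

// Tools are declared with schemas, so clients can discover and validate them.
server.tool(
  "create_invoice",
  { customerId: z.string(), amountCents: z.number().int().positive() },
  async ({ customerId, amountCents }) => {
    // Call your real backend here; this is a stub.
    return {
      content: [
        { type: "text", text: `Invoice created for ${customerId}: ${amountCents} cents` },
      ],
    };
  }
);

// stdio transport: the host spawns this process and speaks MCP over stdin/stdout.
await server.connect(new StdioServerTransport());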
Think of MCP as the “connector protocol” between AI applications (hosts and clients) and the servers that expose tools and context.

Portability
Build a connector once; let multiple clients consume it.
Discovery
Clients can list tools/resources instead of hardcoding them.
Separation of concerns
Tool wiring lives in servers; product policy lives in the runtime.
A real ecosystem shape
Registries, SDKs, and shared conventions become possible.
MCP is not a security boundary.
You still need: authentication and authorization, consent flows, sandboxing, and policy enforcement in your runtime.
You still have to design for reliability and safety yourself.
Here’s a practical checklist I use for any tool ecosystem—MCP or not.
Start with three buckets: read-only tools, reversible writes, and irreversible or destructive actions.
Make this classification part of the tool contract (riskClass).
For high-risk tools: require explicit human approval, scoped credentials, and per-call limits.
This is “capability boundary” in product form.
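A sketch of that gate in the runtime (the policy and names are mine):

// Sketch: a runtime-side approval gate in front of every tool call.
type Risk = "low" | "medium" | "high";
interface GatedTool { name: string; riskClass: Risk }

async function executeGated(
  tool: GatedTool,
  args: object,
  approve: (tool: GatedTool, args: object) => Promise<boolean>, // human-in-the-loop hook
  call: (name: string, args: object) => Promise<unknown>,       // actual connector call
) {
  if (tool.riskClass === "high") {
    // High-risk tools never auto-execute: surface the proposed call first.
    const ok = await approve(tool, args);
    if (!ok) return { status: "denied" as const };
  }
  return { status: "ok" as const, result: await call(tool.name, args) };
}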
If a tool can write, it should support idempotency keys, or your runtime must provide a safe wrapper:
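For example, a minimal wrapper sketch (the dedupe store is hypothetical, and this is best-effort without a transactional store):

// Sketch: make a write safe(r) to retry by deduplicating on a stable key.
// `KV` is a hypothetical persistent key-value store.
interface KV {
  get(key: string): Promise<unknown | undefined>;
  set(key: string, value: unknown): Promise<void>;
}

async function idempotentCall(
  store: KV,
  callKey: string,                          // stable per logical action (e.g. plan-step ID)
  doCall: (idempotencyKey: string) => Promise<unknown>,
) {
  const cached = await store.get(callKey);
  if (cached !== undefined) return cached;  // already executed once: reuse the result

  // Reuse callKey as the provider-side idempotency key, so a crash between
  // the call and the cache write still can't double-execute downstream.
  const result = await doCall(callKey);
  await store.set(callKey, result);
  return result;
}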
Treat connector execution like untrusted code: least-privilege credentials, resource limits, and restricted network egress.
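One way that looks in practice, as a sketch with Node’s child_process (real deployments typically add containers or microVMs, seccomp, and egress controls on top):

// Sketch: run a connector as an isolated child process with a minimal
// environment and a hard timeout.
import { spawn } from "node:child_process";

function runConnector(cmd: string, args: string[], apiKey: string) {
  const child = spawn(cmd, args, {
    // Least-privilege env: no inherited secrets, only what this connector needs.
    env: { PATH: "/usr/bin", CONNECTOR_API_KEY: apiKey },
    stdio: ["pipe", "pipe", "pipe"], // talk over stdio only
  });

  const timer = setTimeout(() => child.kill("SIGKILL"), 30_000); // hard deadline
  child.on("exit", () => clearTimeout(timer));
  return child;
}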
Minimum evidence bundle: who called, which tool, the inputs and outputs (or hashes of them), trace ID, timing, cost, and any approval record.
That evidence is what makes incidents debuggable—and compliance defensible.
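As a sketch, one audit record per tool call (field names are mine):

// Sketch: one audit record per tool call -- the minimum evidence bundle.
interface ToolCallAudit {
  traceId: string;          // correlates with the rest of the request
  tool: string;             // which capability was exercised
  caller: { user: string; agent: string };
  approvedBy?: string;      // present for gated, high-risk calls
  argsHash: string;         // hash rather than raw args if inputs are sensitive
  outcome: "ok" | "error" | "denied";
  latencyMs: number;
  costUnits: number;        // from the contract's costHint
  timestamp: string;        // ISO 8601
}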
Standards only matter if distribution is solved.
In practice, ecosystems converge on three things: a registry, official SDKs, and shared conventions.
You’ve seen this pattern before: package managers, browser extensions, app stores.
If agents are the new integration surface, the same pattern is inevitable.
MCP is interesting because it arrives with: a public spec, official SDKs, and runnable reference servers from day one.
Model Context Protocol (MCP) docs
Core concepts and specs: tools, resources, prompts, and transport details.
Model Context Protocol (MCP) — Specification
The authoritative protocol spec: discovery, tools/resources/prompts, message formats, and the interoperability surface MCP is trying to standardize.
MCP Servers — reference implementations
Real, runnable examples of connector-style servers — useful for understanding the “unit of distribution” and what operational contracts look like in practice.
Is MCP the same thing as function calling?
Not exactly.
Function calling is how a model expresses “I want to call a tool.”
MCP is a way for the host/client to discover and route tool calls to external servers in a standard way.
In a mature stack, you often use both: function calling as the model-facing interface, and MCP as the host-side layer that discovers servers and routes the calls.
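A sketch of that split (the client type is hypothetical; the tool-call shape follows common function-calling APIs):

// Sketch: the model emits a function call; the host routes it over MCP.
interface ModelToolCall { name: string; arguments: Record<string, unknown> }

interface McpClientLike {
  callTool(req: { name: string; arguments: Record<string, unknown> }): Promise<unknown>;
}

async function routeToolCall(call: ModelToolCall, mcp: McpClientLike) {
  // Function calling = the model's intent; MCP = discovery + transport to the server.
  return mcp.callTool({ name: call.name, arguments: call.arguments });
}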
Why not just describe tools with OpenAPI?
OpenAPI is great for describing HTTP APIs.
But an agent ecosystem needs more: identity and consent, risk classes, idempotency semantics, discovery across transports, and capabilities that aren’t HTTP endpoints at all.
OpenAPI can be part of a connector implementation; it’s not the whole interoperability layer.
Does adopting a protocol make my agents safe?
No.
Safety comes from: your runtime’s capability boundaries, scoped auth, approval gates, sandboxing, and audit trails.
Protocols mostly give you portability and shared semantics. You still have to operate the system.
What’s the most common mistake?
Treating tool execution as “just another API call.”
Tool execution is a delegated action chosen by a probabilistic system. If you don’t design for risk classes, approvals, idempotency, sandboxing, and observability…
…you will ship a system that looks magical in demos and collapses in production.
This is the end of the 2024 sequence.
If 2023 was:
“LLMs are probabilistic components—design reliability.”
Then 2024 became:
“LLMs are runtimes—design tool ecosystems.”
And that sets up the 2025 turn perfectly.
Because once you have tools… the next problem is coordination.
Next month: Computer-Use Agents in Production: sandboxes, VMs, and UI-action safety
Tool use was the warm-up. Computer-use agents can click, type, and navigate real UIs — which means mistakes become side effects. This article turns “agent can drive a screen” into an architecture you can defend: isolation, action gating, verification, and auditability.