Dec 29, 2024 - 16 MIN READ

Standards for the Agent Ecosystem: connectors, protocols, and MCP

Agents are becoming the new integration surface. This is how you go from bespoke tool wiring to an ecosystem: portable connectors, common protocols, and a practical standard called MCP.

Axel Domingues

2024 was the year I stopped treating LLMs as “an API” and started treating them as a runtime.

  • In July, I argued RAG is only “done” when you can evaluate it.
  • In August, I treated tool use as a security problem: capability boundaries and sandboxes.
  • In September, regulation became architecture: controls and evidence.
  • In October, we budgeted reasoning: fast/slow paths + verification.
  • In November, we made agents real-time: streaming + barge‑in + session state that doesn’t collapse.

By December, one thing is obvious:

Agents are becoming the new integration surface.

And integration surfaces always converge on standards—because the alternative is every team reinventing the same adapters, the same auth flows, and the same safety mistakes.

This article is about the “standards layer” that makes an agent ecosystem possible:

  • Connectors: packaged integrations with real services
  • Protocols: a common way for clients and connectors to talk
  • MCP: an emerging protocol that tries to make tool ecosystems composable

This is not a “pick your favorite protocol” post.

It’s an architecture post:

  • what needs to be standardized (and why)
  • how to design for portability without losing control
  • and how MCP fits into a pragmatic production stack

The core problem

Tool integrations don’t scale as one-off code.

The core risk

A tool call is a privilege escalation unless you design a boundary.

The core goal

Portable connectors + common protocols = an ecosystem you can operate.

The 2025 setup

Once tools are standard, workflows become the next reliability battle.


Why “agent integration” is different from classic integrations

Classic software integrations have a stable shape:

  • API calls
  • explicit parameters
  • deterministic outputs
  • known failure modes

Agent integrations are messier because they include a probabilistic component that chooses when to call tools and how to interpret results.

So the integration surface shifts:

  • from “call endpoint X”
  • to “give the runtime safe access to capabilities A, B, C”

That’s why I keep repeating capability boundaries.

Tool use is not “a feature.”
Tool use is delegation.

And delegation always needs:

  • permissions
  • auditing
  • constraints
  • rollback / containment when things go wrong

The layer cake: connector, protocol, runtime

Here’s the mental model I use.

1) Connectors (the integration code)

A connector is a packaged bridge to an external capability:

  • “Search tickets in Jira”
  • “Create a Salesforce lead”
  • “Query a Postgres DB (read-only)”
  • “Send an email (with constraints)”

A connector owns the ugly parts:

  • auth flows
  • retries/backoff
  • pagination
  • rate limiting
  • service-specific edge cases
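
A minimal sketch of that split in TypeScript, with illustrative names (none of this is from a real SDK):

// Hypothetical connector shape: the names are illustrative, not from any SDK.
type AuthToken = { accessToken: string; expiresAt: number };
type CallOptions = { idempotencyKey?: string; timeoutMs?: number };
type CallResult =
  | { ok: true; output: unknown }
  | { ok: false; error: { code: string; retryable: boolean } };

interface Connector {
  name: string;
  version: string;
  // Auth flows, token refresh, and scope handling live inside the connector.
  authorize(scopes: string[]): Promise<AuthToken>;
  // One entry point: retries, backoff, pagination, and rate limiting are
  // the connector's problem, not the caller's.
  call(tool: string, input: unknown, opts?: CallOptions): Promise<CallResult>;
}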

2) A protocol (the interoperability contract)

A protocol standardizes how a client discovers and calls capabilities:

  • discovery: what tools exist?
  • schemas: what inputs/outputs are expected?
  • transport: how are messages exchanged?
  • errors: how are failures represented?
  • state: what is session-scoped vs tool-scoped?
  • security hooks: where consent / policy decisions plug in

3) The runtime (your product)

Your runtime is where product truth lives:

  • policy
  • telemetry
  • user intent
  • “truth boundaries”
  • budget decisions (latency/cost/risk)

The runtime decides:

  • what connectors are allowed
  • what tools can run in this context
  • how results are verified
  • what gets written to audit logs

If you treat connectors as “just functions”, you will rebuild a protocol accidentally.

If you treat them as “just APIs”, you will miss the security boundary.


What standards actually need to standardize

Most discussions jump straight to “tool schemas.”

Schemas matter, but they’re only one layer.
The ecosystem needs agreement on a few boring but load-bearing things.

Capability discovery

Clients need to discover tools/resources dynamically, not hardcode lists.
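
In practice this tends to be a list call that returns schemas. A sketch of the exchange in the JSON-RPC 2.0 framing MCP uses, with illustrative field values:

// A discovery exchange sketch, in the JSON-RPC 2.0 framing MCP uses.
// The client asks what exists...
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// ...and the server enumerates capabilities, each with an input schema.
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "search_tickets",
        description: "Search Jira tickets by text query",
        inputSchema: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"],
        },
      },
    ],
  },
};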

Identity + consent

Who is calling? Who approved? What scopes? For how long?

Error + retry semantics

Retries without idempotency create duplicate charges.

Observability

Trace IDs, audit logs, tool timing, cost, and outcomes must be first-class.

The “tool contract” I wish everyone shipped

If you want portability and safety, your tool contract needs more than a name and JSON schema.

It needs operational intent.

Here’s a compact contract shape that’s practical in real systems:

{
  "name": "create_invoice",
  "description": "Create an invoice for an existing customer",
  "inputsSchema": "JSON Schema here",
  "outputsSchema": "JSON Schema here",
  "sideEffects": "writes",
  "idempotency": { "supported": true, "keyField": "idempotencyKey" },
  "riskClass": "high",
  "auth": { "type": "oauth", "scopes": ["invoices.write"] },
  "rateLimit": { "burst": 10, "perMinute": 60 },
  "costHint": { "unit": "requests", "maxPerCall": 1 }
}

That looks like “extra bureaucracy”… until you’ve operated a tool ecosystem.

Then you realize:

  • idempotency is how you avoid double‑charges
  • riskClass is how you decide when to require explicit confirmation
  • auth scopes are how you keep connectors from becoming skeleton keys
  • rateLimit + costHint are how you keep agents from melting your platform
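
If it helps to see it as code, here is the same contract as a TypeScript type; a sketch mirroring the JSON above, not a spec:

// The contract above as a type. A sketch that mirrors the JSON, not a spec.
type SideEffects = "none" | "reads" | "writes";
type RiskClass = "low" | "medium" | "high";

interface ToolContract {
  name: string;
  description: string;
  inputsSchema: object;   // JSON Schema
  outputsSchema: object;  // JSON Schema
  sideEffects: SideEffects;
  idempotency: { supported: boolean; keyField?: string };
  riskClass: RiskClass;
  auth: { type: "oauth"; scopes: string[] };
  rateLimit: { burst: number; perMinute: number };
  costHint: { unit: string; maxPerCall: number };
}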

A schema tells you what the tool accepts.

A contract tells you what the tool does.


Connectors: the unit of distribution (and the unit of risk)

Let’s be blunt: most “tool integrations” today are copied snippets.

That doesn’t scale, and it’s not governable.

A connector should be a real artifact:

  • versioned
  • testable
  • releasable
  • revocable
  • runnable in a sandbox
  • observable (structured logs + traces)

What “good connector DX” looks like
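
The short version, sketched with hypothetical names: you scaffold a connector in one command, run it locally against a fake client, and assert on its declared contracts in an ordinary unit test.

// Hypothetical harness: loadConnector is illustrative, not a real API.
import assert from "node:assert";
import { test } from "node:test";

// Stubbed loader so the sketch runs; a real harness would import the
// connector package and introspect its declared contracts.
async function loadConnector(path: string) {
  return {
    tools: [
      { name: "create_invoice", riskClass: "high", idempotency: { supported: true } },
    ],
  };
}

test("create_invoice ships a real contract", async () => {
  const connector = await loadConnector("./connectors/billing");
  const contract = connector.tools.find((t) => t.name === "create_invoice");
  assert.ok(contract, "tool is discoverable");
  assert.equal(contract.riskClass, "high");
  assert.equal(contract.idempotency.supported, true);
});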


Protocols: the interoperability multiplier

If connectors are the unit of distribution, protocols are the multiplier.

Without a protocol, every client needs a bespoke adapter per connector.

With a protocol:

  • many clients can talk to many connectors
  • tool discovery becomes dynamic
  • governance can be centralized
  • vendors stop inventing incompatible function-call formats

That brings us to MCP.


MCP: Model Context Protocol (what it is, and why it matters)

MCP is an attempt to standardize how AI clients connect to external context and tools.

The key idea is simple:

A model shouldn’t be hardwired to every integration.
A client should connect to servers that expose tools and context in a standard way.

MCP introduces a few primitives that map cleanly to real product needs:

  • Tools: actions you can execute (with schemas)
  • Resources: data you can fetch/reference (think “read-only context”)
  • Prompts: reusable prompt templates exposed by a server

It also standardizes discovery and interaction patterns so clients can enumerate capabilities and call them consistently.
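
To make that concrete, here is a minimal server sketch assuming the official TypeScript SDK's API as I understand it at the time of writing; check the MCP docs before copying:

// Minimal MCP server sketch (TypeScript SDK as of this writing; verify
// against the current docs). Exposes one read-only tool.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "tickets", version: "0.1.0" });

// Clients discover this via tools/list and invoke it via tools/call.
server.tool(
  "search_tickets",
  { query: z.string().describe("Free-text search query") },
  async ({ query }) => ({
    content: [{ type: "text", text: `Results for: ${query}` }],
  }),
);

await server.connect(new StdioServerTransport());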

In other words: MCP is trying to be HTTP for agent tools—not in protocol mechanics, but in ecosystem impact.

Where MCP fits in your stack

Think of MCP as the “connector protocol” between:

  • AI hosts/clients (chat apps, IDEs, desktop assistants)
  • and MCP servers (connectors exposing tools/resources/prompts)

What MCP solves (and what it doesn’t)

What it solves

Portability

Build a connector once; let multiple clients consume it.

Discovery

Clients can list tools/resources instead of hardcoding them.

Separation of concerns

Tool wiring lives in servers; product policy lives in the runtime.

A real ecosystem shape

Registries, SDKs, and shared conventions become possible.

What it doesn’t solve (by itself)

MCP is not a security boundary.

You still need:

  • sandboxing (process/container isolation)
  • capability gating (confirmations, scopes, risk tiers)
  • audit logging and evidence
  • rate limiting and budgets
  • safe retries and idempotency policies

Protocols give you interoperability.

You still have to design reliability and safety.


Designing MCP-style systems safely: the “capability boundary” checklist

Here’s a practical checklist I use for any tool ecosystem—MCP or not.

Classify tools by side-effect and blast radius

Start with three buckets:

  • Read-only (low risk): search, fetch, query
  • Write (medium/high): create/update/delete
  • Irreversible / money-moving (high): payments, refunds, user deletion

Make this classification part of the tool contract (riskClass).

Require explicit confirmation for high-risk actions

For high-risk tools:

  • show a confirmation UI
  • summarize intended action + parameters
  • require user approval (or a policy engine approval)

This is “capability boundary” in product form.
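
A minimal gate, assuming the ToolContract sketch from earlier (narrowed to the fields the decision needs):

// Hypothetical gate: the decision order matters more than the API shape.
type GateContract = {
  name: string;
  sideEffects: "none" | "reads" | "writes";
  riskClass: "low" | "medium" | "high";
};

async function gateToolCall(
  contract: GateContract,
  input: unknown,
  approve: (summary: string) => Promise<boolean>, // confirmation UI or policy engine
): Promise<"allow" | "deny"> {
  // Read-only tools pass through without ceremony.
  if (contract.sideEffects !== "writes") return "allow";

  // High-risk writes require approval of the concrete action, not the tool.
  if (contract.riskClass === "high") {
    const summary = `${contract.name}(${JSON.stringify(input)})`;
    return (await approve(summary)) ? "allow" : "deny";
  }

  return "allow";
}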

Make idempotency non-negotiable for writes

If a tool can write, it should support idempotency keys, or your runtime must provide a safe wrapper:

  • dedupe requests
  • store tool call intent
  • retry safely
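
A sketch of that wrapper, with an in-memory map standing in for a durable store:

// Safe-write wrapper sketch. The in-memory Map stands in for a durable store.
const completed = new Map<string, unknown>();

async function safeWrite<T>(
  idempotencyKey: string,
  call: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  // 1) Dedupe: if this intent already completed, return the stored result.
  if (completed.has(idempotencyKey)) return completed.get(idempotencyKey) as T;

  // 2) Retry with backoff; the key is what makes retrying a write safe,
  //    because the downstream service can also dedupe on it.
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const result = await call();
      completed.set(idempotencyKey, result); // 3) Record the outcome for future dedupe
      return result;
    } catch (err) {
      lastError = err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 100));
    }
  }
  throw lastError;
}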

Run connectors in a sandbox by default

Treat connector execution like untrusted code:

  • least privilege filesystem
  • network egress allow-lists
  • secret access via scoped tokens
  • CPU/memory/time limits
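
Sketched as a policy object (the shape is hypothetical; enforce it with whatever isolation layer you actually run, such as containers, gVisor, or Firecracker):

// Hypothetical sandbox policy shape: the fields map to the bullets above.
const connectorSandboxPolicy = {
  filesystem: { readOnlyPaths: ["/connector"], writablePaths: ["/tmp"] },
  network: { egressAllowList: ["api.example-crm.com:443"] }, // deny by default
  secrets: { delivery: "scoped-token", ttlSeconds: 900 },    // no raw secrets on disk
  limits: { cpus: 0.5, memoryMb: 256, wallClockMs: 30_000 },
};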

Emit structured evidence for every tool call

Minimum evidence bundle:

  • user/session ID (or pseudonymous identifier)
  • tool name + version
  • input hash + output hash
  • timestamps + latency
  • authorization decision (what allowed it)
  • downstream request IDs (provider correlation IDs)

That evidence is what makes incidents debuggable—and compliance defensible.
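
Sketched as a record type, plus the hashing it implies:

// One evidence record per tool call. Hash payloads instead of storing them raw.
import { createHash } from "node:crypto";

interface ToolCallEvidence {
  sessionId: string;            // pseudonymous identifier is fine
  tool: { name: string; version: string };
  inputHash: string;
  outputHash: string;
  startedAt: string;            // ISO 8601
  latencyMs: number;
  authzDecision: string;        // what allowed it: policy id, approval id, scope
  providerRequestIds: string[]; // downstream correlation IDs
}

const hashPayload = (value: unknown): string =>
  createHash("sha256").update(JSON.stringify(value)).digest("hex");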


“Connectors, protocols, registries”: the ecosystem flywheel

Standards only matter if distribution is solved.

In practice, ecosystems converge on three things:

  1. A protocol (how things talk)
  2. SDKs (how things get built)
  3. A registry/package manager (how things get discovered and installed)

You’ve seen this pattern before:

  • HTTP + client libraries + package registries
  • OAuth + SDKs + app stores
  • OpenAPI + generators + API gateways

If agents are the new integration surface, the same pattern is inevitable.

MCP is interesting because it arrives with:

  • a clear server/client split
  • primitives that map to product reality (tools/resources/prompts)
  • a shape that encourages third-party connector ecosystems

Resources

Model Context Protocol (MCP) — Specification

The authoritative protocol spec: discovery, tools/resources/prompts, message formats, and the interoperability surface MCP is trying to standardize.

MCP Servers — reference implementations

Real, runnable examples of connector-style servers — useful for understanding the “unit of distribution” and what operational contracts look like in practice.

OpenAPI Specification v3.1

The boring-but-load-bearing standard for describing HTTP capabilities and schemas — often the substrate inside connectors (even when the ecosystem protocol is MCP).

OAuth 2.0 Authorization Framework (RFC 6749)

The backbone for “identity + consent + scopes” in connector ecosystems — the part that turns tool calls from “magic” into delegated, auditable access.

Hugging Face Agents Course — MCP unit

A friendly walkthrough of where MCP fits in an agent stack.

“What is Model Context Protocol?” (Raygun)

A high-level explainer with context and terminology to orient newcomers.



What’s Next

This is the end of the 2024 sequence.

If 2023 was:
“LLMs are probabilistic components—design reliability.”

Then 2024 became:
“LLMs are runtimes—design tool ecosystems.”

And that sets up the 2025 turn perfectly.

Because once you have tools… the next problem is coordination.

  • state machines
  • retries
  • idempotency
  • compensations
  • human-in-the-loop checkpoints
  • and workflows that survive outages and model mistakes

Next month: Computer-Use Agents in Production: sandboxes, VMs, and UI-action safety
