
Once agents can call tools, connectors become the new platform surface. This month is a playbook for adopting MCP at scale: patterns that work, versioning that doesn’t break customers, and governance that keeps the ecosystem sane.
Axel Domingues
In 2023–2024 we learned a painful truth:
LLMs aren’t “smart APIs”. They’re probabilistic components.
In 2025, the failure mode changes again:
Agents aren’t “cool demos”. They’re integration platforms.
And every integration platform eventually hits the same wall:
Connectors become the product.
Not the model.
This month is about the connector ecosystem that forms around MCP (Model Context Protocol): how teams adopt it, how ecosystems evolve, and what governance you need so you don’t end up with a tool graveyard, broken integrations, and security incidents.
What changes at “connector scale”
You stop integrating “an app”.
You start integrating an ecosystem.
The real problem
Not “can agents call tools?”
Can we do it safely, repeatably, and at scale?
The outcomes we want
Portability, compatibility, incident containment, and predictable change.
The deliverable
A playbook: adoption patterns, versioning rules, and governance contracts.
MCP exists because everyone kept reinventing the same plumbing.
The MCP specification describes an open, JSON-RPC based protocol with defined roles (hosts/clients/servers), transports, and security/authorization guidance.
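On the wire, that protocol is ordinary JSON-RPC 2.0. A minimal sketch of a tool invocation (the `tools/call` method name follows MCP's convention; the tool key and arguments are invented for illustration):

```python
import json

# A JSON-RPC 2.0 request in the shape MCP hosts use to invoke a tool.
# The method follows MCP's tools/call convention; the tool key and
# arguments are invented for this sketch.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "jira.search_issues",
        "arguments": {"query": "project = ACME AND status = Open"},
    },
}

wire = json.dumps(request)   # what actually crosses the transport
decoded = json.loads(wire)
print(decoded["method"])     # tools/call
```

The point of standardizing this frame is that any host can talk to any server without bespoke glue: the envelope, not the prompt, carries the contract.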
That matters because ecosystems don’t scale on “prompt conventions”.
They scale on protocols + contracts.
Most teams think “adopting MCP” is a technical decision.
In practice it’s an organizational decision, because MCP changes where integration work lives.
Here are the adoption patterns I see repeating.
Pattern 1 — Personal workstation connectors
A developer runs MCP servers locally (stdio), connects their IDE/agent, and iterates fast.
Pattern 2 — Team-shared internal servers
A team exposes internal tools/data via managed MCP servers (HTTP) behind SSO and network controls.
Pattern 3 — Productized connectors
A platform team publishes vetted connectors to a registry, with versioning, reviews, and support.
Pattern 4 — Ecosystem marketplaces
Third parties ship MCP servers, and hosts consume from a marketplace with governance rules.
This is your “week 1” pattern.
It also creates two predictable traps: secrets living on laptops, and local-only connectors quietly becoming load-bearing.
Use it to learn.
Don’t confuse it with a production architecture.
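As a concrete sketch of the stdio flavor of this pattern: the host spawns the server process and exchanges newline-delimited JSON-RPC over its pipes. The child process below is a stand-in echo server so the sketch is self-contained; a real setup launches an actual MCP server binary and performs an initialize handshake first.

```python
import json
import subprocess
import sys

# Stand-in "server": reads JSON-RPC lines from stdin, answers each one.
# A real MCP server would expose actual tools and require initialization.
SERVER = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"tools": []}}
    print(json.dumps(resp), flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
req = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(req) + "\n")
proc.stdin.flush()
resp = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(resp["result"])  # {'tools': []}
```

Notice what this pattern gives you (fast iteration, zero infrastructure) and what it doesn't (no shared auth, no audit trail, no ownership).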
This is when MCP becomes “real”:
If you do only one thing right in this stage:
Once multiple teams depend on connectors:
This is where you need:
Marketplaces multiply value… and multiply risk.
If you don’t have governance, you’ll end up with a tool graveyard, broken integrations, and security incidents.
This pattern is where the ecosystem needs adult supervision.
Most connector failures aren’t “bugs”.
They’re missing contracts.
A connector that only says “here’s a tool” is not enough.
A production connector needs a declared contract:
Identity & auth model
Who is calling?
User tokens, service accounts, delegation, or impersonation?
Data boundary
What data can leave the system?
PII rules, retention, and redaction.
Side effects
What changes can this tool make?
Idempotency, approvals, and rollback.
Operational envelope
Rate limits, timeouts, cost caps, and failure semantics.
If you don’t force these contracts up-front, you’ll pay for them later—in incidents.
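These four surfaces can be made machine-checkable. A minimal sketch in Python (the class and field names are invented for illustration, not an MCP structure):

```python
from dataclasses import dataclass

# Illustrative connector contract mirroring the four declared surfaces:
# identity & auth, data boundary, side effects, operational envelope.
@dataclass
class ConnectorContract:
    auth_mode: str              # e.g. "delegated-user", "service-account"
    data_classification: str    # e.g. "public", "internal", "restricted"
    side_effects: str           # "none" | "writes" | "destructive"
    idempotent: bool
    rate_limit_per_minute: int
    timeout_ms: int

    def requires_approval(self) -> bool:
        # Destructive, non-idempotent tools should route through a human.
        return self.side_effects == "destructive" and not self.idempotent

contract = ConnectorContract(
    auth_mode="delegated-user",
    data_classification="internal",
    side_effects="writes",
    idempotent=True,
    rate_limit_per_minute=120,
    timeout_ms=8000,
)
print(contract.requires_approval())  # False
```

The value isn't the dataclass; it's that every field is a question the connector author must answer before shipping.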
Connector ecosystems break when versioning is treated as an afterthought.
You need to manage three version surfaces: the MCP protocol version, the connector package version, and the observable behavior of each tool.
The MCP spec itself is versioned and evolves over time. So “we support MCP” is not a stable statement unless you define the compatibility window.
Breaking changes aren’t “schema changes”.
Breaking changes are behavior changes that users (or agents) experience.
Examples: a tool that starts requiring a new argument, a response whose default ordering changes, or an error that becomes a silent empty result.
Treat those as real compatibility events.
This is the policy I’d run in an internal registry.
Pick a window (e.g., “support the last 2 MCP spec versions”) and publish it. Don’t let “latest” be a moving target without notice.
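A sketch of what "publish the window" can look like in code. The pinned list is the policy; nothing infers "latest" at runtime (the spec version strings here are illustrative):

```python
# Compatibility window as explicit, published policy. MCP spec versions
# are dated strings; the registry pins the supported window instead of
# tracking "latest". The versions listed are illustrative.
SUPPORTED_WINDOW = ["2025-06-18", "2025-11-25"]  # last 2 spec versions

def is_supported(protocol_version: str) -> bool:
    return protocol_version in SUPPORTED_WINDOW

print(is_supported("2025-11-25"))  # True
print(is_supported("2024-11-05"))  # False
```

When the window moves, that's a compatibility event with notice, not a silent constant change.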
At runtime, hosts should not assume a tool exists. They should discover capabilities and degrade gracefully.
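Hosts can make that concrete with a discovery-then-fallback wrapper. A minimal sketch (`list_tools` stands in for a real tools/list round trip; the tool keys are invented):

```python
# Discover capabilities at runtime and degrade instead of assuming a
# tool exists. In a real host, list_tools would query the connected
# MCP server; here it is stubbed so the sketch is self-contained.
def list_tools() -> set[str]:
    return {"jira.search_issues"}  # imagine this came from the server

def call_or_fallback(tool: str, fallback):
    if tool in list_tools():
        return f"calling {tool}"
    return fallback()  # degrade gracefully instead of crashing the workflow

print(call_or_fallback("jira.search_issues", lambda: "read-only mode"))
print(call_or_fallback("jira.create_issue", lambda: "read-only mode"))
```

The fallback doesn't have to be clever; "tell the user this action is unavailable" beats a stack trace mid-workflow.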
Deprecation isn’t a README warning. It’s an announced timeline, a migration path, and an enforced end-of-support date.
Every connector version must ship its declared contract.
The easiest way to force that discipline is to require a manifest that declares it.
Here’s a compact example (invented format, but practical):
```yaml
# connector.yaml
name: "acme-jira"
version: "2.3.0"

mcp:
  protocol: "2025-11-25"
  transports: ["http"]

capabilities:
  tools:
    - key: "jira.search_issues"
      side_effects: "none"
      data_classification: "internal"
    - key: "jira.create_issue"
      side_effects: "writes"
      idempotency_key: true

auth:
  mode: "delegated-user"
  provider: "oauth2"
  scopes:
    - "jira:read"
    - "jira:write"

policy:
  rate_limits:
    per_minute: 120
  timeouts_ms:
    default: 8000
  cost_controls:
    max_calls_per_session: 25

security:
  secrets_storage: "vault"
  audit_events:
    - "tool_call"
    - "authz_denied"
    - "data_export"

lifecycle:
  deprecates: []

support:
  owner_team: "platform-integrations"
  oncall: true
```
This manifest becomes your review checklist, your compatibility gate, and your audit record.
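A registry can enforce it mechanically. A minimal validation sketch (field names mirror the manifest above; the rules themselves are illustrative registry policy, not part of MCP):

```python
# Minimal manifest validation. Required fields mirror the connector.yaml
# example above; the specific rules are illustrative registry policy.
REQUIRED = {"name", "version", "mcp", "capabilities",
            "auth", "policy", "security", "support"}

def validate(manifest: dict) -> list[str]:
    errors = [f"missing field: {f}" for f in sorted(REQUIRED - manifest.keys())]
    for tool in manifest.get("capabilities", {}).get("tools", []):
        # Writing tools must be safely retryable.
        if tool.get("side_effects") == "writes" and not tool.get("idempotency_key"):
            errors.append(f"{tool.get('key')}: writing tool must declare idempotency_key")
    if not manifest.get("support", {}).get("owner_team"):
        errors.append("connector must have an owner team")
    return errors

manifest = {
    "name": "acme-jira",
    "version": "2.3.0",
    "mcp": {"protocol": "2025-11-25"},
    "capabilities": {"tools": [{"key": "jira.create_issue", "side_effects": "writes"}]},
    "auth": {"mode": "delegated-user"},
    "policy": {},
    "security": {},
    "support": {"owner_team": "platform-integrations"},
}
print(validate(manifest))
```

Run this in CI on every connector publish and "missing contract" stops being an incident category.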
If July was “coordination contracts for multi-agent systems”…
November is the same idea applied to the ecosystem:
Governance is how you prevent entropy from becoming the default architecture.
In late 2025, multiple major vendors publicly aligned on neutral stewardship for key agent interoperability standards (including MCP) under a vendor-neutral foundation.
That’s not “industry drama”.
It’s recognition that connector ecosystems need: (1) a stable spec, (2) neutral stewardship, and (3) enforced governance.
A lot of teams do (1) and ignore (2) and (3).
That’s how you get breaches.
If you’re building (or adopting) a connector registry, make these rules explicit.
No owner means no fixes, no incident response, and no deprecation plan.
If a connector has no owner, it should be marked unsupported and excluded from production usage.
At minimum, every connector should be reviewed for its auth model, data boundary, side effects, and operational envelope.
MCP can expose powerful execution paths, so “tool access” must be treated like production access.
Treat connectors like supply chain artifacts: signed, versioned, and with provenance you can trace.
If you can’t answer “where did this connector come from?” you don’t have a registry—you have a risk engine.
Deprecation needs an owner, a published timeline, and enforcement at install time.
Otherwise, you’ll carry dead versions forever.
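A sketch of the enforcement half, assuming each deprecated track gets an announced end-of-support date that the registry checks at install time (names, tracks, and dates are invented for illustration):

```python
from datetime import date

# Deprecation gate: past the announced end-of-support date, the
# registry refuses new installs of that track. Entries are illustrative.
END_OF_SUPPORT = {("acme-jira", "1.x"): date(2026, 3, 1)}

def installable(name: str, track: str, today: date) -> bool:
    eos = END_OF_SUPPORT.get((name, track))
    return eos is None or today < eos

print(installable("acme-jira", "1.x", date(2026, 6, 1)))  # False
```

Blocking new installs while leaving existing deployments running gives teams a migration window without letting the dead version spread.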
Last month (August) was about connector security at the call-site.
This month is about connector security at the ecosystem level.
The main ecosystem lesson:
If connectors are over-privileged, every other control is just damage control.
Common failure patterns in MCP-enabled systems include over-privileged exposure, unvalidated context sources, and instruction precedence confusion.
Runtime governance means enforcing the declared policy at call time: rate limits, timeouts, cost caps, and audit events.
This is not “extra security”.
This is what makes connectors operable.
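What that looks like at the call site, as a minimal sketch: a host-side wrapper that enforces the per-session cost cap and per-minute rate limit from the manifest's policy block (the wrapper class itself is invented for illustration):

```python
import time

# Host-side governance wrapper: checks the declared budget and rate
# limit before any tool call goes out. Defaults mirror the policy
# block in the manifest example; the class is illustrative.
class GovernedCaller:
    def __init__(self, max_calls_per_session=25, per_minute=120):
        self.max_calls = max_calls_per_session
        self.per_minute = per_minute
        self.total = 0
        self.window = []  # timestamps of calls in the last 60s

    def allow(self) -> bool:
        now = time.monotonic()
        self.window = [t for t in self.window if now - t < 60]
        if self.total >= self.max_calls or len(self.window) >= self.per_minute:
            return False  # would also emit an audit event in practice
        self.total += 1
        self.window.append(now)
        return True

caller = GovernedCaller(max_calls_per_session=2)
results = [caller.allow() for _ in range(3)]
print(results)  # [True, True, False]
```

The denial path matters as much as the allow path: a refused call should be an audited, explainable event, not a silent drop.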
Ecosystem problems are rarely visible from a single service dashboard.
You need ecosystem-level observability:
Adoption health
Active installs, usage growth, and “top connectors” by workflow impact.
Compatibility health
Breakage rate after upgrades, deprecated usage, and failed contract tests.
Security health
Denied calls, abnormal tool sequences, and suspicious cross-tenant access attempts.
Reliability health
Timeouts, tail latency, error budgets, and retry storms per connector.
And you need one more metric that people forget:
“Human pain per week.”
How many hours did teams spend debugging broken connectors, chasing down owners, and shepherding forced migrations?
If that number grows, you don’t have a connector platform—you have an integration tax.
Model Context Protocol (MCP) — Specification & docs
The canonical reference for MCP roles (host/client/server), discovery, transports, and the contract surface you standardize at connector scale.
MCP Servers — reference implementations
Practical server examples you can run and study to see real tool/resource exposure patterns, packaging, and operational concerns.
MCP isn’t mandatory. But ecosystems tend to converge on protocols that solve the “many hosts, many tools” problem consistently.
The lesson isn’t “pick MCP”.
The lesson is: pick a protocol + publish governance, or your ecosystem will fragment.
You can build your own protocol, but you’ll reinvent discovery, auth, versioning, and governance from scratch.
Protocols are how ecosystems avoid re-learning the same lessons at scale.
Start with an internal registry, a required manifest, and a published compatibility window.
Then iterate as usage grows.
This month was about the ecosystem layer: standards, registries, and governance.
Next month is the synthesis:
Reference Architecture v2: the operable agent platform
Where we tie together connectors you can trust, traces you can debug, evals you can ship, and humans you can hand off to.