Strategy Saturday: How to Build an Internal AI Center of Excellence (Without It Becoming a Bottleneck)

Published March 28, 2026 — 7 min read

TL;DR: An AI Center of Excellence sounds like a good idea until it becomes the team everyone has to wait on. Done right, a CoE accelerates AI adoption across your org — done wrong, it's just another approval gate. Here's how to set one up that actually ships.

Why Most AI CoEs Stall

MIT research found that 95% of generative AI pilots inside enterprises fail to deliver measurable returns — and the culprit isn't the models. It's integration. Generic AI tools don't adapt to workflows, don't retain feedback, and don't fit how teams actually work.

A CoE exists to fix that. But many orgs build a team of AI experts and immediately turn them into a committee. Every team routes requests through them. They become a bottleneck. Momentum dies.

The goal isn't a team that approves AI work — it's a team that makes everyone else faster at it.

Centralized vs. Federated: Pick Based on Maturity, Not Org Chart

Early-stage (< 5 AI initiatives running): Go centralized. You need a core team focused on standards, vendor evaluation, and getting the first few production deployments right. Spreading the team thin kills quality.

Scaling stage (5+ projects, multiple BUs): Shift to a federated model. Embed AI practitioners inside product, ops, and marketing teams. The central CoE moves to governance, shared tooling, and cross-BU knowledge transfer.

The mistake is treating the org model as permanent. It should evolve as AI maturity grows.

The Minimum Viable CoE Team

You don't need 20 people. For most mid-market companies, start with about five roles.

If you're smaller, one strong generalist engineer plus an AI-aware PM can cover this. The key is that someone owns governance before something goes wrong — not after.

What the CoE Actually Owns

Three things — and only three:

  1. Standards — prompt versioning, model selection criteria, eval frameworks, and output validation requirements
  2. Shared infrastructure — observability tooling, sandboxed dev environments, cost dashboards
  3. Knowledge — internal playbooks, post-mortems, and a registry of what's been tried

Everything else stays with the product or ops team closest to the problem. The CoE lays the rails; the business units run the trains.
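
To make the standards piece concrete, here is a minimal sketch of what a prompt registry with a promotion gate might look like. Everything in it (the PromptVersion class, the eval threshold, the promote function) is an illustrative assumption, not any specific tool's API.

```python
# Minimal sketch of a CoE-owned prompt standard: every prompt is versioned and
# must clear the shared eval framework before it can be promoted to production.
# All names here are hypothetical examples, not a real internal library.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PromptVersion:
    name: str          # e.g. "support-triage"
    version: str       # bumped on every change
    template: str      # the prompt text itself
    model: str         # model the prompt was evaluated against
    eval_score: float  # score from the shared eval framework
    approved: bool = False
    created: date = field(default_factory=date.today)

MIN_EVAL_SCORE = 0.85  # threshold the CoE sets and tunes per use case

def promote(prompt: PromptVersion) -> PromptVersion:
    """Gate promotion on the eval score, not on a monthly meeting."""
    if prompt.eval_score < MIN_EVAL_SCORE:
        raise ValueError(
            f"{prompt.name} v{prompt.version} is below the eval threshold "
            f"({prompt.eval_score:.2f} < {MIN_EVAL_SCORE})"
        )
    prompt.approved = True
    return prompt
```

The point of the gate is that the standard enforces itself: a business unit can ship as fast as it likes, as long as the eval score clears the bar the CoE owns.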

The Trap: Governance Theater

The #1 way CoEs die is by confusing process with progress. Review boards that meet monthly. Approval checklists nobody reads. Risk assessments filed and forgotten.

Real governance is lightweight and automated where possible: audit logs on every agent action, automated data lineage tracking, and a clear escalation path when something breaks. Federated learning and privacy-preserving tools are increasingly standard — not nice-to-haves.
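
As one illustration of "automated where possible", the sketch below wraps agent-facing functions in an audit-log decorator so every action leaves a record without anyone filing a form. The decorator name, log format, and example function are assumptions made for this example, not part of any particular framework.

```python
# Hypothetical sketch: audit logging on every agent action via a decorator.
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_coe.audit")

def agent_action(action_name: str):
    """Wrap an agent-facing function so its invocation and outcome are auditable."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            started = datetime.now(timezone.utc).isoformat()
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            except Exception:
                status = "error"
                raise
            finally:
                audit_log.info(json.dumps({
                    "action": action_name,
                    "started": started,
                    "status": status,
                    # log argument names, not values, to avoid leaking data into logs
                    "kwargs": list(kwargs.keys()),
                }))
        return wrapper
    return decorator

@agent_action("draft_customer_reply")
def draft_customer_reply(ticket_id: str, tone: str = "neutral") -> str:
    # Stand-in for the real model call.
    return f"Drafted a {tone} reply for ticket {ticket_id}"
```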

Build it lean enough that teams want to use the CoE, not route around it.

Start here: Before hiring anyone, map the 3–5 AI use cases your org is already running (officially or not). That's your CoE's first charter. Everything else flows from what you're actually trying to govern.
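
One lightweight way to capture that first charter is a plain inventory anyone can read and update. The sketch below is illustrative only; the field names and example entries are assumptions, not a prescribed schema.

```python
# Hypothetical sketch of a first-charter inventory: the AI use cases already
# running in the org, official or shadow, and what data each one touches.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    owner_team: str
    status: str           # "shadow", "pilot", or "production"
    model_or_vendor: str
    data_touched: str

charter = [
    UseCase("Support ticket triage", "CX Ops", "production", "hosted LLM API", "ticket text"),
    UseCase("Sales email drafts", "Sales", "shadow", "consumer chatbot", "CRM exports"),
    UseCase("Contract clause review", "Legal", "pilot", "fine-tuned model", "contracts"),
]

# A simple risk proxy: shadow usage touching sensitive data gets governed first.
for uc in sorted(charter, key=lambda u: u.status != "shadow"):
    print(f"{uc.name} ({uc.owner_team}) - {uc.status}, data: {uc.data_touched}")
```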

FAQ

What's the difference between an AI CoE and a data science team?

A data science team builds models. An AI CoE sets standards, governs deployments, and enables the whole organization — including non-technical teams — to use AI responsibly. The CoE often includes data scientists, but its mandate is broader.

Should a CoE report to IT, the CTO, or the business?

Ideally to a C-level sponsor with cross-functional authority — a CDO, CAIO, or COO. Reporting purely to IT often limits business impact; reporting purely to one BU creates favoritism.

When is the right time to start a CoE?

When you have more than two or three AI projects running in parallel and no shared standards. If teams are duplicating vendor evals or reinventing prompt patterns, you're already overdue.

How do you measure CoE success?

Time-to-production for new AI initiatives, number of teams actively using shared infrastructure, and reduction in AI-related incidents. Not headcount. Not meeting frequency.
