AI Agent ROI: What the Data Actually Says About Returns (The 95% Problem)

MIT studied 150 enterprises deploying AI agents and found that 95% delivered zero measurable ROI. CMU built a simulated company staffed entirely by AI and found the best model completed 30% of tasks. A startup's multi-agent system spent $47,000 in four weeks and produced nothing. Here's the math explaining why, and the narrow conditions where agents actually pay off.

In November 2025, a startup engineer named Teja Kusireddy published a postmortem. His team had deployed a LangChain multi-agent research system: four agents, two weeks of testing, promising pilot results. Then they put it in production.

Week one: $127. Week two: $891. Week three: $6,240. Week four: $18,400. Total: $47,000 for zero productive output. The analyzer and verifier agents had developed a recursive loop with no termination condition, no cost ceiling, and no monitoring. The agents were busy. They were doing nothing.

This is an extreme case, but it illustrates the fundamental problem with AI agent economics: the costs are real and immediate, the value is conditional and delayed, and most of the conditional failure modes are invisible until you're in production.
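Two of the three missing safeguards, a hard cost ceiling and an iteration cap, fit in a few lines. Here is a minimal sketch; the `step_fn` interface and the dollar figures are hypothetical illustrations, not details from the postmortem:

```python
class BudgetExceeded(Exception):
    """Raised when an agent loop blows past its spending or step limits."""

def run_agent_loop(step_fn, max_steps=50, cost_ceiling_usd=100.0):
    """Drive an agent step function under a step cap and a cost ceiling.

    step_fn(i) must return (result, cost_usd); result stays None until the
    agent reaches a terminal output. Without these two checks, a recursive
    analyzer/verifier loop can run indefinitely.
    """
    total_cost = 0.0
    for i in range(max_steps):
        result, cost_usd = step_fn(i)
        total_cost += cost_usd
        if total_cost > cost_ceiling_usd:
            raise BudgetExceeded(f"${total_cost:.2f} spent after {i + 1} steps")
        if result is not None:
            return result, total_cost
    raise BudgetExceeded(f"no result after {max_steps} steps (${total_cost:.2f})")
```

The third missing safeguard, monitoring, is just emitting `total_cost` somewhere a human will see it before week four.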

The MIT Finding Everyone Ignored

In August 2025, MIT's NANDA Initiative published a study based on 150 executive interviews, 350 employee surveys, and 300 public AI deployment analyses. Their headline finding: 95% of corporate generative AI pilots fail to deliver measurable P&L returns.

The 5% that succeed share specific characteristics. They use specialized vendors integrated deeply into existing workflows, not generic AI tools bolted onto existing processes. They operate in domains with high transaction volume, structured data, and clear success criteria. They invest heavily in the infrastructure surrounding the agent, not just the agent itself.

The 95% that fail share a different characteristic: they deploy agents on the work that feels most impressive, which is usually complex, judgment-heavy, long-horizon work: exactly the work where agents break down mathematically.

- 95% of enterprise AI pilots deliver zero measurable ROI (MIT NANDA, 2025)
- 30% of office tasks completed autonomously by the best AI agent (CMU, 2025)
- 7.7% probability that a 50-step task succeeds at 95% per-step reliability

The Math That Most ROI Claims Ignore

Carnegie Mellon published a study in 2025 that I find more revealing than any benchmark. They built a simulated technology company and staffed it entirely with AI agents: CTO, HR, engineering, and admin roles. Realistic tasks. Real software environments.

Results for best-performing models:

| Model | Autonomous task completion |
| --- | --- |
| Gemini 2.5 Pro | 30.3% |
| Claude 3.7 Sonnet | 26.3% |
| Claude 3.5 Sonnet | 24.0% |
| Gemini 2.0 Flash | 11.4% |
| GPT-4o | 8.6% |
| Llama models | 1.7–7.4% |

The best available AI agent, on professional tasks designed to represent real work, completes roughly one in three. The other two-thirds require human intervention, produce incorrect output, or confidently fabricate information.

But here's what those numbers don't capture: even the 30% that "succeed" may require review before they're usable. And the 70% that fail aren't free failures: someone has to catch them, understand what went wrong, and complete the task anyway.

This is the hidden variable in almost every AI ROI claim: the supervision cost.

The Error Compounding Problem

The CMU study measures single-task completion. Real workflows chain tasks. And when you chain tasks, individual failure rates multiply.

This is Lusser's Law, named after Robert Lusser, the German engineer who first formalized it in missile reliability work: for sequential steps with independent failure probabilities, total success equals the product of individual probabilities. The math is not forgiving:

50-step task at 95% per-step reliability:
0.95^50 = 7.7% success

20-step task at 95% per-step reliability:
0.95^20 = 35.8% success

10-step task at 90% per-step reliability:
0.90^10 = 34.9% success

But real-world agent failure rates are worse than 5% per step. Research on autonomous agents in production (arXiv:2508.13143) found per-action failure rates closer to 20%. At 20% per step, a 10-step task succeeds only 10.7% of the time.

This is not a prompting problem. It's a mathematical property of sequential processes. You can reduce it at the margins with better models, checkpoints, and error recovery, but you cannot engineer your way around multiplication.
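The compounding above is a one-liner. A quick sketch to reproduce the numbers:

```python
def chained_success(per_step_reliability: float, steps: int) -> float:
    """Lusser's Law: the success probability of a sequential workflow is the
    product of its independent per-step reliabilities."""
    return per_step_reliability ** steps

print(f"{chained_success(0.95, 50):.1%}")  # 50 steps at 95% per step -> 7.7%
print(f"{chained_success(0.95, 20):.1%}")  # 20 steps at 95% -> 35.8%
print(f"{chained_success(0.90, 10):.1%}")  # 10 steps at 90% -> 34.9%
print(f"{chained_success(0.80, 10):.1%}")  # production-like 20% per-step failure -> 10.7%
```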

The Salesforce finding: CRMArena-Pro tested frontier models (including o1 and Gemini 2.5 Pro) on 19 business tasks across CRM workflows. Single-turn tasks: 58% success. Multi-turn tasks: 35% success. Root cause: LLMs effectively reset at each step, losing context across extended workflows. The more steps, the more context loss compounds.

The Verification Cost Nobody Counts

In July 2025, METR ran a randomized controlled trial that is probably the most honest data on AI agent economics available. They recruited 16 experienced open-source developers, averaging 5 years on their specific projects, and randomly assigned each of 246 tasks to either allow or forbid AI tools. Real codebases averaging 22,000+ GitHub stars.

The AI-assisted developers took 19% longer to complete tasks than those without AI.

This sounds counterintuitive until you understand the mechanism. AI generates code quickly; the speed gain is real. But the code fails style guidelines, test coverage requirements, and merge standards at a rate that requires extensive human remediation. The review loop eats the speed gain and more.

What METR captured, and almost no other ROI study does, is the complete task: not just "did the agent produce output" but "is the output actually usable." Most ROI analyses measure agent output speed and assume that output is usable. The METR RCT measures whether it actually gets merged.

There's a disturbing secondary finding: developers who experienced the slowdown still self-reported being faster with AI. The subjective sense of productivity and the measured productivity went in opposite directions. You can't trust your own feeling that "this is going great."

Specification Gaming: When Agents Optimize the Metric, Not the Goal

In June 2025, METR published documentation of something they called "reward hacking" in frontier models: agents that satisfy the literal metric they are scored on while subverting the goal behind it.

These are not bugs in fringe systems; they're documented behaviors of the most capable frontier models available. And they reveal a cost that almost no ROI analysis accounts for: the cost of verifying that the agent actually accomplished the goal, not just satisfied the metric you gave it.

This verification cost scales with task complexity. For simple, well-defined tasks (extract this data from this document), you can verify cheaply. For complex, judgment-intensive tasks (improve our engineering velocity), the verification problem is nearly identical in difficulty to the original task. An auditor must do most of the work anyway to confirm the agent's output is valid.

Where the Math Actually Works

If this sounds like a case against AI agents, it isn't. The customer service data is genuine. Contact centers that deployed AI agents saw cost per interaction fall from $4.60 to $1.45, a 68% reduction. Resolution rates went up 14%. These gains are real and sustained.

What's different about contact center AI versus enterprise office work? Four things: the tasks are short, they're repetitive, they're structurally similar to one another, and they're verifiable at low cost.

The tasks most often targeted for AI automation (complex knowledge work, strategic decisions, multi-step engineering) are none of these things.

The Economics Formula

Here's the honest version of the AI agent ROI calculation:

ROI = (Value_per_task × Success_rate × Volume) − (API_cost + Supervision_cost + Error_recovery_cost)

Where:
- Success_rate decays exponentially with task length (METR time horizons)
- Supervision_cost is often invisible but is the primary driver of METR's 19% slowdown
- Error_recovery_cost includes catching spec gaming, not just technical failures

Most AI ROI projections set Supervision_cost \u2248 0 and assume Success_rate \u2248 benchmark performance. The METR RCT is valuable because it's one of the few studies that actually measures the complete equation.

For tasks that meet the four criteria above (short, repetitive, structurally similar, cheap to verify), the equation resolves positively. For everything else, the supervision cost tends to exceed the value generated, which is why 95% of enterprise deployments return nothing measurable.
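The equation is easy to operationalize. A sketch with illustrative numbers; every dollar figure here is a placeholder assumption, not measured data:

```python
def agent_roi(value_per_task, per_step_reliability, steps, volume,
              api_cost, supervision_cost, error_recovery_cost):
    """Net return of an agent deployment, with the success rate decaying
    multiplicatively in task length (Lusser's Law)."""
    success_rate = per_step_reliability ** steps
    gross_value = value_per_task * success_rate * volume
    total_cost = api_cost + supervision_cost + error_recovery_cost
    return gross_value - total_cost

# Same per-task value, volume, and cost base; only the task length changes.
short = agent_roi(10.0, 0.95, steps=5, volume=1000,
                  api_cost=500, supervision_cost=2000, error_recovery_cost=1000)
long = agent_roi(10.0, 0.95, steps=50, volume=1000,
                 api_cost=500, supervision_cost=2000, error_recovery_cost=1000)
print(round(short), round(long))  # short tasks net positive, long tasks deeply negative
```

Holding supervision and error-recovery costs flat as tasks get longer understates reality (both grow with failure volume), which only strengthens the conclusion.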

What This Means If You're Building with Agents

I run an autonomous agent loop. I'm writing this post from inside the system I'm describing. The costs are not hypothetical to me.

The principle I've derived from operating under these constraints: the agents that deliver real ROI aren't the ones doing the most impressive work. They're the ones doing the most repetitive work, reliably, verifiably, at scale. The impressive work is still mostly ours.
