Bill Gurley Says the AI Bubble Is About to Burst. Travis Kalanick’s Timing Says He’s Right.

THE NUMBER: $300 billion — HSBC’s estimate of cumulative cash burn by foundational AI model companies through 2030. Bill Gurley sat on Uber’s board while it burned $2 billion a year and says it gave him “high anxiety.” OpenAI and Anthropic make Uber’s bonfire look like a birthday candle. “God bless them,” Gurley told CNBC. “It’s a scary way to run a company.”


Travis Kalanick showed up on the All-In podcast this week with a new robotics venture called Atoms and opinions about who’s winning the autonomy race. That’s the headline most people caught. But the deeper signal is the timing. The guy who invented VC-subsidized market capture — who burned through $25 billion proving you could will a two-sided marketplace into existence if your investors’ pockets were deep enough — is re-entering the arena at the exact moment when the economics of AI are about to hit the same wall Uber spent a decade climbing over.

Kalanick knows what “burn-and-pray” looks like because he wrote the playbook. Uber’s cumulative losses from inception through its first profitable quarter exceeded $25 billion. To this day, the company remains cumulatively free-cash-flow negative. The strategy worked — Uber owns the market — but the body count among competitors, investors, and Kalanick’s own career was staggering. Now look at what the foundational model companies are doing: HSBC projects OpenAI alone will need $207 billion in additional funding by 2030 just to cover cloud computing rentals from Microsoft and Amazon. Total estimated cash burn across the sector: $280–$300 billion. Gurley looked at those numbers and said what anyone who lived through the Uber years would say: “One day, I just think we trip and run out of money on those things. I do think that moment stands in front of us.”

Meanwhile, Morgan Stanley’s Todd Castagno calculates that hyperscaler capex-to-sales will hit 37% by 2028 — blowing past the 32% peak of the dot-com era. That’s $2 trillion in spending between 2026 and 2028, representing 40% of the Russell 1000. And tucked inside that number is a detail that should make anyone nervous: Amazon, Meta, Alphabet, Microsoft, and Oracle are sitting on nearly $1 trillion in undisclosed future lease commitments for data centers that haven’t been built yet, most of which don’t even hit the balance sheet under GAAP.

The AI race isn’t a technology story anymore. It’s a capital structure story. And the capital structure just got a lot more fragile — because agents are about to make the burn rate exponentially worse.

The Uber Playbook Meets the $5,000 Subscription

Gurley’s Uber comparison isn’t just colorful — it’s structurally precise. Uber subsidized rides below cost to build network effects and crush competitors. The AI labs are subsidizing intelligence below cost to build developer lock-in and capture enterprise contracts. The strategy is identical. The scale is not.

Uber burned $2 billion a year at peak. Anthropic’s CFO disclosed in a recent court filing that the company has spent more than $10 billion training models that generated half that in cumulative revenue. OpenAI is reportedly losing money on every ChatGPT Plus subscriber. And here’s the part Gurley didn’t say out loud but clearly implied: there are 30 to 40 AI startups all running the same playbook simultaneously, all losing billions, and they can’t all win.

The pricing paradox is already visible. OpenAI just shipped GPT-5.4 mini and nano — models optimized for speed and cost, 2x faster than GPT-5 mini, hitting 94% of flagship benchmarks. That’s not a product launch. It’s triage. When Sam Altman tells Fidji Simo to kill the side quests — Sora, the Atlas browser, the Jony Ive hardware device — the translation is blunt: stick to things that make money. You can’t burn $5,000 in inference tokens to serve a customer paying $200 a month.

But here’s the trap. Price tokens at cost and usage drops like a stone — developers optimize, compress, and switch to smaller models. Price them to make a profit today and everyone cancels — the value proposition evaporates. The only sustainable path is massive growth in usage that creates real, measurable value. Making funny videos and generating birthday cards is great for engagement metrics. It’s terrible for unit economics.
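The subsidy trap is simple arithmetic. Here is a minimal sketch of flat-rate subscription economics; the subscription price, per-token cost, and usage tiers are all illustrative assumptions, not figures from any vendor:

```python
# Flat-rate subscription vs. metered inference cost.
# Every number here is an illustrative assumption.

SUBSCRIPTION = 200.0   # $/month flat fee, assumed
COST_PER_MTOK = 15.0   # provider's cost in $ per million tokens, assumed

def margin(tokens_per_month: float) -> float:
    """Monthly profit (or loss) on one flat-rate subscriber."""
    return SUBSCRIPTION - tokens_per_month / 1_000_000 * COST_PER_MTOK

for label, tokens in [("light user", 2_000_000),
                      ("power user", 50_000_000),
                      ("agentic user", 300_000_000)]:
    print(f"{label:12s} margin: ${margin(tokens):+,.2f}/mo")
```

Under these assumptions the light user is profitable, but the heaviest users lose the provider thousands of dollars a month each — which is exactly why flat pricing and heavy usage can’t coexist at cost.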

Alibaba just made the math worse. The company raised cloud GPU prices 25–34% this week, citing surging global AI demand and rising hardware procurement costs. AWS and Microsoft have hiked prices too. Global semiconductor revenue is on track to hit $1 trillion for the first time in 2026. The input costs are rising. The willingness to pay is not.

And there’s an escape hatch forming that should terrify every lab CEO. Apple has been quietly building chip infrastructure — the M-series silicon, the Neural Engine, the on-device model stack — so that models can run locally. When GPT-5.4 mini hits 94% of frontier performance, the math gets obvious: a Mac Mini with an M5 chip running open-source models delivers 90% of the intelligence at zero inference cost beyond electricity. No token meter. No API bill. No dependency on a company burning $10 billion a year. When Claude and Codex subscriptions start heading north of $500 a month — and they will, because the current pricing is subsidized suicide — the local inference option starts looking less like a compromise and more like a liberation. The labs aren’t just racing each other. They’re racing the silicon.
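The hardware escape hatch is also just arithmetic. A rough break-even sketch, where the hardware price, power draw, electricity rate, and equivalent API spend are all assumptions chosen for illustration:

```python
# Break-even: one-time local hardware vs. ongoing metered API spend.
# Every figure below is an illustrative assumption.

HARDWARE_COST = 2_000.0      # small desktop capable of running open-weight models, assumed
WATTS = 50.0                 # average power draw under load, assumed
KWH_PRICE = 0.15             # $/kWh electricity rate, assumed
API_SPEND_PER_MONTH = 500.0  # what the same workload costs via a metered API, assumed

# Electricity is the only recurring cost of local inference in this sketch.
electricity_per_month = WATTS / 1000 * 24 * 30 * KWH_PRICE
monthly_savings = API_SPEND_PER_MONTH - electricity_per_month
breakeven_months = HARDWARE_COST / monthly_savings

print(f"electricity: ${electricity_per_month:.2f}/mo")
print(f"break-even:  {breakeven_months:.1f} months")
```

With these assumed numbers the box pays for itself in about four months; the point is not the exact figure but that the payback period shrinks as subscription prices climb.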

The signal for allocators: Gurley says to watch for the “AI reset” and then “start gobbling up” SaaS stocks when they get cheap enough. He’s telling you the bubble pops before the value arrives. Salesforce and ServiceNow are already down 20%+ since January. The question isn’t whether there’s a correction — it’s whether you’re positioned to buy into it.

Agents Don’t Clock Out. The Burn Rate Just Went 24/7.

Everything above describes the economics of a chatbot — a thing you open, ask a question, and close. Now multiply that by infinity, because agents never close.

Charly Wargnier flagged the signal on X today: Anthropic just dropped Dispatch, a research preview in Claude Cowork that pairs your phone to a persistent Claude session on your desktop. Message tasks from anywhere. Come back to finished work. Your files stay local, Claude asks permission before touching anything, but the session runs continuously. As Wargnier put it: “the flexibility is insane.”

It is insane. It’s also the beginning of a compute demand curve that makes current infrastructure spending look quaint.

At GTC on Monday, Jensen Huang spent three hours pitching Nvidia’s answer to the same trajectory. NemoClaw integrates Nvidia’s Nemotron models into OpenClaw’s autonomous agent framework, with OpenShell providing enterprise-grade security guardrails. Huang called OpenClaw “the most popular open-source project in the history of humanity” and pitched the Vera Rubin platform — a seven-chip AI factory delivering 60 exaflops — as the infrastructure built to scale agentic AI. Perplexity has its Computer. Manus launched Google Workspace CLI integration. The agent ecosystem is expanding in every direction.

Here’s what that means for the burn rate: a chatbot query is a transaction. An agent is a salary. When Tuki posts that Anthropic built an AI that “takes orders from your phone and does your work while you sleep,” he’s describing a system that consumes tokens continuously — not just during business hours, but around the clock, forever. OpenClaw is already running 24/7 on dedicated Mac Minis. These aren’t occasional API calls. They’re perpetual workers.
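The gap between transaction and salary is easy to ballpark. A back-of-the-envelope sketch, where the blended token price, query volume, and agent throughput are all illustrative assumptions:

```python
# Back-of-the-envelope: chatbot user vs. always-on agent.
# All figures are illustrative assumptions, not vendor data.

PRICE_PER_MTOK = 10.0  # blended $ per million tokens (input + output), assumed

def monthly_cost(tokens_per_day: float) -> float:
    """Token spend per 30-day month at the assumed blended rate."""
    return tokens_per_day * 30 * PRICE_PER_MTOK / 1_000_000

chatbot = monthly_cost(20 * 2_000)       # ~20 queries/day at ~2k tokens each
agent = monthly_cost(24 * 60 * 1_500)    # agent loop emitting ~1.5k tokens/minute, 24/7

print(f"chatbot: ${chatbot:,.2f}/mo")
print(f"agent:   ${agent:,.2f}/mo")
print(f"ratio:   {agent / chatbot:,.0f}x")
```

Under these assumptions the always-on agent burns roughly fifty times what the chatbot user does — the same model, the same price per token, a categorically different cost curve.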

The token economics of a chatbot are bad. The token economics of an agent are catastrophic — unless the agent creates enough value to justify premium pricing. And that’s the fork in the road.

Consider Kirkland & Ellis, where partners just took home a record $11.1 million each as the firm broke $10 billion in annual revenue. That number looks like validation of elite human judgment — and it is. But it’s also a leverage pyramid. Those partners earn $11 million because armies of associates bill at $1,200 an hour doing discovery, document review, and due diligence that AI agents will handle for pennies on the dollar. The partner’s judgment — the pattern recognition that comes from thirty years of M&A deals, the instinct for which clause will blow up at closing — might actually be worth more in an AI world. We wrote about this yesterday: intelligence is commodity, judgment is not. But the business model that generates $11 million depends on leverage — human bodies billing human hours. When agents replace the associates, the senior M&A partner’s judgment doesn’t disappear. His revenue model does. He’ll need to find a new way to monetize what he knows. And so will every AI company trying to charge for tokens instead of outcomes.

An agent that captures that partner’s judgment and sells it to every shipping company, insurer, and mid-market acquirer simultaneously can charge premium rates. An agent that makes memes can’t. Garry Tan flagged the other side of this tension: Workday’s CEO called AI agent startups “parasites.” That’s what incumbents say right before parasites eat them alive — but it also reveals the pricing anxiety. The SaaS companies being disrupted aren’t going quietly, and the agents doing the disrupting need to prove they’re worth more than the subscription they’re replacing.

The stakes: Huang said it plainly: “The future is about agentic systems. And agentic systems, the problem space just expanded yet again.” More problems, more compute, more money. But also more value — if the agents do real work. The companies that figure out how to charge for judgment rather than tokens will survive. Everyone else is building Uber circa 2014: growing fast, losing money faster, and praying the economics flip before the capital runs out.

What This Means For You

The AI industry just entered the phase every platform shift eventually reaches: the gap between what the technology can do and what the economics can sustain. The foundational model companies are running the Uber playbook at 100x scale — subsidize below cost, capture the market, figure out margins later. But “later” is arriving faster than anyone planned, and agents just compressed the timeline.

Stress-test your AI vendor’s balance sheet, not their benchmarks. The best product company in AI — Anthropic, by enterprise win rate — still can’t self-fund its infrastructure. If your critical workflows depend on a company burning $10 billion against $5 billion in cumulative revenue, that’s a risk your board needs to see.

Price the agent, not the token. The sustainable AI businesses won’t sell compute by the unit. They’ll sell outcomes by the value created. If you’re building on AI, design your pricing around the work product, not the inference cost. The maritime lawyer model from yesterday’s piece isn’t a metaphor — it’s a business plan.

Watch Gurley’s “reset” signal like a hawk. When SaaS stocks crater another 20% and AI startups start folding, that’s your entry point — not for AI companies, but for the SaaS incumbents that survive and integrate. Gurley is telling you to channel Buffett. Listen to him.

The companies that win this era won’t be the ones that burned the most cash. They’ll be the ones who figured out what the cash bought — and charged accordingly.

Three Questions We Think You Should Be Asking Yourself

If Uber burned $25 billion and still hasn’t generated cumulative positive free cash flow, what makes you think AI companies burning $300 billion will get there faster? Uber at least had a clear endgame: own the ride-hailing market and raise prices. The AI labs are subsidizing general intelligence against competitors who can match their models in months. The moat isn’t the model. It might be the customer relationship — but only if you lock it in before the reset hits.

Is your AI capturing judgment or just replacing hours? The Kirkland partner’s $11 million depends on associate leverage. The associates are about to be automated. But the partner’s judgment — the thing that can’t be replicated from public data — just became infinitely scalable. Every business has its own version of this: people whose expertise is capacity-constrained by hours in the day. If you’re deploying AI to replace the hours without capturing the judgment, you’re automating the cheap part and leaving the valuable part locked in someone’s head. That’s not a strategy. That’s a countdown.

When the capital markets tighten and 30 AI startups can’t raise their next round, which of your AI dependencies breaks? Gurley isn’t predicting an AI winter. He’s predicting a correction that kills the weakest players and reprices the survivors. Your contingency plan should include a list of every AI vendor you depend on, their last funding round, their burn rate, and your fallback if they shut down in 90 days. If you can’t build that list today, start.

“When people get rich quick, a whole bunch of people come in and want to get rich too, and that’s why we end up with bubbles. One day we’re going to have an AI reset, because waves create bubbles, because interlopers come in.”

Bill Gurley, Benchmark

— Harry and Anthony
