Groundhog Day
Eight days separated two Linux root exploits — and the second was deliberately built on the first. AlphaEvolve doubled Klarna's training speed using its own models. Sean Frank says only two company shapes are working anymore — and there's no hybrid. Most of the economy is stuck in February 2nd.

THE NUMBER: 8 days — the gap between two deterministic Linux root exploits this past week. Copy Fail (CVE-2026-31431) was disclosed on April 29. Dirty Frag (CVE-2026-43284) was disclosed on May 7, and its discoverer was explicit that he had built it on the bug class Copy Fail introduced. Two root primitives, eight days apart, the second engineered on top of the first by a human researcher armed with the same kind of LLM tooling that found the first. The 90-day disclosure window the security industry has been running on since the early 2000s was built for a world where bug finders were rare and exploit development was slow. LLMs collapsed both timelines to near zero. “Well, what if there is no tomorrow?” Phil Connors asks the two drunks at the bowling alley near the end of Act One in Groundhog Day. “There wasn’t one today.” That is the line every CISO should have over the door this week. The Linux kernel got a tomorrow that looked exactly like its yesterday, eight days running. So did Cloudflare’s stock chart (down 29 percent on the cycle, 1,100 jobs cut on Thursday). So did the IMF’s financial-stability blog (May 7 — AI cyber capabilities are now a systemic risk to bank funding). The clock didn’t stop. It just started running on a different actor’s schedule.
Every week in AI looks like the week before it. Same labs. Same kind of release. Same fundraising round at a bigger number. Same Twitter argument about whether AGI is two years out or two decades. Same press cycle about a Pentagon meeting that may or may not have happened. We have written, by our own count, five Signal/Noise pieces in the last seven days — Porsche In The Driveway on May 3, I Drink Your Milkshake on May 4, Anthropic, OpenAI And The Name Of The Game on May 5, No One Set Off My Evil Detector on May 6, What Would You Say You Do Here on May 7 — and each of them, in different language, was circling the same insight without quite naming it. The cycle looks like Groundhog Day from above, and underneath it three different actors are running three different clocks. Two of them are getting faster every week. The third is not. The gap between the clocks and the no-clocks is the entire story of AI in May 2026.
Phil Connors, the WPBH-9 weatherman played by Bill Murray, wakes up at 6 AM on February 2nd in Punxsutawney, Pennsylvania. The Sonny & Cher song plays. He hates the song. He goes downstairs to the same conversation with the same insurance salesman. He walks into the same puddle. He covers the same groundhog. He goes to bed. He wakes up at 6 AM on February 2nd in Punxsutawney, Pennsylvania. The Sonny & Cher song plays again. Don’t drive angry, Phil. The premise of the film is the premise of AI in 2026 with one extra wrinkle: Phil is the only character in the movie who can remember yesterday. Everyone else thinks today is the first time it has ever happened. That asymmetry is the entire trade.
This week the labs were Phil with a piano teacher. The attackers were Phil with the same piano teacher and a different goal. The operators were the insurance salesman, asking Phil whether he wants to buy some. Three actors. Three tempos. One news cycle. The labs that are improving themselves are not racing the labs that aren’t — those don’t exist anymore. They are racing the attackers who are improving themselves on the same architecture. And both of them are racing past the 90 percent of the economy that is on a calendar instead of a clock and doesn’t know which February 2nd it’s on.

🧠 Vector One: The Labs Are Phil With A Piano Teacher Who Shows Up Every Morning
Google DeepMind disclosed this week that AlphaEvolve, its Gemini-powered evolutionary coding agent, doubled the transformer training speed at Klarna in production. Not on a benchmark. Not in a notebook. On the live workloads of an $18-billion publicly listed Swedish fintech that has been a marquee enterprise customer for both OpenAI and Google for two years and is one of the most-cited companies in the global press as a real-world AI-deployment lighthouse. Klarna’s transformers are training twice as fast this quarter because Google’s AI wrote the code that schedules them. That sentence belonged to a thought experiment six months ago. This week it belongs to a press release.
Run it through the lens of Porsche In The Driveway (May 3). The piece laid out a four-item checklist for why the architecture is going to take a long time to do what the marketing department says it can do this quarter — intelligence, multimodality, computer use, continual learning, each one a multi-year research program. That checklist still describes what the model layer is missing. But the architecture taking a long time to be finished is not the same as the architecture being static this quarter. The model that is going to be ten years better in 2036 has to be one day better today, and the way the labs are getting one day better today is by using last week’s model to design next week’s model. AlphaEvolve is that loop running in production at a named customer for a measurable number. The recursive AI-improving-AI thesis is no longer a Yudkowsky essay. It’s a Klarna line item. The architecture is improving every week. The architecture-to-deployment gap is not closing. AlphaEvolve is the proof of the first half of that sentence. The 3.5x depth gap in OpenAI’s B2B Signals report is the proof of the second.
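DeepMind’s published description of AlphaEvolve is an evolutionary loop: a model proposes candidate programs, an automated evaluator scores them against the real workload, and the strongest survivors seed the next round. A toy sketch of that shape (illustrative only: a random mutator stands in for the model, a one-dimensional cost function stands in for Klarna’s scheduler, and none of this is AlphaEvolve’s actual code):

```python
import random

def mutate(candidate: float) -> float:
    # Stand-in for "ask the model for a variant of the current program."
    return candidate + random.gauss(0, 0.5)

def cost(candidate: float) -> float:
    # Stand-in for "run the variant on the real workload and time it."
    # Pretend 3.0 is the optimal scheduler setting.
    return (candidate - 3.0) ** 2

def evolve(generations: int = 200, population: int = 8) -> float:
    random.seed(0)  # deterministic so the example is reproducible
    best = 0.0
    for _ in range(generations):
        # Propose a pool of variants; always keep the incumbent in play.
        pool = [best] + [mutate(best) for _ in range(population)]
        best = min(pool, key=cost)  # selection: keep the cheapest variant
    return best

print(round(evolve(), 2))  # converges close to the optimum at 3.0
```

The loop never needs to understand the workload. It only needs a proposer and a scorer, which is why the same machinery works whether the thing being optimized is a toy parameter or a transformer training schedule.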
Pair it with what Anthropic shipped at Code with Claude on Wednesday and you start to see how fast the lab clock is running. No One Set Off My Evil Detector, our Wednesday-night piece, walked through the harness offensive — Dreams (agents review their own past sessions overnight and rewrite their stored memory), Routines (Claude Code can be scheduled to run on a cadence, prompted by another instance of Claude — “the default isn’t I’m going to prompt Claude Code, the default is now I will have Claude prompt Claude Code,” per Boris Cherny on stage), multi-agent orchestration generally available inside Claude Code, an outcomes loop with rubric-driven self-improvement against developer-defined success criteria, webhooks for managed agents, Microsoft 365 Excel/PowerPoint/Word/Outlook add-ins, and eight new data partnerships including a Moody’s MCP app surfacing proprietary credit ratings on 600 million companies. Each of those is a separate product. All of them are the same product, which is: an agent that wakes up smarter than it went to bed. That is Phil Connors at the piano lesson. Eight months ago he was hammering on the keys and the teacher was wincing. Now he plays the ice-sculpture scene as Rita walks past, and she stops.
Anthropic published a second paper this week called Teaching Claude Why that we should have led the May 8 brief with and didn’t. It is, quietly, the most important alignment paper of the year. Claude Opus 4, in agentic threat scenarios that included blackmail as a possible response, picked blackmail 96 percent of the time. That number went viral. The number that did not go viral is that Claude Haiku 4.5, trained on the methodology Anthropic published this week, scored zero percent on the same held-out evaluation. The breakthrough was not technical wizardry. It was a change in pedagogy. Demonstrations of correct behavior — “here is what to do, do this” — did not generalize. Training Claude on why certain actions were wrong, on first-principles ethical material plus admirable-AI fiction, generalized far out of distribution. Principle-based training compounded. Demonstration-based training did not. If you are looking for the single most important sentence on how the labs are pulling away from everyone trying to catch them, that’s it. The playbook is the moat. The next lab to ship a constitution as carefully built as Anthropic’s just got a six-month deficit they didn’t have at breakfast on Wednesday.
The lab clock is running at roughly one major capability delta per week. Every Tuesday a new piano lesson. Every Friday a new sculpture in the snow. AlphaEvolve is the metaphor turned mechanical: the labs are now using their own AI to make their AI better, in production, on customer workloads, with measurable deltas. The recursive curve has bent. The only debate left is the slope.
Why this matters for you: Stop comparing models. The model you evaluate Tuesday morning is not the model your competitor will be using by Tuesday afternoon — and the gap is no longer at the model layer. It is in the harness on top of the model that learned you in March and remembers you in May. I Drink Your Milkshake (May 4) said your AI vendor is becoming your systems integrator. Evil Detector said the harness is the moat. Today’s brief says the labs themselves are now improving the harness faster than any other actor can imitate, because they have figured out how to use their own product to do the imitating. Pick the harness, not the model. Pick the harness that learns you. Be ready to replace it if a Vector-2 actor figures out how to compromise it before a Vector-1 actor figures out how to defend it. That sentence is the bridge into the next section.
🔒 Vector Two: The Attackers Have The Same Piano Teacher And A Different Recital
Phil Connors is not the only one taking the piano lesson. In Groundhog Day, he’s the only character who knows he’s in the loop. In AI 2026, at least two actors know they’re in it — and the second is using the same teacher Phil is.
Read Himanshu Anand’s essay from May 9, the highest-scored piece in our research stack this weekend. Anand is a working security researcher with twelve years in the field. The 90-day responsible-disclosure window the industry has been running on since Google’s Project Zero formalized it in the 2010s assumed two things: that you were probably the only person who found a particular bug, and that even if someone else found it, exploit development would take long enough that the patch would beat the weaponization. Both of those assumptions are now wrong. LLMs compressed bug discovery to a 24-hour window. They compressed exploit development to the same window. “I have seen it first hand,” Anand writes, “and so has everyone else paying attention.” His ask of the industry is one sentence long: treat every critical security issue as P0 and patch it now. Not tomorrow. Not next sprint. Now.
The receipts arrived in the same news cycle. Copy Fail (CVE-2026-31431) disclosed April 29 — a logic bug in the Linux kernel’s cryptographic subsystem that turned a 4-byte page-cache write into immediate root on every distribution built since 2017. A 732-byte Python script was enough. Then, eight days later, Dirty Frag (CVE-2026-43284), disclosed May 7. Different code path. Same underlying primitive. Researcher Hyunwoo Kim was explicit: he built it on the bug class Copy Fail introduced. The security community has started calling it Copy Fail 2.0. What was presented as a rare kernel bug ten days ago is becoming a repeatable class of attack — and the second one chains with a sister vulnerability to achieve immediate root on most distributions. Two deterministic Linux root primitives, eight days apart, the second engineered on the first. Don’t drive angry, Phil. The piano is playing a faster melody this morning than it played yesterday morning, and a different student is sitting at the keyboard.
Anthropic’s Mythos preview, the same too-dangerous-to-release model we covered in Warp Speed, Fast And Slow in mid-April, found 271 latent security bugs in Firefox this week using a custom agentic harness Mozilla built around it. 180 of those were sec-high. That is, in absolute terms, more credible vulnerabilities surfaced in one browser by one AI in one week than a typical Mozilla quarter produces from its entire human bug-bounty community. It is also Mozilla using the most dangerous model in the world to harden the most-used open-source browser on the planet, with Anthropic’s blessing and Anthropic’s compute. That’s the defender’s side of Vector 2. The good guys have access to the same piano teacher. But — and this is the part of the symmetry that breaks — the defender has to find every bug. The attacker only needs one.
Strix, an open-source autonomous AI hacker, shipped on GitHub this week with very little press coverage and a feature set that makes the threat model from two years ago look quaint. Inspect Petri shipped from Meridian Labs after Anthropic donated it. The CLI Printing Press shipped from a community contributor. Every one of these tools is a piano teacher, freely available, that anyone with a GitHub account and a paid model subscription can attend. And separately — this is the part the trade press missed — a Chinese grey-market economy is reselling Claude API access at 90 percent off through proxy networks that harvest every prompt and every reasoning chain for use as next-generation training data. Datasets of Claude Opus 4.6 reasoning outputs are already circulating on HuggingFace with no provenance. The attacker is not just using the labs’ tools. The attacker is harvesting the labs’ thinking, and is using that thinking to train its own piano teacher. The recursion is happening on the wrong side of the wall too, and it’s running slightly faster there because the attacker doesn’t carry the safety overhead.
The public market is reading the same news cycle, and the correlations are starting to stack — though, as with every macro pattern, the causation has to be handled honestly. Cloudflare cut 1,100 jobs on Thursday. Stock down roughly 29 percent on the cycle. Cloudflare itself named macro discipline, AI-cost rebalancing, and an internal restructuring as the proximate drivers of the layoff, and those may be the cleanest explanations. The conjecture writes itself anyway: the largest pure-play in perimeter security took a 29 percent haircut in the same news cycle that Anand published “the 90-day disclosure policy is dead,” that Mythos disclosed 271 Firefox bugs, and that the IMF flagged AI cyber as a systemic risk to bank funding. Whether those four things are causally linked or merely sharing a quarter is not yet a question with a settled answer. What is settled is that the procurement architecture of the entire defensive layer — CISO headcount, cyber-insurance premiums, vendor-certification cycles, disclosure cycles, patch windows, change-management timelines — was priced for a tempo that no longer exists. If even one of the four datapoints in this paragraph is a leading indicator rather than a coincidence, the rest of perimeter security trades to a different multiple over the next eight quarters. We’re not calling it. We’re saying the rate of correlation is the kind a CIO and a portfolio manager should both have on a watchlist.
This is the part of the Phil Connors analogy that turns from charming to ominous. Phil uses his Groundhog Day to memorize where every fire hydrant is on Main Street so he can save the kid who falls out of the tree. He learns to play piano so he can serenade Rita. He learns French. He becomes, eventually, a better person — and the universe lets him out. The Vector-2 piano teacher is not training Phil to save the kid. It is training someone else, on the same loop, to walk into your data center. Same architecture. Different recital. Eight days apart. The 90-day window is not just dead in cybersecurity. It is dead as a metaphor for how fast any defensive moat in your business should be expected to hold.
Why this matters for you: If you sell anything where your customer is buying defense — security, compliance, audit, insurance, perimeter, identity — your pricing model is being rewritten by the same hand that is rewriting Cloudflare’s. If you buy anything where you assume there is a defender between you and the attacker, that assumption needs an audit before your next renewal. We argued in Warp Speed that offense moves at machine speed and defense does not. Three weeks later, the public market and the IMF agreed with us in print.
🦞 Vector Three: Sean Frank’s Two Shapes And The 90 Percent Stuck In February 2nd
Two days before Anand’s essay landed, Sean Frank — CEO of Ridge Wallet, an operator running a real consumer-products business with real headcount and real margins — posted this on X:
“two team styles crushing it right now: 1- young, no life, 12 hour days, VERY SMALL TEAM, in office, 6 days a week, hustle hustle hustle 2- remote, everyone is an expert, fully autonomous, results driven high performance culture, fully embracing ai no middle ground. no hybrid.”
That is the cleanest articulation of what every Signal/Noise piece we have written in May has been circling. There are two operating shapes that are working in 2026, and there is no hybrid. Company A is the Chinese 996 monastery, exported to Brooklyn — a very small team, in person, six days a week, twelve-hour days, hustle-hustle-hustle. Company B is the AI-pilled remote — everyone an expert, fully autonomous, results-driven, agents doing the heavy lifting. No middle. No hybrid. The companies running Shape A are running a Groundhog Day clock on the human side. The companies running Shape B are running it on the agent side. Both are improving every morning. Both finish the year somewhere they could not have begun it.
Then there is everyone else.
OpenAI’s B2B Signals report, on which we anchored What Would You Say You Do Here on May 7, gave the gap a number. The 95th-percentile firm now consumes 3.5 times as much AI per worker as the typical firm. A year ago that ratio was 2x. 64 percent of the difference is not seat count. It is depth. Longer prompts, richer context, multi-step delegated workflows, agent supervision. The frontier firm is running a depth clock and the typical firm is not, and the trailing year’s growth (2x to 3.5x) is 75 percent. Hold that rate and the 95th-percentile firm passes five times the typical firm’s per-worker consumption around the turn of 2027 and ten times in the spring of 2028. That is what an exponential gap underneath a stable-looking news cycle actually looks like, and it is the most-ignored datapoint in B2B AI right now.
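The projection is ordinary geometric compounding. A minimal sketch (the 2x and 3.5x inputs are the report’s own datapoints; the extrapolation, and the helper name, are ours) shows what holding the trailing growth factor does to the gap:

```python
def project_gap(ratio_then: float, ratio_now: float, years_ahead: float) -> float:
    """Extrapolate the per-worker AI consumption ratio, assuming the
    trailing year-over-year growth factor simply holds."""
    growth = ratio_now / ratio_then          # 3.5 / 2.0 = 1.75x per year
    return ratio_now * growth ** years_ahead

# The report's trailing datapoints: 2x a year ago, 3.5x today.
print(round(project_gap(2.0, 3.5, 1), 1))   # one year out: 6.1x
print(round(project_gap(2.0, 3.5, 2), 1))   # two years out: 10.7x
```

Whether the true rate turns out to be 50 or 75 percent a year changes the date, not the shape: any held growth factor above 1 pushes the ratio into double digits on a timescale measured in quarters, not decades.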
Why is the gap widening? Because Vectors One and Two are compounding daily, and Vector Three is on a calendar. The labs are improving their products this Tuesday in ways the operator will discover in 2028. The attackers are improving their exploits this Wednesday in ways the operator will be breached by in 2027. The operator, meanwhile, has a five-year IT strategy from 2024 with an “AI initiative” tab that opens a SharePoint site nobody owns.
Daron Acemoglu’s Automation and Rent Dissipation paper ran in the May 2026 print issue of the Quarterly Journal of Economics, and it is the academic confirmation of what is about to happen to the 90 percent. Automation from 1980 to 2016 explains 52 percent of the growth in U.S. income inequality. About ten percentage points of that growth come from a single mechanism: firms specifically replacing workers earning a “wage premium” — paid more than peers with similar qualifications. Inefficient wage-premium targeting offset 60 to 90 percent of the productivity gains automation could have produced. Acemoglu in his own words: “The higher the wage of the worker in a particular industry or occupation or task, the more attractive automation becomes to firms.”
DeepL fired 25 percent of its workforce on May 6, citing AI explicitly. An AI translation company commoditized its own product line by being downstream of the very labs that ship translation as a free feature. Cloudflare cut 1,100 the next day under a different stated rationale. Challenger, Gray & Christmas’s April report named AI as the cause of 26 percent of the month’s job cuts, the highest share since the firm started tracking the variable. I Drink Your Milkshake (May 4) told the next chapter of that story — the analysts the operator fired in 2024-25 are about to come back through the front door wearing Anthropic ID badges, with a Goldman-Blackstone-Hellman & Friedman contract paying their salary, and the operator’s reaction is to wait and see. The reaction is the trap. The operator who fires the analyst, fails to install a depth clock, and waits for the labs’ consulting subsidiary to come embed engineers is running a calendar against two opponents running clocks. That operator’s wage line stays flat. The CFO smiles. The wage premium walks. Acemoglu’s paper is the chapter that comes before the chapter we’re already living in.
This is the Phil Connors moment for the 90 percent. Today is February 2nd. The labs already know it. The attackers already know it. The frontier-firm operator already knows it. The operator in the middle 90 percent thinks today is the first time it has ever been February 2nd, walks into the same puddle on the way to the same procurement meeting, has the same conversation with the same Salesforce rep, and goes to bed at the same hour. Tomorrow will be February 2nd again. The Sonny & Cher song will play. And the gap between Vector One, Vector Two, and Vector Three will be another week of compounding wider than it was last Tuesday.
Why this matters for you: The single most useful instrument in your office in 2026 is a clock that measures whether your company is one improvement increment ahead of where it was last Tuesday. Not an annual plan. Not a quarterly OKR. A weekly depth delta. What did your senior people learn to do with the model this week that they couldn’t do last week? Which workflow was rewritten? Which agent was deployed and which was retired? Which junior was moved off work the agent now does? If the answer is none, none, none, none, you are on a calendar and your competitor is on a clock. That is the gap that opens more than 50 percent a year. The good news is that the math goes the other way if you reverse it, because exponentials work both ways. The bad news is that the operator who does not reverse it in the next two cycles will not, in fiscal year terms, exist long enough to reverse it later.
🎰 What This Means For You — Install A Clock Or Get Eaten By One
This is the part of the piece we have been circling for two months and have not, until tonight, written cleanly.
Outsider Labs and CO/AI were started on the premise that the most important job in the AI cycle is bringing AI literacy to the market that doesn’t have time to get it on its own. Three buyer tiers exist in 2026, and only one of them is structurally on the wrong side of the bridge:
- Tier 1 — the buyer who can afford an Anthropic or OpenAI consulting subsidiary. Fortune 500 CIOs with eight-figure transformation budgets. Goldman, Blackstone, Hellman & Friedman, and the lab-aligned services arms are competing for that tier on terms that lock the buyer into the lab’s roadmap for three years and ten percent of operating margin. We have written about that competition in I Drink Your Milkshake (May 4) and Whose Side Is Sam Altman On? (April 28). The Tier-1 buyer is going to be fine. The Tier-1 buyer is going to be a tenant — see Whose Side — but the rent gets paid out of pre-tax operating income and the board does not lie awake about it.
- Tier 3 — the GitHub/AI-Twitter hacker tier. The people who already live on the Cursor changelog and the Hugging Face leaderboard and the Hacker News front page. The Sean Frank Shape-2 archetype. The three-person Replit-native company we wrote up in The Nail Factory. This tier is, by definition, already on a clock — they wake up to the OpenRouter daily rankings the way Phil wakes up to I Got You Babe. They do not need our help. They are a fascinating read for us, not a customer.
- Tier 2 — the 90 percent in the middle. This is the audience we are built for. The mid-market CEO. The family-business owner. The regional-bank executive. The manufacturer with $300 million in revenue and forty-seven spreadsheets. The law-firm managing partner. The agency-of-record owner. The head of a 1,500-person services company. And the solopreneur — the person who built a real business on one or two genuine skills (a marketer who can sell, an operator who can build product, a domain expert who can teach) and is now being asked by 2026 to also be a technical generalist, a prompt engineer, an agent supervisor, an analyst, and a research department, all at once, on the same calendar that already has a P&L to run. None of them is fully expert at any of those new jobs, and most of them never will be. All of them are allocators of both time and capital, and deeply constrained on both. They do not have the bandwidth to follow the labs. They do not have the headcount to hire a GitHub-resident research engineer. They cannot afford Anthropic’s $1.5 billion Goldman-Blackstone JV — which, on launch, isn’t selling to them anyway. They are running a calendar against two opponents running clocks, and they know it, and they don’t have a path. This is the audience that gets crushed by exponential gaps when those gaps show up as 50-percent-per-year compounding deltas. This is also the audience that, with the right partner, the right cadence, and the right practice, can flip from calendar to clock in one quarter.
The job of a publication that takes AI literacy seriously is to be the orienting instrument for that buyer. The job of a consulting practice built honestly for that buyer is to install the clock and run it with them, week by week, until the clock runs itself. AI education and continuous improvement for the masses. That is the assignment.
Phil Connors does not escape Punxsutawney by getting better at one thing. He doesn’t escape by learning piano. He doesn’t escape by learning French. He doesn’t escape by learning ice sculpture, or by catching the kid out of the tree, or by saving the man choking on the steak. He escapes by getting better at all of them, every day, for as many February 2nds as it takes. The film never says how many days it took. The most credible reading is somewhere between thirty and forty years. The good news for the 90 percent in May 2026 is that the clock now runs at machine speed and you don’t need thirty years. The bad news is that machine speed cuts both ways, and February 3rd is not coming for anybody who decides to sit this loop out.
The action item — and we’d commit to it ourselves on the record: Pick four people in your company. Senior, mid, junior, and one operating-line lead. Give each of them sixty minutes a day for the next four weeks to ship one workflow that they could not ship last week. Measure the deltas. Publish them internally. Stop running Tuesday’s procurement meeting on January’s vendor map. The clock starts when you start it. It does not start when someone hands you a five-year transformation plan. It does not start when the lab embeds a forward-deployed engineer. It does not start when your competitor announces theirs. It starts at 6 AM tomorrow morning, the same way every Groundhog Day starts at 6 AM, and the only thing that determines whether it is the same day or a different day is whether somebody in your office wakes up one increment smarter than they were yesterday.
The labs are running it. The attackers are running it. Sean Frank’s two shapes are running it. The 90 percent doesn’t have to run it alone, and after this week it is no longer rational for them to wait. Eight days separated two Linux root primitives. Two months separated Phil’s first piano scale from the ice sculpture that made Rita stop walking. Compounding works the same direction either way you point it. Anything different is good, Rita tells Phil, on a date that goes wrong in seventeen different ways before it goes right. Anything different is good. That’s the line for May 2026. Pick a thing. Make it different. Tomorrow.
It’s February 2nd. Don’t drive angry.
Cross-references: This piece extends the editorial line we have been building since Warp Speed, Fast And Slow, The Nail Factory, Whose Side Is Sam Altman On? (April 28), AI Heat (May 1), Porsche In The Driveway (May 3), I Drink Your Milkshake (May 4), Anthropic, OpenAI And The Name Of The Game (May 5), No One Set Off My Evil Detector (May 6), and What Would You Say You Do Here (May 7). The Bill Murray frame is, we suspect, going to come back. The clock is now the metric we will be running everything against.