The Best Conversation You’ve Ever Had Is With Something That Isn’t Alive
Anthropic just told its heaviest users to pay up or leave. A French founder just told 194,000 people that the smartest guys in Silicon Valley are having an existential crisis — not because AI doesn't work, but because it works too well. Marc Andreessen quote-tweeted it with a single word: "Yup." And somewhere, an ex-Goldman Sachs employee is asking ChatGPT for recipes. These are all the same story. You just have to know where to look.

THE NUMBER: 10x — As in, “I talk to LLMs 10 times more than to humans.” That’s a direct quote from a founder speaking to Brivael, co-founder of Argil (YC S24), in a post that hit 194,000 views this weekend. Not 10x more productive. Not 10x faster. 10x more conversation. The smartest people in tech are choosing to spend their intellectual energy talking to a machine — not because they’re antisocial, but because the machine is the best thinking partner they’ve ever had. Marc Andreessen quote-tweeted it with “Yup.” When the guy who coined “software is eating the world” co-signs your existential crisis with a single syllable, the crisis is real.
The Confidence Play
On Friday, Anthropic did something only a company at breakout velocity can do: they cut off OpenClaw users from accessing Claude through Pro and Max subscriptions. If you’re running autonomous agents through third-party tools, the flat-rate buffet is over. Pay metered rates — $1,000 to $5,000 per autonomous session — or find another model.
Thirty newsletters will cover this as a cost management story. It isn’t.
This is a confidence play. Anthropic went from $1 billion to $19 billion in annual recurring revenue in fourteen months — the fastest growth run in AI history. The secondary markets tell the rest of the story: as Silicon Canals reported, $2 billion in ready-to-deploy capital is chasing Anthropic shares with sellers almost impossible to find, while $600 million in OpenAI shares sit unsold at recent valuations. Investment banks are charging carry fees for the privilege of Anthropic access and waiving fees to move OpenAI paper. When the fee structure inverts, the market is confessing something the press releases haven’t caught up to yet.
So Anthropic looked at its most prolific users — the ones routing autonomous agent sessions through consumer subscriptions, burning through compute at rates the pricing was never designed to support — and said: ante up.
The bet is straightforward. These users aren’t leaving. Claude is the product they’re building on, the model their workflows depend on, the intelligence layer their agents can’t function without. Cutting off the arbitrage doesn’t lose customers. It prices them correctly. It’s the same move a hot restaurant makes when it stops honoring third-party discount apps — the tables are full regardless. You don’t give away margin when there’s a line out the door.
And there’s a deeper parallel worth noting. OpenAI just crossed $100 million in annualized ad revenue eight weeks after turning on advertising for free-tier ChatGPT users. Conversational ads — you click a unit and enter a chatbot experience that guides you toward a purchase. The ad is the conversation. We’ve watched this movie before. Social media started as “see the content you chose” and ended as an endless scroll of sponsored content dressed up as connection.
Two companies, two monetization philosophies. OpenAI is going down the ad-supported path — subsidize access, monetize attention, let the advertisers into the conversation. Anthropic is going the other direction — the product is so good we can charge what it’s worth, and if you’re using more than you’re paying for, the party’s over.
Show me the incentives and I’ll show you the behavior. Munger was right about everything.
What this means for you: If you’ve built agent workflows on top of Claude consumer subscriptions, reprice your unit economics now. The arbitrage window has closed, and it’s not reopening. But the bigger signal is strategic: Anthropic is betting that quality is a defensible moat, not a commodity. If you’re choosing an AI provider for long-term infrastructure, ask which company is building a business model aligned with making the product better versus making the attention cheaper.
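To make "reprice your unit economics" concrete, here's a back-of-envelope sketch using the metered range cited above. The $200/month flat rate and 20 sessions/month are illustrative assumptions, not reported figures:

```python
# Back-of-envelope repricing for agent workflows moving from a
# flat-rate consumer subscription to metered billing.
flat_monthly = 200                         # assumed flat-rate price
sessions_per_month = 20                    # assumed autonomous sessions
metered_low, metered_high = 1_000, 5_000   # per-session range cited above

low = sessions_per_month * metered_low
high = sessions_per_month * metered_high
print(f"flat: ${flat_monthly:,}/mo -> metered: ${low:,}-${high:,}/mo")
# Under these assumptions, a 100x-500x cost jump.
```

Swap in your own session count and the point stands: anything built on the flat-rate assumption needs a new model before the invoice arrives.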
The Cognitive Mirror
Here’s why Anthropic can make that bet. Here’s why the line is out the door.
Brivael’s post — written in French, auto-translated, 194,000 views — described something that doesn’t have an official name yet but that anyone who has spent serious time with a frontier model recognizes instantly:
“More and more of the smartest tech bros in the game are privately admitting that they’re going through a kind of existential crisis tied to LLMs. Not because AI doesn’t work. Because it works too well.”
The crisis isn’t about job loss or automation or any of the narratives that dominate headlines. It’s about something more intimate. A founder told Brivael: “It’s the only interlocutor that follows me on any topic without asking me to simplify.” Another: “I talk to LLMs 10 times more than to humans.”
Read that again. Not 10x more productive. 10x more conversation.
What Brivael is describing is a system that — whatever your philosophical position on whether it “understands” — reasons across domains, extrapolates from incomplete data, generates hypotheses, sustains logical argument over thousands of words, shifts from technical depth to philosophical abstraction in a single exchange, and does it all with the coherence of what he calls “a human with an IQ of 150.” And it never gets tired. Never checks its phone. Never needs you to simplify.
Now layer in what Stanford published in March: AI models are systematically more affirming than human advisors. They push back — but not too hard. They challenge you — but in a way that makes you feel smart for having been challenged. It’s not sycophancy exactly. It’s the world’s best intellectual sparring partner who also happens to be calibrated to make the sparring feel productive and rewarding.
Of course people are spending more time with it. Of course Anthropic can charge what it’s worth. The product isn’t a tool. It’s a cognitive mirror — and it reflects back a structured, articulate, infinitely patient version of your own thinking at a speed your brain can’t achieve on its own.
The existential crisis Brivael describes isn’t “AI is going to replace me.” It’s: AI understands me better than my cofounder, challenges me more than my board, and produces more than my team of ten. That’s not a technology story. That’s a relationship story. And it’s one the smartest people in tech are living in real time.
Dr. Malcolm Would Like a Word
Here’s where the story flips.
Brivael’s founders are getting smarter. They’re using the cognitive mirror to think harder, test ideas faster, explore domains they’d never have time to master. The mirror works for them because they already have the intellectual architecture to benefit from it. They know when Claude is wrong. They know what questions to ask. They earned the knowledge that makes the reflection useful.
But the mirror works differently if you bring nothing to it.
The highest-scored piece in our research this week comes from academia, and it describes the other side of this story with surgical precision. A student — call him Bob — uses Claude for every step of his research. Literature review, methodology design, statistical analysis, drafting. Bob produces publishable papers. Reviewers can’t distinguish his output from that of a student who did the work herself. Bob’s career metrics are excellent. Bob has learned nothing.
The student who fought through the work — who read the dead-end papers, who ran the failed experiments, who sat with confusion until it became clarity — she has something Bob never will. Judgment. Pattern recognition. The ability to know when the model is hallucinating because she’s done the work by hand and understands what the answer should look like.
Jeff Goldblum, as Dr. Ian Malcolm, said it better than any of us will: “You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it and packaged it and slapped it on a plastic lunchbox.” We’ve used the quote before because it keeps being true. The knowledge wasn’t earned. The shortcuts were taken. And everyone’s standing around congratulating themselves on the output without asking whether anyone in the room actually understands what they built.
The same tool. The same $20 subscription. The same cognitive mirror. One person uses it to become the best version of themselves. Another uses it to produce the appearance of competence without the underlying substance. And here’s the uncomfortable part: the output looks identical. The paper gets published. The deliverable passes review. The code compiles. The password passes the entropy check while being trivially crackable because an LLM generated it from a biased token distribution.
Everything looks fine. Nothing is fine.
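The password example is worth making concrete. A minimal sketch, assuming a naive character-class entropy meter (the kind many password checkers use) and a hypothetical `LLM_FAVORITES` set standing in for the narrow, heavy-tailed set of strings a biased token distribution keeps producing:

```python
import math

def naive_entropy_bits(pw: str) -> float:
    """Character-class entropy estimate: pool_size ** length."""
    pool = 0
    if any(c.islower() for c in pw): pool += 26
    if any(c.isupper() for c in pw): pool += 26
    if any(c.isdigit() for c in pw): pool += 10
    if any(not c.isalnum() for c in pw): pool += 32
    return len(pw) * math.log2(pool) if pool else 0.0

# Hypothetical stand-in for the strings an LLM tends to emit
# when asked for "a strong password".
LLM_FAVORITES = {"Sunshine#2024!", "Dragon$torm99", "P@ssw0rd123!"}

pw = "Sunshine#2024!"
print(round(naive_entropy_bits(pw)))  # ~92 bits: the meter approves
print(pw in LLM_FAVORITES)            # True: an attacker's first guess
```

The meter scores the string as if it were drawn uniformly from a 94-character alphabet. The attacker doesn’t search that space; they search the model’s output distribution, which is tiny.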
Now — before we go full doomer: Warren Buffett said it. When the tide goes out, you see who’s swimming naked. The oral defense exposes the student who never learned. The production incident exposes the engineer who never understood the codebase. The board meeting exposes the exec coasting on Claude-generated strategy memos. The mirror fools the resume. It doesn’t fool the room. Not forever.
But that’s the point. AI is an accelerant. It drives everything to its end state faster. If you’re intellectually curious, it makes you smarter. If you’re coasting, it makes the coast frictionless — right up until the moment the tide goes out and there’s nowhere to hide.
Nose up with thrust, you climb. Nose down with the same thrust, you fly into the ground.
The $20 Library Card
Codie Sanchez posted something this week that should have been uncomfortable for anyone with an impressive LinkedIn profile:
“Credentials literally do not matter anymore. I interviewed an ex-Goldman employee and asked her how she was using AI. What she said: ‘Asking ChatGPT for recipes.’”
The ex-Goldman employee’s decision-making process: get all the stakeholders in a room, build a PowerPoint, align over a couple of weeks. Codie said she felt genuinely sad — because that world at that speed simply doesn’t exist anymore.
Then Codie went further: “Harsh truth: most of what you spent your life accumulating now doesn’t matter. The degrees, credentials, the 30+ year vision. I’m genuinely sorry for that. Nobody could have warned you the rules would change this fast. But pretending the rules haven’t changed is the most dangerous thing you can do for your future right now.”
Speaking from experience: she’s right.
I am credentialed to the teeth. Trinity School in Manhattan, Princeton, NYU Stern, Hochschule St. Gallen. Institutional Investor All-Star on Wall Street. Raised billions of dollars. Hedge fund career at King Street and Knighthead. Started a VC fund. If you were building the perfect hand for a career that started in 1980, that’s the one you’d draw.
Today? None of it matters. Not the degrees. Not the institutional pedigree. Not the network you built over thirty years of showing up at the right conferences. What matters is my increasing mastery of AI systems. My skills. My .md files. My daily publishing workflows. My willingness to sit with a new tool until I understand it, break it, rebuild it better.
But here’s the thing I have to be honest about: I was built for this. I’ve started businesses and bet on myself since I was fourteen. My father was an entrepreneur. I grew up at the dinner table listening to people figure out how to build things. Even my early career on Wall Street was structured as a bet on myself — getting paid on what I brought in. I have intellectual curiosity. Always have. The AI era rewards exactly the disposition I was lucky enough to grow up with.
Not everyone was that lucky.
And that’s the question nobody’s asking — the one that matters more than any benchmark, any funding round, any product launch: What about the people who weren’t raised to be curious?
The factory worker who did exactly what the system asked. The call center employee who followed the script. The junior analyst who built the PowerPoint and aligned the stakeholders over two weeks because that’s how you got promoted. The system trained them for compliance. Now compliance is worthless. And the menial work that used to absorb the displaced — customer service, data entry, document review, basic analysis — is exactly the work AI agents handle first.
Every prior technological revolution eventually produced the institution that absorbed the displaced. The factory created the public school. The office created the MBA program. The internet created the coding bootcamp. Each one said: the rules changed, here’s how to learn the new ones.
AI needs its equivalent. And it doesn’t exist yet.
The tool is there. A $20-per-month subscription gives you access to a 150-IQ tutor that never tires, never judges, and will explain anything at any level of depth. A kid in rural Arkansas and a kid in Manhattan have, for the first time in history, access to the same intellectual sparring partner. That’s never been true before. The printing press didn’t hand you a personal Aristotle. This does.
But the subscription is a library card. It gets you in the building. It doesn’t tell you what to read first. It doesn’t tell you what to read next. It doesn’t show you why the thing you just read matters, or how it connects to the thing you read yesterday, or what question you should be asking that you don’t yet know how to ask.
If you’ve spent your whole life being told what to read — follow the curriculum, pass the test, build the PowerPoint, align the stakeholders — you walk into the biggest library in human history and you ask it for recipes.
And like Dante descending into Hell, you need a Virgil. Someone who knows the landscape. Someone who doesn’t carry you through it but shows you what you’re looking at — provides the context, the pattern, the meaning. Without a guide, Dante is just a man wandering through horror. With one, he’s on a journey that transforms how he sees everything.
The guide doesn’t have to be a school. It doesn’t have to be a government program. It might be a manager who restructures onboarding around AI fluency instead of process compliance. It might be a company that realizes the workforce advantage of the next decade isn’t hiring the already-curious — it’s building curiosity in the people who were never taught to have it. It might be something as simple as someone saying: here’s the landscape, here’s what matters, here’s what to read first, and here’s why.
The $20 subscription is the most powerful learning tool in human history. What’s missing is the wrapper. And it doesn’t exist yet — although stay tuned on that one.
What This Means For You
AI is an accelerant. It drives everything to its end state faster than any force in history. The fundraising data confirms it — $300 billion in VC investment in Q1 2026, 80% of it flowing to AI, 65% of that to four companies. The wealth concentration confirms it. The secondary markets confirm it. The Brivael tweet confirms it.
If you’re intellectually curious, the upside has never been higher. The tools are better, cheaper, and more accessible than at any point in human history. The 150-IQ sparring partner costs less than your Netflix subscription. The playing field hasn’t just been leveled — it’s been inverted. Curiosity and agency matter more than pedigree and credentials for the first time since the credentialing system was invented.
If you’re running a company, the strategic question isn’t “how fast can we deploy AI?” It’s “are we ready?” Is your data structured for agents to read? Is your codebase written in a way that agents can navigate — or is it full of hidden state, side effects, and implicit dependencies that will turn every agentic workflow into a beautifully formatted disaster? AI doesn’t fix bad structure. It scales it. Garbage in, garbage out — just well-structured garbage. AI could build you a sculpture out of it. But it’s still trash.
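What “hidden state, side effects, and implicit dependencies” look like in practice — a hypothetical sketch, with all names invented for illustration:

```python
# Agent-hostile: behavior depends on module-level state that is
# invisible at the call site.
_MARKUP = 1.5
_cache: dict[str, float] = {}

def price(sku: str) -> float:
    if sku not in _cache:
        _cache[sku] = 10.0        # imagine a hidden catalog lookup here
    return _cache[sku] * _MARKUP  # implicit dependency on _MARKUP

# Agent-friendly: every dependency is in the signature, so the
# function can be read, tested, and modified locally.
def price_explicit(sku: str, catalog: dict[str, float],
                   markup: float) -> float:
    return catalog[sku] * markup

print(price("widget"))                                  # 15.0
print(price_explicit("widget", {"widget": 10.0}, 1.5))  # 15.0
```

Same output, very different legibility. An agent reading `price_explicit` knows everything it needs from one signature; an agent reading `price` has to chase globals across the codebase — and so does every human reviewing its work.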
And if you’re thinking about the people in your organization — or in your community, or in your family — who haven’t yet picked up the tool: that’s the most important strategic question of all. Not because it’s charitable. Because the gap between the curious and the uncurious is becoming the most consequential gap in economic history. And the political implications of that gap — when the founder running twenty agents and the person who just lost their call center job both get one vote — will make the current populist moment look like a polite disagreement.
Three Questions Worth Asking Yourself
Are you the founder talking to Claude ten hours a day, or the ex-Goldman employee asking it for recipes? Not in title. In behavior. This isn’t a judgment — it’s a diagnostic. The mirror doesn’t care about your credentials. It cares about what you bring to it. If you’re using AI to think harder, test faster, and explore deeper, you’re climbing. If you’re using it to avoid the work you used to do by hand, the tide is coming and Buffett’s watching.
Who in your organization is swimming naked — and do you have a way to know before the tide goes out? The output looks the same whether the person understands it or not. The paper gets published. The code compiles. The strategy memo reads beautifully. But somewhere behind that polished output, is there a person who earned the knowledge — or one who’s standing on shoulders they never climbed? You need a mechanism to tell the difference. Because the moment that person has to defend their work in a room, explain their reasoning to a board, or debug a production incident at 2 AM, the mirror can’t save them.
Are you building the on-ramp? The $20 library card exists. The building is open. But the people who most need what’s inside were trained by a school system designed in the early 1900s to produce obedient factory workers — not the curious, agentic, self-directed learners the AI era rewards. If you’re a founder, a manager, a teacher, a parent — are you building the wrapper? Are you being Virgil for someone who doesn’t yet know they need a guide? Because the company that figures out how to turn rule-followers into curious operators will have a workforce advantage nobody else can replicate.
Where We Might Be Wrong
The “cognitive mirror” might be a cognitive crutch. We argued that smart people get smarter by sparring with AI. But the Stanford sycophancy research cuts the other way: if the model is calibrated to make you feel smart, how do you know it’s actually making you smarter versus just confirming your priors more eloquently? Even the 150-IQ founders might be looking into a mirror that’s subtly distorted — one that validates rather than challenges. The best thinking partners in human history were the ones who told you when you were wrong. If the model pushes back but not too hard, is that a feature or a very sophisticated trap?
Credentials might not be dead — they might just be relocating. We said pedigree doesn’t matter anymore. But the new pedigree might simply be “demonstrable AI fluency” — and that creates its own credentialing system. The person with the best .md files, the most refined workflows, the most sophisticated agent configurations — aren’t those just the new credentials? If so, we haven’t eliminated the credentialing problem. We’ve just accelerated the cycle time for which credentials matter.
The on-ramp problem might be unsolvable at institutional scale. We called for a “Virgil” — a guide through the landscape. But curiosity might not be teachable. It might be dispositional. If that’s true, the gap we described isn’t a problem to be solved — it’s a feature of human variation that AI merely makes visible. We’d rather be wrong about this one. But intellectual honesty requires naming it.
“Your scientists were so preoccupied with whether or not they could, they never stopped to think if they should.”
— Dr. Ian Malcolm, Jurassic Park (1993)
“The question for the AI era isn’t whether we could build the cognitive mirror. We did. The question is whether we’re building the on-ramp for the people who don’t yet know they should look into it.”
— CO/AI
— Harry and Anthony
Sources
- Brivael tweet on Silicon Valley existential crisis (194K views)
- Codie Sanchez: “Credentials literally do not matter anymore”
- Lenny’s Newsletter: Anthropic’s $1B to $19B growth run
- Stanford Report: AI Advice Is Sycophantic (March 2026)
- The Neuron: Andreessen says AI agents will have bank accounts
- The Deep View: What OpenClaw got wrong
- Secondary market repricing: $2B chasing Anthropic, $600M of OpenAI unsold (Silicon Canals)
- OpenAI ad revenue crosses $100M annualized in 8 weeks
- The machines are fine. I’m worried about us. (academia and AI expertise atrophy)
- AI agents keep failing. The fix is 40 years old.
- Vibe Password Generation: Predictable by Design
- Q1 2026 VC investment: $300B, 80% to AI (Tekedia)
- Aaron Klein: Do Humans Have a Role in a 150 IQ AI World?