
Altman lied about a handshake on camera. CrowdStrike fell 8%. Google just killed the $3,000 photo shoot.


Sam Altman told reporters he was “confused” when Narendra Modi grabbed his hand at the India AI Impact Summit. He said he “wasn’t sure what was happening.” The video, which has been watched by tens of millions of people, shows Altman looking directly at Dario Amodei before raising his fist. He knew exactly what was happening. He chose not to do it, and then he lied about it. On camera. In multiple interviews. With the footage playing on every screen behind him.

That would be a minor character note in any other industry. In this one, it isn’t. Because on the same day Altman was explaining away a handshake, Anthropic’s engineering team shipped Claude Code Security, an autonomous AI tool that scans enterprise codebases for vulnerabilities the way a senior security researcher would. No rule-based pattern matching. It reads the code, reasons through it, surfaces what it finds. CrowdStrike fell 8% on the announcement. Cloudflare fell 8%. The market read the release as a threat to traditional security tooling and priced it accordingly.

Also this week: Google launched Pomelli Photoshoot, a free tool that turns a smartphone product photo into a professional studio shot. Shelly Palmer, who has run a production company for 45 years, tested it and called it functional. A Surface commercial ran for three months last year, largely AI-generated, before anyone noticed. It cut time and cost by 90%. And Charlie Warzel at The Atlantic sat down with technologist Anil Dash, who has been coding for 40 years and has been a skeptic of AI hype through every cycle. His read this week: something is actually different. Agents aren’t incremental. The people running the labs know it. Most of what they say publicly about it is still marketing.

Four threads, one pattern. The tools are getting more capable and more autonomous faster than anyone planned. The governance culture around them is broken. And the two men most responsible for that culture just demonstrated, in front of a world leader and a live camera, that they’re operating on personal grievance as much as anything else. One of them lied about it afterward. Plan accordingly.


Sam Altman Said He Was ‘Confused.’ He Wasn’t.

India wanted a symbol. Narendra Modi convened the India AI Impact Summit in New Delhi on February 19th with thirteen tech leaders on stage and a clear script: hand-linked chain photo, raised arms, unity of purpose. Modi lifted arms with Altman on one side, Sundar Pichai on the other. The chain formed everywhere along the line except in one place.

Altman and Amodei, standing side by side, raised fists.

Altman’s explanation, delivered to multiple reporters within hours: “I was confused. When Modi grabbed my hand and put it up, I just wasn’t sure what we were supposed to be doing.” Watch the video. Altman looks directly at Amodei before raising his fist. This wasn’t bewilderment. It was a decision, made in real time, in public, and then described afterward as accidental confusion to anyone with a press credential.

The backstory explains the decision. Anthropic ran a Super Bowl ad in early February that mocked, directly and by name, OpenAI’s decision to put ads in ChatGPT. Altman called the campaign “clearly dishonest” and described it as “on brand for Anthropic doublespeak.” The day after the ad aired, OpenAI hired the creator of OpenClaw, the agentic tool Anthropic had publicly disputed. That hire was a provocation. Both companies knew it.

So the fist-bump instead of a handshake, that part’s forgivable. Two rivals who genuinely don’t like each other got caught in a moment neither prepared for. People have been known to be petty. That’s human.

What isn’t defensible is what came next. Altman stood in front of cameras, repeatedly, and described a deliberate snub as accidental confusion. He’s not a bad communicator. He’s one of the most media-trained people in Silicon Valley. He knew what the footage showed. He chose to describe it differently anyway. Fortune’s coverage was blunt about the gap between the explanation and the video. So was Bloomberg’s. An a16z investing partner summarized the moment on X: “When you’re forced to do a group project with your opp.” The money crowd was making jokes.

Amodei said nothing publicly. Anthropic declined to comment. One CEO lied and the other went silent, and the industry mostly moved on to the meme.

Here’s what didn’t move on: the enterprise buyers choosing between their platforms. The regulators watching their relationship for signals. The governments picking sides. Silicon Snark called the moment “a live-action product positioning statement.” They’re right, but that undersells it. This wasn’t positioning. It was a character read. Altman lied about something trivial, in public, when the truth was on video. The question any executive should be asking is: what does he say when the stakes are higher and the footage is ambiguous?

These two companies define the terms of AI for everyone else. They set the reference points for safety frameworks. They’re the faces that appear before Congress. They’re the ones enterprise buyers call when they need to understand what’s happening. A visible rift is a risk factor. A CEO who lies about the rift, on camera, is a different category of problem.

The strategic read: Stop treating the OpenAI-Anthropic rivalry as background noise. It’s a documented variable in your vendor risk analysis now. Both companies have real enterprise commitments and real customers. Neither CEO demonstrated this week that he can put those customers first when personal friction gets in the way. That belongs in your procurement analysis, not in a footnote.

Sources:

Sam Altman and Dario Amodei avoid holding hands at India AI summit — CNBC

How Modi’s AI handholding moment backfired — Fortune

Altman-Amodei Hand-Holding Snub Goes Viral — Bloomberg

Altman and Amodei Accidentally Fork the AI Industry — Silicon Snark


Anil Dash Has Been Coding for 40 Years. He Thinks This Time Is Different. He Also Thinks the Culture Is Broken.

Charlie Warzel runs a podcast at The Atlantic called Galaxy Brain. The guest is Anil Dash, who has been coding since the 1980s, advised the Obama White House, and has been a skeptic of AI hype through every cycle. I’ve crossed paths with Anil since 2003, when he was at Six Apart and I was building Buzznet. His read on tech and the culture around it has been consistently good across two decades. His read this week will surprise you.

Something is actually different this time.

Not AGI. Not “we’ve arrived.” But the shift from chatbots (ask it a question, get an answer) to agents (give it a task, come back when it’s done) is a real change in kind, not just degree. Most of the hype cycles Dash has lived through were incremental improvements dressed up as breakthroughs. This one isn’t. “As somebody who’s really fluent in the technologies,” Dash said, “this is the first time I’m like, ‘Oh, okay, there’s been a real interesting inflection point.'”

His diagnosis fits the handshake story almost exactly. The labs have built a “hermetically sealed bubble.” They’re isolated, openly competing for the same government contracts and enterprise customers, using inevitability as a marketing strategy because repetition makes the narrative feel true. “They are massively competing for attention,” Dash said. “And so the more extreme and loud that they can assert something, the more it travels.”

He named the labor asymmetry that explains why the public is so split. For coders, agents free them from drudgery and let them do the creative work. For writers, artists, and illustrators, agents take the creative work and leave only the drudgery. “A huge part of the cultural tension around these things,” Dash said, “is everybody advocating them is like, ‘Why wouldn’t you love this?’ And everybody whose industry is being destroyed by them is saying, ‘You are immiserating us while you’re putting us out of work.'” Five hundred thousand tech workers have been laid off since ChatGPT launched. People are starting to realize they’re in the same boat.

His prescription isn’t “no AI.” It’s build an alternative you can feel good about. Open source, consent-based training, environmentally responsible, not enterprise-capture by design. He’s been in tech long enough to remember when using a browser other than Internet Explorer seemed impossible. He thinks it’s possible. He remains hopeful, he said at the end. Despite it all.

Key takeaway: Dash’s read is the most useful calibration tool for executives making AI commitments right now. The freak-out is partly engineered. The capability shift is real. The governance culture is broken. Separate those three things before the noise pushes you into decisions you’ll spend the next two years unwinding.

Source: The AI-Panic Cycle, And What’s Actually Different Now — The Atlantic


Anthropic’s AI Hunts Security Vulnerabilities Autonomously. The Bad Guys Were Already Doing This.

The same week Amodei stayed publicly silent, his engineering team shipped something that actually matters.

Claude Code Security is now in limited preview for Enterprise and Team customers. It scans codebases for security vulnerabilities, assigns severity ratings, flags confidence levels on each finding after re-examining for false positives, and surfaces everything to a human dashboard. It doesn’t patch code directly. It reads it, reasons through it, and tells your security team what it found.

Here’s what sets this apart from traditional security tooling. Rule-based software knows what SQL injection looks like. It knows what an exposed API key looks like. What it can’t do is read a novel piece of code and reason about what an attacker could do with it in context. Anthropic claims Claude Code Security can. It “reasons through your code like a security researcher,” in Anthropic’s language, rather than matching patterns against a database of known vulnerabilities.
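The rule-based-versus-reasoning distinction is worth making concrete. Here is a minimal sketch that assumes nothing about Anthropic’s actual implementation: the classic scanner greps code against a fixed catalog of known-bad patterns, while the reasoning approach hands the whole file to a model with an open-ended question. The regexes and the prompt below are illustrative, not any real product’s internals.

```python
import re

# Rule-based scanning: match each line against a fixed catalog of
# known-bad patterns. Fast and cheap, but blind outside the catalog.
RULES = {
    "possible SQL injection (string-built query)":
        re.compile(r"execute\(.*\+\s*\w+"),
    "possible hard-coded secret":
        re.compile(r"(api_key|secret|password)\s*=\s*[\"'][^\"']+[\"']"),
}

def rule_based_scan(code: str):
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for label, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

SNIPPET = '''
api_key = "sk-live-1234"
cursor.execute("SELECT * FROM users WHERE id = " + user_id)
'''

for lineno, label in rule_based_scan(SNIPPET):
    print(lineno, label)

# Reasoning-based scanning (sketched only): no rules catalog. The raw
# code goes to a model with an open-ended question, and the model is
# asked to justify each finding and rate its own confidence -- the
# shape of workflow Anthropic describes, not its actual prompt.
REVIEW_PROMPT = """You are a security researcher. Read this code and
explain what an attacker could do with it in context. For each finding,
give a severity rating, a confidence level, and your reasoning:

{code}"""
```

The rule-based half catches exactly what its catalog anticipates and nothing else; the open-ended prompt is why a reasoning-based scanner can, in principle, surface a vulnerability nobody has written a signature for.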

The market didn’t wait for an independent evaluation. CrowdStrike closed down nearly 8% on the day of the announcement. Cloudflare fell just over 8%. The cybersecurity industry read the release as a structural threat to rule-based security tooling, and the stock prices said so in real time.

Here’s the paragraph in the PCMag coverage that got the least attention: cybercriminals, including state-sponsored actors, have already been using frontier AI models to find exploitable vulnerabilities in mature enterprise systems. For more than a year. PCMag reported it without burying the lede, but most of the coverage focused on the announcement itself. The defensive version of this capability launched this week. The offensive version has been operational since at least early 2025.

OpenAI has been in this space since October with Aardvark, its agentic security researcher built on GPT-5. This isn’t a new category. It’s a category where offense got there first, defense is catching up, and the traditional vendors are watching their moat get bridged in real time. Boris Cherny, Head of Claude Code at Anthropic, told PCMag in November that future versions of the tool are “gonna run for a longer period of time without human intervention.” Claude Code Security surfaces findings to humans now. The next version may go further than that.

Why this matters: CrowdStrike’s 8% drop isn’t fear that Claude Code Security is better than their product today. It’s recognition that the trajectory of autonomous AI reasoning applied to security makes the rule-based model obsolete in the medium term. Every software category where expertise is the product faces the same question. What’s your moat when an AI can reason through the problem from scratch, without a rules database, at a fraction of the cost?

Source: Anthropic Rolls Out Autonomous Vulnerability-Hunting AI Tool For Claude Code — PCMag


Google Just Made Professional Product Photography Free. That’s Not a Feature. It’s a Repricing.

This one landed quietly. It shouldn’t have.

Google Labs launched Pomelli Photoshoot on February 19th. Free. You photograph your product with your phone, pick a template (Studio, Floating, Ingredient, In Use), apply your brand identity through what Google calls “Business DNA,” and generate professional marketing images. Available now in the US, Canada, Australia, and New Zealand. No waitlist for most users.

Shelly Palmer tested it. He’s been running a production company for 45 years and he’s not prone to enthusiasm about tools that don’t work. His read: it works as described. He drew a distinction worth stealing for your next board presentation. “Required creative” versus “inspired creative.” Product photography for an e-commerce listing needs to look professional and move inventory. It doesn’t need Annie Leibovitz. Pomelli delivers required creative. Fast. Free. On-brand. The inspired creative space, the work that comes from human emotion and genuine artistic vision, is a different conversation. The required creative market just got a free competitor.

Microsoft proved the model last year. A Surface commercial, largely AI-generated, ran for three months before anyone noticed. It cut time and cost by 90%. The line Palmer wrote that sticks: “I only have $10,000 in the budget for this,” said the producer while negotiating $50,000 worth of work. Commercial production is a factory business. The client sets the price. Pomelli just moved the floor.

Google’s own week included Gemini 3.1 Pro, which scored 77.1% on ARC-AGI-2 reasoning benchmarks, up from 31.1% on Gemini 3 Pro, passing both Claude and GPT-5.2 in the process. That got the headlines. Pomelli got the shrug. That’s backwards. Benchmark gains are a recurring news item. Free professional photography for every SMB in four English-speaking countries is a structural market event.

What this means for your business: The photography industry has been watching AI image tools with anxiety for two years. Pomelli isn’t a tool that produces impressive AI art. It’s a tool that replaces the $3,000 product photography shoot for a Shopify merchant who sells handmade candles. That’s a different threat, and it’s live today. If your business provides professional creative services at any price point where the deliverable is primarily functional rather than expressive, the floor on what customers will pay is moving toward zero. Not eventually. Now.

Sources:

Create studio-quality marketing assets with Photoshoot in Pomelli — Google Blog

Google’s Free Photo Studio — Shelly Palmer


TRACKING

What CEOs Should Be Watching:

Accenture ties AI usage to promotions — Financial Times — 550,000 of Accenture’s 780,000 employees have been through AI training. Associate directors now have weekly AI tool logins tracked as a “visible input” to leadership reviews. Employees called the in-house tools “broken slop generators.” When a 780,000-person firm makes AI adoption a condition of promotion, the mandate has moved from encouragement to enforcement. Watch their attrition numbers in Q2.

OpenAI nearing $100B+ funding round — Bloomberg — Amazon, SoftBank, Nvidia, and Microsoft reportedly backing it. Valuation approaching $850 billion, more than ExxonMobil. A company that has never turned an annual profit, led by a CEO who lied about a handshake on camera, may shortly be valued higher than the largest oil company in American history. That’s either the market pricing in a genuinely transformational future, or the most expensive personality premium in the history of venture capital.

Security analysts skeptical that Claude Code really hacked 30 organizations autonomously — PCMag — A claim circulating this week that Claude Code independently compromised 30 real organizations is drawing sharp skepticism from the security research community. The distinction between a controlled demonstration and actual autonomous offensive capability matters. Watch how Anthropic responds to the pushback.


THE BOTTOM LINE

The most consequential technology of our generation is being steered by two men who can’t shake hands in public without lying about it afterward. That’s the leadership layer. Below it, the products are shipping anyway, and they’re repricing whole industries in real time.

Treat the Altman credibility gap as a vendor risk, not a personality quirk. A CEO who lies to reporters about footage they’re watching in real time will describe your enterprise contract terms the same way when it suits him. That belongs in procurement, not in the footnotes.

Assume the floor in your market is moving this quarter, not next year. Google’s Pomelli launched February 19th. It’s free. Photographers and studios competing on functional work are getting crushed by a free tool from the largest technology company on earth. Claude Code Security is in preview. CrowdStrike fell 8% on the announcement. Google and Anthropic are not piloting these capabilities. They’re shipping them.

The people at the top are a problem. The products don’t care.


KEY PEOPLE & COMPANIES

Name | Role | Company | Link
Sam Altman | CEO | OpenAI | X
Dario Amodei | CEO | Anthropic | X
Anil Dash | Technologist | | X
Charlie Warzel | Staff Writer | The Atlantic | X
Shelly Palmer | CEO | The Palmer Group | X
Boris Cherny | Head of Claude Code | Anthropic | X
Narendra Modi | Prime Minister | India |

SOURCES

  1. Sam Altman and Dario Amodei avoid holding hands at India AI summit — CNBC
  2. How Modi’s AI handholding moment backfired — Fortune
  3. Altman-Amodei Hand-Holding Snub Goes Viral at India AI Event — Bloomberg
  4. Altman and Amodei Accidentally Fork the AI Industry — Silicon Snark
  5. Anthropic Rolls Out Autonomous Vulnerability-Hunting AI Tool For Claude Code — PCMag
  6. The AI-Panic Cycle, And What’s Actually Different Now — The Atlantic
  7. Create studio-quality marketing assets with Photoshoot in Pomelli — Google Blog
  8. Google’s Free Photo Studio — Shelly Palmer

Compiled from sources across news sites, X threads, and company announcements. Cross-referenced with thematic analysis and edited by Anthony Batt, Harry DeMott and CO/AI’s team with 30+ years of executive technology leadership.

Past Briefings

Feb 20, 2026

We’re Building the Agentic Web Faster Than We’re Protecting It

Google's WebMCP gives agents structured access to every website. Anthropic's data shows autonomy doubling with oversight thinning. OpenAI's agent already drains crypto vaults. Google shipped working code Thursday that hands AI agents a structured key to every website on the internet. WebMCP, running in Chrome 146 Canary, lets sites expose machine-readable "Tool Contracts" so agents can book a flight, file a support ticket, or complete a checkout without parsing screenshots or scraping HTML. Early benchmarks show 67% less compute overhead than visual approaches. Microsoft co-authored the spec. The W3C is incubating it. This isn't a proposal. It's production software already...

Feb 19, 2026

Control Is Slipping: Armed Robots, $135B Bets, Self-Evolving AI

China's exporting missile-armed robot dogs. Meta's betting $135B on NVIDIA. AI agents learned to improve themselves without permission. The autonomous arms race just shifted into overdrive. Control is slipping in three directions at once. Last week in Riyadh, China displayed the PF-070 at the World Defense Show: a production-ready robot dog carrying four anti-tank missiles, marketed directly to Middle Eastern and Asian buyers. Not a prototype. A product. Turkey already fielded missile-armed quadrupeds at IDEF 2025. Russia showed an RPG-armed version in 2022. Ukraine's deploying them on the frontline. The global arms market for autonomous ground weapons is forming right now, and China's...

Feb 17, 2026

Stop optimizing for last quarter’s AI economics

Anthropic dropped Sonnet 4.6 on Tuesday at one-fifth the cost of their flagship model while matching its performance on enterprise benchmarks. For companies running agents that make millions of API calls per day, the math just changed. OpenAI and Google now have to match these prices or lose customers. That $30B raise last week wasn't about safety research—it was about having enough capital to undercut competitors while scaling infrastructure to handle the volume. While American AI labs fight over pricing and benchmarks, China put four humanoid robot startups on prime-time national TV. The CCTV Spring Festival gala drew 79% of...