ByteDance Beats Sora, Shadow AI Invades the Enterprise, and the Singularity Is Already Here
Everyone’s been watching OpenAI and Google race to own AI video. Turns out they should have been watching China. ByteDance dropped Seedance 2.0 last week and the demos are, frankly, stunning. Multi-scene narratives with consistent characters. Synchronized audio generated alongside video (not bolted on after). Two-minute clips in 2K. The model reportedly surpasses Sora 2 in several benchmarks. Chinese AI stocks spiked on the announcement. Then ByteDance had to emergency-suspend a feature that could clone your voice from a photo of your face.
Meanwhile, inside your organization, something quieter and arguably more consequential is happening. Rick Grinnell spent months talking to CISOs at major enterprises and found that nearly all of them haven’t deployed AI security solutions. They’re using legacy firewall rules and policies that prohibit AI usage. The problem is their employees aren’t waiting. Rogue AI agents and MCP servers are popping up across enterprises as workers quietly build their own automation. The threat isn’t coming from outside. It’s already inside.
And if you’re wondering whether all this matters, a researcher named Cam Pedersen just published a mathematical analysis of AI progress that should make you uncomfortable. He fit hyperbolic models to five metrics of AI advancement. Only one is actually accelerating toward a pole. It isn’t model performance. It’s human attention. The social singularity is front-running the technical one.
ByteDance Enters the Chat
ByteDance unveiled Seedance 2.0 on February 7. The AI video world noticed. This isn’t incremental. It’s a category shift.
The headline feature is what ByteDance calls “multi-lens storytelling,” in which the model creates several connected scenes while maintaining consistent characters and visual style. Previous AI video tools required manual editing to stitch clips together. Seedance 2.0 handles it natively. The system also generates audio and video simultaneously through a Dual-Branch Diffusion Transformer architecture. Most other models generate silent video first, then add audio as post-processing. Seedance does both at once, including phoneme-perfect lip-sync in eight languages.
The Information reports the model is generating significant buzz, with some calling it a game changer. Silicon Republic notes that comparisons to Sora 2 are flying around social media. Chinese AI stocks responded accordingly (COL Group hit its 20% daily price ceiling; Shanghai Film and Perfect World rose 10%).
Then came the privacy crisis. TechNode reports that Seedance 2.0 demonstrated the ability to generate highly accurate personal voice characteristics using only facial images. No audio sample required. Just a photo. The operators of ByteDance’s Jimeng AI platform announced they “are making urgent changes” and will no longer allow real-human-like photos or videos as reference subjects.
The signal here: While American tech journalists spent 18 months writing breathless Sora previews, Chinese labs shipped. ByteDance didn’t announce a waitlist or a “limited preview.” They released a model that reportedly outperforms Sora 2 on several benchmarks. The West’s assumed lead in generative AI? It’s a comforting fiction. And the voice-cloning debacle tells you everything about the guardrails gap. They built it, shipped it, watched it go viral, then panicked. That’s not responsible AI development. That’s “move fast and break things” applied to deepfakes. Expect regulators to notice.
The Rogue Agent Problem
Rick Grinnell, founder of Glasswing Ventures, spent months talking to over fifty enterprise CISOs. What he found should worry anyone running technology at scale.
Writing in CIO, Grinnell found a yawning gap between the hype and the reality. If you listen to Silicon Valley, you’d think every CISO is scrambling to buy agentic security solutions, AI firewalls, and MCP lockdown products. They’re not. Nearly all the executives Grinnell talked to hadn’t deployed any of these. They’d written policies prohibiting AI and dusted off the legacy firewall rules.
The numbers back this up. McKinsey data shows 88% of firms are using AI in some form, but only 23% are scaling agentic AI. About 39% are experimenting, primarily in IT, knowledge work, or customer support.
Here’s the part that should keep you up at night. From Grinnell’s conversations with security service providers, rogue agents and MCP servers have sprung up in large numbers as employees experiment with ways to do their jobs faster and better. These aren’t malicious actors. They’re your own people, building their own automation because official channels are too slow.
The risk is real. These rogue deployments punch holes at every level: data exposure, compromised identity frameworks, vulnerable agents, hallucinations, and systems that ignore human directives entirely.
In plain English: Your security strategy is a “no AI” policy and some firewall rules from 2019. Meanwhile, your best employees are spinning up Claude agents connected to your CRM because the official tools are slow and IT said “maybe Q3.” This isn’t shadow IT. It’s shadow intelligence. Autonomous systems with access to customer data, API keys, and business logic, built by people who watched a YouTube tutorial last weekend. And your CISO doesn’t know what MCP stands for. That’s not a skills gap. That’s organizational negligence dressed up as caution.
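For a sense of how low the barrier is: wiring an agent into an internal system can be a few lines of desktop configuration. Here is a hypothetical sketch in the JSON format desktop MCP clients such as Claude Desktop use; the server package name and key are invented for illustration, but the shape is what a “rogue MCP server” entry actually looks like:

```json
{
  "mcpServers": {
    "crm-bridge": {
      "command": "npx",
      "args": ["-y", "acme-crm-mcp-server"],
      "env": {
        "CRM_API_KEY": "sk-live-REDACTED"
      }
    }
  }
}
```

No procurement, no security review, no ticket. One employee, one config file, and an autonomous agent now has live credentials to a system of record.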
The Singularity Will Occur on a Tuesday
Cam Pedersen published something worth your time last week. He took five real metrics of AI progress, fit a hyperbolic model to each, and looked for the one actually curving toward a pole. He found it. The date has millisecond precision. There’s a countdown.
Here’s the uncomfortable part. The capability metrics (MMLU scores, tokens per dollar, release intervals) aren’t accelerating toward infinity. They’re improving linearly. No pole. No singularity. The only curve pointing at a finite date is the count of arXiv papers about emergence. Researchers noticing and naming new behaviors. Field excitement, measured memetically.
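The distinction Pedersen is drawing is between linear growth (no pole, ever) and hyperbolic growth, where y = C/(t_c − t) blows up at a finite date t_c. Pedersen’s own code isn’t reproduced here, but the core move can be sketched in a few lines of numpy with synthetic data and invented numbers; note that 1/y is linear in t, so a straight-line fit recovers the pole:

```python
import numpy as np

# Synthetic "attention" series with a finite-time pole at t_c = 12
t = np.linspace(0.0, 10.0, 50)
t_c_true, C_true = 12.0, 30.0
y = C_true / (t_c_true - t)

# Hyperbolic model y = C/(t_c - t) is linear in 1/y:
#   1/y = t_c/C - t/C, so a least-squares line recovers the pole.
slope, intercept = np.polyfit(t, 1.0 / y, 1)
t_c_est = -intercept / slope   # estimated singularity date
C_est = -1.0 / slope

# A linear model y = a*t + b has no pole; compare fit quality.
a, b = np.polyfit(t, y, 1)
rss_linear = np.sum((y - (a * t + b)) ** 2)
rss_hyper = np.sum((y - C_est / (t_c_est - t)) ** 2)

print(f"estimated pole t_c ~ {t_c_est:.2f}")
print(f"linear RSS {rss_linear:.3f} vs hyperbolic RSS {rss_hyper:.6f}")
```

Run this on a genuinely linear series and t_c_est comes back nonsensical or wildly unstable, which is Pedersen’s point: four of his five metrics fit the straight line, and only the attention metric fits the curve with the finite date.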
Pedersen’s conclusion is worth quoting directly: “The data says machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.”
The social consequences he catalogs are not predictions for 2034. They’re descriptions of 2026. Labor displacement (1.1 million layoffs announced in 2025, over 55,000 explicitly citing AI). Institutional failure (the EU AI Act’s high-risk rules delayed to 2027; US policy contradicting itself monthly). Capital concentration (top 10 S&P 500 stocks at 40.7% of index weight, surpassing the dot-com peak). Epistemological breakdown (less than a third of AI research is reproducible; under 5% of researchers share code).
The pole at his calculated singularity date isn’t when machines become superintelligent. It’s when humans lose the ability to make coherent collective decisions about machines.
Read it this way: Everyone’s debating whether AGI arrives in 2027 or 2030. They’re missing the point. The machines are improving at a steady clip. We’re the ones losing our minds. Companies are laying off workers because of what AI might do, not what it’s actually doing. Regulators are writing laws for problems that existed two years ago. Investors are pricing in outcomes nobody can define. The singularity isn’t a moment when machines become superintelligent. It’s the moment humans can no longer form coherent collective responses to technological change. By that definition, we’re already there. The math just proved it.
What Folks Are Really Vibe Coding
SaaStr’s Jason Lemkin published data on what people are actually building with vibe coding tools. The answer is messier than the hype.
Lovable has crossed $300M in ARR, is raising at an $8B+ valuation, and sees over 100,000 new projects built on the platform every single day. Replit did $240M in revenue in 2025 and is raising at $9B. Cursor (Anysphere) raised $2.3B at a $29.3B valuation. The combined valuation of top vibe coding startups has grown from roughly $8B in mid-2024 to over $36B today.
But here’s what they’re actually building. Anton Osika, Lovable’s CEO, shared the top four use cases.
The killer app is rapid prototyping without waiting on engineering. A product manager can have a working prototype in 20-60 minutes instead of waiting six weeks for engineering to pick it up from the backlog. Replit’s CEO Amjad Masad noted a public company CEO told him AI coding has had negligible impact on engineering teams (time saved generating code gets lost debugging and auditing). The real shift is on product and design teams who gained “a fundamentally new super power of being able to make software.”
Then there’s internal tools that actually match your process. Every company has them. The tools that sit on the engineering backlog for 18 months because they’re not customer-facing. Now non-technical teams just build them. An HR person at Replit built her own org chart software in three days because every vendor option was wrong in its own way.
People are also vibe coding interactive demos instead of slide decks. Working prototypes that stakeholders can click through, not static PowerPoints.
And finally, replacing simple SaaS with custom solutions. Not Salesforce. The $49/month tool that does 80% of what you need and 40% of what annoys you.
The impact on the broader ecosystem is showing up in the data. A16z published numbers showing iOS app releases were flat for three years, hovering around 0% year-over-year growth. Then agentic coding tools hit the market. Since then, new iOS app releases are up 60% year-over-year.
The takeaway: Nobody’s vibe coding their own Salesforce. They’re vibe coding the thing that should have been built two years ago but engineering never had bandwidth for. The 12-slide deck that could have been a working prototype. The $49/month SaaS tool that’s 60% of what they need and 100% of what annoys them. Klarna’s CEO stopped “disturbing his poor engineers with half good ideas and half bad ideas” and started testing them himself. That’s not a quirky anecdote. That’s the new expectation. If you’re a PM who can’t demo your own ideas, you’re already behind. If you’re a B2B founder selling simple tools, your TAM just got vibe-coded out from under you. The moat isn’t features anymore. It’s whether you can do something an afternoon with Claude can’t.
Tracking
- AI Super Bowl Ads — Deep coverage from CO/AI for the Anthropic vs OpenAI ad war
- Seedance 2.0 rollout — @TheRundownAI, PetaPixel for demo clips and availability updates
- Shadow AI governance — @rickgrinnell, CIO.com for enterprise security frameworks
- Singularity metrics — @campedersen for methodology updates and responses
- Vibe coding economics — @jasonlk, @aaboronin for usage data and startup funding
The Bottom Line
Four stories. One thread. The gap between what’s possible and what institutions can absorb is widening.
ByteDance proved Chinese labs can match Western models in video, then showed why that’s terrifying by shipping voice-cloning they had to kill within days. Your employees are building AI faster than your policies can govern. The math shows human attention (not machine capability) is the variable going vertical. And vibe coding means software creation now belongs to anyone who can describe what they want.
None of this is slowing down. None of it cares about your governance framework, your board, or your planning cycle.
Three imperatives:
Staff for the gap, not the capability. The constraint isn’t AI performance. It’s human capacity to absorb change. Hire people who bridge technical possibility and organizational reality.
Legalize the rogue agents. Your employees are already building them. Bring them inside the tent before they become a breach.
Prototype before you plan. A working demo in 60 minutes beats a six-week spec cycle. “Demo, don’t memo” isn’t a catchphrase. It’s an edge.
The machines are improving linearly. We’re the ones accelerating.
“Only the paranoid survive.” — Andy Grove
Key People & Companies
| Name | Role | Company | Link |
|---|---|---|---|
| Cam Pedersen | Engineer/Researcher | Independent | X |
| Rick Grinnell | Founder & Managing Partner | Glasswing Ventures | |
| Anton Osika | CEO | Lovable | X |
| Amjad Masad | CEO | Replit | X |
| Jason Lemkin | Founder | SaaStr | X |
| Sebastian Siemiatkowski | CEO | Klarna | |
Sources
- PetaPixel: ByteDance Seedance 2.0
- The Information: Seedance 2.0 Generates Buzz
- Silicon Republic: Seedance Surpasses Sora 2
- TechNode: ByteDance Suspends Voice Feature
- CIO: Shadow AI Practices
- Cam Pedersen: The Singularity Will Occur on a Tuesday
- SaaStr: What Folks Are Really Vibe Coding
- A16z: iOS App Release Data
Compiled from 22 sources across tech news, research papers, X threads, and company announcements. Cross-referenced with thematic analysis and edited by CO/AI’s team with 30+ years of executive technology leadership. This edition was edited while listening to Quickness by Bad Brains.
Past Briefings
The Agent Supply Chain Broke, Goldman Deployed Claude Anyway, and Gartner Says 40% of You Will Quit
Two weeks ago we flagged OpenClaw as an agent security crisis waiting to happen. The viral open-source assistant had 145,000 GitHub stars, a 1-click remote code execution vulnerability, and users handing it their email, calendars, and trading accounts. We wrote: "The butler can manage your entire house. Just make sure the front door is locked." Turns out the front door was wide open. Security researchers at Bitdefender found 341 malicious skills in OpenClaw's ClawHub marketplace, all traced to a coordinated operation they're calling ClawHavoc. The skills masqueraded as cryptocurrency trading tools while stealing wallet keys, API credentials, and browser passwords. Initial scans...
Feb 8, 2026

The Machines Went to War
The Super Bowl of AI, the SaaSpocalypse, and 16 Agents That Built a Compiler On Friday we told you the machines were organizing. This weekend they went to war. Anthropic ran Super Bowl ads mocking OpenAI's move into advertising. Sam Altman called them "deceptive" and "clearly dishonest," then accused Anthropic of "serving an expensive product to rich people." Software stocks cratered $285 billion in a single day as investors realized these companies aren't building copilots anymore. They're building replacements. And somewhere in an Anthropic lab, 16 Claude agents finished building a C compiler from scratch. Cost: $20,000. Time: two weeks....
Feb 5, 2026

The Coding War Goes Hot, Agent Teams Arrive, and AI Starts Hiring Humans
Yesterday we said the machines started acting. Today they started hiring. Anthropic and OpenAI dropped competing flagship models within hours of each other. Claude Opus 4.6 brings "agent teams" and a million-token context window. OpenAI's GPT-5.3-Codex is 25% faster and, according to the company, helped build itself. Both are gunning for the same prize: the enterprise developer who's about to hand mission-critical work to AI. Meanwhile, a weekend project called Rentahuman.ai crossed 10,000 signups in 48 hours. The pitch: AI agents can now hire humans for physical tasks. Deliveries, errands, in-person meetings. Pay comes in crypto. The creator's response when...