COAI - All Signal, No Noise
We can still sleep peacefully — or so we thought. Steganography via internal activations is already possible in small language models — a potential first step toward persistent hidden reasoning.

Source: LessWrong
Published: Oct 12, 2025

Introduction
In this post we continue the project, started in our previous post, of exploring hidden reasoning in large language models…
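To make the idea of steganography via internal activations concrete, the sketch below is a minimal toy illustration, not the method from the linked post: it encodes a single hidden bit by nudging a simulated "activation" vector along a fixed secret direction, then recovers the bit with a linear probe. All names, dimensions, and data here are invented for illustration.

```python
# Toy illustration only: hide one bit in a simulated activation vector
# and read it back out with a linear probe. This is a hypothetical
# sketch, not the technique described in the LessWrong post.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 256          # hypothetical hidden-state width
n_samples = 2000

# Baseline "activations": random vectors standing in for real hidden states.
acts = rng.normal(size=(n_samples, d_model))

# Secret bit per sample, and a fixed secret direction used to embed it.
bits = rng.integers(0, 2, size=n_samples)
secret_dir = rng.normal(size=d_model)
secret_dir /= np.linalg.norm(secret_dir)

# "Steganographic" encoding: shift the activation slightly along the secret
# direction whenever the bit is 1. The shift is modest relative to the
# per-dimension noise, so it is hard to notice without the right probe.
epsilon = 2.0
stego_acts = acts + epsilon * bits[:, None] * secret_dir[None, :]

# A linear probe acts as the decoder and learns to recover the bit.
split = n_samples // 2
probe = LogisticRegression(max_iter=1000)
probe.fit(stego_acts[:split], bits[:split])
accuracy = probe.score(stego_acts[split:], bits[split:])
print(f"probe recovers the hidden bit with accuracy {accuracy:.2f}")
```

Real activation steganography would operate on a model's actual hidden states rather than synthetic vectors, but the basic picture is the same: information is carried in a direction of activation space that a suitable decoder can read while casual inspection misses it.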

