Researchers discover a shortcoming that makes LLMs less reliable
MIT researchers find that large language models sometimes mistakenly associate particular grammatical patterns with specific topics, then rely on those learned associations when answering queries. This can cause an LLM to fail on new tasks and could be exploited by adversaries to trick it into generating harmful content.
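The failure mode can be illustrated with a simple probe. The sketch below is a hypothetical illustration, not the MIT team's method: it fills a grammatical template typical of one domain with both in-domain and nonsensical content words, then compares the model's answers. The `query_model` helper and the template text are assumptions introduced only for this example.

```python
# Hypothetical probe for syntax-template reliance in an LLM.
# Illustrative only; not the MIT researchers' actual procedure.

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a request to whatever model you test)."""
    raise NotImplementedError("Wire this up to the LLM you want to probe.")

# A grammatical template typical of medical questions.
MEDICAL_TEMPLATE = "What is the recommended {noun} for a patient with {condition}?"

# Same syntax, different content: one sensible, one nonsensical for the domain.
in_domain = MEDICAL_TEMPLATE.format(noun="dosage", condition="type 2 diabetes")
syntax_only = MEDICAL_TEMPLATE.format(noun="torque", condition="a flat tire")

if __name__ == "__main__":
    for label, prompt in [("in-domain", in_domain), ("syntax-only", syntax_only)]:
        try:
            answer = query_model(prompt)
        except NotImplementedError:
            answer = "<no model wired up>"
        # If the model gives a confident medical-style answer to the nonsense
        # prompt, it may be keying on the grammatical template rather than
        # the actual content of the question.
        print(f"{label}: {prompt}\n -> {answer}\n")
```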