xAI developer exposes API key for SpaceX and Tesla’s private LLMs

A security breach at Elon Musk's xAI exposed private, custom language models for nearly two months after an API key was accidentally leaked on GitHub. The incident shows how easily artificial intelligence systems can be compromised by basic credential-security failures, potentially allowing unauthorized access to custom AI models built specifically to work with internal data from Musk's business empire.

The big picture: An xAI employee leaked a private API key on GitHub that remained active for nearly two months despite early detection, potentially allowing unauthorized access to proprietary AI models designed for Musk’s companies.

Key details: Security expert Philippe Caturegli, chief hacking officer at consultancy Seralys, first publicized the leak of credentials for an x.ai application programming interface (API).

  • The key was discovered in the code repository of a technical staff member at xAI.
  • According to GitGuardian, the exposed credentials provided access to at least 60 fine-tuned and private large language models (LLMs).
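Services like GitGuardian catch leaks of this kind by scanning commits for token-shaped strings and filtering out matches that don't look random. The sketch below illustrates the general idea; the `xai-` key format and length are hypothetical assumptions for demonstration, not xAI's actual credential scheme.

```python
import math
import re

# Hypothetical key format for illustration only; real xAI key
# formats may differ.
KEY_PATTERN = re.compile(r"\bxai-[A-Za-z0-9]{48}\b")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random tokens score high."""
    if not s:
        return 0.0
    freqs = (s.count(c) / len(s) for c in set(s))
    return -sum(p * math.log2(p) for p in freqs)

def scan_text(text: str, min_entropy: float = 3.5) -> list[str]:
    """Return pattern matches that also look random enough to be secrets."""
    return [m for m in KEY_PATTERN.findall(text)
            if shannon_entropy(m) >= min_entropy]

# A fabricated token embedded in source code, as a leak might appear.
sample = 'client = Client(api_key="xai-Ab3dEf9hIjK2mNoPqR5tUvWx0yZaBc4dEfGh7iJkLmN1oPqR")'
print(scan_text(sample))
```

The entropy check is what separates a real credential from a placeholder like `xai-xxxxxxxx...`: production scanners combine many such patterns with validity probes against the provider's API, which is how GitGuardian could report the leaked key as still active.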

Timeline of the incident: The security vulnerability persisted despite early detection systems flagging the issue.

  • GitGuardian alerted the xAI employee about the exposed API key on March 2.
  • As of April 30, when GitGuardian directly contacted xAI’s security team, the key was still valid and usable.

Security implications: The exposed credentials created significant potential risks to xAI’s proprietary technology.

  • The key could access both public and unreleased Grok models under the employee's identity.
  • GitGuardian’s Eric Fourrier confirmed the exposed API key had access to several unreleased versions of Grok, xAI’s chatbot.

Why this matters: Carole Winqwist, chief marketing officer at GitGuardian, warned that providing unauthorized access to private LLMs could enable serious security exploits.

  • Potential threats include prompt injection, model manipulation, and planting malicious code into the software supply chain.
  • The models appear to have been custom-made for working with internal data from Musk’s companies, including SpaceX, Tesla, and Twitter/X.
