xAI developer exposes API key for SpaceX and Tesla’s private LLMs

A security lapse at Elon Musk's xAI exposed private, custom language models for nearly two months through an API key accidentally leaked on GitHub. The incident shows how easily AI systems can be compromised by basic credential-security failures, potentially allowing unauthorized access to custom models designed to work with internal data from Musk's business empire.

The big picture: An xAI employee leaked a private API key on GitHub that remained active for nearly two months despite early detection, potentially allowing unauthorized access to proprietary AI models designed for Musk’s companies.

Key details: Security expert Philippe Caturegli, chief hacking officer at consultancy Seralys, first publicized the leak of credentials for an x.ai application programming interface (API).

  • The key was discovered in the code repository of a technical staff member at xAI.
  • According to GitGuardian, the exposed credentials provided access to at least 60 fine-tuned and private large language models (LLMs).
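Services like GitGuardian catch leaks of this kind by continuously scanning commits for credential-shaped strings. The following is a minimal sketch of that idea, assuming (hypothetically) that xAI keys carry an `xai-` prefix followed by a long alphanumeric token; production scanners use hundreds of provider-specific detectors plus entropy checks, not a single regex.

```python
import re

# Hypothetical key format for illustration only: an "xai-" prefix
# followed by at least 20 alphanumeric characters.
XAI_KEY_PATTERN = re.compile(r"\bxai-[A-Za-z0-9]{20,}\b")

def scan_for_keys(text: str) -> list[str]:
    """Return candidate API keys found in a blob of source code."""
    return XAI_KEY_PATTERN.findall(text)

# A hardcoded key in committed code is exactly what triggers an alert.
sample = 'client = Client(api_key="xai-' + "A" * 40 + '")  # oops'
print(scan_for_keys(sample))
```

Detection is only half the story, of course: in this incident the key was flagged on March 2 but remained valid for almost two months afterward.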

Timeline of the incident: The security vulnerability persisted despite early detection systems flagging the issue.

  • GitGuardian alerted the xAI employee about the exposed API key on March 2.
  • As of April 30, when GitGuardian directly contacted xAI’s security team, the key was still valid and usable.

Security implications: The exposed credentials created significant potential risks to xAI’s proprietary technology.

  • The key granted access, under the employee's identity, to both public and unreleased Grok models.
  • GitGuardian’s Eric Fourrier confirmed the exposed API key had access to several unreleased versions of Grok, xAI’s chatbot.
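The reason a single leaked key is so dangerous is that a bearer token is typically the sole credential an API checks: whoever holds it inherits the owner's full model access. A minimal sketch, assuming an OpenAI-style `/v1/models` endpoint at `api.x.ai` (an assumption for illustration; the article does not describe xAI's API surface):

```python
import os
import urllib.request

API_BASE = "https://api.x.ai/v1"  # assumed OpenAI-compatible endpoint

def auth_headers(api_key: str) -> dict:
    # The bearer token is the only credential sent: the server cannot
    # distinguish the legitimate owner from someone who found the key.
    return {"Authorization": f"Bearer {api_key}"}

def list_models(api_key: str) -> bytes:
    """Enumerate every model the key can reach, public or unreleased."""
    req = urllib.request.Request(
        API_BASE + "/models", headers=auth_headers(api_key)
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    # Read the key from the environment; hardcoding it in source is
    # precisely the mistake that caused this incident.
    key = os.environ.get("XAI_API_KEY")
    if key:
        print(list_models(key))
```

This is why prompt rotation of an exposed key matters: until the key is revoked, every request it signs looks legitimate.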

Why this matters: Carole Winqwist, chief marketing officer at GitGuardian, warned that providing unauthorized access to private LLMs could enable serious security exploits.

  • Potential threats include prompt injection, model manipulation, and supply chain code implantation.
  • The models appear to have been custom-made for working with internal data from Musk’s companies, including SpaceX, Tesla, and Twitter/X.
