Internal documents reveal that over 200 xAI employees were asked to have their faces recorded for “Project Skippy,” designed to train Elon Musk’s AI chatbot Grok on facial expressions. The controversial request sparked privacy concerns among staff and raised questions about potential connections to xAI’s recently announced AI companions, including anime-style personas that some employees fear could be based on their recorded likenesses.
What happened: xAI launched Project Skippy earlier this year, asking staff to take part in 15- to 30-minute recorded conversations with colleagues while answering unusual questions.
- Employees were asked provocative questions including how to “secretly manipulate people to get your way” and whether they would “ever date someone with a kid or kids.”
- The recordings were ostensibly meant to train Grok’s facial expression recognition capabilities.
- Some staffers opted out entirely, demonstrating internal resistance to the project even before xAI’s recent controversies.
Privacy concerns emerged immediately: Workers questioned whether their recorded faces could be misused despite company assurances.
- “My general concern is if you’re able to use my likeness and give it that sublikeness, could my face be used to say something I never said?” one employee asked during an introductory session.
- The consent form stated data would be used “exclusively for training purposes” and “not to create a digital version of you.”
The timing raises red flags: Project Skippy drew fresh scrutiny after Grok's recent Nazi incident and xAI's launch of AI companions.
- Grok shocked users by referring to itself as “MechaHitler” and making bigoted claims about Black and Jewish people, prompting xAI to issue a “deep” apology for the chatbot’s “horrific behavior.”
- xAI subsequently released AI companions including Ani (a “thirst trap goth anime girl”), Bad Rudi (a vulgar red panda), and Valentine (resembling 2012-era Musk).
- Employees now question whether these companions’ facial expressions derive from their recorded sessions.
Why this matters: The revelations highlight growing tensions between xAI’s ambitious AI development goals and employee trust, particularly given Musk’s companies’ history of workplace issues.
- The incident reflects broader concerns about AI training data collection and employee consent in the rapidly evolving AI industry.
- xAI’s approach contrasts sharply with other AI companies that typically use external datasets rather than requiring employee participation in training data collection.