Bradford Smith’s pioneering use of Neuralink’s brain implant marks a significant fusion of neural interface technology and generative AI. As the first person with ALS and the first nonverbal person to receive the implant, Smith demonstrates how brain-computer interfaces combined with artificial intelligence can restore communication for people with severe neurological conditions. The pairing of Musk’s neural technology with his AI chatbot Grok raises important questions about the future of human-machine interfaces and the balance between authentic human expression and AI-assisted communication.
The big picture: Bradford G. Smith, the third person to receive Neuralink’s brain implant and the first with ALS, is now communicating through the device with assistance from Musk’s AI chatbot Grok.
- Smith announced his implant on X (formerly Twitter), writing: “I am typing this with my brain. It is my primary communication.”
- The combination of neural interface technology and generative AI is allowing Smith, who can only move his eyes, to participate in conversations more rapidly than would otherwise be possible.
How it works: The Neuralink device consists of thin wires implanted in Smith’s brain that detect neural signals, which are then processed to allow him to control a computer pointer.
- The signals from Smith’s neurons are amplified, filtered, and sampled to extract key features, which are transmitted wirelessly to a MacBook for further processing.
- This neural interface allows Smith, who lost his ability to move or speak due to ALS, to interact with digital devices using only his thoughts.
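The pipeline described above — filtered neural samples reduced to per-channel features that drive a cursor — can be sketched in a few lines. This is a hedged illustration, not Neuralink's actual implementation: the feature (mean rectified amplitude, a crude stand-in for spike-band power), the channel count, and the linear decoder weights are all assumptions typical of lab BCI decoders.

```python
# Illustrative sketch of a BCI cursor decoder (NOT Neuralink's real pipeline):
# filtered neural samples -> per-channel features -> linear map to velocity.
import numpy as np

def extract_features(samples: np.ndarray, window: int = 50) -> np.ndarray:
    """Per-channel feature: mean rectified amplitude over the last `window`
    samples, a crude stand-in for spike-band power."""
    return np.abs(samples[-window:, :]).mean(axis=0)

def decode_velocity(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Linear decoder: maps channel features to (vx, vy) cursor velocity.
    In a real system the weights come from a calibration session."""
    return weights @ features

rng = np.random.default_rng(0)
n_channels = 64                                        # assumed channel count
samples = rng.standard_normal((500, n_channels))       # simulated filtered signal
weights = rng.standard_normal((2, n_channels)) * 0.01  # stand-in calibrated weights

features = extract_features(samples)
vx, vy = decode_velocity(features, weights)
print(f"cursor velocity: ({vx:.3f}, {vy:.3f})")
```

In practice the feature extraction runs on the implant and the decoding on the paired computer, which matches the article's description of features being transmitted wirelessly to a MacBook for further processing.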
The AI enhancement: Various artificial intelligence technologies are expanding Smith’s communication capabilities beyond basic control.
- Grok, Musk’s AI chatbot, helps draft responses Smith can use in conversations, increasing his communication speed while raising questions about authenticity.
- A voice clone created by startup ElevenLabs uses recordings from before Smith’s condition progressed to generate speech that sounds like his original voice.
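The drafting workflow the article describes — an AI proposes candidate replies and the user selects one with minimal input — can be sketched as follows. The model here is a stub standing in for a chatbot API (an assumption; the article does not describe Grok's interface), but the selection step is the point: choosing one of a few drafts takes far fewer BCI actions than typing a message character by character.

```python
# Hedged sketch of AI-assisted reply drafting. `draft_replies` and the stub
# model are illustrative assumptions, not Grok's actual API.
from typing import Callable, List

def draft_replies(context: str, model: Callable[[str], List[str]]) -> List[str]:
    """Ask the (stubbed) model for candidate replies to the conversation."""
    return model(f"Suggest short replies to: {context}")

def pick_reply(candidates: List[str], choice: int) -> str:
    """The BCI user selects one candidate with a single click or dwell,
    instead of composing the whole message letter by letter."""
    return candidates[choice]

# Stub standing in for a chatbot; a real system would call a model API here.
stub_model = lambda prompt: [
    "Thanks, that means a lot.",
    "Yes, let's do it.",
    "Tell me more.",
]

candidates = draft_replies("How are you feeling about the demo?", stub_model)
print(pick_reply(candidates, 0))
```

This design is also where the authenticity question arises: the user curates the words, but the model composes them.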
Behind the numbers: The integration of brain-computer interfaces with AI represents a significant advancement in assistive technology for people with severe physical limitations.
- “There is a trade-off between speed and accuracy. The promise of brain-computer interface is that if you can combine it with AI, it can be much faster,” explains Eran Klein, a neurologist studying brain implant ethics.
What’s next: Smith is exploring the development of a more personalized AI system that would better represent his unique perspective and communication style.
- He expressed interest in creating a “personal” large language model trained on his past writing that would answer “with my opinions and style.”
- This vision points toward future assistive technologies that could more authentically represent an individual’s voice and personality.
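One lightweight way to realize the "personal" model Smith describes, short of full fine-tuning, is to retrieve relevant passages from a person's past writing and prepend them to the prompt so the model answers in their voice. The toy corpus and word-overlap scoring below are illustrative assumptions, not a description of any system Smith is actually building.

```python
# Hedged sketch of a retrieval-augmented "personal" prompt. The corpus,
# scoring function, and prompt wording are all illustrative assumptions.
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank past writings by word overlap with the query (toy retrieval;
    a real system would use embeddings)."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Wrap the retrieved passages and the question into one prompt."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Answer in the author's own style, drawing on these writings:\n"
            f"{context}\n\nQuestion: {query}")

# Invented sample corpus for illustration only.
corpus = [
    "I believe technology should give people their voice back.",
    "My favorite part of radio was talking to strangers.",
    "Eye-gaze typing is slow but it kept me connected.",
]
print(build_prompt("What should technology do for people?", corpus))
```

Fine-tuning a model directly on the writing corpus is the heavier alternative; retrieval has the advantage that the person's words stay inspectable in the prompt.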
This patient’s Neuralink brain implant gets a boost from generative AI