U.S. Marines outsmarted an advanced AI surveillance system during a DARPA experiment by using creative tactics, including hiding in cardboard boxes, somersaulting across terrain, and disguising themselves as trees. The demonstration revealed significant limitations in current AI technology: systems trained on specific datasets can be fooled by scenarios outside their training data, a critical vulnerability in military applications where adversaries actively seek to exploit weaknesses.
What happened: Eight Marines managed to approach and touch an AI-powered detection robot without being identified during testing for DARPA’s Squad X program.
- The AI system had undergone six days of intensive training to recognize Marines moving through various urban scenarios.
- Two Marines somersaulted across 300 meters of terrain to avoid detection.
- Another pair simply threw a cardboard box over themselves and walked directly to the sensor, with observers noting “you could hear them giggling the whole time.”
- One Marine stripped a fir tree and used it as camouflage while approaching the system.
The technical challenge: AI systems excel at recognizing patterns they’ve been trained on but struggle with unexpected scenarios, a phenomenon experts call “distributional shift” (illustrated in the code sketch after this list).
- Phil Root, deputy director of the Defense Sciences Office at DARPA, explained: “A tank looks like a tank, even when it’s moving. A human when walking looks different than a human standing. A human with a weapon looks different.”
- The AI had learned to identify people walking normally but had never encountered somersaulting humans or tree-disguised individuals.
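To make distributional shift concrete, here is a minimal, hypothetical sketch in Python. The two features, all the numbers, and the scikit-learn logistic-regression detector are illustrative assumptions, not anything from DARPA’s actual system: a classifier trained only on upright, walking silhouettes scores near-perfectly on walking test data yet labels somersaulting figures as background clutter.

```python
# Toy illustration of distributional shift (hypothetical; not DARPA's system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical 2-D features per video frame:
# [silhouette height/width ratio, gait periodicity score].
walkers    = rng.normal(loc=[3.0, 0.8], scale=0.3, size=(500, 2))  # label 1: person
background = rng.normal(loc=[1.0, 0.1], scale=0.3, size=(500, 2))  # label 0: clutter

X = np.vstack([walkers, background])
y = np.r_[np.ones(500), np.zeros(500)]
clf = LogisticRegression().fit(X, y)

# In-distribution test: more upright walkers. Detection rate is near 1.0.
test_walkers = rng.normal(loc=[3.0, 0.8], scale=0.3, size=(200, 2))
print("walking figures flagged:", clf.predict(test_walkers).mean())

# Out-of-distribution: a somersaulting figure is squat and aperiodic, so its
# features land in the region the model learned to call "clutter".
# Detection rate collapses toward 0.0 even though the model is "accurate"
# by the standard of its own training data.
somersaulters = rng.normal(loc=[0.9, 0.15], scale=0.2, size=(200, 2))
print("somersaulting figures flagged:", clf.predict(somersaulters).mean())
```

The failure here is not a bug in the model; it is that the training data never covered the adversary’s tactics, which is exactly the gap the Marines exploited.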
In plain English: Think of AI like a student who memorized specific examples for a test but struggles when the actual exam presents the same concepts in unfamiliar ways. The Marines essentially showed up to the test wearing costumes the AI had never seen before.
Why this matters: The experiment highlights critical vulnerabilities in AI systems deployed in adversarial environments where opponents actively seek to exploit weaknesses.
- Paul Scharre, author of *Four Battlegrounds: Power in the Age of Artificial Intelligence*, noted: “An algorithm is brittle, and the takeaway from this is that there will always be these edge cases.”
- Military applications face unique challenges because the military, as Scharre put it, “operates in an inherently adversarial environment, and people will always have the ability to evolve.”
The broader implications: While AI can process vast amounts of data at incredible speeds, it lacks the creative problem-solving abilities that allow humans to adapt to unexpected situations.
- “Humans tend to have a much richer understanding of the world,” Scharre observed, explaining why the Marines could easily outmaneuver the detection system.
- The experiment demonstrates the danger of “mistaking performance for competence” when evaluating AI capabilities in controlled versus real-world scenarios.
What experts are saying: The challenge for military organizations lies in understanding both AI’s capabilities and limitations.
- Scharre emphasized that this shouldn’t be viewed as a definitive judgment on AI’s current capabilities, noting that artificial intelligence continues to advance rapidly.
- The key challenge involves “creating doctrine to rapidly spin in what AI technology can do” while acknowledging its constraints.