Microsoft’s latest AI security presentation at Build was dramatically interrupted by protesters, and the presenter then accidentally revealed confidential details of Walmart’s plans to adopt Microsoft’s AI security services. The incident highlights the growing tension between tech companies’ AI development and political activism, while inadvertently exposing Microsoft’s competitive position against Google in enterprise AI security.
What happened: During a session on AI security best practices at Microsoft Build, protesters disrupted the presentation to criticize Microsoft’s cloud contracts with Israel’s government.
- Two former Microsoft employees, Hossam Nasr and Vaniya Agrawal of the protest group No Azure for Apartheid, interrupted a talk given by Neta Haiby, Microsoft’s head of security for AI, and Sarah Bird, its head of responsible AI.
- The livestream was temporarily muted and cameras redirected while protesters were escorted out, with one protester directly accusing Sarah Bird of “whitewashing the crimes of Microsoft in Palestine.”
The accidental reveal: After the presentation resumed, Haiby inadvertently exposed confidential internal communications when she switched to Microsoft Teams while sharing her screen.
- The messages revealed that Walmart, one of Microsoft’s largest corporate customers, is planning to implement Microsoft’s Entra Web and AI Gateway services.
- A Microsoft cloud solution architect was shown writing that “Walmart is ready to rock and roll with Entra Web and AI Gateway,” while a Walmart AI engineer was quoted saying “Microsoft is WAY ahead of Google with AI security.”
The bigger picture: This marks the third protest interruption at Microsoft Build, highlighting ongoing controversy over tech companies’ government contracts.
- The protests come shortly after Microsoft announced it had conducted an internal review, with help from an external firm, to assess how its technology is being used in the Gaza conflict.
- Microsoft maintains that its relationship with Israel’s Ministry of Defense is “structured as a standard commercial relationship” and claims to have found no evidence that its technologies have been used to harm people.
Why this matters: The incident reveals both how at least one major customer perceives Microsoft’s edge over Google in AI security and the mounting pressure tech companies face over the ethical implications of their government partnerships.
- The unintentional disclosure provides rare insight into how major enterprises view the competitive landscape of AI security solutions.
- The protests represent growing activism within the tech industry around the ethical use of AI and cloud technologies in military and government applications.
Microsoft’s AI security chief accidentally reveals Walmart’s AI plans after protest