AI Security Bootcamp opens applications for August session

A new AI security bootcamp launching in London aims to address the growing need for specialized security skills in artificial intelligence systems. This intensive 4-week program offers fully funded training that bridges the gap between AI safety research and practical security implementation. Combining hands-on exercises with theoretical foundations, the program represents a significant investment in developing the security expertise needed to protect increasingly complex AI infrastructure and models.

The big picture: The AI Security Bootcamp (AISB) will run from August 4-29, 2025, in London, providing comprehensive security training specifically tailored for AI researchers and engineers.

  • The curriculum covers three main areas: security fundamentals, AI infrastructure security, and attacks on modern ML systems.
  • All expenses for participants will be covered, including accommodation and travel, indicating substantial financial backing for this initiative.

Key details: Applications are open until June 22, 2025, and the program targets individuals at the beginning of their security journey who already have a working knowledge of machine learning.

  • The program is designed to cultivate a “security mindset” through hands-on experience rather than just theoretical knowledge.
  • Participants will engage in pair programming exercises, lectures, vulnerability studies, and expert discussions throughout the month-long training.

Inside the curriculum: The bootcamp’s content progresses from basic security principles to AI-specific vulnerabilities across three distinct phases.

  • The first section introduces security fundamentals, including cryptography, Linux security, and network protocol analysis.
  • The second phase focuses on AI infrastructure, covering containerization, supply chain security, application security, and SecOps.
  • The final section addresses ML-specific attacks such as model extraction, trojans, and adversarial inputs across various model types, including LLMs and multimodal systems; a brief sketch of an adversarial input follows this list.
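
To make the last of those topics concrete: an adversarial input is an ordinary-looking input perturbed just enough to change a model's prediction. The sketch below shows the classic fast gradient sign method (FGSM), assuming PyTorch is available; the toy linear classifier, input shape, and label are hypothetical stand-ins for illustration, not material from the bootcamp's curriculum.

```python
# Minimal FGSM sketch (Goodfellow et al., 2014), assuming PyTorch.
# The model here is a hypothetical stand-in for a trained image classifier.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm_attack(model, x, y, epsilon=0.1):
    # Compute the loss gradient with respect to the input itself.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to a valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

x = torch.rand(1, 1, 28, 28)    # a random stand-in "image"
y = torch.tensor([3])           # an arbitrary label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation size is bounded by epsilon
```

Because the perturbation is capped at epsilon per pixel, the altered input can look unchanged to a human while still flipping the classifier's output, which is what makes such attacks a security concern rather than a curiosity.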

Behind the scenes: The program is actively recruiting instructors and operational staff in addition to participants, suggesting this may be a first iteration of what could become a recurring training initiative.

  • The organizers, Pranav Gade and Jan Michelfeit, emphasize the importance of developing a practiced security mindset that helps identify and patch vulnerabilities.
  • The application process focuses on finding individuals excited about technical deep dives and understanding systems by “peeling away layers of abstraction.”

Why this matters: As AI systems become more powerful and integrated into critical infrastructure, the security vulnerabilities they present require specialized expertise that combines traditional cybersecurity with an understanding of machine learning architectures.

  • By funding this training initiative, the organizers are addressing a crucial skills gap in the AI safety ecosystem.
  • The program’s design suggests recognition that theoretical knowledge alone is insufficient for securing complex AI systems against emerging threats.

Apply to the AI Security Bootcamp [Aug 4 - Aug 29]
