AI Red Teaming Room

Tuesday, August 19, 2025
10:00 AM - 1:30 PM
Focus Track (Salon IV)

About This Session

(Open House Format - Come by anytime between 10:00 AM and 1:00 PM)

Step into the AI Red Teaming Room and join experts from Scale AI for an interactive, hands-on experience where you’ll get to play the role of an adversary. In this session, you won’t just learn about AI vulnerabilities — you’ll exploit them. Engage directly in guided exercises designed to expose weaknesses in language models and other AI systems. Try your hand at crafting adversarial prompts to manipulate model behavior, bypass safeguards, and trigger unintended outputs.

Whether you're a security professional, AI researcher, policy expert, or just curious about how AI can go wrong, this is your chance to explore the limits of today's AI systems in a safe, controlled environment. Alongside the red-teaming challenges, you'll learn how these same systems can be defended, evaluated, and improved.

No prior experience with red teaming required — just bring your curiosity. Take 15–20 minutes to stop by, test your skills, and walk away with a deeper understanding of both the power and the fragility of modern AI.

Speaker

David Campbell

AI Security Lead - Scale AI

David Campbell is a seasoned tech leader with nearly 20 years in Silicon Valley startups, now leading AI security efforts at Scale AI. He built a pioneering AI red-teaming platform that blends ethics and security, and his work has been recognized by Congress and highlighted by the White House. David also contributed to JCDC.AI's Cyber TTX with CISA, tackling AI-driven cyber threats. With a strong background in security, infrastructure, and platform engineering, he champions integrating responsible AI into real-world security practices to build safer, more ethical AI systems.