Augmenting AI Security: External Strategies for Threat Mitigation

Tuesday, August 19, 2025
11:00 AM - 11:30 AM
AI Risk Summit Track 1 (Salon I)

About This Session

As AI systems are increasingly deployed in production, securing them requires more than just protecting the models themselves. This session focuses on practical external strategies for safeguarding AI and its surrounding infrastructure. Topics include using resource controls to prevent denial-of-service attacks, applying rate limiting and input validation to secure APIs, detecting anomalies through audit logging and output filtering, and protecting data integrity with version control, automated backups, tokenization, and redaction.
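To give a flavor of the external controls covered, below is a minimal sketch of per-client rate limiting and input validation placed in front of a model endpoint, using only the Python standard library. The names (RateLimiter, validate_prompt, handle_request) and the specific limits are illustrative assumptions, not part of any particular product or framework discussed in the session.

    import re
    import time
    from collections import defaultdict, deque

    class RateLimiter:
        """Sliding-window limiter: at most max_calls per client per window_s seconds."""
        def __init__(self, max_calls=10, window_s=60.0):
            self.max_calls = max_calls
            self.window_s = window_s
            self.calls = defaultdict(deque)  # client_id -> timestamps of recent calls

        def allow(self, client_id):
            now = time.monotonic()
            window = self.calls[client_id]
            # Drop timestamps that have aged out of the window.
            while window and now - window[0] > self.window_s:
                window.popleft()
            if len(window) >= self.max_calls:
                return False
            window.append(now)
            return True

    MAX_PROMPT_CHARS = 4000
    CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

    def validate_prompt(prompt):
        """Reject oversized or malformed input before it ever reaches the model."""
        if not isinstance(prompt, str) or not prompt.strip():
            return False, "empty or non-string prompt"
        if len(prompt) > MAX_PROMPT_CHARS:
            return False, "prompt exceeds size limit"
        if CONTROL_CHARS.search(prompt):
            return False, "prompt contains control characters"
        return True, "ok"

    limiter = RateLimiter(max_calls=5, window_s=60.0)

    def handle_request(client_id, prompt):
        if not limiter.allow(client_id):
            return {"status": 429, "error": "rate limit exceeded"}
        ok, reason = validate_prompt(prompt)
        if not ok:
            return {"status": 400, "error": reason}
        return {"status": 200, "response": "<model output would go here>"}

The point of the sketch is that both controls sit entirely outside the model: resource exhaustion and malformed input are rejected before any inference cost is incurred.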

By focusing on the systems and workflows around AI models, including containers, endpoints, and data pipelines, this session offers a broader and more realistic view of where threats emerge and how to stop them. Attendees will leave with actionable insights to improve the security of AI deployments and defend against risks like prompt injection, data leakage, and insecure configurations.
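As a concrete illustration of the output-filtering and redaction ideas above, the following sketch masks sensitive-looking spans in model output before it is returned, and reports what it matched so the event can also feed an audit log. The patterns and the redact_output name are assumptions for illustration; they are not a complete defense against data leakage.

    import re

    # Illustrative patterns only; real deployments tune these to their own data.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    }

    def redact_output(text):
        """Mask sensitive-looking spans in model output and report what was hit."""
        findings = []
        for label, pattern in PATTERNS.items():
            text, n = pattern.subn(f"[REDACTED {label.upper()}]", text)
            if n:
                findings.append((label, n))
        return text, findings

    safe_text, findings = redact_output(
        "Contact me at jane@example.com, key sk-abcdef1234567890abcd"
    )
    print(safe_text)   # both spans masked
    print(findings)    # [('email', 1), ('api_key', 1)]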

Speaker

Jason Kramer

Senior Software Engineering Researcher - ObjectSecurity

Jason is dedicated to advancing the state of the art in secure and robust AI. With a bachelor’s degree in computer science from San Diego State University, he focuses on the trust, security, privacy, bias mitigation, and robustness of AI/ML models. Jason has led the development of a commercial solution for detecting and repairing vulnerabilities in deep learning systems, and he is the co-author of multiple patents related to the cybersecurity of AI/ML, embedded devices, supply chains, and other systems. His passion for improving the field drives him to push the boundaries of what is possible and make a meaningful impact in AI and cybersecurity.