When AI Agents Go Rogue - Securing Autonomous AI Systems Before They Act
About This Session
Autonomous AI agents are no longer theoretical. They're building workflows, calling APIs, writing code, and making decisions at scale. But with that power comes risk: new, emergent, and often unpredictable. As agent frameworks like AutoGPT, LangGraph, CrewAI, and custom orchestrators gain adoption, organizations must ask: what happens when your AI doesn't just hallucinate, but acts?
In this talk, Advait Patel, cloud security engineer and contributor to the Cloud Security Alliance’s AI Control Matrix, will unpack the risks associated with AI agents acting autonomously in production environments. Through real-world examples and red-team simulations, we’ll explore how agentic systems can be manipulated, coerced, or simply misaligned in ways that lead to security incidents, privacy violations, and cascading system failures.
Topics we’ll cover:
- How AI agents make decisions and where control is lost
- Prompt injection + tool usage = real-world lateral movement
- Over-permissive action spaces: API abuse, identity leaks, and shadow access paths
- Why traditional threat modeling fails for agentic workflows
- Techniques to sandbox, constrain, and monitor AI agents (function routers, policy-as-code, response filters); a minimal sketch follows this list
- Logging and observability for “invisible” agent behavior
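To make the sandboxing idea concrete, here is a minimal sketch of a function router in Python. It is illustrative only, not the speaker's implementation: the tool names, the `Policy` class, and the allowlist rules are all hypothetical, standing in for whatever policy-as-code engine your stack uses.

```python
import json
import logging
from dataclasses import dataclass
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-router")

# Hypothetical tool implementations; in practice these would wrap real APIs.
def search_docs(query: str) -> str:
    return f"results for {query!r}"

def delete_record(record_id: str) -> str:
    return f"deleted {record_id}"

@dataclass
class Policy:
    """A toy policy-as-code rule set: an allowlist plus optional per-tool guards."""
    allowed_tools: set[str]
    guards: dict[str, Callable[[dict[str, Any]], bool]]

    def permits(self, tool: str, args: dict[str, Any]) -> bool:
        if tool not in self.allowed_tools:
            return False
        guard = self.guards.get(tool)
        return guard(args) if guard else True

class FunctionRouter:
    """Routes agent-requested tool calls through a policy check, logging every decision."""

    def __init__(self, tools: dict[str, Callable[..., str]], policy: Policy):
        self.tools = tools
        self.policy = policy

    def dispatch(self, tool: str, args: dict[str, Any]) -> str:
        decision = "allow" if self.policy.permits(tool, args) else "deny"
        # Structured log line so the SOC can reconstruct agent behavior later.
        log.info(json.dumps({"tool": tool, "args": args, "decision": decision}))
        if decision == "deny":
            return f"BLOCKED: policy denied call to {tool}"
        return self.tools[tool](**args)

# Example: the agent may search freely but may never delete records.
router = FunctionRouter(
    tools={"search_docs": search_docs, "delete_record": delete_record},
    policy=Policy(allowed_tools={"search_docs"}, guards={}),
)
print(router.dispatch("search_docs", {"query": "quarterly report"}))
print(router.dispatch("delete_record", {"record_id": "42"}))  # denied by policy
```

The point of the pattern is that the model never invokes a tool directly; every call passes through a single choke point where policy enforcement and logging live.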
Attendees will walk away with:
- A framework to assess agentic AI security posture in your environment
- Examples of attack chains involving AI agents, cloud APIs, and dynamic plugin execution
- Architectural patterns to deploy secure-by-design agent frameworks in enterprise settings
- Recommendations for SOC teams on how to detect and respond to rogue agent behavior (see the detection sketch after this list)
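As a flavor of what SOC-side detection might look like, here is a hedged sketch that scans structured agent logs for one simple signal: repeated denied tool calls. The log shape and threshold are assumptions carried over from the router sketch above, not a prescribed standard.

```python
from collections import Counter

# Assumed log shape, matching the router sketch above (hypothetical format).
SAMPLE_LOGS = [
    {"ts": "2025-01-01T10:00:00", "tool": "search_docs", "decision": "allow"},
    {"ts": "2025-01-01T10:00:01", "tool": "delete_record", "decision": "deny"},
    {"ts": "2025-01-01T10:00:01", "tool": "delete_record", "decision": "deny"},
    {"ts": "2025-01-01T10:00:02", "tool": "delete_record", "decision": "deny"},
]

DENY_BURST_THRESHOLD = 3  # illustrative value; tune per environment

def flag_rogue_behavior(events: list[dict]) -> list[str]:
    """Return human-readable alerts for suspicious agent activity."""
    alerts = []
    denies = Counter(e["tool"] for e in events if e["decision"] == "deny")
    for tool, count in denies.items():
        if count >= DENY_BURST_THRESHOLD:
            alerts.append(
                f"agent retried denied tool '{tool}' {count} times: "
                "possible prompt injection or misalignment"
            )
    return alerts

for alert in flag_rogue_behavior(SAMPLE_LOGS):
    print("ALERT:", alert)
```

A real deployment would stream these events into existing SIEM tooling; the takeaway is that agent actions only become detectable if they are logged at a choke point in the first place.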
This session is designed for CISOs, security architects, red teams, and AI product engineers who are exploring or deploying autonomous AI systems. If your LLM can act, it can be exploited; this talk will show you how to defend against that future.
Speaker

Charit Upadhyay
Senior Site Reliability Engineer - Oracle
Charit Upadhyay is a Senior Site Reliability Engineer at Oracle, specializing in building scalable, secure, and high-performance cloud infrastructures. With extensive experience across Kubernetes, Terraform, observability, and security operations, he has led initiatives integrating AI into DevOps and cloud security workflows. Charit’s work focuses on applying emerging AI technologies to enhance operational efficiency, mitigate risks, and strengthen threat detection in complex systems. He is an active contributor to industry conferences, a reviewer for multiple technical committees, and a strong advocate for practical, real-world applications of AI in security and reliability engineering.