Prompt Defense "A Multi-Layered Approach"
About This Session
Large Language Models (LLMs) are reshaping how we build applications—but with great power comes great vulnerability. Prompt injection attacks exploit the very thing that makes LLMs so useful: their ability to follow natural language instructions. The result? Malicious prompts that can hijack model behavior, often in subtle and dangerous ways.
While prompt injection is now widely recognized, the defenses being deployed across the industry often fall short. Why? Because what works in one context—one model, one use case—can completely fail in another. In this talk, we’ll go beyond just classifying attack types to focus on what really matters: how to build prompt defenses that actually work.
We’ll dig into practical, layered defense strategies—like prompt hardening, input/output validation, and system prompt design—while highlighting why secure prompting must be tailored to your model architecture, application flow, and risk surface. From SLMs to multi-modal inputs, we’ll show how “one prompt to rule them all” just doesn’t exist.
You’ll also get an overview of emerging tools for stress-testing and validating your prompt security, helping you move from reactive patching to proactive defense. If you're building with LLMs, it's time to think beyond generic guardrails and start securing prompts like it actually matters—because it does.
While prompt injection is now widely recognized, the defenses being deployed across the industry often fall short. Why? Because what works in one context—one model, one use case—can completely fail in another. In this talk, we’ll go beyond just classifying attack types to focus on what really matters: how to build prompt defenses that actually work.
We’ll dig into practical, layered defense strategies—like prompt hardening, input/output validation, and system prompt design—while highlighting why secure prompting must be tailored to your model architecture, application flow, and risk surface. From SLMs to multi-modal inputs, we’ll show how “one prompt to rule them all” just doesn’t exist.
You’ll also get an overview of emerging tools for stress-testing and validating your prompt security, helping you move from reactive patching to proactive defense. If you're building with LLMs, it's time to think beyond generic guardrails and start securing prompts like it actually matters—because it does.
Speakers

Sharon Augustus
Lead Product Security Engineer - Salesforce
Sharon Augustus is a Lead Product Security Engineer at Salesforce, with a current emphasis on Large Language Models (LLM), Generative AI and Agentic systems. She previously worked as security consultant where she conducted penetration testing, threat modeling, and vulnerability analysis for client applications, also guiding them on secure software development methodologies.

Jason Ross
Product Security Principal - Salesforce
Jason Ross is a passionate cybersecurity expert with a diverse skill set in generative AI, Penetration Testing, Cloud Security, and OSINT. As a product security principal at Salesforce, Jason performs security testing and exploit development with a specific focus on generative AI, Large Language Models, and Agentic systems.
Jason is a frequent speaker at industry conferences, and is active in the security community participating as a core member of the OWASP GenAI Project and serving as a DEF CON NFO goon.
Jason is a frequent speaker at industry conferences, and is active in the security community participating as a core member of the OWASP GenAI Project and serving as a DEF CON NFO goon.