The Developers Playbook for Large Language Model Security


As Gen AI technologies rapidly advance, the potential risks and vulnerabilities associated with Large Language Models (LLMs) become increasingly significant. This talk, based on insights from ""The Developer's Playbook for Large Language Model Security,"" published by O'Reilly Media, provides a comprehensive framework for securing LLM applications. Attendees will gain a deep understanding of common vulnerabilities, such as prompt injection, training data poisoning, model theft, and overreliance on LLM outputs.

The session will explore real-world case studies and actionable best practices, illustrating how LLM applications can be safeguarded against these threats. Through examples of past security incidents, both from real-world implementations and speculative scenarios from popular culture, participants will see the potential consequences of unaddressed vulnerabilities. The talk will also cover the implementation of the RAISE framework, which stands for Responsible AI Security Engineering, designed to provide a step-by-step approach to building secure and resilient AI systems.

Attendees will learn about zero trust architectures, supply chain security, and continuous monitoring practices essential for maintaining the integrity of LLM applications. The session will highlight the importance of ethical considerations in AI development, ensuring that technological advancements benefit society while minimizing risks. By the end of this talk, developers and security professionals will be equipped with the knowledge and tools needed to build, deploy, and maintain secure LLM applications, paving the way for a safer AI-driven future.

Session Outline:

Lesson 1: Chatbots Breaking Bad
Familiarize yourself with the security landscape of chatbots, focusing on common vulnerabilities such as prompt injection and insecure output handling. Real-world examples and case studies will illustrate the risks and potential impacts on LLM applications.

Lesson 2: Prompt Injection
Dive deep into the concept of prompt injection, a critical vulnerability in LLM applications. Learn how attackers craft inputs to manipulate LLMs and discover effective strategies to prevent such attacks through robust input validation and handling techniques.

Lesson 3: Can Your LLM Know Too Much?
Explore the risks of sensitive data disclosure, learning best practices for data handling and privacy. This lesson will provide tools and methodologies to ensure your LLM applications do not inadvertently expose sensitive or proprietary information.

Lesson 4: Do Language Models Dream of Electric Sheep?
Address the challenge of LLM hallucinations by understanding their causes and impacts. Techniques for reducing and managing hallucinations will be discussed, with real-world examples illustrating effective mitigation strategies.

Lesson 5: Trust No One
Implement zero trust architectures for LLM applications by understanding guardrails and other defensive measures. Real-world examples of successful zero trust implementations will demonstrate how to build robust defenses for your applications.

Lesson 6: Don’t Lose Your Wallet
Learn how to protect against Denial of Service (DoS) and Denial of Wallet (DoW) attacks. This lesson will cover effective defenses and monitoring techniques, supported by case studies of high-profile security incidents.

Lesson 7: Don’t Be the Weakest Link


Steve Wilson is a leader and innovator in AI, cybersecurity, and cloud computing, with more than 20 years of experience. He is the founder and project leader at the Open Web Application Security Project (OWASP) Foundation, where he has assembled a team of more than 1,000 experts to create the leading comprehensive reference for Generative AI security called the “Top 10 List for Large Language Model Applications.” The list educates developers, designers, architects, managers, and organizations about the critical security vulnerabilities and risks when deploying and managing applications built using LLM technology.

Wilson is the author of The Developer’s Playbook for Large Language Model Security from O'Reilly Media.

Wilson is also Chief Product Officer at Exabeam, a global cybersecurity company that, for more than 10 years, has been using AI and Machine learning for cybersecurity threat detection and investigation. He’s previously worked at industry giants such as Citrix and Oracle, and he was an early member of the team that developed Java at Sun Microsystems.

Open Data Science




Open Data Science
One Broadway
Cambridge, MA 02142

Privacy Settings
We use cookies to enhance your experience while using our website. If you are using our Services via a browser you can restrict, block or remove cookies through your web browser settings. We also use content and scripts from third parties that may use tracking technologies. You can selectively provide your consent below to allow such third party embeds. For complete information about the cookies we use, data we collect and how we process them, please check our Privacy Policy
Consent to display content from - Youtube
Consent to display content from - Vimeo
Google Maps
Consent to display content from - Google