Steve Wilson

Chief Product Officer at Exabeam

    Steve Wilson is a leader and innovator in AI, cybersecurity, and cloud computing, with more than 20 years of experience. He is the founder and project leader of the OWASP Top 10 for Large Language Model Applications project at the Open Web Application Security Project (OWASP) Foundation, where he has assembled a team of more than 1,000 experts to create the leading comprehensive reference for generative AI security. The list educates developers, designers, architects, managers, and organizations about the critical security vulnerabilities and risks of deploying and managing applications built on LLM technology. Wilson is the author of The Developer’s Playbook for Large Language Model Security from O'Reilly Media. He is also Chief Product Officer at Exabeam, a global cybersecurity company that has been using AI and machine learning for cybersecurity threat detection and investigation for more than 10 years. He previously worked at industry giants such as Citrix and Oracle, and he was an early member of the team that developed Java at Sun Microsystems.

    All Sessions by Steve Wilson

    Tutorials/Workshops West 07/23/2024

    The Developer's Playbook for Large Language Model Security

    LLMs | All Levels

    As generative AI technologies rapidly advance, the risks and vulnerabilities associated with Large Language Models (LLMs) become increasingly significant. This talk, based on insights from "The Developer's Playbook for Large Language Model Security," published by O'Reilly Media, provides a comprehensive framework for securing LLM applications. Attendees will gain a deep understanding of common vulnerabilities such as prompt injection, training data poisoning, model theft, and overreliance on LLM outputs.

    The session will explore real-world case studies and actionable best practices, illustrating how LLM applications can be safeguarded against these threats. Through examples of past security incidents, drawn both from real-world implementations and from speculative scenarios in popular culture, participants will see the potential consequences of unaddressed vulnerabilities. The talk will also cover the RAISE framework (Responsible AI Security Engineering), which provides a step-by-step approach to building secure and resilient AI systems, including zero trust architectures, supply chain security, and the continuous monitoring practices essential for maintaining the integrity of LLM applications.

    The session will highlight the importance of ethical considerations in AI development, ensuring that technological advancements benefit society while minimizing risks. By the end of this talk, developers and security professionals will be equipped with the knowledge and tools needed to build, deploy, and maintain secure LLM applications, paving the way for a safer AI-driven future.
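    To make the zero-trust idea concrete, here is a minimal illustrative sketch (not taken from the book) of a guardrail that treats LLM-bound user input as untrusted and flags common prompt-injection phrases before assembling a prompt. The phrase list and function names are hypothetical; a production defense would layer many more controls.

    ```python
    # Illustrative only: a naive heuristic guard for prompt injection.
    # Phrase list and names are hypothetical, not a real library's API.

    INJECTION_MARKERS = [
        "ignore previous instructions",
        "disregard the system prompt",
        "reveal your system prompt",
    ]

    def flag_prompt_injection(user_input: str) -> bool:
        """Return True if the untrusted input matches a known injection phrase."""
        lowered = user_input.lower()
        return any(marker in lowered for marker in INJECTION_MARKERS)

    def build_prompt(system_prompt: str, user_input: str) -> str:
        """Refuse to assemble a prompt from input that trips the heuristic."""
        if flag_prompt_injection(user_input):
            raise ValueError("possible prompt injection detected")
        # Delimit untrusted content so the model can tell it apart
        # from the application's own instructions.
        return f"{system_prompt}\n\n<user_input>\n{user_input}\n</user_input>"
    ```

    Keyword matching like this is easily bypassed, which is exactly why the talk pairs input handling with architectural controls such as zero trust and continuous monitoring rather than relying on any single filter.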

    Open Data Science
    One Broadway
    Cambridge, MA 02142
