Abstract: Like any other piece of software, machine learning models are vulnerable to attacks from malicious agents. Yet data scientists and ML engineers rarely think about the security of their models.
Models are sensitive assets in their own right: they are representations of their underlying training datasets, and are susceptible to attacks that can compromise the privacy and confidentiality of that data.
Every step in the machine learning lifecycle is exposed to security threats. But there are practical steps you can take to defend against them.
Attend this presentation to:
- Learn about the most common types of attacks targeting the integrity, availability, and confidentiality of machine learning models
- Discover best practices for data scientists and ML engineers to mitigate security risks
- Ask security-related questions of ML experts
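To make the confidentiality risk concrete, here is a toy sketch (not from the talk itself) of a membership inference attack, one of the best-known attacks on training-data privacy. All names and numbers are hypothetical: it simulates an overfit model that returns inflated confidence on examples it was trained on, which an attacker exploits with a simple confidence threshold.

```python
import random

# Hypothetical training set: the attacker wants to learn which
# example IDs were used to train the model.
train_set = set(range(50))

def model_confidence(example_id):
    # Stand-in for a real model's top-class probability. Overfit models
    # often report systematically higher confidence on training members;
    # the gap here is exaggerated for illustration.
    if example_id in train_set:
        return random.uniform(0.85, 0.99)  # member: inflated confidence
    return random.uniform(0.50, 0.80)      # non-member: lower confidence

def infer_membership(example_id, threshold=0.82):
    # The attacker guesses "member" whenever confidence exceeds
    # a threshold -- no access to the training data is needed,
    # only query access to the model.
    return model_confidence(example_id) > threshold

guesses = [infer_membership(i) for i in range(100)]
truth = [i in train_set for i in range(100)]
accuracy = sum(g == t for g, t in zip(guesses, truth)) / 100
print(f"attack accuracy: {accuracy:.2f}")
```

Because the simulated confidence gap is large, the thresholding attack recovers membership perfectly here; against real models the signal is noisier, but the same leakage mechanism applies.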
Bio: Hari Bhaskar is an engineering leader with hands-on experience designing and developing the AI platform at OCI. He is a researcher with a PhD in big data architectures and machine learning. His expertise and interests include model lifecycle management, MLOps, ML security, and bias assessment. He has published 25+ papers in leading IEEE and Springer journals and has presented at international conferences on topics related to AI and machine learning. He is passionate about model security, a nascent area of research where threat vectors are emerging in the form of sophisticated, crafted attacks that mine models for information about their underlying datasets.