
Abstract: Just like any other piece of software, machine learning models are vulnerable to attacks from malicious agents. Yet data scientists and ML engineers rarely think about the security of their models.
Because models are representations of their underlying training datasets, they are susceptible to attacks that can compromise the privacy and confidentiality of that data.
Every stage of the machine learning lifecycle is exposed to security threats, but there are concrete measures you can take to mitigate them.
Attend this presentation to:
- Learn about the most common types of attacks targeting the integrity, availability, and confidentiality of machine learning models
- Discover best practices for data scientists and ML engineers to mitigate security risks
- Ask security-related questions of ML experts
Bio: Jean-René Gauthier is the product architect behind the Oracle Cloud Infrastructure AI platform. Previously, at DataScience.com, he designed the model management features and roadmap for the DataScience.com platform and managed a team of data experts who developed algorithms and analytics models to solve customers’ unique business problems. He was also responsible for educating clients on these algorithms and models, ensuring they were incorporated into the business to add maximum value. Prior to his three years at DataScience.com, Jean-René was a data scientist at AuriQ Systems, where he focused on online marketing analytics and data engineering, often involving high-speed processing of massive data sets. He holds a PhD in astrophysics from the University of Chicago and was a Millikan fellow at the California Institute of Technology.

Jean-René Gauthier, PhD
AI Platform Architect | Oracle
