Practical Adversarial Learning: How to Evaluate, Test, and Build Better Models

Abstract: 

Machine learning has rapidly evolved into an industry-wide toolkit for solving a variety of automated tasks by extracting patterns from data. However, many pitfalls remain that can leave a learned model vulnerable to both mistakes and malfeasance, which adversaries can exploit to craft attacks. While the craft of training production-grade models from large datasets has largely been solved, there is still little consistency in how we validate the quality of these models and check them for vulnerabilities. In this training, we will introduce you to techniques for constructing adversarial examples against a model, tools that can find potential vulnerabilities in a model, and training procedures that produce more robust models.

Session Outline:

Lesson 1: Model Analysis

Familiarize yourself with the techniques and toolkits that can be used to validate models. In this lesson, you will use state-of-the-art model validation tools to find both benign and adversarial vulnerabilities within a model.
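As a taste of what a benign robustness check looks like, here is a minimal sketch (a toy linear classifier of our own invention, not one of the workshop's validation tools) that measures how often small random input perturbations flip a model's predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained binary classifier: a fixed linear model on 2D inputs.
w, b = np.array([1.5, -2.0]), 0.1

def predict(x):
    """Return hard labels (0/1) for a batch of inputs."""
    return (x @ w + b > 0).astype(int)

# Hypothetical validation data we want to probe for fragile predictions.
x_val = rng.normal(size=(500, 2))
y_clean = predict(x_val)

# Benign robustness check: how often does a small random perturbation
# (sigma = 0.05 here) flip the model's prediction?
noise = rng.normal(scale=0.05, size=x_val.shape)
y_noisy = predict(x_val + noise)
flip_rate = np.mean(y_clean != y_noisy)
print(f"Prediction flip rate under small noise: {flip_rate:.1%}")
```

A high flip rate on such a check signals inputs near the decision boundary, which is exactly where adversarial vulnerabilities tend to hide.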

Lesson 2: Constructing Adversarial Examples

Learn how to use common techniques for constructing adversarial examples that expose your model's vulnerabilities to potential adversaries. We will cover different attack scenarios as well as a suite of methods for producing attacks.
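To make the idea concrete, below is a sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attack-construction techniques, applied to a toy numpy logistic-regression model (the weights and inputs are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "victim" model with fixed, made-up weights.
w, b = np.array([2.0, -1.0, 0.5]), -0.2

def loss_grad_wrt_input(x, y):
    """Gradient of the binary cross-entropy loss with respect to the input x."""
    p = sigmoid(x @ w + b)
    return (p - y) * w  # for logistic regression, d(loss)/dx = (p - y) * w

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: step by eps in the sign of the input gradient."""
    return x + eps * np.sign(loss_grad_wrt_input(x, y))

x = np.array([0.3, 0.8, -0.5])
y = 1  # true label
print("clean score:", sigmoid(x @ w + b))

x_adv = fgsm(x, y, eps=0.25)
print("adversarial score:", sigmoid(x_adv @ w + b))
```

The perturbed input pushes the model's score further away from the true label; a larger eps makes the attack stronger but also easier to notice.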

Lesson 3: Adversarially Resilient Models and Detecting Attacks

Now that we’ve seen how adversarial examples can be crafted, we’ll learn how to build models that are resilient to these attacks and how to monitor for possible attacks in production. With this final lesson, you’ll have the tools needed to build and protect models and, combined with the previous lessons, you can be confident in your model.
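For a flavor of one defense, the sketch below shows adversarial training on a toy numpy logistic regression with synthetic data (our own illustration, using an FGSM-style inner attack): at every step, adversarial examples are regenerated against the current model and the model is trained on them.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic binary-classification data (hypothetical stand-in for a real dataset).
X = rng.normal(size=(1000, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

w, b = np.zeros(2), 0.0
eps, lr = 0.1, 0.5

for _ in range(200):
    # Craft FGSM-style perturbations against the current model:
    # the loss gradient w.r.t. the input is (p - y) * w.
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign((p - y)[:, None] * w)

    # Standard gradient step, but on the adversarially perturbed batch.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc_clean:.1%}")
```

In practice the same loop structure carries over to deep models, with the inner attack and the monitoring around it being where most of the engineering effort goes.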

Background Knowledge:

* Participants should bring a laptop
* We'll be using Python 3.9+
* Proficiency with numpy/pandas is strongly encouraged

Bio: 

William Fu-Hinthorn is an ML engineer at Robust Intelligence. He is responsible for RI’s AI Security services, which eliminate security risks posed by the adoption of third-party models and tooling; he also helps develop models that are robust to adversarial attacks and techniques to detect and eliminate adversarial threats for models in production. William leads the unstructured data team, which develops AI Integrity solutions for NLP, CV, and other modalities. Prior to joining Robust Intelligence, he was an R&D engineer in Microsoft’s Cognitive Services Research Group, where he worked on multilingual conversation transcription, contextual Q&A, dialog summarization, sentiment analysis, dialog modeling, machine translation, intelligent agents, distributed machine learning, and related tasks.
