Abstract: There is growing recognition that machine learning exposes new security issues in software systems. In this tutorial, we first articulate a comprehensive threat model for machine learning, then present an attack against model prediction integrity.
Machine learning models have been shown to be vulnerable to adversarial examples: subtly modified malicious inputs crafted to compromise the integrity of their outputs. Furthermore, adversarial examples that affect one model often affect another, even if the two models have different architectures, so long as both were trained to perform the same task. An attacker may therefore conduct an attack with very little information about the victim by training their own substitute model, crafting adversarial examples against it, and transferring them to the victim model. The attacker need not even collect a training set to mount the attack: we demonstrate how adversaries may use the victim model as an oracle to label a synthetic training set for the substitute. We conclude this first part of the tutorial by formally showing that there are (possibly unavoidable) tensions between model complexity, accuracy, and resilience that must be calibrated for the environments in which these models will be used.
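To make the substitute attack concrete, here is a minimal sketch of its three steps: query the victim as a labelling oracle, train a substitute on the oracle's labels, and transfer adversarial examples crafted with the fast gradient sign method (FGSM). It is illustrative rather than the exact procedure presented in the tutorial (which, for instance, grows the synthetic set with Jacobian-based dataset augmentation); the victim and substitute architectures, the random synthetic inputs, and all hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in for the remote victim: in a real black-box attack
# the attacker can only query it for labels, never read weights or gradients.
victim = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

def oracle_label(model, x):
    """Query the victim for labels only (no gradient access)."""
    with torch.no_grad():
        return model(x).argmax(dim=1)

# Step 1: label a synthetic dataset by querying the oracle.
x_synth = torch.rand(256, 1, 28, 28)   # attacker-generated inputs
y_synth = oracle_label(victim, x_synth)

# Step 2: train a substitute model on the oracle-labelled data.
substitute = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(substitute(x_synth), y_synth).backward()
    opt.step()

# Step 3: craft adversarial examples against the substitute with FGSM,
# then transfer them to the victim.
def fgsm(model, x, y, eps):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x_adv = fgsm(substitute, x_synth, y_synth, eps=0.25)
transfer_rate = (oracle_label(victim, x_adv) != y_synth).float().mean().item()
print(f"fraction of adversarial examples that fool the victim: {transfer_rate:.2f}")
```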
Outline:
- An introduction to machine learning
- A taxonomy of threat models for security in machine learning
- Attacks using adversarial examples against vision systems, malware detectors, and reinforcement learning agents
- Black-box attacks against machine learning
- Adversarial example transferability
- Defending machine learning with adversarial training and defensive distillation (see the training-loop sketch after this list)
- Open problems in defenses, such as gradient masking
- A short tutorial on CleverHans, an open-source library for adversarial machine learning (see the invocation sketch after this list)
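The adversarial training defense listed above can be sketched in a few lines: augment each batch with adversarial examples crafted against the current model, and penalize mistakes on both the clean and adversarial halves. This is a minimal sketch of the common formulation, not the tutorial's exact recipe; the placeholder model, random data, 50/50 loss weighting, and eps value are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder classifier and data; swap in a real model and data loader.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.rand(64, 1, 28, 28)
y = torch.randint(0, 10, (64,))

def fgsm(model, x, y, eps):
    """Craft FGSM perturbations of x against the current model parameters."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

for step in range(100):
    x_adv = fgsm(model, x, y, eps=0.25)  # attack the model as it trains
    opt.zero_grad()                      # discard grads accumulated by fgsm
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
```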
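As a preview of the CleverHans item, crafting an FGSM batch reduces to a single call. The snippet below follows the PyTorch interface of recent CleverHans releases (4.x); the API has changed across versions, so verify it against the installed release. The model and inputs are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

# Placeholder classifier and batch; substitute a trained model and real data.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
x = torch.rand(8, 1, 28, 28)

# eps bounds the L-infinity norm of the perturbation added to each input.
x_adv = fast_gradient_method(model, x, eps=0.3, norm=np.inf)
print((x_adv - x).abs().max().item())  # at most eps
```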
Objectives:
- To explain the fundamentals of security in machine learning
- To bring the audience up to date with state-of-the-art attack techniques
- To make the audience aware of the open problems in defense strategies and, as a consequence, of the risks associated with deploying machine learning in security-sensitive settings
- To prepare the audience to make original contributions in this area
Bio: Nicolas Papernot is a PhD student in Computer Science and Engineering working with Dr. Patrick McDaniel at the Pennsylvania State University. His research interests lie at the intersection of computer security, privacy and machine learning. He is supported by a Google PhD Fellowship in Security. He received a best paper award at ICLR 2017. Nicolas is the co-author of CleverHans, an open-source library for benchmarking the vulnerability of machine learning models. In 2016, he received his M.S. in Computer Science and Engineering from the Pennsylvania State University and his M.S. in Engineering Sciences from the Ecole Centrale de Lyon.