The Robustness Problem


Despite impressive performance on many benchmarks, state-of-the-art machine learning algorithms have been shown to be extremely brittle in the presence of distribution shift. In this talk we will survey several recent works in the robustness literature, discussing the known causes of this brittleness and the best methods for mitigating the problem. We will focus on robustness in the image domain, where models have been shown to easily latch onto spurious correlations in the data. We will also discuss how the popular notion of adversarial examples relates to the problem of distribution shift.
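As a taste of the adversarial-examples topic the talk touches on, here is a minimal sketch of a Fast Gradient Sign Method (FGSM)-style perturbation for a toy linear classifier. This is an illustrative assumption, not material from the talk: the linear score function, the weights, and the epsilon value are all invented for the example.

```python
import numpy as np

def fgsm_perturb(x, w, eps):
    """One FGSM-style step against a linear score s(x) = w @ x.

    The gradient of the true-class score with respect to x is just w,
    so moving each coordinate of x by eps against the sign of w
    maximally decreases the score under an L-infinity budget of eps.
    (Toy setting assumed for illustration; real attacks differentiate
    through a trained network's loss.)
    """
    grad = w                      # d(w @ x)/dx = w for a linear score
    return x - eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=5)            # hypothetical classifier weights
x = rng.normal(size=5)            # hypothetical input
eps = 0.1                         # per-coordinate perturbation budget

x_adv = fgsm_perturb(x, w, eps)

# Each coordinate moves by at most eps, so x_adv is visually
# indistinguishable from x at small eps...
print(np.max(np.abs(x_adv - x)))
# ...yet the true-class score strictly drops (by eps * sum(|w|)).
print(w @ x, w @ x_adv)
```

The point of the toy example is the asymmetry the talk alludes to: an imperceptibly small, worst-case shift of the input distribution can move a model's decision, which is one lens for viewing adversarial examples as an extreme form of distribution shift.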


Justin is a Research Scientist at Google Brain working on statistical machine learning and artificial intelligence. Much of his current focus is on building robust statistical classifiers that generalize well in dynamic, real-world environments. He holds a PhD in Theoretical Mathematics from Rutgers University.
