Abstract: “Machine learning models are inherently vulnerable to adversarial attacks” is a theme often repeated in research papers and articles. Yet there has been no surge of exploited vulnerabilities in practice. Is it too soon? With all eyes on generative AI, the chances of adversarial actors inventing more sophisticated attacks are rising, posing serious threats to machine learning systems. A common way to increase a model’s robustness is to incorporate adversarial training into the machine learning lifecycle: by deliberately introducing adversarial examples during the training step, the model learns to better recognize and resist attacks. However, there is no one-size-fits-all attack or defense. Adversarial attacks are specific to the architecture and usage of the model, each with its own trade-offs and limitations.
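The idea of adversarial training can be sketched in a few lines: at each step, attack the current model to generate perturbed inputs, then take gradient steps on both the clean and the perturbed examples. The sketch below uses a toy logistic-regression model and the fast gradient sign method (FGSM); the data, step sizes, and model are illustrative assumptions, not something from the session itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    # Fast gradient sign method: perturb x along the sign of the
    # loss gradient, which for logistic regression is (p - y) * w.
    p = sigmoid(x @ w + b)
    return x + eps * np.sign((p - y) * w)

def adversarial_train(X, Y, eps=0.2, lr=0.5, epochs=100):
    # Train on clean AND adversarial versions of every example.
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]), 0.0
    for _ in range(epochs):
        for x, y in zip(X, Y):
            x_adv = fgsm(x, y, w, b, eps)   # attack the *current* model
            for xi in (x, x_adv):           # gradient step on both inputs
                p = sigmoid(xi @ w + b)
                w = w - lr * (p - y) * xi
                b = b - lr * (p - y)
    return w, b

# Toy linearly separable data (labels 1 and 0).
X = np.array([[1.0, 1.0], [0.8, 1.2], [-1.0, -1.0], [-1.1, -0.7]])
Y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = adversarial_train(X, Y)

# The hardened model should classify FGSM-perturbed inputs correctly.
robust_ok = all(
    (sigmoid(fgsm(x, y, w, b, 0.2) @ w + b) > 0.5) == (y == 1.0)
    for x, y in zip(X, Y)
)
print(robust_ok)
```

The key detail is that the adversarial examples are regenerated against the current weights every step, so the model trains against a moving attacker rather than a fixed perturbed dataset.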
This session will focus on security threats to machine learning models, including a demonstration of the similarities and differences between adversarial attacks in the domains of computer vision and natural language processing (NLP), with examples from open source projects like the Adversarial Robustness Toolbox (ART) and TextAttack. Join the session to discuss applying adversarial research to real-world systems and learn how to be proactive about machine learning security.
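One difference between the two domains is that vision attacks perturb continuous pixel values, while NLP attacks must search a discrete space, e.g. by swapping words for synonyms. The toy sketch below mimics a greedy word-swap attack against a bag-of-words sentiment scorer; the lexicon and synonym table are made-up illustrative data, whereas real attacks of this style (such as those in TextAttack) use trained models and embedding-based synonym sets.

```python
# Toy bag-of-words sentiment scorer: positive if the total score > 0.
LEXICON = {"great": 2.0, "good": 1.0, "fine": 0.2, "bad": -1.0, "terrible": -2.0}

# Hypothetical synonym table an attacker might search over.
SYNONYMS = {"great": ["good", "fine"], "good": ["fine"]}

def score(tokens):
    return sum(LEXICON.get(t, 0.0) for t in tokens)

def greedy_word_swap(tokens):
    """Greedily replace words with synonyms that most reduce the
    sentiment score, a simplified discrete-search attack."""
    tokens = list(tokens)
    for i, t in enumerate(tokens):
        candidates = SYNONYMS.get(t, [])
        if candidates:
            best = min(candidates,
                       key=lambda c: score(tokens[:i] + [c] + tokens[i + 1:]))
            if score(tokens[:i] + [best] + tokens[i + 1:]) < score(tokens):
                tokens[i] = best
    return tokens

orig = ["the", "movie", "was", "great"]
adv = greedy_word_swap(orig)
print(score(orig), score(adv))  # the sentiment score drops after the swap
```

The search is discrete and constrained to plausible substitutions, which is why NLP attacks typically rely on heuristic search strategies rather than the gradient-based perturbations common in computer vision.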
Bio: Teodora is an open source software engineer at VMware AI Labs. During her first couple of years at VMware, as part of the Open Source Program Office, she was an active contributor to and maintainer of The Update Framework (TUF), a framework for securing software update systems. Currently, she invests her time in open source projects related to machine learning security.