Security First: Create a Robust Machine Learning Model


“Machine learning models are inherently vulnerable to adversarial attacks” is a theme often repeated in research papers and articles. Yet exploited vulnerabilities remain rare in practice. Is it too soon? With all eyes on generative AI, adversaries are more likely to invent increasingly sophisticated attacks, posing serious threats to machine learning systems. A common way to increase a model's robustness is to incorporate adversarial training into the machine learning lifecycle. By deliberately introducing adversarial examples during the training step, the model can learn to better recognize and defend against attacks. However, there is no one-size-fits-all attack or defense. Adversarial attacks are specific to the architecture and usage of the model, and each defense carries its own trade-offs and limitations.
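The idea of adversarial training described above can be sketched in a few lines. The toy example below is an illustrative assumption, not material from the session: it uses a plain NumPy logistic-regression model and a Fast Gradient Sign Method (FGSM)-style perturbation, with every function name invented for this sketch. Production work would typically rely on a library such as the Adversarial Robustness Toolbox instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_grad(w, x, y):
    # Gradient of the binary cross-entropy loss w.r.t. the INPUT x,
    # for a logistic model p = sigmoid(w @ x).
    return (sigmoid(w @ x) - y) * w

def fgsm(w, x, y, eps=0.2):
    # FGSM: step each feature by eps in the direction that increases the loss.
    return x + eps * np.sign(input_grad(w, x, y))

# Synthetic 2-D data: label is 1 when x0 + x1 > 0.
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def train(adversarial=False, epochs=50, lr=0.1, eps=0.2):
    w = np.zeros(2)
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            if adversarial:
                # Adversarial training: learn on the perturbed example
                # instead of the clean one.
                x_i = fgsm(w, x_i, y_i, eps)
            w -= lr * (sigmoid(w @ x_i) - y_i) * x_i  # SGD step on the weights
    return w

def adv_accuracy(w, eps=0.2):
    # Accuracy when every test point is attacked with FGSM.
    X_adv = np.array([fgsm(w, x_i, y_i, eps) for x_i, y_i in zip(X, y)])
    preds = (sigmoid(X_adv @ w) > 0.5).astype(float)
    return (preds == y).mean()

w_plain = train(adversarial=False)
w_robust = train(adversarial=True)
print(f"accuracy under FGSM, standard training:    {adv_accuracy(w_plain):.2f}")
print(f"accuracy under FGSM, adversarial training: {adv_accuracy(w_robust):.2f}")
```

The same structure carries over to deep models: compute the loss gradient with respect to the input, perturb within a small budget, and fold the perturbed examples back into training. The trade-off the abstract mentions shows up even here: a larger `eps` hardens the model against stronger perturbations but can hurt accuracy on clean data.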

This session will focus on security threats to machine learning models, including a demonstration of the similarities and differences between adversarial attacks in computer vision and natural language processing (NLP), with examples from open source projects such as the Adversarial Robustness Toolbox and TextAttack. Join the session to discuss applying adversarial research to real-world systems and learn how to be proactive about machine learning security.


Teodora is an open source software engineer at VMware AI Labs. During her first couple of years at VMware, as part of the Open Source Program Office, she was an active contributor to and maintainer of The Update Framework (TUF), a framework for securing software update systems. Currently, she invests her time in open source projects related to machine learning security.

Open Data Science
One Broadway
Cambridge, MA 02142
