Abstract: We can easily trick a classifier into making embarrassingly false predictions. When this is done systematically and intentionally, it is called an adversarial attack, and when the attack works by perturbing inputs at inference time, it is specifically an evasion attack. In this session, we will examine an evasion use case and elaborate on other forms of attack. Then, we will explain two defense methods: spatial smoothing preprocessing and adversarial training. Lastly, we will demonstrate one robustness evaluation method and one certification method to verify that a model can withstand such attacks.
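To make the idea concrete, here is a minimal sketch of an evasion attack using the fast gradient sign method (FGSM) in PyTorch. The toy model, random input batch, and epsilon value are illustrative placeholders, not the materials used in the session.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: any differentiable classifier and a labeled input batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(8, 1, 28, 28)      # batch of images scaled to [0, 1]
y = torch.randint(0, 10, (8,))    # their true labels
eps = 0.1                         # L-infinity perturbation budget

# FGSM evasion attack: nudge every pixel by eps in the direction
# that increases the classifier's loss on the true label.
x_adv = x.clone().detach().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()
x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Fraction of predictions flipped by the attack (typically large for a
# real trained model, even though x_adv looks almost identical to x).
print((model(x).argmax(1) != model(x_adv).argmax(1)).float().mean())
```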
- How ML classifiers can be tricked
- In what ways ML models are vulnerable
- How to defend ML models from evasion attacks
- How to certify ML model robustness
Intended for ML Engineers, Data Scientists, MLOps Engineers, and SecOps Engineers, but any reasonably experienced programmer with an interest in AI/ML can attend.
Bio: Serg Masís is a Data Scientist in agriculture with a lengthy background in entrepreneurship and web/app development, and the author of the bestselling book "Interpretable Machine Learning with Python". He is passionate about machine learning interpretability, responsible AI, behavioral economics, and causal inference.