
Abstract: The ubiquity of intelligent systems underscores the paramount importance of ensuring their trustworthiness. Traditional machine learning approaches often assume that training and test data follow similar distributions, neglecting the possibility that adversaries may manipulate either distribution or that natural distribution shifts may occur, which can lead to severe trustworthiness issues in machine learning. Our previous research has demonstrated that motivated adversaries can circumvent anomaly detectors or other machine learning models at test time through evasion attacks, or inject malicious instances into training data to induce errors through poisoning attacks. In this talk, I will provide a succinct overview of our research on trustworthy machine learning, including robustness, privacy, generalization, and their underlying interconnections.
Session Outline:
Lesson 1: Vulnerabilities of machine learning systems
Lesson 2: Robustness, privacy, and generalization for machine learning models
Lesson 3: The underlying connections between different aspects of trustworthy machine learning
Bio: Coming soon

Bo Li, PhD
Assistant Professor | University of Illinois at Urbana–Champaign (UIUC)
