Abstract: This talk will give an overview of our recent work on reasoning about the behavior of learned classifiers. I will cover techniques to inject domain knowledge into machine learning models, for example by enforcing monotonicity constraints or logical structure on neural network outputs. I will also discuss explainable AI techniques, specifically efficient SHAP explanations and a new XAI notion called probabilistic sufficient explanations, which formulates explaining the classification of an instance as choosing the 'simplest' subset of features such that observing those features alone is probabilistically 'sufficient' to explain the classification.
Bio: Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His work has been recognized with best paper awards from key artificial intelligence venues such as UAI, ILP, KR, and AAAI (honorable mention). He also serves as Associate Editor for the Journal of Artificial Intelligence Research (JAIR). Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.