Abstract: Machine learning models are being deployed extensively in many critical areas to assist humans in making important decisions. However, there is no guarantee that a deployed model will always perform as its developers intended. Understanding the correctness of a model is thus crucial to preventing failures that could have a significant detrimental impact in critical application areas. In this talk, I will discuss the challenges of ensuring the correctness of deployed machine learning models and introduce recent work on how to efficiently test a machine learning model using only a small amount of labelled test data.
Bio: Dr. Huong Ha is a Lecturer in the Artificial Intelligence Discipline, School of Computing Technologies, RMIT University, Melbourne, Australia. Her research spans Artificial Intelligence and Software Engineering, particularly trustworthy machine learning, automated machine learning, and data-driven software engineering. She regularly publishes in leading international research venues in these areas, including NeurIPS, ICML, AAAI, AISTATS, ICSE, and ICSME. In addition to her current role in academia, Huong has prior industry experience as a data scientist and a product development engineer.