
Abstract: For many AI applications, a prediction is not enough. End-users need to understand the “why” behind a prediction to make decisions and take next steps. Explainable AI (XAI) techniques today can provide some insight into what your model has learned, but recent research highlights the need for interactivity in XAI tools. End-users need to interact with a system and test “what if” scenarios in order to understand it and build trust in it. In this talk, I’ll discuss what human-factors research tells us about human decision making and how users build (or lose) trust in systems. I’ll also present interaction design techniques that can be applied to the design of XAI services.
Background Knowledge:
Attendees should be familiar with general AI concepts and XAI techniques, but the talk will largely be appropriate for beginners in the XAI space.
Bio: Meg is currently the Lead UXR for Intrinsic.ai, where she focuses her work on making it easier for engineers to adopt and automate with industrial robotics. She is a “Xoogler”, and prior to Intrinsic she worked on the Explainable AI services on Google Cloud. Meg has had a varied career working for start-ups and large corporations alike, and she has published on topics such as user research, information visualization, educational-technology design, voice user interface (VUI) design, explainable AI (XAI), and human-robot interaction (HRI). Meg is also a proud alumna of Virginia Tech, where she received her Ph.D. in Human-Computer Interaction.

Meg Kurdziolek, PhD
Staff UX Researcher | Intrinsic.ai
