Abstract: Now that learning methods such as deep networks can, given enough data, produce arbitrarily accurate predictions, the challenge is explainability: can they tell us how they made the predictions they did, in a way a human can understand? We here treat explainability as equivalent to actionability: if a human observer understands *how* a learner makes the predictions that it does, the observer should be able to effect positive change in the system being modeled. In the first half of this talk I will present our theoretical work on realizing explainable AI systems. In the second half I will ground the theory in a specific example: our recent work with NASA earth scientists to understand the impacts of climate change on the Amazon rainforest.
Bio: Josh Bongard is the Veinott Professor of Computer Science at the University of Vermont. He runs the Morphology, Evolution & Cognition Laboratory and is the director of UVM's high-performance computing facility. His lab researches evolutionary robotics and the crowdsourcing of AI. He received a half-million-dollar research award from Barack Obama at a ceremony at the White House in 2011 and appeared in an episode of Morgan Freeman's science program 'Through the Wormhole'. He is the author of the book "How the Body Shapes the Way We Think" and runs the Ludobots online course through reddit.com.