AI Neuroscience: Can We Understand the Neural Networks We Train?

Abstract: 

Deep neural networks now enable machines to learn to solve problems that were previously easy for humans but difficult for computers, like playing Atari games or identifying lions and jaguars in photos. But how do these neural nets actually work? What concepts do they learn en route to their goals? We built and trained the networks, so on the surface these questions might seem trivial to answer. However, network training dynamics, internal representations, and mechanisms of computation turn out to be surprisingly tricky to study and understand, because networks have so many connections — often millions or more — that the resulting computation is fundamentally complex.

This high fundamental complexity enables the models to master their tasks, but we now find that we need something like neuroscience just to understand the AI models we've constructed! As we continue to train more complex networks on larger and larger datasets, the gap between what we can build and what we can understand will only grow wider. This gap both inhibits progress toward more competent AI and bodes poorly for a society that will increasingly be run by learned algorithms that are poorly understood. In this talk, we'll look at a collection of research aimed at shrinking this gap, with approaches including interactive model exploration, optimization, and visualization.

Bio: 

Jason Yosinski is a machine learning researcher, founding member of Uber AI Labs, and scientific adviser to Recursion Pharmaceuticals. His work focuses on building more capable and more understandable AI. As scientists and engineers build increasingly powerful AI systems, the abilities of these systems increase faster than our understanding of them, motivating much of his work on AI Neuroscience — an emerging field of study that investigates fundamental properties and behaviors of AI systems. Dr. Yosinski completed his PhD as a NASA Space Technology Research Fellow working at the Cornell Creative Machines Lab, the University of Montreal, Caltech/NASA Jet Propulsion Laboratory, and Google DeepMind. His work on AI has been featured by NPR, Fast Company, the Economist, TEDx, and the BBC. Prior to his academic career, Jason cofounded two web technology companies and started a program in the Los Angeles school district that teaches students algebra via hands-on robotics. In his free time, Jason enjoys cooking, sailing, reading, paragliding, and sometimes pretending he's an artist.