Information Flow and Deep Representation Learning

Abstract: 

Representation learning in neural networks continues to play a fundamental role in advancing our understanding of deep learning algorithms and our ability to extend successful applications. In this session we will explore how information bottleneck analysis of deep learning algorithms sheds light on how these algorithms learn and on the patterns that emerge across layers of learned representations. We conclude with a discussion of how this analysis casts a more practical light on theoretical concepts in deep learning research such as nuisance insensitivity and disentanglement.
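For readers unfamiliar with the information bottleneck view mentioned above, the sketch below (not part of the talk materials) illustrates one common way such an analysis is carried out: estimating the mutual information I(X;T) between the input and each layer's activations T, and I(T;Y) between those activations and the labels, via simple discretization. The toy network, layer names, and bin count are illustrative assumptions, not the speaker's method.

```python
import numpy as np

def discrete_entropy(ids):
    """Shannon entropy (in bits) of a 1-D array of discrete symbol ids."""
    _, counts = np.unique(ids, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def bin_activations(T, n_bins=30):
    """Map each row of continuous activations T to a single discrete id
    by uniform binning, so entropies can be estimated by counting."""
    edges = np.linspace(T.min(), T.max(), n_bins + 1)
    binned = np.digitize(T, edges)
    # Collapse each sample's binned activation vector to one integer id.
    _, ids = np.unique(binned, axis=0, return_inverse=True)
    return ids

def mutual_information(a_ids, b_ids):
    """I(A;B) = H(A) + H(B) - H(A,B) for discrete ids."""
    joint = a_ids.astype(np.int64) * (b_ids.max() + 1) + b_ids
    return (discrete_entropy(a_ids) + discrete_entropy(b_ids)
            - discrete_entropy(joint))

# Toy data and a small random two-layer network (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
Y = (X[:, 0] + X[:, 1] > 0).astype(np.int64)
h1 = np.tanh(X @ rng.normal(size=(12, 8)))
h2 = np.tanh(h1 @ rng.normal(size=(8, 4)))
layers = {"layer1": h1, "layer2": h2}

# Trace each layer in the "information plane": I(X;T) vs. I(T;Y).
x_ids = bin_activations(X)
for name, T in layers.items():
    t_ids = bin_activations(T)
    print(f"{name}: I(X;T) ~ {mutual_information(x_ids, t_ids):.2f} bits, "
          f"I(T;Y) ~ {mutual_information(t_ids, Y):.2f} bits")
```

In the published analyses these per-layer estimates are tracked over the course of training, so that each layer traces a trajectory in the information plane; the snippet above only computes a single snapshot for fixed weights.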

Bio: 

Mike serves as Chief ML Scientist and Head of Machine Learning for SIG, is a member of the UC Berkeley Data Science faculty, and is Director of Phronesis ML Labs. He has led teams of data scientists in the Bay Area as Head of Data Science at Uber ATG, Chief Data Scientist for InterTrust and Takt, Director of Data Science for MetaScale/Sears, and CSO for Galvanize, where he founded the galvanizeU-UNH accredited Master's in Data Science degree and oversaw the company's transformation from a co-working space into a Data Science organization. Mike began his career in academia, serving as a mathematics teaching fellow at Columbia University before teaching at the University of Pittsburgh.
