Probabilistic Deep Learning in TensorFlow: The Why and How

Abstract: 

Bayesian probabilistic techniques allow machine learning practitioners to encode expert knowledge into otherwise-uninformed models and to quantify uncertainty in model outputs. Probabilistic deep learning models take this further by fitting distributions, rather than point estimates, to each of the weights in a neural network, allowing the model's builder to inspect the prediction stability for any given set of input data. Following a slew of recent technical advancements, it's never been easier to apply probabilistic modeling in a deep learning context, and TensorFlow Probability offers full support for probabilistic layers as a first-class citizen in the TensorFlow 2.0 ecosystem. This tutorial will focus on the motivation for probabilistic deep learning and the trade-offs and design decisions relevant to applying it in practice, with applications and examples demonstrated in TensorFlow Probability.
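The core idea in the abstract, distributions over weights rather than point estimates, can be sketched without any deep learning framework. The snippet below is a minimal, hypothetical illustration using NumPy: it assumes a Gaussian posterior over one linear layer's weight and bias (the numbers are made up for illustration), samples weights repeatedly, and reports the spread of the resulting predictions. In TensorFlow Probability this role is played by probabilistic layers such as `tfp.layers.DenseVariational`, which learn the posterior parameters during training rather than fixing them by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) posterior over a single weight and bias.
# A real probabilistic layer would learn these parameters from data.
w_mean, w_std = 2.0, 0.3
b_mean, b_std = 0.5, 0.1

def predict(x, n_samples=1000):
    """Monte Carlo predictive distribution: sample weights, predict, aggregate."""
    w = rng.normal(w_mean, w_std, size=n_samples)
    b = rng.normal(b_mean, b_std, size=n_samples)
    preds = w * x + b  # one prediction per sampled weight/bias pair
    return preds.mean(), preds.std()

mean, std = predict(1.0)
# `std` quantifies how stable the prediction is for this input;
# a wider spread signals lower confidence in the model's output.
```

The same sample-then-aggregate pattern is how predictive uncertainty is typically extracted from a trained probabilistic network: run multiple stochastic forward passes and summarize the resulting prediction distribution.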

Bio: 

Zach Anglin is a lead data scientist in the AI Engineering department at S&P Global, where he focuses on problems in natural language processing and probabilistic machine learning. He's particularly passionate about numerical optimization and the Julia programming language. Zach lives in Charlottesville, Virginia with his wife, Kylie, and their dog, Boolean.
