
Abstract: Bayesian probabilistic techniques let machine learning practitioners encode expert knowledge in otherwise-uninformed models and quantify uncertainty in model outputs. Probabilistic deep learning takes this further by fitting distributions, rather than point estimates, to each weight in a neural network, allowing the practitioner to inspect how stable a prediction is for any given input. Following a slew of recent technical advances, it has never been easier to apply probabilistic modeling in a deep learning context, and TensorFlow Probability supports probabilistic layers as first-class citizens in the TensorFlow 2.0 ecosystem. This tutorial will focus on the motivation for probabilistic deep learning and on the trade-offs and design decisions involved in applying it in practice, with applications and examples demonstrated in TensorFlow Probability.
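As a minimal sketch of the pattern the abstract describes, assuming TensorFlow Probability's Keras-compatible layers (`tfp.layers.DenseFlipout` and `tfp.layers.DistributionLambda`, both part of TFP's public API): the weights carry distributions rather than point values, and the model's output is itself a distribution that can be scored by negative log-likelihood. The architecture and loss below are illustrative, not the tutorial's own example.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A small regression model whose weights are distributions rather than
# point estimates; DenseFlipout samples weights on each forward pass and
# adds the KL-divergence regularization terms to model.losses for us.
model = tf.keras.Sequential([
    tfp.layers.DenseFlipout(16, activation="relu"),
    tfp.layers.DenseFlipout(1),
    # Wrap the final activations in a Normal distribution so each
    # prediction carries its own uncertainty (a fixed noise scale is an
    # illustrative simplification).
    tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1.0)),
])

# The model outputs a distribution, so the natural loss is the negative
# log-likelihood of the observed labels under that distribution.
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
model.compile(optimizer="adam", loss=negloglik)
```

Because the weights are sampled per call, running the trained model several times on the same inputs yields a spread of predictions, which is the prediction stability the abstract refers to.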
Bio: Zach Anglin is a lead data scientist in the AI Engineering department at S&P Global, where he focuses on problems in natural language processing and probabilistic machine learning. He's particularly passionate about numerical optimization and the Julia programming language. Zach lives in Charlottesville, Virginia with his wife, Kylie, and their dog, Boolean.

Zach Anglin
Title: Lead Data Scientist | S&P Global
Category: tutorials-europe19
