
Abstract: After a brief introduction to core TensorFlow concepts, we'll focus on the newly open-sourced TensorFlow Lattice tools, which make your machine-learned models more interpretable without sacrificing accuracy. TF Lattice lets you build prior information about monotonic global trends into your models, such as that closer coffee shops are better, all else being equal. Flexible models like random forests and DNNs can miss such global trends when trained on noisy data, and the resulting errors may only show up when the model is run on examples that differ from the training data (data shift). By learning the global trends you specify, TF Lattice produces models that generalize better and that are easier to explain and debug, because you know what the model is doing. We'll show you how to use TF Lattice's pre-built TF Estimators, and how to use the underlying TF operators to build your own deeper lattice network models or plug and play with other TF models. Suitable for TF newbies and advanced TF users.
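
For a flavor of what imposing such a monotonic trend looks like in code, here is a minimal sketch using the tensorflow_lattice Keras layers rather than the pre-built Estimators covered in the talk; the feature names, keypoints, and lattice sizes below are illustrative assumptions, not part of the talk material.

import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

# Two hypothetical features for a coffee-shop scoring model:
# distance to the shop (closer should never hurt the score) and
# average user rating (higher should never hurt the score).
distance_in = tf.keras.layers.Input(shape=[1], name='distance_km')
rating_in = tf.keras.layers.Input(shape=[1], name='avg_rating')

# Piecewise-linear calibrators learn a 1-D transform of each feature,
# constrained to the global trend we want the model to respect.
calib_distance = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(0.0, 20.0, num=10),
    output_min=0.0, output_max=1.0,
    monotonicity='decreasing')(distance_in)  # farther is never better
calib_rating = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(1.0, 5.0, num=5),
    output_min=0.0, output_max=1.0,
    monotonicity='increasing')(rating_in)    # higher rating is never worse

# A 2-D lattice fuses the calibrated features; 'increasing' here keeps the
# end-to-end model monotonic in each calibrated input.
score = tfl.layers.Lattice(
    lattice_sizes=[2, 2],
    monotonicities=['increasing', 'increasing'],
    output_min=0.0, output_max=1.0)([calib_distance, calib_rating])

model = tf.keras.Model(inputs=[distance_in, rating_in], outputs=score)
model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))

The same calibrator-plus-lattice pattern underlies the pre-built Estimators the talk demonstrates; the constraints guarantee the learned score never improves as a coffee shop gets farther away, no matter how noisy the training data.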
Bio: Maya Gupta leads Google's Glassbox Machine Learning R&D team, which focuses on designing and developing controllable and interpretable machine learning algorithms that address Google product needs. Before joining Google, Gupta was an Associate Professor of Electrical Engineering at the University of Washington from 2003 to 2013. She holds a PhD from Stanford, and a BS in EE and a BA in Economics from Rice. Gupta founded and runs the wooden jigsaw puzzle company Artifact Puzzles.

Maya Gupta, PhD
Title: Glassbox ML R&D Team Lead at Google
Category: west2017trainings
