Abstract: TensorFlow, a general-purpose numerical computation library open-sourced by Google in November 2015, enables users to focus on building the computation graph and deploy the model with little effort on heterogeneous platforms such as mobile devices, hundreds of machines, or thousands of computational devices. TF.Learn is a high-level Python module for distributed machine learning inside TensorFlow. It provides an easy-to-use Scikit-learn-style interface that simplifies the process of creating, configuring, training, evaluating, and experimenting with machine learning models. TF.Learn integrates a wide range of state-of-the-art machine learning algorithms built on top of TensorFlow's low-level APIs for small- to large-scale supervised and unsupervised problems. The module targets both non-specialists who want to apply machine learning through a general-purpose high-level language and researchers who want to implement, benchmark, and compare new methods in a structured environment. Emphasis is put on ease of use, performance, documentation, and API consistency. In this talk, we will introduce the main features of TensorFlow and the wide range of use cases for building distributed machine learning models, both for research and production.
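To illustrate the Scikit-learn-style fit/evaluate/predict workflow the abstract refers to, here is a minimal pure-Python sketch of an estimator following that convention. The class and method bodies are illustrative stand-ins, not actual TF.Learn code; only the interface shape (fit, predict, evaluate) reflects the style being described.

```python
from collections import Counter


class MajorityClassifier:
    """Toy estimator mimicking the Scikit-learn-style interface:
    fit on training data, then predict and evaluate on new data.
    (Illustrative only -- not part of TF.Learn.)"""

    def fit(self, x, y):
        # "Train" by remembering the most common label.
        self.majority_ = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, x):
        # Predict the majority label for every input row.
        return [self.majority_ for _ in x]

    def evaluate(self, x, y):
        # Report accuracy against true labels.
        preds = self.predict(x)
        acc = sum(p == t for p, t in zip(preds, y)) / len(y)
        return {"accuracy": acc}


clf = MajorityClassifier().fit([[0], [1], [2]], [1, 1, 0])
print(clf.evaluate([[3], [4]], [1, 0]))
```

TF.Learn's estimators expose the same create/configure/train/evaluate rhythm, so users familiar with Scikit-learn can transfer their workflow directly while TensorFlow handles graph construction and distributed execution underneath.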
Bio: Yuan Tang is a data scientist at Uptake in Chicago, working on building a scalable data science engine used across multiple verticals. He is a committer on TensorFlow, MXNet, and XGBoost, as well as the author of several other open-source packages, such as ggfortify. He was awarded the Google Open Source Peer Bonus for his contributions to the open-source community.