
Abstract: Deep learning practitioners spend most of their time troubleshooting and debugging. Troubleshooting models is notoriously difficult because the same performance problem can be attributed to many different sources, and performance can be extremely sensitive to small changes in architecture and hyperparameters. In this talk, I will attempt to demystify the troubleshooting process by presenting a decision tree for improving your model's performance.
Bio: Josh is a Research Scientist at OpenAI working at the intersection of machine learning and robotics. His research focuses on applying deep reinforcement learning, generative models, and synthetic data to problems in robotic perception and control.
Additionally, he co-organizes Full Stack Deep Learning, a machine learning training program for engineers to learn about production-ready deep learning.
Josh did his PhD in Computer Science at UC Berkeley, advised by Pieter Abbeel. He has also been a management consultant at McKinsey and an Investment Partner at Dorm Room Fund.

Josh Tobin, PhD
Title: Research Scientist | OpenAI
Category: advanced-w19 | deep-learning-w19 | intermediate-w19 | talks-w19
