
Abstract: Deep reinforcement learning provides a path towards solving many outstanding challenges in robotics. It lets machines learn more like humans do: by trial and error. The main obstacle has been getting enough data for training. Recent advances show that sim-to-real techniques, in which a policy is trained entirely in simulation and then transferred to a real robot, may bridge that gap and enable a new wave of applications. To showcase these techniques, we train a deep neural network to solve a Rubik’s Cube in simulation and then deploy it to a real-world, human-like robotic hand. This shows that reinforcement learning isn’t just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity.
Bio: Peter Welinder is a Research Scientist at OpenAI, where he leads projects on learning-based robotics. His past projects include teaching robots to learn by imitating humans and autonomously manipulating objects with robotic hands. Previously, he was Head of Machine Learning at Dropbox, where he founded and managed applied machine learning and infrastructure teams. Out of grad school, he founded a startup, Anchovi Labs, which was acquired by Dropbox in 2012. Peter holds a PhD in Computation and Neural Systems from Caltech and a degree in Physics from Imperial College London.

Peter Welinder, PhD
Title: Research Scientist | OpenAI
Category: accelerate-ai-w19 | innovation-w19 | talks-w19
