Abstract: While offering state-of-the-art performance across a variety of tasks, deep learning models can be time-consuming to train, hindering the exploration of model architectures and hyperparameter configurations. This bottleneck, however, can be greatly reduced by leveraging the near-linear speedups afforded by multi-GPU training. In this talk, we will explore the different ways in which TensorFlow supports distributing training across a collection of GPUs.
Bio: Neil Tenenholtz is a Technical Lead and Senior Machine Learning Scientist at the MGH & BWH Center for Clinical Data Science, where his responsibilities include training novel deep learning models for clinical diagnosis, developing robust infrastructure for their deployment in clinical settings, and creating tooling to facilitate these processes. Prior to joining the Center, Neil was a Senior Research Scientist at Fitbit, where he leveraged machine learning and modeling techniques to develop new features and algorithms that run both on-device and in the cloud. Neil received his PhD from Harvard University, where he was a recipient of the NSF Graduate Research Fellowship and the Link Foundation Fellowship in Advanced Simulation and Training.
Neil Tenenholtz, PhD
Technical Lead, Sr. Machine Learning Scientist at MGH & BWH Center for Clinical Data Science