Train Your PyTorch Models Faster and to a Higher Accuracy with Composer

Abstract: 

As state-of-the-art deep learning models grow in size, so do training time and cost. These ever-increasing training costs limit adoption of such models by enterprises and professionals who are constrained by cost and compute.
The open-source project Composer (https://github.com/mosaicml/composer) aims to address this challenge with a library for training PyTorch models. Composer provides a Trainer API alongside dozens of training-efficiency algorithms that integrate directly with the trainer. With Composer you can speed up model training and often achieve better generalization, with notable examples including speeding up ResNet-50 training by 7x and GPT-2 training by 2x.
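To give a sense of how the Trainer and the efficiency algorithms fit together, here is a minimal sketch that wraps a torchvision ResNet-50 with Composer and enables a few of its built-in speed-up methods. The synthetic dataset, hyperparameters, and algorithm choices are illustrative assumptions rather than the talk's exact configuration, and argument names may vary slightly across Composer releases.

import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet50

from composer import Trainer
from composer.algorithms import BlurPool, ChannelsLast, LabelSmoothing
from composer.models import ComposerClassifier

# Tiny synthetic dataset so the sketch is self-contained; replace with a real
# dataset (e.g. ImageNet) in practice.
data = TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, 1000, (64,)))
train_dataloader = DataLoader(data, batch_size=16)
eval_dataloader = DataLoader(data, batch_size=16)

# Wrap a plain torch.nn.Module so Composer's trainer can drive the training loop.
model = ComposerClassifier(resnet50(num_classes=1000), num_classes=1000)

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    optimizers=torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9),
    max_duration="90ep",  # train for 90 epochs
    algorithms=[          # efficiency methods applied by the trainer
        BlurPool(),
        ChannelsLast(),
        LabelSmoothing(smoothing=0.1),
    ],
)
trainer.fit()

Swapping speed-up methods in or out is a matter of editing the algorithms list; the trainer applies them at the appropriate points in the training loop without changes to the model code.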

In this talk we will:
- Explore the growth trend of model complexity and size
- Explore how algorithmic efficiency drastically reduces training time and cost
- Learn how to leverage the open-source Composer library to train PyTorch models efficiently
- Dive into concrete examples across Computer Vision (CV) and Natural Language Processing (NLP), applying Composer and evaluating the results in terms of training time and model accuracy

