Abstract: Anyone who is interested in deep learning has probably gotten their hands dirty playing around with TensorFlow, Google's open source deep learning framework. TensorFlow has benefits such as wide-scale adoption, mobile deployment, and support for distributed computing, but it also has a somewhat steep learning curve and is difficult to debug. It also doesn't support variable input lengths and shapes due to its static graph architecture, unless you use external packages. PyTorch is a new deep learning framework that solves many of those problems.
PyTorch is still in beta, but users are rapidly adopting this modular deep learning framework. PyTorch supports tensor computation and dynamic computation graphs, which let you change how the network behaves on the fly, unlike the static graphs used in frameworks such as TensorFlow. PyTorch's modularity makes it easier to debug and inspect the network, and for many users it is more intuitive to learn than TensorFlow.
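To illustrate what "dynamic" means here, below is a minimal sketch (not taken from the talk) of a PyTorch forward pass whose control flow depends on an ordinary runtime value; the `dynamic_forward` function and its parameters are hypothetical names chosen for this example:

```python
import torch

def dynamic_forward(x, n_steps):
    # The computation graph is rebuilt on every call, so the number of
    # iterations can be plain Python data decided at runtime -- something
    # a static graph cannot express without special control-flow ops.
    for _ in range(n_steps):
        x = torch.relu(x * 2 - 1)
    return x

x = torch.tensor([0.5, 1.0], requires_grad=True)
y = dynamic_forward(x, n_steps=3).sum()
y.backward()  # gradients flow through whichever graph was built this call
print(x.grad)
```

Because the graph is defined by running regular Python code, you can step through it with an ordinary debugger, which is part of why PyTorch is often considered easier to debug than a static-graph framework.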
This talk will take an objective look at PyTorch and why it might be the best fit for your deep learning use case, and we'll also look at use cases where you might want to consider using TensorFlow instead.
Bio: Stephanie Kim is a developer advocate at Algorithmia, where she enjoys writing accessible documentation, tutorials, and scripts to help data scientists learn how to productionize their models quickly and painlessly. Stephanie is the founder of Seattle PyLadies and a co-organizer of the Seattle Building Intelligent Applications Meetup. She enjoys machine learning projects, particularly ones where she gets to dive into unstructured text data to discover friction points in the UI or find out what users are thinking with natural language processing techniques. She loves to learn, write, and talk about data science, machine learning, and deep learning, especially regarding racial bias in AI, and to write helpful and fun articles that make machine learning accessible to anyone.