Advanced NLP with TensorFlow and PyTorch: LSTMs, Self-attention and Transformers

Abstract: 

Natural Language Processing (NLP) has recently experienced its own "ImageNet" moment. Rapidly evolving language models have enabled practitioners to decipher long-lost languages, translate speech directly from one language into another without an intermediate text step, generate long-form text that adapts to the style and content of human prompts, and translate between language pairs never explicitly seen by computer systems (among many other impressive results).

In this training, you will develop a theoretical understanding of modern NLP along with the hands-on skills needed to build state-of-the-art models. You will implement a variety of recurrent and transformer-based architectures in both TensorFlow and PyTorch for tasks including text classification, machine translation, and predictive text.

Session Outline
Module 1: Modern NLP
Learn about the shift in NLP methodologies from specialized algorithms requiring expert linguistic information to generalized machine learning models. At the end of this module, you will be confident in representing NLP tasks as machine learning problems.
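To make this concrete, here is a minimal Python sketch of framing sentiment classification as a supervised learning problem; the toy reviews, labels, and vocabulary-building approach are illustrative assumptions rather than course code.

# Frame sentiment analysis as supervised learning: texts in, integer features and labels out.
# The toy data and vocabulary scheme below are illustrative assumptions, not course material.
texts = ["the service was great", "the food was awful"]
labels = [1, 0]  # 1 = positive sentiment, 0 = negative sentiment

# Build a tiny vocabulary and map each review to a sequence of integer token ids.
vocab = {word: idx for idx, word in enumerate(sorted({w for t in texts for w in t.split()}), start=1)}
token_ids = [[vocab[w] for w in t.split()] for t in texts]

print(token_ids)  # [[5, 4, 6, 3], [5, 2, 6, 1]] -- ids depend on vocabulary order

Once text is encoded this way, any standard classifier or neural network can be trained on the (token_ids, labels) pairs.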

Module 2: Recurrent Layers
Level up your natural language models by handling sequential data with recurrent neural networks. You will learn about and implement simple recurrent layers along with more sophisticated units like LSTMs and GRUs.
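As a preview of what such an implementation can look like, here is a minimal PyTorch sketch of an LSTM-based text classifier; the layer sizes and the LSTMClassifier name are illustrative assumptions, and the course builds equivalent models in both TensorFlow and PyTorch.

import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Toy classifier: embed token ids, run an LSTM, classify from the final hidden state."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return self.fc(hidden[-1])                # (batch, num_classes) logits

model = LSTMClassifier()
dummy_batch = torch.randint(0, 10000, (4, 20))    # 4 sequences of 20 token ids
logits = model(dummy_batch)                       # shape: (4, 2)

Swapping nn.LSTM for nn.GRU (and dropping the cell state from the forward pass) gives the GRU variant covered in the module.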

Module 3: Self-Attention and Transformers
Scale up your handling of text and capture context using self-attention and transformers. These architectures drive the majority of state-of-the-art NLP, and you will learn to understand and implement them to solve problems like machine translation.
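At the heart of these models is scaled dot-product self-attention, in which each token's new representation is a weighted mix of value vectors from every position in the sequence. The PyTorch sketch below is a bare-bones, single-head illustration with arbitrary dimensions; real transformer layers add multiple heads, masking, and learned projection layers.

import math
import torch

def self_attention(x, w_q, w_k, w_v):
    # x: (batch, seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projection matrices
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # pairwise similarity, scaled
    weights = torch.softmax(scores, dim=-1)                   # attention distribution per token
    return weights @ v                                        # context-aware representations

d_model = d_k = 64
x = torch.randn(2, 10, d_model)                       # toy batch: 2 sequences of 10 token vectors
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)                # shape: (2, 10, 64)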

Module 4: Transfer Learning for NLP
Standing on the shoulders of giants via transfer learning can speed up your AI development process and give you better results. At the end of this module, you will have confidence in pulling down pre-trained models and fine-tuning them for your own problems.
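As one possible illustration of that workflow, the sketch below leans on the Hugging Face transformers library; the choice of library, checkpoint name, and toy batch are assumptions for illustration rather than the training's prescribed tooling.

# Load a pre-trained checkpoint and take one fine-tuning step on a toy sentiment batch.
# The distilbert-base-uncased checkpoint and example sentences are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(["great movie", "terrible plot"], padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=torch.tensor([1, 0]))  # forward pass returns loss and logits

outputs.loss.backward()  # gradients flow back into the pre-trained weights
optimizer.step()         # one update; in practice this runs inside a full training loop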

Background Knowledge
Python programming, familiarity with Jupyter notebooks, foundational math knowledge (some exposure to things like exponentials and matrix arithmetic), experience with the basic machine learning workflow (pre-processing, training, testing, inference).

Bio: 

Daniel Whitenack (aka Data Dan) is a PhD-trained data scientist who has been developing artificial intelligence applications in the real world for over 10 years. He knows how to see beyond the hype of AI and machine learning to build systems that create business value, and he has taught these skills to thousands of developers, data scientists, and engineers around the world. Now, with the AI Classroom event, Data Dan is bringing this knowledge to a live, online learning environment so that you can level up your career from anywhere!

