BERT and Beyond — Language Modeling with PyTorch

Abstract: 

BERT (Bidirectional Encoder Representations from Transformers) is a Transformer-based architecture that applies bidirectional training to language modeling. It is pre-trained on a large corpus of unlabelled text and relies on the concept of transfer learning to achieve state-of-the-art performance in a wide variety of language tasks.

In this workshop, you will learn how to carry out BERT fine-tuning for various downstream NLP tasks using PyTorch. We will review the state of the art in NLP and identify drawbacks of traditional approaches. We will go beyond the vanilla BERT architecture and extend its application to longer texts and documents.

Session Outline
Lesson 1: Gentle introduction to BERT
Familiarize yourself with the key concepts of BERT! We'll dive deep into the inner workings of the architecture and understand the role of attention. We will touch on the state of the art in NLP and describe the limitations of sequential and unidirectional models.
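
To make the mechanism concrete before the session, here is a minimal sketch of scaled dot-product attention, the operation BERT's encoder layers apply repeatedly. The tensor shapes are illustrative toy values, not the workshop's exact setup.

    import math
    import torch

    def scaled_dot_product_attention(q, k, v):
        # q, k, v: (batch, seq_len, d_k)
        d_k = q.size(-1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # pairwise token scores
        weights = torch.softmax(scores, dim=-1)            # attention distribution
        return weights @ v                                 # weighted sum of value vectors

    q = k = v = torch.randn(1, 8, 64)  # toy input: one sequence of 8 tokens
    print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 8, 64])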

Lesson 2: BERT and its applications
Let's go over the basics of language modeling with PyTorch. This lesson will provide everything you need to get started with BERT and fine-tune it for specific language tasks.
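
As a preview, the following is a minimal fine-tuning sketch for binary sequence classification. The workshop does not prescribe a particular library; the Hugging Face transformers package is assumed here as one common choice, and the two-example batch is a toy stand-in for a real DataLoader.

    import torch
    from transformers import BertForSequenceClassification, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    # Toy labeled batch; real fine-tuning loops over a DataLoader for several epochs.
    batch = tokenizer(["great movie", "terrible plot"],
                      padding=True, return_tensors="pt")
    labels = torch.tensor([1, 0])

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    loss = model(**batch, labels=labels).loss  # cross-entropy computed internally
    loss.backward()
    optimizer.step()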

Lesson 3: Beyond BERT
Let's take a look at some current improvements to the BERT architecture. We will demonstrate how to overcome its 512-token maximum input sequence length through sample code implementations.
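
One widely used workaround, sketched below under assumed settings (512-token windows with a 128-token stride, mean-pooled [CLS] vectors), is to encode a long document as overlapping windows; the workshop's own implementations may differ.

    import torch
    from transformers import BertModel, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    long_text = "example sentence " * 600  # stand-in for a document beyond 512 tokens

    # Split the text into overlapping 512-token windows (128 tokens of overlap).
    enc = tokenizer(long_text, max_length=512, stride=128, truncation=True,
                    padding="max_length", return_overflowing_tokens=True,
                    return_tensors="pt")

    with torch.no_grad():
        out = model(input_ids=enc["input_ids"],
                    attention_mask=enc["attention_mask"])

    # Mean-pool the per-window [CLS] vectors into one document representation.
    doc_embedding = out.last_hidden_state[:, 0, :].mean(dim=0)
    print(doc_embedding.shape)  # torch.Size([768])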

Background Knowledge
Python, PyTorch, NLP

Bio: 

Chaine San Buenaventura is a Lead Machine Learning Engineer at WizyVision. Her team, named 2021 Startup of the Year in France by EUROCLOUD France, focuses on the adoption of computer vision models across Google Cloud Platform products and services for use cases involving frontline workers. She received her master's degree from the University of the Philippines Diliman in June 2018, where her graduate research focused on smartphone-based Human Activity Recognition (HAR) for Ambient Assisted Living (AAL). Chaine currently specializes in deep learning applied to computer vision and natural language processing, with numerous publications and many years of experience in deep learning research, development, and engineering.
