GANs N’ Roses: Understanding Generative Models

Abstract: 

Generative models are at the heart of DeepFakes, and can be used to synthesize, replace, or swap attributes of images.
Learn the basics of Generative Adversarial Networks, the famous GANs, from the ground up: autoencoders, latent spaces, generators, discriminators, vanilla GAN, DCGAN, WGAN, and more.

The main goal of this session is to show you how GANs work: we will start with a simple example using synthetic data (not generated by GANs) to learn about latent spaces and how sampling from them lets us generate more synthetic data (this time, using GANs). We will then improve the model's architecture by incorporating convolutional layers (DCGAN) and different loss functions (WGAN, WGAN-GP), and use these models to generate synthetic images of flowers (the roses!).
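To make the DCGAN part concrete, here is a minimal sketch (in PyTorch, which the session uses) of a convolutional, DCGAN-style generator; the latent size, channel counts, and 16x16 output resolution are illustrative assumptions, not the session's exact architecture:

import torch
import torch.nn as nn

# DCGAN-style generator: transposed convolutions upsample a latent
# vector (treated as a 1x1 "image" with 100 channels) into an RGB image.
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 128, kernel_size=4, stride=1, padding=0),  # 1x1 -> 4x4
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # 4x4 -> 8x8
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),     # 8x8 -> 16x16
    nn.Tanh(),  # pixel values in [-1, 1]
)

z = torch.randn(16, 100, 1, 1)  # a batch of 16 latent vectors
fake_images = generator(z)      # shape: (16, 3, 16, 16)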

Session Outline:
Intro: DeepFakes, GANs, and Synthetic data
Learn about the different types of DeepFakes, and how GANs can be used to synthesize new data.

Module 1: Latent spaces and autoencoders
Learn how autoencoders use latent spaces to represent data, and how variational autoencoders make it easy to sample from those spaces and generate new data.
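As a preview of this module, here is a minimal autoencoder sketch in PyTorch; the 784-dimensional input (a flattened 28x28 image) and the two-dimensional latent space are illustrative assumptions:

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        # The encoder compresses each input into a low-dimensional latent vector
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # The decoder reconstructs the input from its latent representation
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)     # a point in the latent space
        return self.decoder(z)  # the reconstruction of the input

# Training minimizes the reconstruction error (mean squared error here)
model = AutoEncoder()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(32, 784)  # a stand-in batch of flattened images
optimizer.zero_grad()
loss = loss_fn(model(x), x)
loss.backward()
optimizer.step()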

Module 2: Your first GAN
Learn how decoders can be used as Generators, producing images from samples drawn from the latent space, and how to combine them with Discriminators to build your first GAN.
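For a rough preview, here is a sketch of a single vanilla GAN training step in PyTorch; the network sizes, data dimension, and hyperparameters are placeholder assumptions, not the session's exact setup:

import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 2, 784, 32

# Generator: maps a latent sample z to a synthetic data point
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
# Discriminator: outputs the probability that its input is real
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.randn(batch_size, data_dim)  # stand-in for a batch of real data
z = torch.randn(batch_size, latent_dim)   # samples from the latent space
fake = generator(z)

# Discriminator step: real samples should score 1, generated samples 0
opt_d.zero_grad()
loss_d = (loss_fn(discriminator(real), torch.ones(batch_size, 1)) +
          loss_fn(discriminator(fake.detach()), torch.zeros(batch_size, 1)))
loss_d.backward()
opt_d.step()

# Generator step: try to make the discriminator label the fakes as real
opt_g.zero_grad()
loss_g = loss_fn(discriminator(fake), torch.ones(batch_size, 1))
loss_g.backward()
opt_g.step()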

Module 3: Improving your GAN using Wasserstein distance (WGAN and WGAN-GP)
Learn how to improve your GAN by changing its loss function to the Wasserstein distance (WGAN) and adding a gradient penalty (WGAN-GP).
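As a taste of what changes in this module, here is a sketch of the gradient penalty term used in WGAN-GP; the critic architecture, data dimension, and penalty weight below are placeholder assumptions:

import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Interpolate between real and generated samples
    eps = torch.rand(real.size(0), 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Gradient of the critic's scores with respect to the interpolated samples
    grads, = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores), create_graph=True,
    )
    # Penalize deviations of the gradient norm from 1
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# A placeholder critic: no sigmoid, since WGAN critics output unbounded scores
critic = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
real = torch.randn(32, 784)
fake = torch.randn(32, 784)

# WGAN-GP critic loss: Wasserstein estimate plus the gradient penalty
loss_c = critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)
loss_c.backward()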

Wrapping up: GANs N' Roses
It's time to generate some synthetic roses!

Background Knowledge:
We will use Google Colab and work our way together through building and training several GANs. You should be comfortable using Jupyter notebooks and NumPy, and with training simple models in PyTorch.

Bio: 

Daniel has been teaching machine learning and distributed computing technologies at Data Science Retreat, the longest-running Berlin-based bootcamp, for more than three years, helping more than 150 students advance their careers. He writes regularly for Towards Data Science. His blog post "Understanding PyTorch with an example: a step-by-step tutorial" has reached more than 220,000 views since it was published. The positive feedback from readers motivated him to write the book Deep Learning with PyTorch Step-by-Step, which covers a broader range of topics. Daniel is also the main contributor to two Python packages: HandySpark and DeepReplay. His professional background includes 20 years of experience working for companies in several industries: banking, government, fintech, retail, and mobility.
