
Abstract: Variational autoencoders (VAEs) are among the most widely used deep generative models, with applications to computer vision, language processing, and genomics, among other fields. VAEs are typically used to perform non-linear dimensionality reduction by mapping high-dimensional samples such as images into a low-dimensional latent space for visualization and other downstream analyses. One of the key limitations of VAEs is their lack of interpretability: until now, it has been challenging to identify the relationships, or attributions, between individual latent dimensions and the original input features of the samples. Increasing the interpretability of the latent dimensions learned by a VAE will improve our understanding of what the latent space captures and help interpret its visualizations.
In this hands-on tutorial, we will introduce attendees to the siVAE (scalable, interpretable VAE) model, which infers, during training, a set of factor loadings that explicitly map latent dimensions to the input features that define them. Using standard datasets from computer vision (MNIST, Fashion-MNIST, and CIFAR-10), we will walk attendees through the process of training the siVAE model, visualizing the sample embeddings inferred by classic VAEs, and extracting and visualizing the features that contribute to individual latent dimensions. We will also teach attendees how to estimate and visualize feature awareness, a new metric that measures the overall importance of individual features for embedding a sample in the latent space. At the end of the tutorial, attendees will be able to train an siVAE model on their own datasets and interpret and visualize the inferred latent dimensions.
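
To make the core ideas concrete before the hands-on portion, below is a minimal sketch of a plain VAE on MNIST whose decoder is a single linear layer, so that the decoder's weight matrix can be read as a rough analogue of factor loadings mapping each latent dimension back to pixel space. This is not the siVAE implementation or its API; the architecture, layer sizes, training settings, and variable names are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow import keras

LATENT_DIM = 2

# Flatten MNIST digits into 784-dimensional vectors in [0, 1].
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
dataset = tf.data.Dataset.from_tensor_slices(x_train).shuffle(10000).batch(128)

# Encoder: x -> (mu, log sigma^2) of the approximate posterior q(z | x).
encoder = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(2 * LATENT_DIM),
])

# Linear decoder: its LATENT_DIM x 784 kernel acts as a toy factor-loading matrix.
decoder = keras.layers.Dense(784)
decoder.build((None, LATENT_DIM))

optimizer = keras.optimizers.Adam(1e-3)


@tf.function
def train_step(x):
    with tf.GradientTape() as tape:
        mu, log_var = tf.split(encoder(x), 2, axis=-1)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        eps = tf.random.normal(tf.shape(mu))
        z = mu + tf.exp(0.5 * log_var) * eps
        logits = decoder(z)
        # Negative ELBO = reconstruction term + KL(q(z | x) || N(0, I)).
        recon = tf.reduce_sum(
            tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=logits), axis=-1)
        kl = -0.5 * tf.reduce_sum(1 + log_var - mu**2 - tf.exp(log_var), axis=-1)
        loss = tf.reduce_mean(recon + kl)
    variables = encoder.trainable_variables + decoder.trainable_variables
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss


for epoch in range(3):
    for batch in dataset:
        loss = train_step(batch)
    print(f"epoch {epoch}: negative ELBO {float(loss):.1f}")

# Sample embeddings for visualization: the posterior means in the latent space.
embeddings, _ = tf.split(encoder(x_train[:1000]), 2, axis=-1)

# Toy "factor loadings": one 784-vector of decoder weights per latent
# dimension, reshaped to 28 x 28 so each can be displayed as an image.
loadings = decoder.kernel.numpy().reshape(LATENT_DIM, 28, 28)
```

siVAE itself infers these loadings during training of a non-linear VAE rather than reading them off a linear decoder; the hands-on session will walk through that workflow.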
Session Outline
Tentative schedule:
- Introduction to VAEs and siVAE (10 minutes)
- Hands-on introduction to basic TensorFlow commands (30 minutes); a short example sketch follows this schedule
- Hands-on training of siVAE in Google Colaboratory (20 minutes)
- Hands-on visualization of sample embeddings, factor loadings (interpretation), and feature awareness (25 minutes)
- Wrap-up (5 minutes)
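
For reference, the TensorFlow basics segment will cover commands along the lines of the sketch below; the specific operations shown are illustrative choices rather than the exact worked examples from the session.

```python
import tensorflow as tf

# Tensors: constants, random values, and elementwise math with broadcasting.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.random.normal((2, 2))
c = tf.matmul(a, b) + 1.0

# Variables hold trainable state (e.g. network weights).
w = tf.Variable(tf.zeros((2, 1)))

# GradientTape records operations so gradients can be computed.
with tf.GradientTape() as tape:
    y = tf.reduce_sum(tf.matmul(a, w))
grad = tape.gradient(y, w)   # d y / d w

print(c.numpy(), grad.numpy(), sep="\n")
```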
Bio: Gerald Quon is an Assistant Professor in the Department of Molecular and Cellular Biology at the University of California at Davis. He obtained his Ph.D. in Computer Science from the University of Toronto, M.Sc. in Biochemistry from the University of Toronto, and B. Math in Computer Science from the University of Waterloo. He also completed postdoctoral research training at MIT. His lab focuses on applications of machine learning to human genetics, genomics and health, and is funded by the National Science Foundation, National Institutes of Health, the Chan Zuckerberg Initiative, and the American Cancer Society.

Gerald Quon, PhD
Assistant Professor | UC Davis Machine Learning & AI Group, UC Davis
