Abstract: Much of current deep learning research is geared towards scaling up hardware and GPU compute to train on the relatively small amount of available data. Yet the human brain processes large volumes of data using only a modest amount of compute. While today's deep neural networks are highly dense and overparameterized, it has been shown that removing over 90% of their parameters as redundant does not compromise performance.
In this session, we will investigate the current state of sparsity in deep learning. We will walk through examples of how sparsification can be applied to state-of-the-art deep neural network architectures commonly used in Computer Vision and NLP to improve generalization, interpretability, and performance, and to reduce model and data size, which translates to lower computational, storage, and energy requirements.
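As a taste of the kind of technique the session covers, the sketch below shows one common sparsification approach, magnitude pruning: zeroing out the smallest-magnitude weights of a layer. This is a minimal NumPy illustration, not code from the session; the function name and the 90% sparsity target are chosen here for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries, keeping the largest (1 - sparsity) fraction."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of entries to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune a random 256x256 weight matrix to ~90% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
pruned = magnitude_prune(w, sparsity=0.9)
print(1.0 - np.count_nonzero(pruned) / pruned.size)
```

In practice, frameworks iterate this kind of pruning with fine-tuning so the remaining weights can compensate for the removed ones.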
Bio: Chaine San Buenaventura is a Lead Machine Learning Engineer at WizyVision. Her team, named 2021 Startup of the Year in France by EUROCLOUD France, focuses on the adoption of computer vision models across Google Cloud Platform products and services for use cases relating to frontline workers. She received her master's degree from the University of the Philippines Diliman in June 2018; her graduate research was on Smartphone-Based Human Activity Recognition (HAR) for Ambient Assisted Living (AAL). She currently specializes in Deep Learning applied to Computer Vision and Natural Language Processing, and has numerous publications and many years of experience in deep learning research, development, and engineering.