Learning From Limited Data

Abstract: 

Tremendous progress has been achieved across a wide range of machine learning tasks with the introduction of deep learning in recent years. However, conventional deep learning approaches rely on large amounts of labeled data and suffer severe performance degradation when training data is limited. On the one hand, objects in the real world follow a long-tailed distribution, and obtaining annotated data is expensive. On the other hand, novel categories of objects arise dynamically in nature, which fundamentally limits the scalability and applicability of supervised learning models when labeled examples for these categories are not available. Take surveillance traffic analysis as an example. Current solutions need examples that span all weather conditions, times of day, cities of operation, and camera locations to produce a model robust to these variations. This quickly becomes expensive for any particular application and infeasible when considering the breadth of applications for which visual recognition is needed. The fundamental limitation of this approach is that it does not adapt to new tasks and data domains automatically, and thus requires new human-labeled examples for each new visual task and for each variation within a task. To overcome these challenges, my research focuses on developing machine learning algorithms that can learn from limited training data, including domain adaptation and low-shot learning. In this talk, I will introduce my research on multi-source domain adaptation and low-shot learning, which transfer information across multiple domains and enable the learning system to adapt to real-world variations. I will also explain how the developed machine learning systems automatically capture the relationships among different domains/classes and make accurate predictions on novel domains/classes, providing fundamental principles for applications such as intelligent transportation and clinical informatics.
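To make "low-shot learning" concrete, the sketch below illustrates one common formulation: a prototypical-network-style classifier that labels query examples given only a handful of labeled support examples per novel class. This is a generic illustration, not the specific methods presented in the talk; the function names and toy dimensions are assumptions made for the example.

# Minimal low-shot classification sketch (prototypical-network style).
# In practice, embeddings come from a feature extractor trained on base
# classes; here they are random placeholders so the snippet runs standalone.
import torch

def prototypes(support_emb, support_labels, n_classes):
    # One prototype per class: the mean embedding of its few support examples.
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])

def classify(query_emb, protos):
    # Nearest-prototype prediction: softmax over negative Euclidean distances.
    dists = torch.cdist(query_emb, protos)   # shape: (n_query, n_classes)
    return (-dists).softmax(dim=1)           # class probabilities

# Toy episode: 3 novel classes, 5 labeled examples each, 64-dim embeddings.
n_classes, n_shot, emb_dim = 3, 5, 64
support = torch.randn(n_classes * n_shot, emb_dim)
labels = torch.arange(n_classes).repeat_interleave(n_shot)
queries = torch.randn(10, emb_dim)
probs = classify(queries, prototypes(support, labels, n_classes))
print(probs.argmax(dim=1))  # predicted novel-class index per query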

Bio: 

Dr. Shanghang Zhang is currently a researcher at Petuum Inc. Her research covers deep learning, computer vision, and natural language processing, with a particular focus on domain adaptation, meta-learning, and low-shot learning. She was named one of the “2018 Rising Stars in EECS” (a highly selective program launched at MIT in 2012 and since hosted annually at UC Berkeley, Carnegie Mellon, and Stanford). She is a recipient of the Adobe Academic Collaboration Fund and a Qualcomm Innovation Fellowship (QInF) Finalist Award. She was also selected for the CVPR 2018 Doctoral Consortium and invited to Facebook's 3rd Annual Women in Research Lean In Event. Shanghang co-organized the Human in the Loop Learning Workshop at ICML 2019 and the special session "MMA: Multi-Modal Affective Computing of Large-Scale Multimedia Data" at the ACM International Conference on Multimedia Retrieval (ICMR) 2019. Before joining Petuum, she received her Ph.D. from Carnegie Mellon University, supervised by Prof. Jose Moura and Prof. Joao Costeira.
