Taming Large Language Models into Trustworthy Conversational Virtual Assistants

Abstract: 

What if computers could truly converse with us in our native tongues? They would transform into effective, personalized assistants for everyone. Commercial chatbots today are notoriously brittle because they are hardcoded to handle only a few anticipated user inputs. Recently introduced large language models, such as GPT-3, are remarkably fluent, but they are prone to hallucination, often producing incorrect statements. This talk describes how we can tame these neural models into robust, trustworthy, and cost-effective conversational agents across all industries and languages.

Bio: 

Monica Lam has been a Professor in the Computer Science Department at Stanford University since 1988. She is the faculty director of the Open Virtual Assistant Lab (OVAL). She received a B.Sc. from the University of British Columbia in 1980 and a Ph.D. in Computer Science from Carnegie Mellon University in 1987. Monica is a Member of the National Academy of Engineering and an ACM Fellow. She is a co-author of the popular text Compilers: Principles, Techniques, and Tools (2nd Edition), also known as the Dragon Book.

Professor Lam's current research is on conversational virtual assistants, with an emphasis on privacy protection. Her research uses deep learning to map task-oriented natural-language dialogues into formal semantics, represented in a new executable programming language called ThingTalk. Her Almond virtual assistant, trained on open knowledge graphs and IoT API standards, can be easily customized to perform new tasks. She is leading an Open Virtual Assistant Initiative to create the largest open, crowdsourced language semantics model and to promote open access in all languages. Her decentralized Almond virtual assistant, which supports fine-grained sharing with privacy, received Popular Science's Best of What's New Award in Security in 2019.

Prof. Lam is also an expert in compilers for high-performance machines. Her pioneering work on affine partitioning provides a unifying theory for loop transformations targeting parallelism and locality. Her software pipelining algorithm is used in commercial systems for instruction-level parallelism. Her research team created SUIF, the first widely adopted research compiler. Her contributions in computer architecture include the CMU Warp systolic array and the Stanford DASH distributed-memory multiprocessor. She was on the founding team of Tensilica, now a part of Cadence.

She received an NSF Young Investigator Award in 1992, the ACM Most Influential Programming Language Design and Implementation Paper Award in 2001, an ACM SIGSOFT Distinguished Paper Award in 2002, the ACM Programming Language Design and Implementation Best Paper Award in 2004, and the ACM SIGARCH/SIGPLAN/SIGOPS ASPLOS Influential Paper Award in two consecutive years, 2021 and 2022. She authored two of the papers in "20 Years of PLDI: A Selection (1979-1999)" and one paper in "25 Years of the International Symposia on Computer Architecture". She received the University of British Columbia Computer Science 50th Anniversary Research Award in 2018.
