Maryam Fazel-Zarandi, PhD

Research Engineering Manager, FAIR at Meta

Maryam Fazel-Zarandi is currently the Research Engineering Lead and Manager for North America at FAIR Labs – Meta AI, supporting core machine learning and responsible AI researchers. Prior to this, she was the Research Manager for the US East Coast NLP, speech, and core ML areas. Her research interests are in natural language understanding and dialog systems, with a current focus on reasoning in large language models and self-supervised learning for speech. Maryam has over 10 years of industry experience and extensive experience collaborating with engineering teams to deliver science solutions into production. Prior to joining FAIR, she was an Applied Science Manager at Amazon, where she worked on Conversational AI for Alexa from 2017, and before that she was a Senior Research Scientist at Nuance Communications, working on open-domain natural language understanding and question answering. She received her Ph.D. in Computer Science from the University of Toronto in 2013.

All Sessions by Maryam Fazel-Zarandi, PhD

Reasoning in Large Language Models

LLMs | All Levels

Scaling language models has improved state-of-the-art performance on nearly every NLP benchmark, with large language models (LLMs) performing impressively as few-shot learners. Despite these achievements, even the largest of these models still struggle with tasks that require reasoning. Recent work has shown that prompting or fine-tuning LLMs to generate step-by-step rationales, or asking them to verify their final answers, can lead to improvements on reasoning tasks. While these methods have proven successful in specific domains, there is still no general framework that enables LLMs to reason reliably across a wide range of situations. In this talk, I will give an overview of existing methods for eliciting and improving reasoning in large language models, survey methods for evaluating reasoning in these models, and discuss limitations and challenges.
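The abstract names two concrete techniques: prompting the model to produce step-by-step rationales (chain-of-thought prompting) and asking it to verify its own final answer. As a rough sketch of how these look in practice, assuming only a generic `generate(prompt) -> completion` text-completion callable (a hypothetical stand-in for any LLM API, not a specific library), one might write:

```python
# Minimal sketch of the two techniques mentioned in the abstract.
# `generate` is a hypothetical stand-in: any function that maps a
# prompt string to a model completion string. The few-shot example
# is the well-known tennis-ball problem used in chain-of-thought work.

FEW_SHOT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.
"""

def chain_of_thought_prompt(question: str) -> str:
    """Prepend a worked, step-by-step example so the model is nudged
    to emit its own rationale before stating a final answer."""
    return f"{FEW_SHOT}\nQ: {question}\nA:"

def answer_with_rationale(question: str, generate) -> str:
    """Run chain-of-thought prompting and extract the final answer,
    relying on the 'The answer is ...' convention set by the exemplar."""
    completion = generate(chain_of_thought_prompt(question))
    return completion.rsplit("The answer is", 1)[-1].strip(" .\n")

def self_verify(question: str, proposed: str, generate) -> bool:
    """Ask the model to check a proposed final answer (the
    'verify their final answers' idea from the abstract)."""
    check = generate(
        f"Q: {question}\nProposed answer: {proposed}\n"
        "Is this answer correct? Reply Yes or No.\nA:"
    )
    return check.strip().lower().startswith("yes")
```

This is an illustrative sketch under those assumptions, not the speaker's method; the talk surveys a broader family of prompting, fine-tuning, and evaluation approaches.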
