Ines Chami

Co-founder and Chief Scientist at NUMBERS STATION AI

    Ines Chami is the Chief Scientist and Co-Founder of Numbers Station. She received her Ph.D. from the Institute for Computational and Mathematical Engineering at Stanford University, where she was advised by Prof. Christopher Ré. Before attending Stanford, she studied Mathematics and Computer Science at École Centrale Paris. Ines is particularly excited about building intelligent models to automate data-intensive work; her research spans applications such as knowledge graph construction and data cleaning. For her work on graph representation learning, she won the 2021 Stanford Gene Golub Doctoral Dissertation Award. During her Ph.D., she interned at Microsoft AI and Research and at Google Research, where she co-authored the graph representation learning chapter of Kevin Murphy's book "Probabilistic Machine Learning: An Introduction".

    All Sessions by Ines Chami

    Day 1 04/23/2024
    1:10 pm - 1:40 pm

    From Research to the Enterprise: Leveraging Large Language Models for Enhanced ETL, Analytics, and Deployment

    Track: Data Engineering

    As Foundation Models (FMs) continue to grow in size and capability, structured data is often left behind in the rush toward solving problems involving documents, images, and videos. This talk will describe our research at Stanford University and Numbers Station AI on applying FMs to structured data and their applications in the modern data stack. Starting with ETL/ELT, we'll discuss our 2022 VLDB paper "Can Foundation Models Wrangle Your Data?", the first line of work to use FMs to accelerate tasks like data extraction, cleaning, and integration. We'll then move up the stack and discuss our work at Numbers Station on using FMs to accelerate data analytics workflows by automating tasks like text-to-SQL generation, semantic catalog curation, and data visualization. We'll conclude by discussing challenges and solutions for production deployment in the modern data stack.
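    As a concrete illustration of one task mentioned above (not taken from the talk itself): FM-based text-to-SQL commonly works by serializing the table schema into a prompt and asking the model to complete it with a SQL query. The sketch below only builds such a prompt; the function name and schema are hypothetical, and the actual model call is omitted.

    ```python
    def build_text_to_sql_prompt(question: str, schema: dict) -> str:
        """Serialize a table schema plus a natural-language question into a
        prompt an LLM could complete with a SQL query (illustrative sketch)."""
        lines = ["-- SQL tables:"]
        for table, columns in schema.items():
            lines.append(f"CREATE TABLE {table} ({', '.join(columns)});")
        lines.append(f"-- Question: {question}")
        # Ending the prompt with "SELECT" nudges the model toward SQL output.
        lines.append("SELECT")
        return "\n".join(lines)

    # Hypothetical example schema and question:
    prompt = build_text_to_sql_prompt(
        "How many customers are in Boston?",
        {"customers": ["id", "name", "city"]},
    )
    print(prompt)
    ```

    In a real pipeline the returned prompt would be sent to a language model, and the completion (appended after the trailing `SELECT`) would be validated and executed against the database.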


    Open Data Science
    One Broadway
    Cambridge, MA 02142
    info@odsc.com