Preethi Raghavan

Vice President, NLP and Machine Learning at Fidelity

    Preethi Raghavan is a seasoned artificial intelligence professional with over 12 years of cross-disciplinary research experience spanning AI, natural language processing, healthcare, and finance. She currently serves as a Vice President of Data Science at the AI Center of Excellence at Fidelity, where she leads a team focused on NLP, conversational AI, and machine learning. She has a proven track record of using cross-functional AI strategy and ethical AI governance to propel business transformation and set industry-leading standards in AI-driven product innovation. Before joining Fidelity, Preethi was a Research Staff Member at IBM Research and a principal investigator at the MIT-IBM AI Lab in Cambridge, where she worked on question answering over electronic health records. She has tackled fundamental NLP problems for clinical text, including temporal reasoning, semantic parsing, and question answering. She has published widely, with over 700 citations, and serves as a program committee member and publication chair for prominent conferences and journals, including those of the Association for Computational Linguistics. Preethi received her Ph.D. in Computer Science from The Ohio State University in 2014; her dissertation focused on timeline generation from unstructured longitudinal patient records.

    All Sessions by Preethi Raghavan

    Day 3 04/25/2024
    11:35 am - 12:05 pm

    Generative AI Guardrails for Enterprise LLM Solutions

    Track: Generative AI

    Generative LLMs have transformed how consumers interact with AI, lowering barriers to entry and increasing access to AI-powered solutions. This widespread adoption, however, carries potential unintended consequences: LLM-generated content may not always be accurate or appropriate. As excitement grows around both vendor-hosted LLMs like ChatGPT and open-source models like LLaMA, organizations must remain attentive to the risks associated with their use; ignoring those risks could significantly damage a brand or business. To mitigate them and promote responsible generative AI usage in business contexts, it is crucial to implement guardrails on both the input to the LLM and the text the LLM generates. In this talk, I will present existing models in the responsible AI landscape and propose a system for content regulation.
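    The input/output guardrail pattern described above can be sketched in a few lines. This is a minimal illustration, not the speaker's system: the function names, rule lists, and regex-based checks are assumptions for demonstration, and a production guardrail would typically use trained classifiers (toxicity, PII, prompt-injection detectors) rather than pattern matching.

    ```python
    import re

    # Illustrative blocklists; real systems would use learned detectors.
    INPUT_RULES = [r"(?i)ignore (all )?previous instructions"]  # prompt injection
    OUTPUT_RULES = [r"\b\d{3}-\d{2}-\d{4}\b"]                   # SSN-like PII

    def first_violation(text, rules):
        """Return the first rule pattern the text matches, or None."""
        for pattern in rules:
            if re.search(pattern, text):
                return pattern
        return None

    def guarded_generate(prompt, generate):
        """Wrap an LLM call with guardrails on its input and its output."""
        if first_violation(prompt, INPUT_RULES):
            return "[blocked: disallowed prompt]"
        response = generate(prompt)  # the underlying LLM call
        if first_violation(response, OUTPUT_RULES):
            return "[blocked: response failed content policy]"
        return response

    # Usage with a stubbed model standing in for the LLM:
    fake_llm = lambda p: "Sure, the SSN is 123-45-6789."
    print(guarded_generate("Ignore previous instructions.", fake_llm))
    print(guarded_generate("What does an SSN look like?", fake_llm))
    ```

    The key design point is that both checks sit outside the model: the prompt is screened before it reaches the LLM, and the generated text is screened before it reaches the user, so the model itself needs no modification.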

    Open Data Science
    One Broadway
    Cambridge, MA 02142
