
Abstract: In this session, we will take a deep dive into a novel application of AI: training Large Language Models (LLMs) on individual employees' Slack messages. The first portion of our discussion is dedicated to the technical aspects of this process, where we will walk through the steps involved in fine-tuning the LLM. We will demonstrate how such models, tailored to mimic specific individuals' textual styles, can serve as the foundation for applications such as text generation and automated question answering.
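As a rough illustration of the fine-tuning step mentioned above, the sketch below uses the Hugging Face Transformers Trainer to fine-tune a causal language model on exported chat messages. The base model ("gpt2"), the data file name (slack_messages.jsonl), and the hyperparameters are illustrative assumptions for this sketch, not details taken from the talk itself.

```python
# Minimal sketch: fine-tune a small causal LM on exported messages.
# Assumptions: a JSONL file where each record has a "text" field, and
# default Trainer hyperparameters; these are placeholders, not the talk's setup.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # hypothetical base model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load exported messages (hypothetical file path).
dataset = load_dataset("json", data_files="slack_messages.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Causal-LM objective: the inputs themselves serve as labels (mlm=False).
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="style-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

The resulting checkpoint can then back downstream applications such as style-conditioned text generation or question answering, which is the use case the session explores.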
Transitioning into the second part of our talk, we will spotlight the often underemphasized side of AI deployment: its risks and ethical concerns. Using our project as a case study, we will walk through the pitfalls we encountered and the proactive measures we took to manage these risks. Our aim is to underline the necessity of a comprehensive risk management framework for AI.
Attendees can expect to gain a deep understanding of the intricacies of training an LLM on a unique dataset: internal employee data. They will also acquire actionable insights into risk assessment, mitigation strategies, and ethical guidelines. This knowledge can be applied immediately in AI projects to ensure ethical and responsible AI deployment, a must-have skill set in any AI practitioner's arsenal.
Learning Objectives: LLM fine-tuning, Hugging Face, Risk Management Framework
Background Knowledge:
Python, Hugging Face
Bio: Eli is CTO and Co-Founder at Credo AI. He has led teams building secure and scalable software at companies like Netflix and Twitter. Eli has a passion for unraveling how things work and debugging hard problems. Whether it's using cryptography to secure software systems or designing distributed system architecture, he is always excited to learn and tackle new challenges. Eli graduated with an Electrical Engineering and Computer Science degree from U.C. Berkeley.