Abstract: 2022 was a significant year for generative AI. This was evident on social media platforms, where images created by generative machine learning models such as DALL-E and Stable Diffusion were prevalent. Additionally, startups that built products using generative models were able to attract funding, even in a challenging market. Major tech companies also started to integrate generative models into their mainstream products.
This talk will focus on the challenges and opportunities of implementing generative AI in organizations. The talk will consist of three parts:
(1) An overview of the latest generative AI models and how they work
(2) Best practices and techniques for training and deploying generative AI models
(3) Ethical considerations in generative AI
In the first part of the talk I will provide an overview of the latest generative AI models and how they work. This will include a discussion of the main types of generative AI models, such as diffusion models for image generation and transformer-based (GPT-like) models for text generation, along with their underlying architectures and key concepts.
In the next section I will focus on the challenges organizations typically face when training and deploying these large models, and best practices to overcome them. In 2022 the open-source community made significant progress in keeping up with proprietary AI services such as GPT-3 and DALL-E, releasing state-of-the-art models like BLOOM, OPT, and Stable Diffusion that rival their closed counterparts. When it comes to training and deploying these open-source models, however, organizations often struggle with ease of access, latency, and cost due to the sheer size of the models (BLOOM, for example, has 176 billion parameters and can require more than 350 GB of GPU memory to run). I will discuss several techniques that can be used to train and deploy these models, such as model parallelism, quantization, distillation, and CPU/NVMe offloading.
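To make one of these techniques concrete: quantization shrinks a model's memory footprint by storing weights at lower precision. The sketch below is illustrative only (not code from the talk) and shows symmetric 8-bit quantization of a toy weight matrix with NumPy, cutting memory 4x relative to float32 at the cost of a small, bounded rounding error:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric 8-bit quantization: map float weights to int8 plus one scale factor."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values and the scale."""
    return q.astype(np.float32) * scale

# A toy "layer": float32 weights take 4 bytes each, int8 takes 1 byte each.
w = np.random.default_rng(0).normal(size=(512, 512)).astype(np.float32)
q, scale = quantize_int8(w)
w_restored = dequantize(q, scale)

memory_saving = w.nbytes / q.nbytes          # 4x smaller
max_error = float(np.abs(w - w_restored).max())  # bounded by scale / 2
```

Production systems use more sophisticated schemes (per-channel scales, outlier handling as in LLM.int8()), but the core idea, trading a little precision for a large memory reduction, is the same.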
Finally, I will address some of the ethical concerns surrounding generative AI. This will include bias: generative AI models are trained on large datasets that may contain biases, which are then reflected in the generated content. I will also discuss, with examples, which tasks can already benefit greatly from generative AI and which remain out of reach (for now).
Attendees will be able to take the insights from this tutorial and immediately apply them in their organizations. They will have the knowledge to train and deploy open-source generative AI models and to experiment with them to see whether they fit their use cases.
Bio: Heiko Hotz is a Senior Solutions Architect for AI & Machine Learning at AWS with a special focus on Natural Language Processing (NLP), Large Language Models (LLMs), and Generative AI. He is also the founder of the NLP London Meetup group, bringing together NLP enthusiasts and industry experts.