
Abstract: Multimodal AI is a fast-growing field in which deep neural networks are trained on multiple types of input data simultaneously (e.g. text, images, video, audio). Multimodal models perform better in content understanding applications and are setting new standards for content generation in models such as DALL-E and Stable Diffusion. Building multimodal models is hard. In this session we share more about multimodal AI, why you should care about it, some challenges you might face, and how TorchMultimodal, our new PyTorch domain library, eases the developer experience of building multimodal models.
Bio: Evan Smothers is a software engineer on the PyTorch Multimodal team at Meta. His work focuses on supporting researchers building state-of-the-art vision and language models and on helping to scale these models to billions of parameters. Previously, Evan was a data scientist at Uber, where he used ML to improve its matching algorithms. His academic background is in mathematics; he completed his PhD at UC Davis with a research focus on partial differential equations.