
Abstract: Large Language Models have demonstrated extraordinary capabilities in the field of Generative AI, from text understanding and generation to problem solving, exhibiting a unique capacity for common-sense reasoning. But what exactly is a Large Language Model, and why does it represent such a paradigm shift in the AI landscape?
By completing this workshop, you will develop an understanding of Generative AI and Large Language Models, including the architecture behind them, how they work, and how to leverage their unique conversational capabilities. You will also become familiar with the concept of the LLM as a reasoning engine that can power your applications, paving the way to a new landscape of software development in the era of Generative AI. Finally, we will cover some examples of LLM-powered applications in Python using popular AI orchestrators, such as LangChain.
Session Outline:
Lesson 1: Generative AI and LLMs
Familiarize yourself with Generative AI and how it differs from traditional AI. You will be introduced to the core model behind
Generative AI, the Large Foundation Model, and then focus on its most popular variant, the Large Language Model. You will gain an
understanding of the architecture and inner workings of an LLM and its main applications.
Lesson 2: How LLMs are paving the way to a new software development landscape
Examples of new conversational applications using LLMs. The importance of the prompt as the main driver for customizing your model.
Familiarize yourself with common prompt engineering techniques, focusing on how to tune your model with one- or few-shot learning and, if needed, fine-tuning (see the sketch below).
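As a taste of what this lesson covers, here is a minimal few-shot learning sketch: a couple of worked examples placed in the conversation steer the model toward the task and output format. It assumes the openai Python package (v1+) and an OPENAI_API_KEY set in the environment; the model name and example reviews are purely illustrative.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Few-shot learning: two labelled examples in the chat history show the model
    # the task and the expected output format before the real input arrives.
    messages = [
        {"role": "system", "content": "Classify the sentiment of each review as Positive or Negative."},
        {"role": "user", "content": "The battery lasts all day, I love it."},
        {"role": "assistant", "content": "Positive"},
        {"role": "user", "content": "It stopped working after a week."},
        {"role": "assistant", "content": "Negative"},
        {"role": "user", "content": "The screen is gorgeous and setup took two minutes."},
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; any chat model works
        messages=messages,
    )
    print(response.choices[0].message.content)  # expected: "Positive"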
Lesson 3: Building LLM-powered applications
Understand the new components that LLM-powered applications bring to the table, such as memory, plug-ins, and prompts.
Familiarize yourself with the architectural framework of LLM-powered applications and with AI orchestrators such as LangChain (a minimal sketch follows below).
Conclude with a demo of some example LLM-powered applications.
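To give a flavour of the demo, below is a minimal sketch of a conversational chain with memory built on LangChain. It assumes the langchain and langchain-openai packages are installed and an OPENAI_API_KEY is available; the ConversationChain interface shown here is the classic one and is being replaced in newer LangChain releases, so exact imports may vary by version.

    from langchain_openai import ChatOpenAI
    from langchain.chains import ConversationChain
    from langchain.memory import ConversationBufferMemory

    # The orchestrator wires together the model and a memory component
    # that keeps track of previous turns of the conversation.
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # illustrative model name
    chain = ConversationChain(llm=llm, memory=ConversationBufferMemory())

    print(chain.predict(input="Hi, I'm planning a trip to Rome."))
    print(chain.predict(input="What city did I say I was visiting?"))  # answered thanks to memory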
Learning objectives:
Understanding of the theory behind LLMs
Understanding of how to use LLMs in real-world scenarios
Understanding of LLM-related concepts such as prompts, memory, and plug-ins
Understanding of the architectural framework of LLM-powered applications
Understanding of the Python code needed to embed LLMs into your applications
Background Knowledge:
Basic knowledge of Python
Bio: Valentina is a Data Science MSc graduate and Cloud Specialist at Microsoft, focusing on Analytics and AI workloads within the manufacturing and pharmaceutical industries since 2022. She has been working on customers' digital transformations, designing cloud architectures and modern data platforms, including IoT, real-time analytics, Machine Learning, and Generative AI. She is also a tech author, contributing articles on machine learning, AI, and statistics, and recently published a book on Generative AI and Large Language Models.
In her free time, she loves hiking and climbing around the beautiful Italian mountains, running, and enjoying a good book with a cup of coffee.

Valentina Alto
Title: Azure Specialist - Data and Artificial Intelligence | Microsoft
