Abstract: Unless you’ve been living under a rock for the past 6 months, you won’t have been able to avoid being bombarded with news about the latest developments in generative AI. Much of this information quickly devolved into wild speculation about the capabilities of these models, with many claiming that they are sophisticated enough to soon replace roles as diverse as writers, designers, lawyers, doctors … and even data scientists. Others have gone further, claiming that these models are showing at least some signs of artificial general intelligence or that we’re on an inevitable path to an AI apocalypse.
In this talk, we’ll cut through the hype and delve deeply into claims of artificial general intelligence in large language models (LLMs). We’ll discuss how to measure intelligence in artificial systems more systematically, and examine how current LLMs stack up against this definition. We’ll also discuss the impact that the current untempered discussions about artificial general intelligence have on the perception of what LLMs are actually capable of, and how these discussions might be causing us to overlook their genuine current use cases. By the end of this talk, you’ll see how far away we are from creating truly intelligent models, as well as the potential of such an intelligence if it could ever be developed.
* Attendees will learn about the wider context of claims that LLMs like ChatGPT and GPT-4 are showing signs of artificial general intelligence (AGI), including previous points in history where people were sure that AGI was just around the corner.
* This talk will review claims made in Microsoft's famous 2023 "Sparks of Artificial General Intelligence" paper, which posits that GPT-4 was showing "signs" of AGI based on an obscure definition of intelligence from the 1990s.
* I'll then challenge this definition of intelligence based on how modern psychology actually defines it, and present François Chollet's alternative method of measuring AGI from his 2019 paper "On the Measure of Intelligence".
* I'll present hard evidence to disprove specific claims about intelligent behaviour in GPT-4, such as that it demonstrates theory of mind and that it can solve coding puzzles.
* The talk will conclude by investigating the impact that focusing on AGI has on the perception of LLMs, such as automation bias: accepting their answers without criticism despite their well-known problems with hallucination. I'll also present current use cases that work within the limitations of these models without overextending them.
Bio: Dr. Jodie Burchell is the Developer Advocate in Data Science at JetBrains, and was previously a Lead Data Scientist at Verve Group Europe. She completed a PhD in clinical psychology and a postdoc in biostatistics before leaving academia for a career in data science. She has worked for 7 years as a data scientist in both Australia and Germany, developing a range of products including recommendation systems, analysis platforms, search engine improvements, and audience profiling. She has held a broad range of responsibilities in her career, doing everything from data analytics to maintaining machine learning solutions in production. She is a long-time content creator in data science, across conference and user group presentations, books, webinars, and posts on both her own and JetBrains' blogs.