Trevor Back
Chief Product Officer at Speechmatics

    Trevor Back is an established product leader who has successfully taken applications and services from concept to launch across industries as varied as wearables, healthcare, and automotive. He joined speech intelligence company Speechmatics in 2023 as Chief Product Officer, bringing with him over a decade of experience in machine learning and AI. A former startup founder himself, Trevor was an early DeepMind employee and was part of the team that commercialised AlphaFold, which in turn gave rise to the DeepMind spin-off Isomorphic Labs.

    All Sessions by Trevor Back

    Day 2 04/24/2024
    4:05 pm - 4:35 pm

    Fallacy of Scale

    Machine Learning

    The leaps in AI made over the last few years – particularly in LLMs – have been achieved through scale, i.e., training models on increasingly large datasets. Whether an LLM like GPT-4 can be improved with another training run is a question of scale. The current approach – training models on huge datasets – has reliably delivered impressive results. But the quality of any LLM is bounded by the size of the dataset it’s trained on, and there’s a limit to those datasets, even if they are the size of the internet. The wider industry has been calibrated to believe that more data equals better models, so we chase bigger and bigger runs and will likely see the first $1bn run within a year. But there’s a limitation to this approach – the data itself. Many language communities lack datasets of a size comparable to those in English, and even English has its limitations. The day is approaching when scale alone won’t be enough to deliver meaningful advances.

    Efficient learning is a key component of true intelligence. A focus on efficiency – learning deeper understanding from smaller datasets – is therefore becoming increasingly important as scale reaches its limits to growth. With increased efficiency, it will be possible to continue the rapid advancement of AI, and potentially to build even more capable and intelligent models as higher levels of abstraction and representation can be learned. To create the next generation of intelligent algorithms that can deliver for communities less well represented than English speakers, we will need continued progress in efficient learning mechanisms and methods.

    Learning outcomes:
    · Why a focus on algorithmic sample efficiency is required to enable further advancement in AI.
    · The advantages that efficient learning models can provide, from removing blockers to the development of theory of mind to better serving less well represented communities in speech tech.
    · Why LLMs are just the beginning, and the applications for speech technology once models can truly understand intent.
    · What the next generation of intelligent systems looks like and what it’ll take to get there.
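    The diminishing returns from scale that the talk points to can be sketched with a toy power-law loss curve, the functional form commonly reported in the scaling-law literature. All constants below are invented purely for illustration and do not describe any real model:

    ```python
    # Illustrative sketch only: loss as a power law in dataset size,
    # L(D) = irreducible + coeff / D**alpha. The constants are made up
    # for illustration, not measurements of any real system.

    def toy_loss(dataset_tokens: float, irreducible: float = 1.7,
                 coeff: float = 400.0, alpha: float = 0.34) -> float:
        """Toy loss curve: floor plus a term that shrinks with more data."""
        return irreducible + coeff / (dataset_tokens ** alpha)

    if __name__ == "__main__":
        for tokens in [1e9, 1e10, 1e11, 1e12, 1e13]:
            gain = toy_loss(tokens / 10) - toy_loss(tokens)
            print(f"{tokens:.0e} tokens: loss={toy_loss(tokens):.3f} "
                  f"(improvement from the last 10x of data: {gain:.3f})")
    ```

    Under this toy model, each tenfold increase in data buys a strictly smaller loss reduction while the cost of the run keeps growing – the intuition behind the claim that scale alone eventually stops delivering meaningful advances, and that sample-efficient learning has to pick up the slack.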

    Open Data Science
    One Broadway
    Cambridge, MA 02142
