

Discover How to Generate the Future with AI
Want to keep up with the latest AI developments, trends, and insights? Dealing with the build or buy dilemma to grow your business? Seeking to interact with data-obsessed peers and build your network?
Look no further: The ODSC AI Expo & Demo Hall is the destination for you
Expo Hall Topics
Partner sessions offer compelling insights on how to make data science and AI work for your industry. Here are some of the topics you can expect at the AI Expo & Demo Hall. → Registration coming soon
In-person | Demo Talk | All Levels
Once upon a time we had the Data Warehouse. Life was good, but it had its limitations, particularly around loading and storing complex data types. As data grew larger and more varied, the warehouse became too rigid and opinionated.
So we dove headfirst into Data Lakes to store our data. Again, things were good, but we missed some of the comforts the Data Warehouse had given us. The lake had become too flexible; we needed stability in our lives. In particular, we needed ACID (Atomicity, Consistency, Isolation, and Durability) transactions.
Delta Lake, hosted by the Linux Foundation, is an open-source file layout protocol that gives us back those good times while retaining all of the flexibility of the lake. Delta has gone from strength to strength, and in 2022 Databricks open-sourced the entire codebase, including many advanced features that were previously Databricks-only.
This session will take you from the absolute basics of using Delta within a Lake through to some of its more advanced engineering features (a short illustrative sketch follows the list below), including:
• Handling Schema Drift
• Applying Constraints
• Time-Travel & Management
• Optimize & Performance Tuning
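To make these ideas concrete, here is a minimal sketch of the kind of Delta Lake operations the session covers, using the open-source delta-spark package with PySpark. The table path and toy data are illustrative, not part of the session materials.

```python
# Minimal Delta Lake sketch (assumes: pip install pyspark delta-spark); path and data are illustrative.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/demo_delta_table"

# ACID write: create the table
spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"]) \
    .write.format("delta").mode("overwrite").save(path)

# Schema evolution: append a DataFrame that carries an extra column
spark.createDataFrame([(3, "c", "new")], ["id", "value", "note"]) \
    .write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as it was at version 0
spark.read.format("delta").option("versionAsOf", 0).load(path).show()
```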
Anna is a veteran software & data engineer and a Microsoft Data Platform MVP with over 17 years of experience. She has tackled projects ranging from real-time analytics with Scala & Kafka to building out Data Lakes with Spark and applying engineering rigour to Data Science. She is a senior consultant with Advancing Analytics, helping to shape & evolve their data engineering practice. Anna has a real passion for data and strives to bring the worlds of Software Development and Data Science closer together. Her other areas of interest include UX, Agile methodologies, and helping to organize and run local Code Clubs.
In-person | Demo Talk | All Levels
Modeling time series data is difficult because of its sheer volume and constantly evolving nature. Existing techniques have limitations in scalability, agility, explainability, and accuracy; despite 50 years of research, they often fall short when applied to time series data. The Tangent Information Modeler (TIM) offers a game-changing approach with efficient and effective feature engineering based on Information Geometry. This multivariate modeling co-pilot can handle a wider range of time series use cases with award-winning results and incredible performance.
Philip Wauters is Customer Success Manager and Value Engineer at Tangent Works, working on practical applications of time series machine learning with customers from various industries, such as Siemens, BASF, Borealis, and Volkswagen. With a commercial background and experience in data engineering, analysis, and data science, his goal is to find and extract the business value in the enormous amounts of time series data that exist at companies today.
Demo Talk | Virtual
In this session, we will hear from Continental Tire about their journey toward implementing MLOps since 2015. We will explore how they enable data scientists from diverse backgrounds to easily build models with the languages, frameworks, and tools they are comfortable with.
The session will delve into the challenges faced by Continental Tire’s data science teams, and the strategies they have used to address them. Additionally, the session will cover important considerations for those starting on their MLOps journey, including what to keep in mind when building infrastructure and workflows for data science projects.
The session will conclude with a demo and overview of the Valohai platform, which has been used by Continental Tire to streamline their MLOps workflows.
Demo Talk | In-person | All Levels
Join this demo to find out how to centralize your ML pipeline and cut down operational complexity at each stage along the way. Qwak's platform supports multiple use cases across any business vertical and allows data teams to productionize their models more efficiently, without depending on engineering resources.
Join us to see how to create features from data and build, train, and deploy models into production, all on a single platform and with unprecedented simplicity.
Pavel Klushin is a seasoned solution architecture expert who currently leads the function at Qwak. With years of experience in the technology industry, he is known for his exceptional ability to design and deliver innovative solutions that meet the specific needs of his clients. Pavel previously led the solution architecture team at Spot (acquired by NetApp).
In-person | Demo Talk | All Levels
In this informative session, we invite you to delve into the world of MLOps and explore the intricacies of managing large-scale machine learning experiments, ensuring data lineage, orchestrating efficient pipelines, and deploying models into production. Join us for a practical demonstration of the Valohai MLOps platform, which simplifies and streamlines the entire MLOps lifecycle.
Toni Perämäki is the Chief Operating Officer of Valohai, a globally recognized MLOps platform dedicated to automating machine learning workflows. With an academic background in software engineering and economics, he brings a strong blend of technical and business acumen to the table. An advocate of the ‘giving forward’ principle, Toni generously offers pro-bono support to young entrepreneurs, sharing his wealth of experience and knowledge. He understands the challenges faced by emerging startups and is committed to empowering the next generation of leaders. Beyond his professional endeavors, Toni is a passionate sailor and a jiu-jitsu enthusiast. These pursuits reflect his philosophy of balance, discipline, and resilience, which he seamlessly applies to his leadership role at Valohai.
In-person | Demo Talk | All Levels
Graph AI can achieve state-of-the-art results on many machine learning tasks involving relational data. One of them is recommendation, which powers many services such as content streaming, shopping, and social media. Approaches that treat data points as discrete are limited by definition, while analysing interconnections is fundamental to understanding complex interactions and behaviours. Our customer-centric approach lets you create a holistic view of the customer from different perspectives. With our solution, a Graph AI platform with explainability at its core, you can build a recommendation engine powered by connected data to provide better recommendations. We will show, step by step, how the user can interact with the platform to gain new insights and better understand the customer behaviour and preferences that are the basis for recommending better content. The platform also explains each recommendation, which is fundamental to building better and more trustworthy models.
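As a rough illustration of the connected-data idea (not the Galileo.XAI platform or its API), the toy sketch below scores unseen items for a user by how often the user's graph neighbours interacted with them, using the networkx library; graph and names are invented for the example.

```python
# Toy graph-based recommendation sketch (assumes: pip install networkx); data is illustrative.
import networkx as nx

# Bipartite interaction graph: users connect to the content they consumed
G = nx.Graph()
G.add_edges_from([
    ("alice", "film_A"), ("alice", "film_B"),
    ("bob", "film_A"), ("bob", "film_C"),
    ("carol", "film_B"), ("carol", "film_C"), ("carol", "film_D"),
])

def recommend(user: str, k: int = 2) -> list[str]:
    """Rank unseen items by how many of the user's peers (co-viewers) interacted with them."""
    seen = set(G[user])
    peers = {p for item in seen for p in G[item] if p != user}
    scores: dict[str, int] = {}
    for peer in peers:
        for item in G[peer]:
            if item not in seen:
                scores[item] = scores.get(item, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['film_C', 'film_D'] on this toy graph
```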
Alberto De Lazzari is Chief Scientist at LARUS, where he leads research and development in artificial intelligence and network science and maintains collaborations with various Italian universities. Over the last 10 years he has worked in different areas, from the automotive and management sectors to banking and insurance, and has experience in IT process internalization and digital transformation projects.
Virtual | Demo Talk | All Levels
TimeXtender provides all the features you need to build a future-proof infrastructure for ingesting, transforming, modelling, and delivering clean, reliable data in the fastest, most efficient way possible. You can’t optimize for everything all at once. That’s why we take a holistic approach to data integration that optimizes for agility, not fragmentation. By unifying each layer of the data stack, TimeXtender empowers you to build data solutions 10x faster, while reducing your costs by 70%-80%.
In this session, we will show you how to:
- Gather data in raw form and structure for use in advanced analytics and AI
- Create a modern data warehouse with access to improved data
- Build semantic models for self-service analytics
- Document your analytics data for compliance
Chris is a specialist in business analysis, system infrastructure, management information, business intelligence, hardware and software implementation, and project management. As a Business Analyst, he has worked with blue-chip and global organisations such as Imperial College Hospital, Nottingham University Teaching Hospital, Smith & Nephew, St Andrews University, Chelsea FC, and The Super League.
Today, Chris is a lead specialist at TimeXtender, showing businesses a better way to work with data by building modern data estates for analytics and AI applications.
In-person | Demo Talk | All Levels
In this demo, we will showcase the use of our newest Applied ML Prototype (AMP), which demonstrates how to use an open-source, pre-trained, instruction-following LLM (Large Language Model) to build a chatbot-like web application. By leveraging a Vector Database populated with relevant documentation for context retrieval, the application enhances the LLM's responses, creating a subject-matter-expert chatbot. All components run within Cloudera Machine Learning (CML), eliminating the need for external model APIs or additional LLM training. Attendees will see how this Retrieval Augmented Generation (RAG) approach improves response accuracy for industry-specific use cases.
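For readers unfamiliar with the pattern, here is a generic, minimal RAG sketch (not the Cloudera AMP itself): documents are embedded with sentence-transformers, the closest ones are retrieved for a question, and a placeholder generate() stands in for the instruction-following LLM. Model name and documents are illustrative.

```python
# Minimal generic RAG sketch (assumes: pip install sentence-transformers numpy); not Cloudera-specific.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Delta Lake adds ACID transactions to data lakes.",
    "CML lets teams run models without external APIs.",
    "RAG retrieves context documents before generation.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)  # stands in for a vector database

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def generate(prompt: str) -> str:
    """Placeholder for an instruction-following LLM call (e.g. a locally hosted open-source model)."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

question = "What does RAG do?"
context = "\n".join(retrieve(question))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```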
With over 10 years of experience in data management and advanced analytics products, Peter Ableda serves as Director of Product Management for Cloudera Machine Learning at Cloudera. Peter holds a Master of Science degree in Computer Science from the Budapest University of Technology and is an 8-year veteran of Cloudera, recognized across the industry for his work managing big data technology products and cutting-edge data-driven applications for high-growth organizations. https://www.linkedin.com/in/peterableda/?originalSubdomain=hu
In-person | Demo Talk | All Levels
In the Python open-source ecosystem, many packages are available that cater to:
– the building of great algorithms
– the visualization of data
Despite this, over 85% of Data Science pilots remain pilots and never make it to production. With Taipy, a new open-source Python framework, Data Scientists and Python Developers can build great pilots as well as stunning production-ready applications for end users. Taipy provides two independent modules: Taipy GUI and Taipy Core (a minimal GUI sketch follows the list below).
In this talk, we will demonstrate how:
1. Taipy GUI goes way beyond the capabilities of the standard graphical stack: Gradio, Streamlit, Dash, etc.
2. Taipy Core fills a void in the standard Python back-end stack.
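As a flavour of what Taipy GUI code looks like, here is a minimal sketch (assuming `pip install taipy`): the page is written in Taipy's Markdown-based visual element syntax and bound to plain Python variables. It is an illustration only, not part of the speakers' demo.

```python
# Minimal Taipy GUI sketch: a slider bound to a Python variable (assumes: pip install taipy)
from taipy.gui import Gui

value = 10  # plain Python state, bound into the page below

page = """
# Tiny Taipy demo
Pick a value: <|{value}|slider|min=0|max=100|>

You picked: <|{value}|text|>
"""

Gui(page).run()  # serves the app locally in the browser
```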
Alexandre worked in Amazon Business Intelligence. He developed Pyflow, a graph-based interactive Python editor (1.2k GitHub stars). He is skilled in MLOps, Data Engineering, and Python, and holds a Master of Engineering from CentraleSupélec, University of Paris-Saclay.
Florian Jacta is a specialist in Taipy, a low-code open-source Python package that enables any Python developer to easily build production-ready AI applications; he handles the package's pre-sales and after-sales functions. He has worked as a Data Scientist for Groupe Les Mousquetaires (Intermarche) and ATOS, developing several predictive models as part of strategic AI projects. Florian holds a master's degree in Applied Mathematics from INSA, with a major in Data Science and Mathematical Optimization.
Last Chance To Join
Save 30% on Full Price
Visionaries and Thought Leaders
With an AI Expo Pass, you can take advantage of 40+ demo sessions and ODSC Keynotes on the virtual platform. Our Speakers will provide compelling insights on how to make data science and AI work for your industry.
Past Keynote Speakers
Previous ODSC Europe Expo Speakers

Philip Wauters
Philip Wauters is Customer Success Manager and Value Engineer at Tangent Works, working on practical applications of time series machine learning with customers from various industries, such as Siemens, BASF, Borealis, and Volkswagen. With a commercial background and experience in data engineering, analysis, and data science, his goal is to find and extract the business value in the enormous amounts of time series data that exist at companies today.
Learn how to Efficiently Build and Operationalize Time Series Models in 2023 (Workshop)
Demo Talk: The Tangent Information Modeler, time series modeling reinvented
Abstract:
Modeling time series data is difficult because of its sheer volume and constantly evolving nature. Existing techniques have limitations in scalability, agility, explainability, and accuracy; despite 50 years of research, they often fall short when applied to time series data. The Tangent Information Modeler (TIM) offers a game-changing approach with efficient and effective feature engineering based on Information Geometry. This multivariate modeling co-pilot can handle a wider range of time series use cases with award-winning results and incredible performance.
During this demo session we will showcase how best-in-class and very transparent time series models can be built with just one iteration through the data. We will cover several concrete use cases for advanced time series forecasting, anomaly detection and root cause analysis.

Anna-Maria Wykes
Anna is a veteran software & data engineer and a Microsoft Data Platform MVP with over 17 years of experience. She has tackled projects ranging from real-time analytics with Scala & Kafka to building out Data Lakes with Spark and applying engineering rigour to Data Science. She is a senior consultant with Advancing Analytics, helping to shape & evolve their data engineering practice. Anna has a real passion for data and strives to bring the worlds of Software Development and Data Science closer together. Her other areas of interest include UX, Agile methodologies, and helping to organize and run local Code Clubs.
Demo Session Title: DeltaLake – Enabling Open Source Lakehouses
Abstract: Once upon a time we had the Data Warehouse. Life was good, but it had its limitations, particularly around loading and storing complex data types. As data grew larger and more varied, the warehouse became too rigid and opinionated.
So we dove headfirst into Data Lakes to store our data. Again, things were good, but we missed some of the comforts the Data Warehouse had given us. The lake had become too flexible; we needed stability in our lives. In particular, we needed ACID (Atomicity, Consistency, Isolation, and Durability) transactions.
Delta Lake, hosted by the Linux Foundation, is an open-source file layout protocol that gives us back those good times while retaining all of the flexibility of the lake. Delta has gone from strength to strength, and in 2022 Databricks open-sourced the entire codebase, including many advanced features that were previously Databricks-only.
This session will take you from the absolute basics of using Delta within a Lake through to some of its more advanced engineering features, including:
• Handling Schema Drift
• Applying Constraints
• Time-Travel & Management
• Optimize & Performance Tuning

Alexandre Sajus
Alexandre worked in Amazon Business Intelligence. He developed Pyflow, a graph-based interactive Python editor (1.2k GitHub stars). He is skilled in MLOps, Data Engineering, and Python, and holds a Master of Engineering from CentraleSupélec, University of Paris-Saclay.
How to Build Stunning Data Science Web Applications in Python – Taipy Tutorial
Demo Session Title: Turning your Data/AI algorithms into full web apps in no time with Taipy
Abstract:
In the Python open-source ecosystem, many packages are available that cater to:
– the building of great algorithms
– the visualization of data
Despite this, over 85% of Data Science pilots remain pilots and never make it to production. With Taipy, a new open-source Python framework, Data Scientists and Python Developers can build great pilots as well as stunning production-ready applications for end users. Taipy provides two independent modules: Taipy GUI and Taipy Core.
In this talk, we will demonstrate how:
1. Taipy GUI goes way beyond the capabilities of the standard graphical stack: Gradio, Streamlit, Dash, etc.
2. Taipy Core fills a void in the standard Python back-end stack.

Florian Jacta
Florian Jacta is a specialist in Taipy, a low-code open-source Python package that enables any Python developer to easily build production-ready AI applications; he handles the package's pre-sales and after-sales functions. He has worked as a Data Scientist for Groupe Les Mousquetaires (Intermarche) and ATOS, developing several predictive models as part of strategic AI projects. Florian holds a master's degree in Applied Mathematics from INSA, with a major in Data Science and Mathematical Optimization.
How to Build Stunning Data Science Web Applications in Python – Taipy Tutorial (Workshop)
Bringing AI to Retail and Fast Food with Taipy's Applications (Track Keynote)
Demo Session Title: Turning your Data/AI algorithms into full web apps in no time with Taipy
Abstract:
In the Python open-source ecosystem, many packages are available that cater to:
– the building of great algorithms
– the visualization of data
Despite this, over 85% of Data Science pilots remain pilots and never make it to production. With Taipy, a new open-source Python framework, Data Scientists and Python Developers can build great pilots as well as stunning production-ready applications for end users. Taipy provides two independent modules: Taipy GUI and Taipy Core.
In this talk, we will demonstrate how:
1. Taipy GUI goes way beyond the capabilities of the standard graphical stack: Gradio, Streamlit, Dash, etc.
2. Taipy Core fills a void in the standard Python back-end stack.

Alberto De Lazzari
Alberto De Lazzari is Chief Scientist at LARUS, where he leads research and development in artificial intelligence and network science and maintains collaborations with various Italian universities. Over the last 10 years he has worked in different areas, from the automotive and management sectors to banking and insurance, and has experience in IT process internalization and digital transformation projects.
Demo Session Title: Building Next-gen Recommendation Systems with Galileo.XAI
Abstract:
Graph AI can achieve state-of-the-art results on many machine learning tasks involving relational data. One of them is recommendation, which powers many services such as content streaming, shopping, and social media. Approaches that treat data points as discrete are limited by definition, while analysing interconnections is fundamental to understanding complex interactions and behaviours. Our customer-centric approach lets you create a holistic view of the customer from different perspectives. With our solution, a Graph AI platform with explainability at its core, you can build a recommendation engine powered by connected data to provide better recommendations. We will show, step by step, how the user can interact with the platform to gain new insights and better understand the customer behaviour and preferences that are the basis for recommending better content. The platform also explains each recommendation, which is fundamental to building better and more trustworthy models.

Chris Butcher
Chris is a specialist in business analysis, system infrastructure, management information, business intelligence, hardware and software implementation, and project management. As a Business Analyst, he has worked with blue-chip and global organisations such as Imperial College Hospital, Nottingham University Teaching Hospital, Smith & Nephew, St Andrews University, Chelsea FC, and The Super League.
Today, Chris is a lead specialist at TimeXtender, showing businesses a better way to work with data by building modern data estates for analytics and AI applications.
Demo Session Title: Build a Modern Data Estate in 15 Minutes
Abstract:
TimeXtender provides all the features you need to build a future-proof infrastructure for ingesting, transforming, modelling, and delivering clean, reliable data in the fastest, most efficient way possible. You can’t optimize for everything all at once. That’s why we take a holistic approach to data integration that optimizes for agility, not fragmentation. By unifying each layer of the data stack, TimeXtender empowers you to build data solutions 10x faster, while reducing your costs by 70%-80%.
In this session, we will show you how to:
- Gather data in raw form and structure for use in advanced analytics and AI
- Create a modern data warehouse with access to improved data
- Build semantic models for self-service analytics
- Document your analytics data for compliance
Europe 2022 AI Expo Hall Schedule
We are delighted to announce our Europe 2022 Preliminary Schedule! More sessions coming soon!
Past ODSC Europe Partners
ODSC is proud to partner with numerous industry leaders providing organizations with the tools to accelerate digital transformation with AI. You can reach out to our Expo partners prior to the event for more information. → Registration coming soon
Interested in Partnering with ODSC?
Last year, ODSC welcomed nearly 20,000 attendees to an unparalleled range of events, from large conferences to hackathons and small community gatherings.
Who Should Attend?
The AI Expo & Demo Hall gathers executives, business professionals, experts, and data scientists who are transforming the enterprise with Artificial Intelligence.
Business Leaders and Executives: Chief Data Scientists, Chief AI Officers, CDO, CIO, CTO, VPs of Engineering, R&D, Marketing, Business Development, Product, Development, Data
Directors of Data Science: Data Analytics Managers, Heads of Data and Innovation; Software, IT, and Product Managers
Data Science Professionals: Data Scientists, Data Engineers, Data Analysts, Architects, ML and DL Experts, Database Admins
Software Development Experts: Software Architects, Engineers, and Developers
ARE YOU AN EARLY-STAGE STARTUP?
Past Companies in Attendance
Connect with like-minded professionals to learn about the latest languages, tools, and frameworks in data science and AI. Here's a sampling of companies that have attended past ODSC events.
ODSC Newsletter
Stay current with the latest news and updates in open source data science. In addition, we’ll inform you about our many upcoming Virtual and in-person events in Boston, NYC, Sao Paulo, San Francisco, and London. And keep a lookout for special discount codes, only available to our newsletter subscribers!
Virtual AI Expo & Demo Hall
HEAR FROM THE BRIGHTEST MINDS IN THE FIELD DURING KEYNOTES, TALKS, AND PRODUCT DEMOS