Responsible AI and Social Good
The Responsible AI track brings together top data ethicists to provide a practical, ethical framework for technologists to develop machine learning systems.
Using case studies and existing frameworks, we’ll give you the tools to build out your own ethical approach to realize the best outcomes while deploying machine learning in the real world.
You will learn to responsibly design human-in-the-loop review processes, monitor bias, build trust and transparency, and develop explainable machine learning systems that ensure data and model security.
What You'll Learn
Talks + Workshops + Special Events on these topics:
AI Ethics and Bias
Federated Analytics
Federated Learning for User Privacy
AI for Climate Change and Social Good
Reproducibility
Explainable AI: heatmap-based explanations
Explainable AI: human in the loop
Uncertainty in AI
Fairness in Machine Learning
Algorithmic Decision Making
Fairness in Predictive Modeling
and more…
Some of Our Past Responsible AI Speakers

Meg Kurdziolek, PhD
Meg is currently the Lead UXR for Intrinsic.ai, where she focuses her work on making it easier for engineers to adopt and automate with industrial robotics. She is a “Xoogler,” and prior to Intrinsic worked on the Explainable AI services on Google Cloud. Meg has had a varied career working for start-ups and large corporations alike, and she has published on topics such as user research, information visualization, educational-technology design, voice user interface (VUI) design, explainable AI (XAI), and human-robot interaction (HRI). Meg is also a proud alumna of Virginia Tech, where she received her Ph.D. in Human-Computer Interaction.

David Talby, PhD
David Talby is the Chief Technology Officer at John Snow Labs, helping companies apply artificial intelligence to solve real-world problems in healthcare and life science. David is the creator of Spark NLP – the world’s most widely used natural language processing library in the enterprise.
He has extensive experience building and running web-scale software platforms and teams – in startups, at Microsoft’s Bing in the US and Europe, and scaling Amazon’s financial systems in Seattle and the UK.
David holds a Ph.D. in Computer Science and Master’s degrees in both Computer Science and Business Administration. He was named USA CTO of the Year by the Global 100 Awards and GameChangers Awards in 2022.

Jordan Boyd-Graber, PhD
Jordan is an associate professor in the University of Maryland Computer Science Department (tenure home), Institute of Advanced Computer Studies, iSchool, and Language Science Center. Previously, he was an assistant professor at Colorado’s Department of Computer Science (tenure granted in 2017). He was a graduate student at Princeton with David Blei.
His research focuses on making machine learning more useful, more interpretable, and able to learn from and interact with humans. This helps users sift through decades of documents; discover when individuals lie, reframe, or change the topic in a conversation; or compete against humans in games based on natural language.
If We Want AI to be Interpretable, We Need to Measure Interpretability (Talk)

Tom Shafer, PhD
Tom Shafer works as a Lead Data Scientist at Elder Research, a recognized leader in data science, machine learning, and artificial intelligence consulting since its founding in 1995. As a lead scientist, Tom contributes technically to a wide variety of projects across the company, mentors data scientists, and helps to direct the company’s technical vision. His current interests focus on Bayesian modeling, interpretable ML, and data science workflow. Before joining Elder Research, Tom completed a PhD in Physics at the University of North Carolina, modeling nuclear radioactive decays using high-performance computing.
Beyond Credit Scoring: Interpretable Models for Responsible Machine Learning (Talk)

Noah Giansiracusa, PhD
Noah Giansiracusa (PhD in math from Brown University) is a tenured associate professor of mathematics and data science at Bentley University, a business school near Boston. His research interests range from algebraic geometry to machine learning to empirical legal studies. After publishing the book How Algorithms Create and Prevent Fake News in July 2021, Noah has gotten more involved in public writing and policy discussions concerning data-driven algorithms and their role in society. He’s written op-eds for Barron’s, Boston Globe, Wired, Slate, and Fast Company and is currently working on a second book, Robin Hood Math: How to Fight Back When the World Treats You Like a Number, with a Foreword by Nobel Prize-winning economist Paul Romer.
Deepfakes: How’re They Made, Detected, and How They Impact Society (Tutorial)
See all our talks and hands-on workshops and training sessions
You Will Meet
Top speakers and practitioners in Machine Learning and Deep Learning with an interest in Responsible AI
Data Scientists, AI Experts, Data Engineers, and Machine Learning Engineers
Decision-makers, team leads, and other influencers
Data Science and AI innovators looking to make an impact with responsible AI
CEOs, CTOs, CIOs, and other C-suite executives seeking insights on responsible AI
Software Developers focused on Machine Learning and Deep Learning
Industry leaders, business practitioners, and product developers seeking to understand trust and privacy in AI
Core contributors in the fields of Machine Learning and Deep Learning
Citizen data scientists and data science enthusiasts
Why Attend
Accelerate and broaden your knowledge of key areas in Responsible AI
With numerous introductory-level workshops, get hands-on experience to quickly build up your skills
Post-conference, get online access to 100+ high-quality recorded sessions so you can review content at your own pace
Take time out of your busy schedule to accelerate your knowledge of the latest advances in data science
Learn directly from world-class instructors who are the authors and contributors to many of the tools and languages used in data science today
Meet hiring companies, ranging from hot startups to Fortune 500, looking to hire professionals with data science skills at all levels
Get speaker insights and training in AI frameworks such as TensorFlow, MXNet, PyTorch, Spark, Storm, Drill, Keras, and other AI platforms
Connect with peers and top industry professionals at our many networking events to discover your next job, service, product, or startup.
Who should attend
The Responsible AI track is where the industry’s top creative minds gather to discuss and shape solutions to AI’s most challenging social problems. Whether you are an expert or just starting your journey, this is the conference for you.
Data scientists looking to build an understanding of ethical intelligent machines
Data scientists seeking to investigate and define potential adverse biases and effects, mitigation strategies, fairness objectives and validation of fairness
Anyone interested in understanding areas such as fairness, safety, privacy and transparency in artificial intelligence and data
Business professionals and industry experts looking to understand data science ethics in practice
Software engineers and technologists who need to develop algorithms to solve fundamental algorithmic fairness problems
CTOs, CDSs, and other managerial roles that require a bigger-picture view of data science
Technologists in the field of AI Fairness and others looking to learn mitigation strategies, algorithmic advances, fairness objectives, and validation of fairness
Students and academics looking for more practical applied training in data science tools and techniques
ODSC EAST 2024 - April 23-25
Register your interest
ODSC Newsletter
Stay current with the latest news and updates in open source data science. We’ll also keep you informed about our many upcoming virtual and in-person events in Boston, NYC, São Paulo, San Francisco, and London. And keep a lookout for special discount codes, available only to our newsletter subscribers!