FOCUS AREA OVERVIEW
Pause for a moment to consider how many machine learning models are trained on crowdsourced data from social media and other web sources, and how easy it is to poison that training data. This is one of the many threats raised when assessing machine learning safety. Driven by concerns around foundation models, autonomous systems, and large-scale deployments, ML Safety is quickly becoming a key topic encompassing many areas of AI and ML. Adversarial attacks, backdoor model vulnerabilities, real-world deployment tail risks, risk monitoring, and strengthening model defenses are a few of the topics that fall under the Machine Learning Safety umbrella.
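To make the poisoning risk concrete, here is a minimal, self-contained sketch (not from the conference material; the toy dataset and the nearest-centroid classifier are illustrative assumptions) showing how flipping just a few crowdsourced labels can degrade a model trained on them:

```python
# Toy illustration of label-flipping data poisoning: train a
# nearest-centroid classifier on clean vs. poisoned labels.
# All data here is synthetic.

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (x, label) pairs with labels 0 or 1
    c0 = centroid([x for x, y in data if y == 0])
    c1 = centroid([x for x, y in data if y == 1])
    return c0, c1

def predict(model, x):
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

# Clean training data: class 0 clusters near 0, class 1 near 10.
clean = [(i, 0) for i in range(5)] + [(10 + i, 1) for i in range(5)]
test = [(2, 0), (3, 0), (11, 1), (12, 1)]

# An attacker flips the labels on a handful of training points.
poisoned = [(x, 1 - y) if x in (0, 1, 13, 14) else (x, y)
            for x, y in clean]

clean_acc = accuracy(train(clean), test)        # perfect on clean data
poisoned_acc = accuracy(train(poisoned), test)  # collapses after poisoning
```

Flipping only four labels drags both class centroids toward each other until they swap sides of the decision boundary, which is why poisoning defenses and training-data provenance are recurring ML Safety themes.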
ODSC East is one of the first applied data science and machine learning conferences to address this fast-trending topic.
TOPICS YOU'LL LEARN
Transparency & Explainability in Machine Learning
Differential Privacy & Federated Learning
Cybersecurity and Machine Learning
Identifying Bias in Machine Learning
Data Privacy and Confidentiality
Safe Machine Learning & Deep Learning
Safe Autonomous Systems Control
Ethical and Legal Consequences of Unsafe Machine Learning
Engineering Safety in Machine Learning
Identifying & Fixing Vulnerabilities in Machine Learning
Reliability in Critical Machine Learning Systems
Security Risks in Machine Learning and Deep Learning
Data Poisoning Attacks in Machine Learning
Identifying Backdoor Attacks on Machine Learning
Deep Learning and Adversarial Attacks
Adversarial Attacks on Autonomous Systems
Understanding Transfer Learning Attacks
Using Machine Learning to Detect Malicious Activity
Some of Our Past Machine Learning Safety & Security Speakers

Aric LaBarr, PhD
A Teaching Associate Professor in the Institute for Advanced Analytics at NC State University, Dr. Aric LaBarr is passionate about helping people solve challenges with their data. There, at the nation’s first Master of Science in Analytics degree program, he helps design an innovative curriculum that prepares a modern workforce to communicate wisely and handle a data-driven future. He teaches courses in predictive modeling, forecasting, simulation, financial analytics, and risk management. Previously, he was Director and Senior Scientist at Elder Research, where he mentored and led a team of data scientists and software engineers. As director of the Raleigh, NC office, he worked closely with clients and partners to solve problems in banking, consumer packaged goods, healthcare, and government. Dr. LaBarr holds a B.S. in economics, as well as a B.S., M.S., and Ph.D. in statistics, all from NC State University.

Alexandra Ebert
Alexandra Ebert is a Responsible AI, synthetic data, and privacy expert who serves as Chief Trust Officer at MOSTLY AI. As a member of the company’s senior leadership team, she is engaged in public policy issues in the emerging field of synthetic data and Ethical AI and is responsible for engaging with the privacy community, regulators, the media, and customers. She regularly speaks at international conferences on AI, privacy, and digital banking and hosts The Data Democratization Podcast, where she discusses emerging digital policy trends as well as Responsible AI and privacy best practices with regulators, policy experts, and senior executives.
Apart from her work at MOSTLY AI, she serves as the chair of the IEEE Synthetic Data IC expert group and was pleased to be invited to join the group of AI experts for the #humanAIze initiative, which aims to make AI more inclusive and accessible to everyone.
Before joining the company, she researched GDPR’s impact on the deployment of artificial intelligence in Europe and its economic, societal, and technological consequences. Besides being an advocate for privacy protection, Alexandra is deeply passionate about Ethical AI and ensuring the fair and responsible use of machine learning algorithms. She is the co-author of an ICLR paper and a popular blog series on fairness in AI and fair synthetic data, which was featured in Forbes, IEEE Spectrum, and by distinguished AI expert Andrew Ng.
When Privacy Meets AI – Your Kick-Start Guide to Machine Learning with Synthetic Data (Tutorial)

Jordan Boyd-Graber, PhD
Jordan is an associate professor in the University of Maryland Computer Science Department (tenure home), Institute for Advanced Computer Studies, iSchool, and Language Science Center. Previously, he was an assistant professor in the University of Colorado’s Department of Computer Science (tenure granted in 2017). He was a graduate student at Princeton with David Blei.
His research focuses on making machine learning more useful, more interpretable, and able to learn from and interact with humans. This helps users sift through decades of documents; discover when individuals lie, reframe, or change the topic in a conversation; or compete against humans in games based in natural language.
If We Want AI to be Interpretable, We Need to Measure Interpretability (Talk)

Sagar Samtani, PhD
Dr. Sagar Samtani is an Assistant Professor and Grant Thornton Scholar in the Department of Operations and Decision Technologies at Indiana University. He earned his Ph.D. from the AI Lab at the University of Arizona. His research interests are in AI for cybersecurity: developing deep learning approaches for cyber threat intelligence, vulnerability assessment, open-source software, AI risk management, and Dark Web analytics. He has received funding from NSF’s SaTC, CICI, and SFS programs and has published over 40 peer-reviewed articles in leading information systems, machine learning, and cybersecurity venues. He is deeply involved with industry, serving on the Board of Directors for the DEFCON AI Village and the Executive Advisory Council for the CompTIA ISAO.

Moez Ali
Innovator, technologist, and data scientist turned product manager with a proven track record of building and scaling data products, platforms, and communities. Experienced in building and leading teams of data scientists, data engineers, and product managers, and a strongly opinionated tech visionary and thought partner to C-level leadership.
Moez Ali is the inventor and creator of PyCaret, an open-source, low-code machine learning library in Python. PyCaret is ranked in the top 1%, with 8M+ downloads, 7K+ GitHub stars, 100+ contributors, and 1,000+ citations.
Moez is globally recognized for his open-source work on PyCaret. He is a keynote speaker and one of the top ten most-read writers in the field of artificial intelligence. He teaches AI and ML courses at Cornell University (NY) and Queen’s University (Canada), and is currently building the world’s first hyper-focused Data and ML Platform.
Automate Machine Learning Workflows with PyCaret 3.0 (Workshop)
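As a rough illustration of the “train several candidate models, score them on a holdout set, keep the best” workflow that low-code tools like PyCaret automate behind a single call, here is a stdlib-only sketch (the two toy models and the dataset below are illustrative stand-ins, not PyCaret’s actual API):

```python
# Minimal model-comparison loop: fit each candidate on training data,
# score on a holdout set, and select the best performer.
# All models and data are toy stand-ins for illustration.

def majority_model(train):
    # Always predicts the most common training label.
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    return lambda x: guess

def threshold_model(train):
    # Predicts 1 when x exceeds the mean of the training inputs.
    mid = sum(x for x, _ in train) / len(train)
    return lambda x: 1 if x > mid else 0

def score(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

train_data = [(1, 0), (2, 0), (3, 0), (8, 1), (9, 1)]
holdout = [(2, 0), (8, 1)]

candidates = {"majority": majority_model, "threshold": threshold_model}
fitted = {name: fit(train_data) for name, fit in candidates.items()}
scores = {name: score(m, holdout) for name, m in fitted.items()}
best = max(scores, key=scores.get)
```

Low-code AutoML libraries extend this same pattern with dozens of real estimators, cross-validation, and preprocessing, which is what collapses a full workflow into a few lines of user code.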

Dan Shiebler
As the Head of Machine Learning at Abnormal Security, Dan builds cybercrime detection algorithms to keep people and businesses safe. Before joining Abnormal, Dan worked at Twitter, first as an ML researcher working on recommendation systems and then as the head of web ads machine learning. Before Twitter, Dan built smartphone sensor algorithms at TrueMotion and computer vision systems at the Serre Lab.

Tom Shafer, PhD
Tom Shafer works as a Lead Data Scientist at Elder Research, a recognized leader in data science, machine learning, and artificial intelligence consulting since its founding in 1995. As a lead scientist, Tom contributes technically to a wide variety of projects across the company, mentors data scientists, and helps to direct the company’s technical vision. His current interests focus on Bayesian modeling, interpretable ML, and data science workflow. Before joining Elder Research, Tom completed a PhD in Physics at the University of North Carolina, modeling nuclear radioactive decays using high-performance computing.
Beyond Credit Scoring: Interpretable Models for Responsible Machine Learning (Talk)

Andras Zsom, PhD
Andras Zsom is an Assistant Professor of the Practice and Director of Graduate Studies at the Data Science Initiative at Brown University in Providence, RI. He teaches two mandatory courses in the data science master’s program and helps students navigate their studies and curriculum. He also supervises interns on research projects related to missing data, interpretability, and developing machine learning pipelines.
You Will Meet
Top speakers and practitioners in Machine Learning Safety
Data Scientists, Machine Learning Engineers, and AI Experts interested in risk in AI
Business professionals who want to understand safe machine learning
Core contributors in the fields of Machine Learning and Deep Learning
Software Developers focused on building safe machine learning and deep learning
Technologists seeking to better understand AI and machine learning risks and vulnerabilities
CEOs, CTOs, CIOs, and other C-suite decision makers
Data Science Enthusiasts
Why Attend?
Immerse yourself in talks, tutorials, and workshops on Machine Learning and Deep Learning tools, topics, models and advanced trends
Expand your network and connect with like-minded attendees to discover how Machine Learning and Deep Learning knowledge can transform not only your data models but also your business and career
Meet and connect with the core contributors and top practitioners in the expanding and exciting fields of Machine Learning and Deep Learning
Learn how the rapid rise of intelligent machines is revolutionizing how we make sense of data in the real world and its coming impact on the domains of business, society, healthcare, finance, manufacturing, and more
ODSC EAST 2024 - April 23-25th
Register your interest
ODSC Newsletter
Stay current with the latest news and updates in open source data science. In addition, we’ll inform you about our many upcoming virtual and in-person events in Boston, NYC, Sao Paulo, San Francisco, and London. And keep a lookout for special discount codes, only available to our newsletter subscribers!