Abstract: We start with a brief story about what planes and machine learning have in common and how aviation improved to become the safest mode of transportation. Then, we discuss what can be done to build a cockpit for machine learning that can make Artificial Intelligence the safest mode of decision-making. We clarify what value Interpretable Machine Learning, also known as Explainable A.I., can provide to improve M.L. practice in the coming years, and define related terms such as Responsible A.I. and Ethical A.I. Next, we outline a few ways in which Interpretable M.L. can be leveraged to make for safer, more reliable, accountable, and fairer decision-making. Lastly, we explain which skill sets are most needed to pilot A.I./M.L. in the future.
Bio: Serg Masís has been at the confluence of the internet, application development, and analytics for the last two decades. Currently, he's a Climate and Agronomic Data Scientist at Syngenta, a leading agribusiness company with a mission to improve global food security. Before that role, he co-founded a search engine startup, incubated by Harvard Innovation Labs, that combined the power of cloud computing and machine learning with principles from decision-making science to expose users to new places and events efficiently. Whether it pertains to leisure activities, plant diseases, or customer lifetime value, Serg is passionate about providing the often-missing link between data and decision-making — and machine learning interpretation helps bridge this gap more robustly. His book, "Interpretable Machine Learning with Python", was released in April 2021 by UK-based publisher Packt.