Interpretable Machine Learning
While understanding and trusting models and their results is a hallmark of good (data) science, model interpretability is a serious legal mandate in regulated verticals such as banking and insurance. Moreover, scientists, physicians, researchers, analysts, and humans in general have the right to understand and trust models and modeling results that affect their work and their lives. Today many organizations and individuals are embracing deep learning and machine learning algorithms, but what happens when people want to explain these impactful, complex technologies to one another, or when these technologies inevitably make mistakes?
This talk presents several approaches beyond the error measures and assessment plots typically used to interpret deep learning and machine learning models and results. The talk will include:
- Data visualization techniques for representing high-degree interactions and nuanced data structures.
- Contemporary linear model variants that incorporate machine learning and are appropriate for use in regulated industry.
- Cutting-edge approaches for explaining extremely complex deep learning and machine learning models.
Wherever possible, interpretability approaches are deconstructed into more basic components suitable for human storytelling: complexity, scope, understanding, and trust.
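To give a flavor of the kind of technique discussed, below is a minimal, hypothetical sketch (not taken from the talk or its slides) of one widely used explanation approach: a global surrogate model, where a shallow decision tree is trained to mimic a more complex model's predictions. The dataset, model choices, and parameters are assumptions made purely for illustration.

```python
# Hypothetical sketch of a global surrogate model: a shallow decision tree
# approximating the behavior of a more complex, harder-to-interpret model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for a real modeling problem.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# The "complex" model whose behavior we want to explain.
complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the complex model's predictions
# rather than on the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))

# Fidelity: how often the surrogate agrees with the complex model.
fidelity = np.mean(surrogate.predict(X) == complex_model.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")

# The surrogate's rules give a human-readable approximation of the
# complex model's decision logic.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```

The fidelity measure is worth reporting alongside the surrogate's rules: a surrogate that agrees with the complex model only rarely tells a misleading story about it.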
Bio:
Patrick Hall is a senior data scientist and product engineer at H2O.ai. Patrick works with H2O.ai customers to derive substantive business value from machine learning technologies. His product work at H2O.ai focuses on two important aspects of applied machine learning: model interpretability and model deployment. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning.
Prior to joining H2O.ai, Patrick held global customer-facing and R&D roles at SAS Institute. He holds multiple patents in automated market segmentation using clustering and deep neural networks. Patrick is the 11th person worldwide to become a Cloudera certified data scientist. He studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.
https://www.linkedin.com/in/jpatrickhall/
Slides are here: https://www.slideshare.net/0xdata/interpretable-machine-learning