Highly correlated features can wreak havoc on your machine-learning model interpretations. To overcome this, we could rely on good feature selection. But there are still cases where a feature, although highly correlated, provides unique information that leads to a more accurate model. So we need a method that gives clear interpretations, even with multicollinearity. Thankfully, we can rely on ALEs.
We build the intuition for how ALEs are created, formally define the algorithm, and apply ALEs in Python with the Alibi Explain package. We will see that, unlike other XAI methods such as SHAP, LIME, ICE plots and Friedman's H-statistic, ALEs give interpretations that are robust to multicollinearity.
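As a taste of the formal algorithm covered in the video, here is a minimal pure-NumPy sketch of a first-order ALE: quantile bins, averaged prediction differences per bin, accumulation, and centering. The helper `ale_1d` is an illustrative assumption for this sketch, not the Alibi Explain API used in the video.

```python
import numpy as np

def ale_1d(predict, X, feature, n_bins=10):
    """Sketch of a first-order ALE for one feature (illustrative, not Alibi's API).

    predict: callable mapping an (n, d) array to (n,) predictions
    X: data matrix; feature: index of the column to explain
    """
    x = X[:, feature]
    # Quantile-based bin edges so each bin holds roughly equal numbers of points
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    # Assign each instance to a bin 0..len(edges)-2
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(edges) - 2)
    local_effects = np.zeros(len(edges) - 1)
    for k in range(len(edges) - 1):
        mask = idx == k
        if not mask.any():
            continue
        X_lo, X_hi = X[mask].copy(), X[mask].copy()
        X_lo[:, feature] = edges[k]      # replace feature value with lower bin edge
        X_hi[:, feature] = edges[k + 1]  # ... and with upper bin edge
        # Mean prediction difference = local effect of the feature within bin k
        local_effects[k] = (predict(X_hi) - predict(X_lo)).mean()
    # Accumulate local effects, then center so the average effect is zero
    ale = np.concatenate([[0.0], np.cumsum(local_effects)])
    counts = np.bincount(idx, minlength=len(edges) - 1)
    bin_means = 0.5 * (ale[:-1] + ale[1:])
    ale -= np.average(bin_means, weights=counts)
    return edges, ale
```

Because the effects are computed from prediction *differences* within narrow bins of the actual data, correlated features never get pushed into unrealistic combinations, which is the source of ALE's robustness to multicollinearity.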
🚀 Free Course 🚀
Sign up here: https://mailchi.mp/40909011987b/signup
XAI course: https://adataodyssey.com/courses/xai-with-python/
SHAP course: https://adataodyssey.com/courses/shap-with-python/
🚀 Companion article with link to code (no-paywall link): 🚀
https://medium.com/data-science/deep-dive-on-accumulated-local-effect-plots-ales-with-python-0fc9698ed0ee?sk=e8e9ccb23edf2ad33dc60b1e16cf2751
🚀 Useful playlists 🚀
https://www.youtube.com/playlist?list=PLqDyyww9y-1SwNZ-6CmvfXDAOdLS7yUQ4
https://www.youtube.com/playlist?list=PLqDyyww9y-1SJgMw92x90qPYpHgahDLIK
https://www.youtube.com/playlist?list=PLqDyyww9y-1Q0zWbng6vUOG1p3oReE2xS
🚀 Get in touch 🚀
Medium: https://conorosullyds.medium.com/
Threads: https://www.threads.net/@conorosullyds
Twitter: https://twitter.com/conorosullyDS
Website: https://adataodyssey.com/
🚀 Chapters 🚀
00:00 Introduction
01:17 Intuition
04:39 Formal Algorithm
07:22 Python Code