Lab 6: Interpreting Black-Box Models (LIME, SHAP) and PCA
Slides
The slides I showed this week can be found here.
Miscellaneous Notes
- Homework 2 due TONIGHT at 11:59pm
- Homework 1 grades released - direct any questions about your grade to Rithwick (rg4361@nyu.edu)
Topics Covered
- Interpreting black-box models
- Local Interpretable Model-Agnostic Explanations (LIME) - see the sketch after this list
- Shapley values (SHAP), specifically waterfall, beeswarm, dependence, and force plots - see the sketch after this list
- Principal Component Analysis (PCA) - see the sketch after this list
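
A minimal LIME sketch, not the lab's exact code: the dataset, model, and parameter choices below are placeholders for illustration. LIME explains a single prediction by perturbing that row, querying the black-box model on the perturbations, and fitting a weighted linear surrogate locally.

```python
# Minimal LIME sketch (illustrative dataset/model, not the lab's).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The explainer samples perturbations around one row and fits a local
# linear surrogate to the black-box model's predicted probabilities.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top local feature contributions for this row
```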
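
A minimal SHAP sketch showing the four plot types covered in lab; again the dataset and model are stand-ins, and the exact shape of the returned values can vary with the shap version and model type. Each Shapley value additively decomposes one prediction into per-feature contributions.

```python
# Minimal SHAP sketch (illustrative regression example, not the lab's).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)  # Explanation: one value per feature per row

shap.plots.waterfall(shap_values[0])       # one prediction, decomposed additively
shap.plots.beeswarm(shap_values)           # global summary across all rows
shap.plots.scatter(shap_values[:, "bmi"])  # dependence plot for one feature
shap.initjs()
shap.plots.force(shap_values[0])           # interactive force plot (notebook only)
```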
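
A minimal PCA sketch with scikit-learn (illustrative dataset, not the lab's): standardize the features, project onto two principal components, and check how much variance those components retain.

```python
# Minimal PCA sketch (illustrative dataset, not the lab's).
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_breast_cancer(return_X_y=True)

# PCA is scale-sensitive, so standardize features first.
X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
X_projected = pca.fit_transform(X_scaled)

print(X_projected.shape)              # (n_samples, 2)
print(pca.explained_variance_ratio_)  # variance captured by each component
print(pca.components_.shape)          # loadings: (2, n_original_features)
```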
Code
This Jupyter notebook collates the code from today’s lab.