interpret
Let there be light.
InterpretML is an open-source Python package for training interpretable models and explaining blackbox systems. Interpretability is essential for:
- Model debugging - Why did my model make this mistake?
- Detecting bias - Does my model discriminate?
- Human-AI cooperation - How can I understand and trust the model's decisions?
- Regulatory compliance - Does my model satisfy legal requirements?
- High-risk applications - Healthcare, finance, judicial, ...
Historically, the most intelligible models were not very accurate, and the most accurate models were not intelligible. Microsoft Research has developed an algorithm called the Explainable Boosting Machine (EBM),* which has both high accuracy and intelligibility. EBM uses modern machine learning techniques like bagging and boosting to breathe new life into traditional GAMs (Generalized Additive Models). This makes them as accurate as random forests and gradient boosted trees, while also enhancing their intelligibility and editability.
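To make the additive structure concrete, here is a minimal sketch (not InterpretML's implementation) of how a fitted GA2M-style model scores an instance: the prediction is an intercept plus one learned shape function per feature, plus a small set of pairwise interaction terms, which is what keeps every term individually plottable and editable. All names here are illustrative.

```python
# Minimal GA2M scoring sketch; shape_fns and pair_fns stand in for the
# per-term lookup functions an EBM learns via bagging and boosting.
import numpy as np

def ga2m_predict_proba(x, intercept, shape_fns, pair_fns=()):
    # Additive score: intercept + a contribution from each single feature...
    score = intercept + sum(f(x[i]) for i, f in enumerate(shape_fns))
    # ...plus contributions from a small set of pairwise interactions.
    score += sum(f(x[i], x[j]) for (i, j), f in pair_fns)
    return 1.0 / (1.0 + np.exp(-score))  # sigmoid link for classification

# Example: two features and one interaction term.
p = ga2m_predict_proba(
    np.array([0.5, 2.0]), intercept=-1.0,
    shape_fns=[lambda v: 0.8 * v, lambda v: np.log1p(v)],
    pair_fns=[((0, 1), lambda a, b: 0.1 * a * b)])
```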
Notebook for reproducing table
| Dataset/AUROC | Domain | Logistic Regression | Random Forest | XGBoost | Explainable Boosting Machine |
|---|---|---|---|---|---|
| Adult Income | Finance | .907±.003 | .903±.002 | .922±.002 | .928±.002 |
| Heart Disease | Medical | .895±.030 | .890±.008 | .870±.014 | .916±.010 |
| Breast Cancer | Medical | .995±.005 | .992±.009 | .995±.006 | .995±.006 |
| Telecom Churn | Business | .804±.015 | .824±.002 | .850±.006 | .851±.005 |
| Credit Fraud | Security | .979±.002 | .950±.007 | .981±.003 | .975±.005 |
In addition to EBM, InterpretML also supports methods like LIME, SHAP, linear models, partial dependence, decision trees and rule lists. The package makes it easy to compare and contrast models to find the best one for your needs.
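As a hedged sketch of the blackbox side, the snippet below runs LIME through InterpretML's wrapper. The exact `LimeTabular` constructor arguments have varied across interpret versions, and `blackbox_model` is an assumed, already-fitted classifier, so treat the call as illustrative rather than definitive.

```python
# Illustrative blackbox explanation; blackbox_model is assumed to be any
# fitted classifier exposing predict_proba (not defined in this README).
from interpret.blackbox import LimeTabular
from interpret import show

lime = LimeTabular(blackbox_model.predict_proba, X_train)
show(lime.explain_local(X_test[:5], y_test[:5]))  # explain a few test rows
```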
* EBM is a fast implementation of GA2M. Details on the algorithm can be found here.
Python 3.5+ | Linux, Mac OS X, Windows
```sh
pip install -U interpret
```
Let's fit an Explainable Boosting Machine
```python
from interpret.glassbox import ExplainableBoostingClassifier

# EBM supports pandas dataframes, numpy arrays, and handles "string" data natively
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
```
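The snippet above assumes `X_train` and `y_train` already exist. One minimal way to produce them, sketched here with the UCI adult income data from the benchmark table (the URL and naive column handling are assumptions, not part of this README):

```python
# Hedged setup sketch: load a tabular dataset and split it.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv(
    "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
    header=None)
X, y = df.iloc[:, :-1], df.iloc[:, -1]  # last column is the income label
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=1)
```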
Understand the model
```python
from interpret import show

ebm_global = ebm.explain_global()
show(ebm_global)
```
Understand individual predictions
```python
ebm_local = ebm.explain_local(X_test, y_test)
show(ebm_local)
```
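If you want the numbers behind the dashboard, explanation objects also expose their payload programmatically. The `data()` accessor and the dictionary keys below are assumptions based on interpret's explanation interface and may differ by version:

```python
# Hedged sketch: read one instance's explanation without the dashboard.
first = ebm_local.data(0)  # dict for the first explained row (assumed API)
print(first["names"])      # feature names
print(first["scores"])     # per-feature contributions to this prediction
```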
And if you have multiple models, compare them
```python
show([logistic_regression, decision_tree])
```
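Here `logistic_regression` and `decision_tree` are global explanations of other glassbox models. A sketch of producing them, assuming interpret's `LogisticRegression` and `ClassificationTree` glassbox wrappers and the same training data as above:

```python
# Hedged sketch: fit two more glassbox models and explain them globally.
from interpret.glassbox import LogisticRegression, ClassificationTree

lr = LogisticRegression()
lr.fit(X_train, y_train)
tree = ClassificationTree()
tree.fit(X_train, y_train)

logistic_regression = lr.explain_global()
decision_tree = tree.explain_global()
```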
Currently we're working on:
- Multiclass Classification Support
- Missing Values Support
- Improved Categorical Encoding
...and lots more! Get in touch to find out more.
If you are interested in contributing directly to the code base, please see CONTRIBUTING.md.
InterpretML was originally created by (equal contributions): Samuel Jenkins, Harsha Nori, Paul Koch, and Rich Caruana.
Many people have supported us along the way. Check out ACKNOWLEDGEMENTS.md!
We also build on top of many great packages. Please check them out!
plotly | dash | scikit-learn | lime | shap | salib | skope-rules | treeinterpreter | gevent | joblib | pytest | jupyter
InterpretML
"InterpretML: A Unified Framework for Machine Learning Interpretability" (H. Nori, S. Jenkins, P. Koch, and R.
Caruana 2019)
```bibtex
@article{nori2019interpretml,
  title={InterpretML: A Unified Framework for Machine Learning Interpretability},
  author={Nori, Harsha and Jenkins, Samuel and Koch, Paul and Caruana, Rich},
  journal={arXiv preprint arXiv:1909.09223},
  year={2019}
}
```
Paper link
- Explainable Boosting
- LIME
- SHAP
- Sensitivity Analysis
- Partial Dependence
- Open Source Software
There are multiple ways to get in touch:
- Email us at interpret@microsoft.com
- Or, feel free to raise a GitHub issue