Code for our AISTATS '22 paper: Improving Attribution Methods by Learning Submodular Functions.
Source code for the TCSVT 2021 journal paper: Spatio-Temporal Perturbations for Video Attribution.
SQUID repository for manuscript analysis.
Metrics for evaluating interpretability methods.
Attribution (or visual explanation) methods for understanding video classification networks. Demo codes for WACV2021 paper: Towards Visually Explaining Video Understanding Networks with Perturbation.
Source code for the GAtt method in "Revisiting Attention Weights as Interpretations of Message-Passing Neural Networks".
Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://arxiv.org/abs/2406.13663
Surrogate quantitative interpretability for deep networks.
Adapting SetFit so that it works with Integrated Gradients.
Explainable AI in Julia.
Code for the paper: Towards Better Understanding Attribution Methods. CVPR 2022.
On Explaining Your Explanations of BERT: An Empirical Study with Sequence Classification
Code for our ICML '19 paper: Neural Network Attributions: A Causal Perspective.
Interpretability for sequence generation models 🐛 🔍