Tutorial on Representer Point Selection for Explaining Deep Neural Networks (CIFAR-10)
Updated Dec 2, 2019 - Jupyter Notebook
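Representer point selection decomposes a model's pre-softmax prediction for a test input into per-training-example contributions: each training point i gets a global importance alpha_i (its loss gradient scaled by the L2 regularization) times the last-layer feature similarity to the test point. A minimal NumPy sketch of that decomposition, assuming the last-layer features and per-example loss gradients have already been extracted (the function name and arguments are illustrative, not the tutorial's API):

```python
import numpy as np

def representer_values(train_feats, train_grads, test_feat, lam):
    """Per-training-example contributions to one test prediction.

    train_feats: (n, d) last-layer features of the training points
    train_grads: (n, c) dLoss/dLogits for each training point
    test_feat:   (d,)   last-layer feature of the test point
    lam:         L2 regularization strength used when training the final layer
    """
    n = train_feats.shape[0]
    # Global sample importance: alpha_i = -grad_i / (2 * lam * n)
    alpha = -train_grads / (2.0 * lam * n)        # (n, c)
    # Feature similarity between each training point and the test point
    sims = train_feats @ test_feat                # (n,)
    # Contribution of training point i to each class logit of the test point
    return alpha * sims[:, None]                  # (n, c)
```

Summing the returned contributions over the training set recovers the representer form of the prediction, (sum_i alpha_i f_i)^T f_t; sorting training points by their contribution to the predicted class yields the positive and negative "representer points" the tutorial visualizes on CIFAR-10.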
XMLX GitHub configuration
explainable and interpretable methods for AI and data science
Explaining Model Behavior with Global Causal Analysis
Implementation of the Integrated Directional Gradients method for Deep Neural Network model explanations.
Optimizing Mind static website v1
A Python library to agnostically explain multi-label black-box classifiers (tabular data)
A PyTorch implementation of constrained optimization and modeling techniques
JAX-based Model Explanation and Interpretation Library
Decision Trees to understand CNNs. Project for Neural Networks 2020 course at Sapienza.
Measuring Biases in Masked Language Models for PyTorch Transformers. Support for multiple social biases and evaluation measures.
PyTorch-based tools for visualizing and understanding the neurons of a GAN.
Getting explanations for predictions made by black box models.
XAI-Tris
Overview of machine learning interpretation techniques and their implementations
Investigating BERT's non-linearity and layer commutativity
PyTorch-based tools for constructing a vocabulary of visual concepts in a GAN.
A baseline genetic algorithm for the discovery of counterfactuals, implemented in Python for ease of use and heavily leveraging NumPy for speed.
Interpretable AI with Safeguard AI (paper study and implementation code review)
This repository contains an implementation of DISC, an algorithm for learning DFAs for multiclass sequence classification.