TSInterpret

TSInterpret is a Python library for interpreting time series classification. Its goal is to facilitate the use of time series interpretability methods. The framework supports scikit-learn, TensorFlow, PyTorch, and, in some cases, plain predict functions. A listing of implemented algorithms and supported frameworks can be found in our Documentation. More information on our framework can be found in our paper.

💈 Installation

pip install TSInterpret

You can install the latest development version from GitHub as follows:

pip install https://github.com/fzi-forschungszentrum-informatik/TSInterpret/archive/refs/heads/main.zip
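
A quick way to confirm that the installation succeeded is to import the package (a minimal check only; it does not exercise any specific backend):

# Minimal smoke test: the import succeeds if TSInterpret is installed correctly.
import TSInterpret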

🍫 Quickstart

The following example uses a simple neural network based on TensorFlow and interprets the classifier with Integrated Gradients and Temporal Saliency Rescaling [1]. For further examples, check out the Documentation.

[1] Ismail, Aya Abdelsalam, et al. "Benchmarking deep learning interpretability in time series predictions." Advances in neural information processing systems 33 (2020): 6441-6452.

Import

import pickle
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import OneHotEncoder
from tslearn.datasets import UCR_UEA_datasets
import tensorflow as tf

Create Classification Model

This section uses a pretrained classification model to illustrate the use of our package. To run the example, please clone our repository and uncomment the variable PATH_TO_YOUR_CLASSIFICATION_MODEL. The code in this section can also be replaced with your own classification model written in PyTorch or TensorFlow, as sketched after the code below.

# Load data.
dataset = 'BasicMotions'
train_x, train_y, test_x, test_y = UCR_UEA_datasets().load_dataset(dataset)

# One-hot encode the labels.
enc1 = OneHotEncoder(sparse=False).fit(train_y.reshape(-1, 1))
train_y = enc1.transform(train_y.reshape(-1, 1))
test_y = enc1.transform(test_y.reshape(-1, 1))
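
# Optional, purely illustrative sanity check: tslearn returns arrays shaped
# (samples, time steps, features); the TSR explainer below reads the last two dimensions.
print(train_x.shape, train_y.shape)
print(test_x.shape, test_y.shape)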

# Load a model.
#e.g., PATH_TO_YOUR_CLASSIFICATION_MODEL=f'./TSInterpret/ClassificationModels/models/{dataset}/cnn/{dataset}best_model.hdf5'
model_to_explain = tf.keras.models.load_model(PATH_TO_YOUR_CLASSIFICATION_MODEL)
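
If you do not have a pretrained checkpoint at hand, any Keras classifier with a softmax output over the one-hot classes can serve as model_to_explain. The following is a minimal, illustrative sketch trained from scratch; the architecture and hyperparameters are arbitrary and not the model used in our experiments:

# Illustrative stand-in for the pretrained checkpoint loaded above.
model_to_explain = tf.keras.Sequential([
    tf.keras.layers.Input(shape=train_x.shape[1:]),  # (time steps, features)
    tf.keras.layers.Conv1D(32, kernel_size=3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(train_y.shape[-1], activation='softmax'),
])
model_to_explain.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model_to_explain.fit(train_x, train_y, epochs=10, batch_size=8, verbose=0)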

Explain & Visualize Model

from TSInterpret.InterpretabilityModels.Saliency.TSR import TSR

# Instantiate the explainer: Integrated Gradients ('IG') with Temporal Saliency Rescaling.
int_mod = TSR(model_to_explain, train_x.shape[-2], train_x.shape[-1], method='IG', mode='time')

# Select a single test instance and its one-hot label index.
item = np.array([test_x[0, :, :]])
label = int(np.argmax(test_y[0]))

exp = int_mod.explain(item, labels=label, TSR=True)

%matplotlib inline
int_mod.plot(np.array([test_x[0, :, :]]), exp)

Algorithm Results
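
The plot call above renders the saliency map shown in the figure. If you want to inspect or store the raw explanation instead, a minimal sketch (assuming exp behaves like a NumPy array of relevance scores aligned with the input) is:

# Illustrative: exp is expected to hold relevance scores for the explained instance.
exp_arr = np.asarray(exp)
print(exp_arr.shape)                     # relevance values over time steps/features
np.save('tsr_explanation.npy', exp_arr)  # persist for later analysis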

🧐 Why a special package for the interpretability of time series predictors?

Compared to other data types such as tabular, image, or natural language data, time series are unintuitive for humans to understand: we cannot instinctively grasp the underlying information they contain. Approaches to explainability for tabular regression and classification often assume independent features. Further, research has shown that applying explainability algorithms designed for tabular, image, or natural language data to time series often yields hard-to-understand and inaccurate explanations, as those algorithms do not consider the time component (e.g., highlighting many unconnected time steps instead of features or time slices [1]). A growing body of research has focused on developing and adapting approaches to time series (survey: [2]). However, with no unified interface, accessibility to those methods remains an issue. TSInterpret facilitates access by providing a PyPI package with a unified interface for multiple algorithms, documentation, and learning resources (notebooks) on their application.

[2] Rojat, Thomas, et al. "Explainable artificial intelligence (xai) on timeseries data: A survey." arXiv preprint arXiv:2104.00950 (2021).

👐 Contributing

Feel free to contribute in any way you like; we're always open to new ideas and approaches.

  • If you have questions, have spotted a bug, or have ideas, feel free to open an issue.
  • Before opening a pull request, we also encourage users to open an issue for discussion.

Details on how to contribute can be found here.

🏫 Affiliations

FZI Forschungszentrum Informatik

Citation

If you use TSInterpret in your research, please consider citing it as well as the original papers of the implemented methods; those papers are referenced in the documentation and in the paper below.

@article{Höllig2023,
  doi       = {10.21105/joss.05220},
  url       = {https://doi.org/10.21105/joss.05220},
  year      = {2023},
  publisher = {The Open Journal},
  volume    = {8},
  number    = {85},
  pages     = {5220},
  author    = {Jacqueline Höllig and Cedric Kulbach and Steffen Thoma},
  title     = {TSInterpret: A Python Package for the Interpretability of Time Series Classification},
  journal   = {Journal of Open Source Software}
}