We present experiments exploring the notions of global correctness and global robustness defined in our research paper:
Nathanaël Fijalkow and Mohit Kumar Gupta
The experiments are in Jupyter notebook format:
- Random walk
- Analysis of an image classifier using a generative model
- Evaluating the global correctness
- Searching for Realistic Adversarial Examples: black-box approach
- Searching for Realistic Adversarial Examples: white-box approach
- Dependence on the generative model: disjoint training sets
All experiments use TensorFlow, and pre-trained models are available (see /Models).
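The global correctness evaluation can be thought of as a Monte Carlo estimate: sample images from the generative model and check whether the classifier labels them correctly. Below is a minimal numpy-only sketch of that idea; `generator`, `classifier`, and `oracle_label` are hypothetical stand-ins for the trained models, not the code used in the notebooks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the GAN generator: latent vector -> flattened image.
def generator(z):
    return np.tanh(z @ rng.standard_normal((z.shape[-1], 784)))

# Hypothetical stand-in for the image classifier: image -> predicted label.
def classifier(img):
    return int(img.sum() > 0)

# Hypothetical stand-in for ground truth (in practice a human or reference model).
def oracle_label(img):
    return int(img.sum() > 0)

def estimate_global_correctness(n_samples=1000, latent_dim=100):
    """Monte Carlo estimate: the fraction of generated images
    that the classifier labels correctly."""
    correct = 0
    for _ in range(n_samples):
        z = rng.standard_normal(latent_dim)
        img = generator(z)
        correct += classifier(img) == oracle_label(img)
    return correct / n_samples

print(estimate_global_correctness())
```

With these identical stand-ins the estimate is trivially 1.0; in the notebooks the interesting quantity is how far below 1.0 a real classifier falls on realistic generated images.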
Sample outputs shown in the notebooks:
- Images generated in a random walk
- Confidence scores for images generated in a random walk
- Classifier confidence scores on images
- Prediction accuracy of the image classifier across the generated images
- Outliers found during evaluation
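The random-walk experiment amounts to wandering through the generator's latent space and recording the classifier's confidence on the image produced at each step. Here is a numpy-only sketch of that loop; `generator` and `classifier_confidence` are hypothetical stand-ins for the trained models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the GAN generator: latent vector -> flattened image.
def generator(z):
    return np.tanh(z)

# Hypothetical stand-in for the classifier's top-class confidence in [0, 1].
def classifier_confidence(img):
    return 1.0 / (1.0 + np.exp(-img.mean()))

def latent_random_walk(steps=50, latent_dim=100, step_size=0.1):
    """Random walk in latent space; returns the generated images and
    the classifier's confidence at each step."""
    z = rng.standard_normal(latent_dim)
    images, confidences = [], []
    for _ in range(steps):
        z = z + step_size * rng.standard_normal(latent_dim)
        img = generator(z)
        images.append(img)
        confidences.append(classifier_confidence(img))
    return np.array(images), np.array(confidences)

imgs, confs = latent_random_walk()
print(imgs.shape, confs.shape)  # (50, 100) (50,)
```

Plotting `confs` over the walk reproduces the kind of confidence-score trace shown in the notebook outputs; sharp drops flag regions where the classifier becomes unsure of realistic images.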
- Linux or macOS
- Python 3 (or 2)
- Tensorflow
- Install the Python libraries TensorFlow and NumPy:
pip3 install tensorflow numpy
- Install Jupyter Notebook:
pip3 install jupyter
- Clone this repo:
git clone https://github.com/mohitiitb/NeuralNetworkVerification_GlobalRobustness.git
cd NeuralNetworkVerification_GlobalRobustness
- All experiments use pre-trained models (see /Models).
- To generate/train a classifier (optional):
python3 train_classifer.py
- To generate/train a GAN (optional):
python3 train_gan.py
- Go through the Jupyter notebooks; they are self-explanatory and easy to run.