If you find this paper helpful, consider citing the paper:

## Introduction

This archive includes code for computing Surprise Adequacy (SA) and Surprise Coverage (SC), which are the basic components of the main experiments in the paper. Currently, the `run.py` script contains a simple example that calculates the SA and SC of a test set and of an adversarial set generated with the FGSM method for the MNIST dataset, considering only the last hidden layer (`activation_3`). Layer selection can easily be changed by modifying `layer_names` in `run.py`.
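
As a minimal sketch of what such a change might look like (the variable is the `layer_names` list in `run.py`; any layer name below other than `activation_3` is hypothetical and depends on how the Keras model is defined):

```python
# In run.py: layers whose activation traces are used to compute SA.
# By default, only the last hidden layer is selected.
layer_names = ["activation_3"]

# Hypothetical example: also include an earlier activation layer,
# assuming the trained Keras model defines a layer with this name.
# layer_names = ["activation_2", "activation_3"]
```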


### Files and Directories

- `run.py` - Script that computes SA for a benign dataset and adversarial examples (MNIST and CIFAR-10).
- `sa.py` - Tools that fetch activation traces and compute LSA, DSA, and coverage.
- `train_model.py` - Model training script for MNIST and CIFAR-10. It stores the trained models in the `model` directory (code from [Ma et al.](https://github.com/xingjunm/lid_adversarial_subspace_detection)).
- `model` directory - Used for saving models.
- `tmp` directory - Used for saving activation traces and prediction arrays.
- `adv` directory - Used for saving adversarial examples.

### Command-line Options of run.py

- `-d` - The subject dataset (either `mnist` or `cifar`). Default is `mnist`.
- `-lsa` - If set, computes LSA.
- `-dsa` - If set, computes DSA.
- `-target` - The name of the target input set. Default is `fgsm`.
- `-save_path` - The temporary save path of activation trace (AT) files. Default is the `tmp` directory.
- `-batch_size` - Batch size. Default is 128.
- `-var_threshold` - Variance threshold. Default is 1e-5.
- `-upper_bound` - Upper bound of SA. Default is 2000.
- `-n_bucket` - The number of buckets for coverage. Default is 1000.
- `-num_classes` - The number of classes in the dataset. Default is 10.
- `-is_classification` - Set if the task is a classification problem. Default is True.
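
As an illustration only, a hypothetical invocation that combines several of the options above (the values simply restate the documented defaults):

```bash
# compute DSA for the MNIST test set and the FGSM adversarial set,
# using the documented default thresholds and bucket count
python run.py -d mnist -dsa -target fgsm -batch_size 128 -var_threshold 1e-5 -upper_bound 2000 -n_bucket 1000
```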

### Generating Adversarial Examples

We used the framework by [Ma et al.](https://github.com/xingjunm/lid_adversarial_subspace_detection) to generate various adversarial examples (FGSM, BIM-A, BIM-B, JSMA, and C&W). Please refer to [craft_adv_samples.py](https://github.com/xingjunm/lid_adversarial_subspace_detection/blob/master/craft_adv_examples.py) in that repository, and put the generated examples in the `adv` directory. For a basic usage example, an adversarial set generated by the FGSM method for MNIST is included (see `./adv/adv_mnist_fgsm.npy`).
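
As a quick sanity check (a sketch, assuming the file is a NumPy array of preprocessed MNIST images saved with `np.save`), the included adversarial set can be inspected like this:

```python
import numpy as np

# load the included FGSM adversarial set for MNIST
x_adv = np.load("./adv/adv_mnist_fgsm.npy")

# expected shape is (num_examples, 28, 28, 1) for MNIST-style images,
# with pixel values clipped to [-0.5, 0.5] (see Notes below)
print(x_adv.shape, x_adv.min(), x_adv.max())
```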

### Udacity Self-driving Car Challenge

To reproduce the results of the [Udacity self-driving car challenge](https://github.com/udacity/self-driving-car/tree/master/challenges/challenge-2), please refer to the [DeepXplore](https://github.com/peikexin9/deepxplore) and [DeepTest](https://github.com/ARiSE-Lab/deepTest) repositories, which contain information about the dataset, the models ([Dave-2](https://github.com/peikexin9/deepxplore/tree/master/Driving), [Chauffeur](https://github.com/udacity/self-driving-car/tree/master/steering-models/community-models/chauffeur)), and the synthetic data generation process. It might take a few hours to download the dataset and the models due to their size.

## How to Use

Our implementation is based on Python 3.5.2, TensorFlow 1.9.0, Keras 2.2, and NumPy 1.14.5. Details are listed in `requirements.txt`.

The following is a simple example of installing the dependencies and computing the LSA or DSA of the test set and an FGSM adversarial set on the MNIST dataset.

```bash
# install Python dependencies
pip install -r requirements.txt

# compute LSA of the MNIST test set and the FGSM adversarial set
python run.py -lsa

# compute DSA
python run.py -dsa
```

## Notes

- If you encounter "ValueError: Input contains NaN, infinity or a value too large for dtype
('float64')." error, you need to increase variance threshold. Please see the configuration details in the paper (Section IV-C).
- If you encounter the error `ValueError: Input contains NaN, infinity or a value too large for dtype ('float64').`, you need to increase the variance threshold (`-var_threshold`). Please refer to the configuration details in the paper (Section IV-C).
- Images were preprocessed by clipping their pixel values to the range [-0.5, 0.5] (a sketch of this step follows the list below).
- If you want to select specific layers, you can modify the `layer_names` list in `run.py`.
- Coverage may vary depending on the upper bound.
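
A minimal sketch of the pixel-clipping preprocessing mentioned above, assuming the raw MNIST/CIFAR-10 images come as integers in [0, 255] (the exact preprocessing lives in the repository's data-loading code):

```python
import numpy as np

def preprocess(images: np.ndarray) -> np.ndarray:
    """Scale raw uint8 images to [0, 1], then shift and clip to [-0.5, 0.5]."""
    x = images.astype("float32") / 255.0 - 0.5
    return np.clip(x, -0.5, 0.5)
```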

- [All experimental results](https://coinse.github.io/sadl/)

## References

- [DeepXplore](https://github.com/peikexin9/deepxplore)