Repository with the project of the Explainable and Reliable Artificial Intelligence course at UniTS (2024-2025).

luispky/XAI-RAI-UniTS


🤖 Explainable and Reliable Artificial Intelligence

📚 University of Trieste, Academic Year 2024–2025

🎓 Data Science and Artificial Intelligence Master's Program


Authors Information

This project was developed by the following students:

👤 Name Surname                  🎓 Student ID   📧 UniTS Email        📧 Gmail
Omar Cusma Fait                  SM3800018       [email protected]   [email protected]
Luis Fernando Palacios Flores    SM3800038       [email protected]   [email protected]

About the Project

ℹ️ Generative Tools Notice ℹ️
Generative AI tools assisted in this project's development. Specifically, they helped refine code readability, clarify tool functionality, fix minor bugs, and write documentation. Nonetheless, the authors remain the primary creators of the ideas and retain full ownership of the creative process.

Project Description

πŸ” This project investigates the robustness of popular computer vision models trained on the ImageNet datasetβ€”AlexNet, ResNet50, Vision Transformer (ViT), and Swin Transformerβ€”against adversarial attacks and perturbations. It also inspects the reliability of the explanations these models generate under such conditions.

Project Structure

📂 The project is organized into the following structure:

├── data
│   ├── imagenet_classes.txt        # ImageNet class labels
│   ├── imagenet_class_index.json   # JSON with class indices
│   └── images                      # Sample test images
├── README.md                       # Project documentation
├── requirements.txt                # Python dependencies
├── xai-env.yml                     # Conda environment configuration
├── explanations_config.yaml        # Config for results_gen_explanations_noisy_images.py
└── xai_rai_units                   # Source code and scripts
    ├── __init__.py                 # Package initializer
    ├── scripts
    │   ├── explanations_perturbed_images.py          # Generate visual explanations
    │   ├── main.py                                   # Evaluate model robustness
    │   └── results_gen_explanations_noisy_images.py  # Save explanation results for noisy images
    └── src                         # Core functionality and utilities

  • data/: Contains ImageNet class indices and sample test images.
  • requirements.txt: Lists the Python dependencies needed for the project.
  • xai-env.yml: YAML configuration file for setting up a Conda environment.
  • explanations_config.yaml: Configuration file for the results_gen_explanations_noisy_images.py script.
  • xai_rai_units/: Contains all source code and scripts:
    • scripts/: Executable Python scripts.
      • explanations_perturbed_images.py: Generates visual explanations for perturbed images.
      • main.py: Main script to evaluate model robustness.
      • results_gen_explanations_noisy_images.py: Saves the results of explanations for noisy images.
    • src/: Core source code and utilities for the project.

Slides

📑 View the project presentation slides here.

Built With

πŸ› οΈ This project leverages the following tools and libraries:

  • Python
  • PyTorch
  • Captum
  • Grad-CAM
  • Conda

Getting Started

Follow these steps to set up the project environment. 🚀

Prerequisites

Install dependencies manually or using a Conda environment.

Manual Installation

📦 Use pip to install dependencies from requirements.txt:

pip install -r requirements.txt

Conda Environment

🐍 Create and activate a Conda environment:

conda env create -f xai-env.yml
conda activate xai-env

Environment Configuration

Click to expand for detailed environment setup instructions 🤓

To ensure that all scripts run correctly, make sure your environment is set up properly:

  1. PYTHONPATH:
    Set the PYTHONPATH environment variable to include the root of this project. For example:

    export PYTHONPATH=$PYTHONPATH:/path/to/XAI-RAI-UniTS

    This allows Python to locate modules and packages within the xai_rai_units folder.

  2. Conda Environment in PATH:
    Ensure the path to your Conda environment is in your PATH. For example:

    export PATH=/path/to/anaconda3/envs/xai-env/bin:$PATH

    This helps ensure you are calling the correct Python interpreter and installed dependencies.

  3. VSCode Integration (Optional):
    If you are using Visual Studio Code with Conda, you can automate these environment variables:

    • Create a .env file in the root of the project with the following content:
      PYTHONPATH=/path/to/XAI-RAI-UniTS
      
    • Create or update .vscode/settings.json with:
      {
        "python.defaultInterpreterPath": "/path/to/anaconda3/envs/xai-env/bin/python",
        "python.envFile": "${workspaceFolder}/.env"
      }
      (Recent versions of the VSCode Python extension use python.defaultInterpreterPath; the older python.pythonPath setting is deprecated.)

    With this setup, VSCode will automatically use your Conda environment and the specified Python path whenever you open this workspace.


Usage

explanations_perturbed_images.py

This script generates visual explanations for images using Explainable AI (XAI) methods such as Grad-CAM and Captum. The script applies noise to images, visualizes model explanations for both original and perturbed images, and displays the fractions of noise that cause prediction changes in the console.
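The "fraction of noise that causes a prediction change" can be sketched as follows. This is a minimal, hypothetical helper (not the project's actual code): it walks a perturbation of growing magnitude along a fixed direction and reports the first fraction at which the model's top-1 prediction flips.

```python
import torch

def prediction_change_fraction(model, x, direction, magnitude=0.2, steps=20):
    """Return the smallest fraction f in (0, 1] of the maximum noise
    magnitude at which the model's top-1 prediction differs from the
    clean prediction, or None if it never changes."""
    model.eval()
    with torch.no_grad():
        clean = model(x).argmax(dim=1).item()  # prediction on the clean image
        for i in range(1, steps + 1):
            f = i / steps
            # Perturb along the fixed direction, staying in pixel space [0, 1]
            x_pert = (x + f * magnitude * direction).clamp(0.0, 1.0)
            if model(x_pert).argmax(dim=1).item() != clean:
                return f
    return None
```

With a pretrained model and `direction = torch.randn_like(x)`, this measures the kind of Gaussian-noise robustness statistic the script prints to the console.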

Command-Line Arguments

Argument              Type    Default      Description
--library             str     "gradcam"    Library for generating explanations (gradcam or captum).
--method              str     "GradCAM"    Explanation method (e.g., GradCAM, LayerGradCam).
--model_name          str     "resnet50"   Pre-trained model to use (alexnet, resnet50, etc.).
--sample_images       int     5            Number of images to process.
--perturbation_name   str     "Gaussian"   Perturbation method to use (e.g., Identity, Blur).
--n_perturbations     int     5            Number of perturbed images to generate for analysis.
--magnitude           float   0.2          Maximum noise magnitude for image perturbation.
--seed                int     24           Random seed for reproducibility.

Example Usage

python xai_rai_units/scripts/explanations_perturbed_images.py \
  --library gradcam \
  --method GradCAM \
  --model_name resnet50 \
  --sample_images 5 \
  --perturbation_name Gaussian \
  --n_perturbations 5 \
  --magnitude 0.2 \
  --seed 24

Supported Models and Explanation Methods

Models

📊 Model Name         🖥️ Code
AlexNet               alexnet
ResNet50              resnet50
Swin Transformer      swin_transformer
Vision Transformer    vit

Explanation Methods

πŸ” Grad-CAM Variants 🎯 Captum Methods
GradCAM LayerGradCam (only alexnet and resnet50)
GradCAM++ GuidedGradCam (only alexnet and resnet50)
XGradCAM LayerConductance
EigenCAM DeepLift
HiResCAM LayerDeepLift

main.py

Overview

The file main.py is the primary entry point for evaluating model robustness under adversarial attacks and various perturbations.

The script can run on a specific model (e.g., alexnet) or iterate through all supported models (via --model_name all). By default it displays plots interactively; pass --show_figures=false to save them to disk instead.

Perturbation Techniques

  • Identity Perturbation: 🪞 Produces identical images without any modifications, as a baseline for comparison.
  • Gaussian Noise: 📈 Adds random noise to the image.
  • Image Blurring: 📷 Gradually reduces image sharpness.
  • Occlusion: 🌓 Adds black rectangles to obscure parts of the image.
  • Void Perturbation: 🌫️ Gradually darkens the edges of the image towards the center.
  • Opposite Gradient: 🔀 Perturbs the image along the direction opposite to the model's gradient.

These techniques add noise to the image (in pixel space $[0, 1]$) in a fixed random direction, creating a sequence of perturbed images until the desired noise magnitude is reached.
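Sketched in code, the sequence above interpolates from zero noise up to the full magnitude along one fixed direction. This is a hypothetical helper mirroring the description, not the project's actual implementation:

```python
import torch

def perturbation_sequence(x, magnitude=0.2, n_perturbations=5, seed=24):
    """Generate n_perturbations images along a fixed random direction in
    pixel space, with noise growing linearly up to `magnitude`."""
    gen = torch.Generator().manual_seed(seed)
    direction = torch.randn(x.shape, generator=gen)
    direction = direction / direction.norm()  # fixed unit direction
    fractions = [i / n_perturbations for i in range(1, n_perturbations + 1)]
    # Each image adds a larger multiple of the same direction, clamped to [0, 1]
    return [(x + f * magnitude * direction).clamp(0.0, 1.0) for f in fractions]
```

Swapping the random direction for a blur residual, an occlusion mask, or a negative gradient yields the other perturbation families in the same framework.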

Example Usage

python xai_rai_units/scripts/main.py \
  --library gradcam \
  --method GradCAM \
  --sample_images 5 \
  --n_perturbations 30 \
  --magnitude 0.1 \
  --seed 42 \
  --model_name alexnet \
  --show_figures
  • --library gradcam selects the Grad-CAM library for explanations.
  • --method GradCAM specifies which explanation technique to apply (e.g., GradCAM, LayerGradCam, etc.).
  • --sample_images 5 indicates how many images to randomly sample from the local dataset.
  • --n_perturbations 30 defines the number of intermediate images generated between the original image and the fully perturbed version.
  • --magnitude 0.1 controls the intensity of the perturbation.
  • --seed 42 guarantees reproducibility by fixing the random seed.
  • --model_name alexnet selects which model to run; use all to iterate over all supported models.
  • --show_figures displays the resulting plots interactively (default behavior). Pass --show_figures=false to save the figures to FIGURES_DIR/robustness instead.

(back to top)


Acknowledgments

(back to top)

