Attack Modules
LeakPro is designed to simulate various privacy attacks on machine learning models, providing insights into potential vulnerabilities. This section outlines the core attack scenarios implemented in LeakPro, their underlying mechanisms, and how they help evaluate model privacy risks.
LeakPro supports the following primary attack scenarios, each targeting specific privacy risks:
- Membership Inference Attacks: Assess whether specific records were part of a model's training dataset (a minimal sketch follows this list).
- Data Reconstruction Attacks: Attempt to recover original training data from model outputs.
- Federated Learning Attacks: Target models trained in federated learning settings, exploiting shared gradients (see the gradient-leakage sketch at the end of this section).
- Attacks on Synthetic Data: Evaluate privacy risks when using synthetic data as a replacement for real datasets.
These scenarios enable a comprehensive evaluation of privacy threats across various data modalities, including images, text, tabular data, and graphs.
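To make the membership inference scenario concrete, here is a minimal, self-contained sketch of the classic loss-threshold attack (Yeom et al., 2018). It does not use LeakPro's API: the loss values are synthetic stand-ins for the per-example losses an attacker would obtain by querying a trained model, and the threshold calibration is deliberately simplistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses from querying a trained model:
# members (training records) tend to have lower loss than non-members
# because the model has fit them.
member_losses = rng.gamma(shape=2.0, scale=0.10, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.25, size=1000)

# Loss-threshold attack: predict "member" whenever the loss falls
# below a threshold the attacker calibrates on data they control.
threshold = np.median(np.concatenate([member_losses, nonmember_losses]))

tpr = np.mean(member_losses < threshold)     # members correctly flagged
fpr = np.mean(nonmember_losses < threshold)  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}, advantage={tpr - fpr:.2f}")
```

The gap between TPR and FPR (the attack's membership advantage) indicates how much the model's loss distribution leaks about training-set membership; a well-generalizing model drives this advantage toward zero.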
LeakPro's attack modules are implemented through a modular design, allowing users to configure and extend the following components:
- Attack Types: Both label-only attacks (the adversary observes only predicted labels) and logit-based attacks (the adversary observes confidence scores) are supported, covering threat models of varying strength.
- Data Modalities: Each attack is compatible with multiple data formats such as images, text, tabular data, and graphs.
- Evaluation Metrics: Standard metrics such as accuracy, precision, and recall are complemented by privacy-specific risk scores, for example the attacker's true-positive rate at a fixed low false-positive rate (see the sketch after this list).
- Extensibility: Researchers can implement new attack methods by extending LeakPro's open-source modules.
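How an attack is scored matters as much as how it is mounted. The following sketch, again independent of LeakPro's actual API, shows how attack scores on a labeled audit set can be summarized with ROC-based metrics, including the true-positive rate at a fixed low false-positive rate; the scores and labels here are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical attack scores (higher = "more likely a member") and
# ground-truth membership labels for an audit set.
scores = np.concatenate([rng.normal(1.0, 1.0, 500),   # members
                         rng.normal(0.0, 1.0, 500)])  # non-members
labels = np.concatenate([np.ones(500), np.zeros(500)])

fpr, tpr, _ = roc_curve(labels, scores)
print(f"AUC: {roc_auc_score(labels, scores):.3f}")

# TPR at a fixed low FPR is a common privacy risk score: it captures
# whether the attack confidently identifies *some* members, which
# average metrics like accuracy can hide.
target_fpr = 0.01
print(f"TPR @ {target_fpr:.0%} FPR: {np.interp(target_fpr, fpr, tpr):.3f}")
```

Reporting TPR at a low FPR follows the observation that average-case metrics can mask an attack that confidently exposes a small subset of training records.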
By integrating these attack modules, LeakPro provides a robust platform for simulating and evaluating real-world privacy risks in machine learning models.
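As a concrete illustration of the federated learning scenario listed above, the sketch below shows why naively shared gradients can leak training data: for a single linear layer, the private input is recoverable in closed form from the weight and bias gradients. This is a standalone PyTorch example with made-up dimensions, not LeakPro code.

```python
import torch

torch.manual_seed(0)

# A single linear layer stands in for the first layer of a model whose
# gradients a federated-learning participant shares with the server.
layer = torch.nn.Linear(in_features=8, out_features=4)
x = torch.randn(8)            # the private input
target = torch.tensor([2])    # the private label

loss = torch.nn.functional.cross_entropy(layer(x).unsqueeze(0), target)
grad_W, grad_b = torch.autograd.grad(loss, [layer.weight, layer.bias])

# For a linear layer, grad_W[i] = grad_b[i] * x, so the input is
# recoverable in closed form from the shared gradients alone.
i = torch.argmax(grad_b.abs())  # pick a row with a nonzero bias gradient
reconstruction = grad_W[i] / grad_b[i]
print(torch.allclose(reconstruction, x, atol=1e-5))  # True
```

Gradient-inversion attacks on deeper networks recover inputs by optimizing a candidate input to match the shared gradients rather than in closed form, but the underlying leakage is the same.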
Explore the specific attack implementations in the dedicated sections for each scenario!