Please put the downloaded weights in a local directory called "weights" under each model directory (or change the location in the config file).
Code is taken from here. Download weights from here.
Code is taken from here. Download weights from here.
Code is taken from here. Download weights from here. (The weights file is already included in this repository under landmark_detection/pytorch_face_landmark/weights.)
Code is taken from here. Weights are downloaded automatically on the first run.
Note: this model is more accurate; however, it is much larger than MobileFaceNet and requires a GPU with a large amount of memory in order to backpropagate through it when training the adversarial mask.
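As a quick sanity check after placing the files, a downloaded checkpoint can be loaded directly with PyTorch. This is a minimal sketch only; the checkpoint file name below is an assumption and should be replaced with the actual file and path from your setup or config file.

```python
import os
import torch

# Assumed location, following the "weights" convention described above;
# the file name is a placeholder — use the checkpoint you actually downloaded.
WEIGHTS_PATH = os.path.join("landmark_detection", "pytorch_face_landmark",
                            "weights", "mobilefacenet_model_best.pth.tar")

# torch.load only reads the checkpoint; building the network itself requires
# the model class from the corresponding sub-repository.
checkpoint = torch.load(WEIGHTS_PATH, map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)  # handle both formats
print(f"Loaded {len(state_dict)} tensors from {WEIGHTS_PATH}")
```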
With stylegan2-encoder-pytorch: reconstruction quality is good, but the inverted latent code cannot be manipulated.
With idinvert_pytorch: reconstruction quality is worse, but the inverted latent code can be manipulated (given the correct editing boundary).
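For context, "manipulation" here means editing the inverted latent code along a semantic boundary. The following is a minimal sketch of that idea only; the file names and shapes are assumptions, and the actual inversion and editing code lives in the idinvert_pytorch sub-repository.

```python
import numpy as np

# Hypothetical inputs: a latent code produced by the GAN inversion step and a
# semantic boundary (unit vector for some facial attribute) saved to disk.
w = np.load("inverted_code.npy")        # assumed shape: (1, latent_dim)
boundary = np.load("boundary.npy")      # assumed shape: (1, latent_dim)

# Moving the code along the boundary edits the attribute; sign and step size
# control the direction and strength of the manipulation.
alpha = 3.0
w_edited = w + alpha * boundary

np.save("edited_code.npy", w_edited)    # feed the edited code back into the generator
```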
Purify adversarial images (see the sketch below).
Use it to conduct adversarial attacks with the diffusion model.
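The purification idea (as in DiffPure) is to diffuse the adversarial image with a moderate amount of forward noise and then denoise it back, washing out the adversarial perturbation. The sketch below is schematic and assumes a trained noise-prediction network `model(x_t, t)` and a cumulative noise schedule `alphas_cumprod`; the real entry points are in the DiffPure code.

```python
import torch

def purify(x_adv, model, alphas_cumprod, t_star=100):
    """Schematic DiffPure-style purification: add forward noise up to t_star,
    then denoise back to t = 0. `model` and `alphas_cumprod` are assumed to
    come from the diffusion codebase, not from this snippet."""
    a_bar = alphas_cumprod[t_star]
    noise = torch.randn_like(x_adv)
    # Forward diffusion to timestep t_star (closed form).
    x_t = a_bar.sqrt() * x_adv + (1.0 - a_bar).sqrt() * noise

    # Deterministic (DDIM-like, eta = 0) reverse process from t_star back to 0.
    for t in range(t_star, 0, -1):
        a_bar_t = alphas_cumprod[t]
        a_bar_prev = alphas_cumprod[t - 1]
        eps = model(x_t, torch.full((x_t.shape[0],), t, device=x_t.device))
        x0_pred = (x_t - (1.0 - a_bar_t).sqrt() * eps) / a_bar_t.sqrt()
        x_t = a_bar_prev.sqrt() * x0_pred + (1.0 - a_bar_prev).sqrt() * eps
    return x_t
```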
## ddim
Train our diffusion model on the NIO dataset; the resulting model can be used by DiffPure and DiffusionCLIP.
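The training objective is the standard noise-prediction (epsilon-MSE) loss shared by DDPM and DDIM. A condensed sketch of one training step, assuming a `model(x_t, t)` noise predictor, a noise schedule tensor, and batches of NIO images supplied by the surrounding training script (all names are placeholders):

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(model, x0, alphas_cumprod, optimizer):
    """One epsilon-prediction training step (DDPM/DDIM objective).

    x0: batch of clean images in [-1, 1]; alphas_cumprod: cumulative noise
    schedule tensor of length T on the same device. Both are assumed inputs.
    """
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)

    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # forward diffusion

    loss = F.mse_loss(model(x_t, t), noise)  # predict the added noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```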
Install the required packages listed in req.txt.
Configurations can be changed in the config file.
Run the patch/train.py file.
Run the patch/test.py file. Specify the location of the adversarial mask image in the main function.
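For illustration, the edit in patch/test.py might look like the snippet below; the variable name and the placeholder path are assumptions, not necessarily what the repository uses.

```python
# Illustrative only: how the adversarial mask location might be specified in
# patch/test.py; adjust the name and path to match the actual main() code.
from pathlib import Path

ADV_MASK_PATH = Path("patch") / "final_results" / "adv_mask.png"  # placeholder path

def main():
    assert ADV_MASK_PATH.exists(), f"adversarial mask not found: {ADV_MASK_PATH}"
    # ...pass ADV_MASK_PATH to the evaluation routine defined in patch/test.py

if __name__ == "__main__":
    main()
```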