
Submit to Foolbox and Robust Vision Benchmark #1

Open
wielandbrendel opened this issue Aug 15, 2017 · 4 comments

@wielandbrendel

I am the co-author of Foolbox and coordinator of the Robust Vision Benchmark. The results you report are amazing, particularly so for a black-box attack. I'd like to encourage you to implement your algorithm in Foolbox, a Python package that we recently released to ease the use of available adversarial attacks. Foolbox already implements many common attacks, has a simple API, is usable across a range of deep-learning packages (e.g. TensorFlow, Theano, PyTorch) and has already attracted significant adoption in the community. Your attack method could find much wider adoption if it is made available in Foolbox.

Second, your attack method might also be an interesting contender for the Robust Vision Benchmark in which we pitch DNNs against adversarial attacks. We started this benchmark a couple of days ago and you could make this an awesome public showcase for the performance of your algorithm. In particular, your attack would be tested against many different network models (current and future) without your intervention.

I'll be happy to help with technical questions. If you'd implement ZOO in Foolbox then I could automatically include your algorithm in our benchmark without any additional intervention from your side (i.e. you would not need to prepare and submit a Docker container with the attack). Let me know if that sounds interesting to you.

@huanzhang12
Owner

@wielandbrendel Thank you for your interest in our algorithm! Yes, I will be glad to implement it in Foolbox. To start with, I have two quick questions and hope you can answer:

  1. Does Foolbox implement Carlini and Wagner's attack? If so, that would be very helpful, because our attack can share a lot of code with it;
  2. Does Foolbox have a framework for black-box attacks? I briefly took a look at the documentation, and it seems to mainly focus on white-box attacks (with a model).

Thank you!

@wielandbrendel
Author

wielandbrendel commented Aug 16, 2017

@huanzhang12 Thanks for your response!

  1. Carlini and Wagner: I started to implement that attack just today. Not sure yet how long that will take though. I can let you know once the implementation is finished.

  2. Foolbox is not focused on white-box attacks. In fact, I believe that black-box attacks will become much more important, simply because white-box attacks are so easy to disarm. We implemented several black-box attacks (mostly simple noise attacks, though), and the only structural difference between white-box and black-box attacks in Foolbox is that the latter do not call the gradient function of the model ;-). Take a look at the super-simple ContrastAttack to get a feeling.
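
To make that structural point concrete for other readers: a black-box attack only ever queries the model's predictions, never its gradients. The sketch below is purely illustrative — `SimpleModel` and `contrast_reduction_attack` are hypothetical stand-ins written for this thread, not the real Foolbox API.

```python
import numpy as np

class SimpleModel:
    """Toy linear classifier exposing only predictions (no gradient method)."""
    def __init__(self, w, b):
        self.w, self.b = w, b

    def predictions(self, x):
        return x @ self.w + self.b  # logits

def contrast_reduction_attack(model, x, label, steps=20):
    """Black-box attack sketch: blends the input toward its mean value,
    querying only model.predictions until the predicted label flips."""
    mean = np.full_like(x, x.mean())
    for eps in np.linspace(0.0, 1.0, steps):
        candidate = (1 - eps) * x + eps * mean
        if np.argmax(model.predictions(candidate)) != label:
            return candidate  # adversarial example found
    return None  # attack failed within the search budget
```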

Looking forward!

@huanzhang12
Owner

@wielandbrendel Thanks for providing those details! Let me know when you finish the implementation of Carlini and Wagner's attack. It shouldn't be hard to extend it to my attack. Thanks!
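
(For readers following along: the extension alluded to here is that ZOO replaces the true gradient used by Carlini & Wagner with a zeroth-order estimate obtained from symmetric finite differences of the loss, one coordinate at a time. A minimal sketch of that estimator — the function name is illustrative, not from either codebase:)

```python
import numpy as np

def zoo_coordinate_gradient(f, x, i, h=1e-4):
    """Zeroth-order estimate of df/dx_i via a symmetric difference quotient:
    (f(x + h*e_i) - f(x - h*e_i)) / (2h). Needs only loss evaluations,
    no access to model gradients."""
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)
```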

@wielandbrendel
Author

It's been some time, but Foolbox finally contains a (very beautiful) implementation of the Carlini & Wagner attack: https://github.com/bethgelab/foolbox/blob/master/foolbox/attacks/carlini_wagner.py
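
(For reference, the objective the Carlini & Wagner L2 attack minimizes, per their paper, is ||δ||₂² + c·f(x+δ), where f is a margin loss over the logits. A minimal NumPy sketch of the targeted form below — `cw_l2_objective` is an illustrative name for this thread, not Foolbox's API:)

```python
import numpy as np

def cw_l2_objective(delta, x, target, logits_fn, c=1.0, kappa=0.0):
    """C&W L2 objective: ||delta||_2^2 + c * f(x + delta), with the
    targeted margin loss f(x') = max(max_{i != t} Z(x')_i - Z(x')_t, -kappa)."""
    z = logits_fn(x + delta)
    other = np.max(np.delete(z, target))      # best non-target logit
    f = max(other - z[target], -kappa)        # margin loss, clipped at -kappa
    return np.sum(delta ** 2) + c * f
```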

We would still be interested in having your ZOO attack as part of Foolbox! Please let me know if we can help.
