Submit to Foolbox and Robust Vision Benchmark #1
@wielandbrendel:
I am the co-author of Foolbox and coordinator of the Robust Vision Benchmark. The results you report are amazing, particularly so for a black-box attack. I'd like to encourage you to implement your algorithm in Foolbox, a Python package that we recently released to ease the use of available adversarial attacks. Foolbox already implements many common attacks, has a simple API, is usable across a range of deep-learning packages (e.g. TensorFlow, Theano, PyTorch) and has already attracted significant adoption in the community. Your attack method could find much wider adoption if it is made available in Foolbox.

Second, your attack method might also be an interesting contender for the Robust Vision Benchmark, in which we pitch DNNs against adversarial attacks. We started this benchmark a couple of days ago, and you could make it an awesome public showcase for the performance of your algorithm. In particular, your attack would be tested against many different network models (current and future) without your intervention.

I'll be happy to help with technical questions. If you implement ZOO in Foolbox, I can automatically include your algorithm in our benchmark without any additional intervention on your side (i.e. you would not need to prepare and submit a Docker container with the attack). Let me know if that sounds interesting to you.
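To make the "simple API" claim above concrete, here is a rough sketch of what running a Foolbox attack looked like, assuming the Foolbox 1.x-era interface; the tiny untrained Keras model, the (0, 1) pixel bounds, and the random input are placeholders for illustration, not part of the thread:

```python
import numpy as np
import foolbox
from keras.models import Sequential
from keras.layers import Dense, Flatten

# Tiny untrained stand-in classifier, only so the sketch is self-contained;
# in practice you would wrap your own trained model.
keras_model = Sequential([
    Flatten(input_shape=(28, 28, 1)),
    Dense(10, activation="softmax"),
])

# Foolbox wraps models from several frameworks behind one interface
# (KerasModel here; TensorFlowModel, PyTorchModel, etc. work the same way).
fmodel = foolbox.models.KerasModel(keras_model, bounds=(0, 1))

# Attacks take the wrapped model; swapping in another attack class
# (e.g. the CarliniWagnerL2Attack mentioned later in this thread)
# is a one-line change.
attack = foolbox.attacks.FGSM(fmodel)

image = np.random.rand(28, 28, 1).astype(np.float32)  # placeholder input
label = 3  # assumed true class for this illustration
adversarial = attack(image, label)  # perturbed image, or None on failure
```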
@huanzhang12:
@wielandbrendel Thank you for your interest in our algorithm! Yes, I will be glad to implement it in Foolbox. To start with, I have two simple questions and hope you can answer:
Thank you!

@wielandbrendel:
@huanzhang12 Thanks for your response!
Looking forward!
@huanzhang12:
@wielandbrendel Thanks for providing those details! Let me know when you finish the implementation of Carlini and Wagner's attack. It shouldn't be hard to extend it to my attack. Thanks!
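For context on what such an extension involves: ZOO replaces the true gradients used by the Carlini & Wagner attack with zeroth-order estimates obtained purely from model queries, via a symmetric difference quotient evaluated one coordinate at a time. Below is a minimal numpy sketch of that estimator, with a toy quadratic loss standing in for the attack objective; the plain coordinate-descent step and the step size `eta` are simplifications, since the ZOO paper pairs the estimate with ADAM or Newton-style coordinate updates:

```python
import numpy as np

def zoo_coordinate_gradient(f, x, idx, h=1e-4):
    """Estimate df/dx_idx of a black-box function f via the symmetric
    difference quotient used by ZOO:
        g_idx ~ (f(x + h*e_idx) - f(x - h*e_idx)) / (2h),
    requiring only two function (model) evaluations and no gradients."""
    e = np.zeros_like(x)
    e.flat[idx] = h
    return (f(x + e) - f(x - e)) / (2 * h)

def zoo_step(f, x, eta=0.01, h=1e-4):
    """One stochastic coordinate-descent step on a random coordinate.
    (Simplification: ZOO itself uses ADAM or Newton coordinate updates.)"""
    idx = np.random.randint(x.size)
    g = zoo_coordinate_gradient(f, x, idx, h)
    x = x.copy()
    x.flat[idx] -= eta * g
    return x

# Toy demo: minimize a quadratic, standing in for the attack objective
# that would be evaluated through black-box model queries.
loss = lambda z: float(np.sum(z ** 2))
x = np.ones((4, 4), dtype=np.float64)
for _ in range(200):
    x = zoo_step(loss, x, eta=0.5)
print(loss(x))  # should be close to 0
```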
@wielandbrendel:
It's been some time, but Foolbox finally contains a (very beautiful) implementation of the Carlini & Wagner attack: https://github.com/bethgelab/foolbox/blob/master/foolbox/attacks/carlini_wagner.py We would still be interested to have your ZOO attack as part of Foolbox! Please let me know if we can help.