Submission for issue #87 #155
base: master
Conversation
Hi, please find below a review submitted by one of the reviewers: Score: 7
Hi, please find below a review submitted by one of the reviewers: Score: 6 | Confidence: 4
Hi, please find below a review submitted by one of the reviewers: Score: 5

Overall, the paper is a bit unpolished: there are still places where the authors have marked reminders to edit sections in all caps, and various typos are scattered throughout the document. One of these places is the description of the PA-GAN algorithm, making it a bit hard to parse. But it is possible to understand the essence of PA-GAN by reading the paper.

The authors produce their own code for the replication, which is nice. They experiment on all 4 datasets that the original authors considered. The best thing about this paper is that they contacted the original authors on several occasions to clarify aspects of their paper, and it turns out that the original authors' implementation differed from what they stated in the paper.

That said, I have a couple of problems with the paper. One is that they do not perform all of the experiments in the original paper, only the ones using NS-GAN. This may be due to computational resources, but this is not really explained in the paper; instead, the paper brushes this a bit under the rug, saying that they use NS-GAN because it is used in the original paper. An important part of replication is stating to what extent you are replicating, given your computational budget. I'd like to see an honest discussion of this in the paper. (This also goes for the fact that the authors did not experiment with any other hyperparameters, or vary the random seed.)

I also take issue with the claim that the authors replicated the paper successfully. The FID number they report, 26.3, is the same as the NS-GAN result without PA reported in the original paper. Also, judging by the graphs, the PA approach only seems to improve relative to the baseline on 2 of the 4 datasets. I think this would be worth discussing in more detail.

Overall, I really appreciate this replication for providing open-source code and for probing into the details of the original authors' implementation. However, it does not seem like a full replication to me, only a limited one. I'd really like for this aspect to be discussed in more detail in the paper, rather than being swept under the rug.
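For context on the "NS-GAN" terminology used throughout the review: the non-saturating GAN trains the generator to maximize log D(G(z)) rather than minimize log(1 - D(G(z))). Below is a minimal sketch of the two losses in PyTorch; the function and variable names are illustrative and are not taken from the submission's code.

```python
import torch
import torch.nn.functional as F

def ns_gan_losses(d_real_logits: torch.Tensor, d_fake_logits: torch.Tensor):
    """Illustrative non-saturating (NS) GAN losses; names are hypothetical."""
    ones_real = torch.ones_like(d_real_logits)
    zeros_fake = torch.zeros_like(d_fake_logits)
    ones_fake = torch.ones_like(d_fake_logits)

    # Discriminator: standard binary cross-entropy on real vs. generated logits.
    d_loss = (
        F.binary_cross_entropy_with_logits(d_real_logits, ones_real)
        + F.binary_cross_entropy_with_logits(d_fake_logits, zeros_fake)
    )
    # Generator (non-saturating): maximize log D(G(z)) by labeling fakes as real,
    # which gives stronger gradients early in training than the minimax
    # log(1 - D(G(z))) form.
    g_loss = F.binary_cross_entropy_with_logits(d_fake_logits, ones_fake)
    return d_loss, g_loss
```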
#87