Gradient of a Discriminator in optimizing a Generator #179
Comments
Hello, your email has been received by Xiaopi!!! [automatic email reply]
@YoojLee Did you ever figure it out? I'm having the same question.
@ahmedemam576 Not entirely, but I guess it might depend on which dataset you use; still not sure, though.
@YoojLee Did you solve the problem? I think this discussion may be helpful: https://discuss.pytorch.org/t/how-to-turn-off-gradient-during-gan-training/39886
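The linked thread discusses freezing the discriminator's parameters during the generator step. A minimal sketch of that pattern, using hypothetical stand-in modules (not the code from this repository): freezing D skips computing gradients *for D's parameters*, but gradients still flow *through* D's operations to G.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for a generator and discriminator.
G = nn.Linear(2, 2)
D = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
bce = nn.BCELoss()

# Freeze D for the generator step (an optimization, not a correctness fix).
for p in D.parameters():
    p.requires_grad_(False)

z = torch.randn(4, 2)
loss_G = bce(D(G(z)), torch.ones(4, 1))
loss_G.backward()  # G's parameters receive gradients; D's stay None

# Unfreeze D before its own next update step.
for p in D.parameters():
    p.requires_grad_(True)
```

Note that this only avoids some wasted gradient computation; as discussed below, the generator update is correct either way, since the generator's optimizer never touches D's parameters.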
In the code above, the generated images are detached from the computational graph when optimizing D, because the gradient of G should not be computed during D's update. I was fairly sure that, symmetrically, the discriminator should be isolated from G when optimizing G (e.g., by controlling the requires_grad attributes of D's parameters).
However, I am confused because the discriminator does not appear to be detached from the generator in your code. Am I misunderstanding something? I look forward to your reply. Thanks a lot!
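The asymmetry the question describes can be seen in a minimal sketch of a standard GAN training loop (hypothetical stand-in modules, not this repository's code): during the D step the fake batch is detached so no gradient is built for G, but during the G step D must *not* be detached, because the generator's gradient has to flow through the discriminator. Only G's parameters change in that step anyway, since the generator's optimizer holds only G's parameters.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for a generator and discriminator.
G = nn.Linear(2, 2)
D = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
opt_G = torch.optim.SGD(G.parameters(), lr=0.1)
opt_D = torch.optim.SGD(D.parameters(), lr=0.1)
bce = nn.BCELoss()

z = torch.randn(4, 2)
real = torch.randn(4, 2)
ones, zeros = torch.ones(4, 1), torch.zeros(4, 1)

# --- D step: detach the fake batch so no gradient is computed for G ---
fake = G(z)
loss_D = bce(D(real), ones) + bce(D(fake.detach()), zeros)
opt_D.zero_grad()
loss_D.backward()
opt_D.step()  # only D's parameters change

# --- G step: do NOT detach D; the gradient must flow through it ---
loss_G = bce(D(G(z)), ones)
opt_G.zero_grad()
loss_G.backward()  # D's parameters get .grad populated, but...
opt_G.step()       # ...opt_G updates only G's parameters
```

So detaching (or freezing) D during the G step is not needed for correctness; at most it saves the cost of computing D's parameter gradients, which opt_G would ignore regardless.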