A question about your loss_coteaching function #4
Hello, I find that in your loss_coteaching function the parameters you pass, y_1 and y_2, are already log_softmax results. But you then use cross_entropy, which combines log_softmax and nll_loss, so log_softmax ends up being applied twice. I am not sure whether I am wrong, or whether this problem does not occur in your PyTorch version.

By the way, even though log_softmax is applied twice, your code is still right~

I also observed the same issue, but I tested using log_softmax only once and did not observe a difference in the results. Please let me know if you observe different results. One more issue: in the return statement of the loss, the loss value is normalized twice.

OK~

I don't see that y_1 and y_2 are already log_softmax results; has the code been changed and fixed? @nihaomiao @bbdamodaran

This was fixed in 4d32d1c. Note that even if the results were already log_softmax outputs, the code would still be correct.
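For readers wondering why the double application is harmless: log_softmax is idempotent. Its outputs exponentiate to a probability distribution that sums to one, so the second logsumexp term is log(1) = 0 and the values are unchanged. A minimal PyTorch check:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))

once = F.log_softmax(logits, dim=1)
twice = F.log_softmax(once, dim=1)

# log_softmax outputs exponentiate to probabilities summing to 1,
# so the second logsumexp subtracts log(1) = 0 and nothing changes.
print(torch.allclose(once, twice))  # True

# Consequently cross_entropy (log_softmax + nll_loss) yields the same
# loss whether it receives raw logits or log_softmax outputs.
print(torch.allclose(F.cross_entropy(logits, targets),
                     F.cross_entropy(once, targets)))  # True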
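For context, here is a minimal sketch of what a fixed loss_coteaching could look like after 4d32d1c. This is not the repository's exact code; the signature and the forget_rate handling are assumptions. The key points are that y_1 and y_2 should be raw logits (no prior log_softmax), and that the per-sample losses are normalized only once, addressing the double-normalization issue mentioned above.

```python
import torch
import torch.nn.functional as F

def loss_coteaching(y_1, y_2, t, forget_rate):
    # y_1, y_2 are raw logits; F.cross_entropy applies log_softmax
    # internally, so no log_softmax should be applied beforehand.
    loss_1 = F.cross_entropy(y_1, t, reduction='none')
    loss_2 = F.cross_entropy(y_2, t, reduction='none')

    # Small-loss trick: each network selects the samples it believes
    # are clean, and its peer trains on that selection.
    num_remember = int((1.0 - forget_rate) * len(t))
    idx_1 = torch.argsort(loss_1)[:num_remember]
    idx_2 = torch.argsort(loss_2)[:num_remember]

    # The default reduction='mean' already divides by the number of
    # selected samples, so the caller must not normalize again.
    loss_1_update = F.cross_entropy(y_1[idx_2], t[idx_2])
    loss_2_update = F.cross_entropy(y_2[idx_1], t[idx_1])
    return loss_1_update, loss_2_update
```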