Disparity between Eq.(18) in the paper and code implementation #75

Open

cailile opened this issue Sep 2, 2024 · 1 comment

Comments

@cailile

cailile commented Sep 2, 2024

Dear authors, I notice there may be some disparities between Eq. (18) and the implementation:

1. In robust_loss.py, Ln. 92, the robust regression loss is implemented as:

   [screenshot of the regression loss implementation in robust_loss.py, Ln. 92]

   Compared to what is defined in Eq. (18) of the paper:

   [screenshot of Eq. (18) from the paper]

   there is an extra power term **2 on the cs term (see the sketch below for an illustration).

2. The paper mentions that c is chosen to be 0.03, but in train_roma_outdoor.py, Ln. 220, the value passed to c is 1e-4:

   [screenshot of the call in train_roma_outdoor.py, Ln. 220]

3. The first minus sign in Eq. (18) (indicated by the red circle above) should be removed.

Could you kindly help to clarify? Thanks!
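For concreteness, here is a minimal sketch of the kind of discrepancy described in point 1. The loss form below is only an assumed generalized-Charbonnier-style placeholder, not the actual expression from Eq. (18) or robust_loss.py:

```python
# Hypothetical generalized-Charbonnier-style regression loss, for illustration
# only: the exact expression in Eq. (18) / robust_loss.py is NOT reproduced here.

def reg_loss_single_cs(epe: float, cs: float, alpha: float) -> float:
    # Variant with a single leading factor of cs (what the issue suggests the paper states).
    return cs * ((epe / cs) ** 2 + 1) ** (alpha / 2)

def reg_loss_squared_cs(epe: float, cs: float, alpha: float) -> float:
    # Same form, but with the leading cs squared ("extra power term **2 on the cs term").
    return cs ** 2 * ((epe / cs) ** 2 + 1) ** (alpha / 2)

# Hypothetical values; the two variants differ by an overall factor of cs.
epe, cs, alpha = 0.01, 1e-4, 0.5
print(reg_loss_single_cs(epe, cs, alpha), reg_loss_squared_cs(epe, cs, alpha))
```

Under this assumed form, the two variants differ only by an overall factor of cs, which rescales the regression term relative to the other loss terms.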

@Parskatt
Owner

Parskatt commented Nov 1, 2024

1. (and 3.) You are correct; Eq. (16) is correct, but we forgot to flip the sign for the KL divergence, thanks.
2. The value we mention in the paper is in pixels, while the codebase is in normalized coordinates. At a resolution of 560 we get $560/2 \cdot 10^{-4} \approx 0.03$ pixels.
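A quick worked check of this unit conversion (assuming normalized coordinates spanning [-1, 1], so one normalized unit corresponds to resolution / 2 pixels):

```python
# Convert the threshold c from normalized coordinates (as used in the codebase)
# to pixels (as quoted in the paper).
# Assumption: normalized coordinates span [-1, 1], so 1 normalized unit = resolution / 2 pixels.

def normalized_to_pixels(c_normalized: float, resolution: int) -> float:
    return c_normalized * resolution / 2

c_code = 1e-4       # value passed to c in train_roma_outdoor.py
resolution = 560    # resolution mentioned in the reply above
print(normalized_to_pixels(c_code, resolution))  # 0.028, i.e. roughly the 0.03 px quoted in the paper
```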
