Training set and test set setting? #54
Hi, we use a subset of the scenes that should match the ones used by LoFTR; see megadepth.py for specifics. Table 5 is on MegaDepth-1500, which you can get from LoFTR as well. The exact settings should be the ones we use in the eval scripts in experiments. Let me know if you have any issues reproducing. I sometimes get about 0.5 lower than the reported numbers; the eval is stochastic.
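For readers following along, the eval flow might look roughly like the sketch below. The benchmark class name and data layout are assumptions inferred from this thread, not verified against the repo:

```python
# Hypothetical reproduction sketch. `MegaDepthPoseEstimationBenchmark` and the
# data root layout are assumptions inferred from this thread, not verified API.
import torch
from roma import roma_outdoor
from roma.benchmarks import MegaDepthPoseEstimationBenchmark

device = "cuda" if torch.cuda.is_available() else "cpu"
model = roma_outdoor(device=device)

benchmark = MegaDepthPoseEstimationBenchmark("data/megadepth")  # assumed data root
results = benchmark.benchmark(model)  # RANSAC inside makes the numbers stochastic
print(results)  # expect auc_5 / auc_10 / auc_20 within ~0.5 of the paper numbers
```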
When we do the benchmarking, the AUC5, AUC10, and AUC20 values of the original RoMa are 0.622, 0.764, and 0.862, respectively. But in the paper they are recorded as 62.6, 76.7, and 86.3. The differences are 0.004, 0.003, and 0.001. Do you think the differences are normal?
Yes, this seems reasonable. When I ran the original test run I think I got lucky with the auc5. There might also be some tiny differences due to updates to the codebase and so on.
This work is great and effective! I have some doubts because I am not very familiar with the conventions in this field. Are the values of the other methods in this table obtained by re-running their code, or are they referenced from the original authors' papers, such as PMatch? I noticed that their code seems to be unavailable. Can we directly reference these values when publishing our paper if we use the same code and settings for evaluation? I have not yet published a paper, and I am very puzzled about this. @Parskatt Thank you very much if you can answer.
Hi @stilcrad, the numbers are referenced from the original papers. Typically this is OK. I agree that it's unfortunate that the PMatch code is unavailable. Good luck with the paper 😀
> I thought you got …

5% lower shouldn't be possible, no.
Do you use eval_roma_outdoor.py for the paper results? I am not sure why my results are {"auc_5": 0.6221797024846101, "auc_10": 0.7651434566786843, "auc_20": 0.8623855457790823, "map_5": 0.8589333333333333, "map_10": 0.8977333333333333, "map_20": 0.9329333333333334}.
Yes, this seems reasonable. It's not a 5% difference, it's a 0.4% difference.
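To spell out the scale confusion: the paper reports percentages while the eval script prints fractions, so the gap is 0.4 percentage points, not 5%:

```python
# The paper reports AUC@5 as 62.6 (a percentage); the rerun prints 0.622 (a fraction).
paper_auc5 = 62.6
rerun_auc5 = 0.622 * 100                    # convert fraction to percent -> 62.2
print(round(paper_auc5 - rerun_auc5, 1))    # 0.4 percentage points, not a 5% gap
```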
@Parskatt Dear author, thanks so much for sharing this great work! I have a question about reproducing the results. I downloaded megadepth_test_1500.tar (https://drive.google.com/drive/folders/1nTkK1485FuwqA0DbZrK2Cl0WnXadUZdc) and the scene info files (https://github.com/zju3dv/LoFTR/tree/master/assets/megadepth_test_1500_scene_info) from LoFTR, and then I ran the
It is strange. Do you have any suggestions on this?
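A quick sanity check on the downloaded assets could look like the sketch below; both paths are assumptions about where the archives were unpacked, not repo conventions:

```python
# Verify the LoFTR MegaDepth-1500 test assets are in place.
# Paths are assumptions about the local layout, not verified conventions.
from pathlib import Path

test_root = Path("data/megadepth_test_1500")                # from megadepth_test_1500.tar
scene_info = Path("assets/megadepth_test_1500_scene_info")  # from the LoFTR repo

print("images found:", sum(1 for _ in test_root.rglob("*.jpg")))
print("scene info files:", [p.name for p in scene_info.glob("*.npz")])
```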
@nfyfamr I don't have my machine in front of me at the moment so I can't rerun the tests myself, but here's a checklist for what should be enabled to reproduce the results. A) Check that the resolutions are (672, 672) and (1152, 1152) or similar for the coarse and upsample resolutions respectively (see the sketch below). I have previously reproduced the performance of the paper, but as PRs etc. come in, regressions can occur. Finally, there might be other packages you have installed that are interfering. I suggest making a clean conda environment (Python 3.10) and using pip install -e .
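A sketch for point (A); the attribute names here are guesses based on the thread, so adapt them to whatever the model actually exposes:

```python
# Inspect the coarse / upsample resolutions before evaluating.
# Attribute names are assumptions, not verified against the codebase.
import torch
from roma import roma_outdoor

model = roma_outdoor(device="cuda" if torch.cuda.is_available() else "cpu")
for name in ("w_resized", "h_resized", "upsample_res"):
    print(name, getattr(model, name, "<attribute not found>"))
# Per the checklist: coarse should be ~(672, 672), upsample ~(1152, 1152).
```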
Does the MegaDepth training dataset have some missing files in Undistorted_SfM? Some scenes just have scene info and no matching Undistorted_SfM. When I set k<N during training I get a data-loading error about missing files.
Are you using our train split or all scene infos? All scenes in our train split should have undistorted SfM scenes. Your undistorted SfM comes from running undistortion on something, of course. Please check that all relevant images are there. There might also be people who have asked similar questions in the LoFTR repo, please also check there.
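One way to check the scene-info / Undistorted_SfM consistency discussed here; the directory names and file-naming pattern are assumptions about a typical MegaDepth layout:

```python
# List scene-info files that have no matching Undistorted_SfM scene on disk.
# Paths and the "0015_0.1_0.3.npz" -> "0015" naming are assumed, not verified.
from pathlib import Path

scene_info_dir = Path("data/megadepth/scene_info")
undistorted_dir = Path("data/megadepth/Undistorted_SfM")

for info in sorted(scene_info_dir.glob("*.npz")):
    scene_id = info.stem.split("_")[0]
    if not (undistorted_dir / scene_id).exists():
        print(f"missing Undistorted_SfM scene for {info.name}")
```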
@Parskatt Thanks for the advice. I checked them and used a clean conda environment, but the result was the same as before. Could you let me know the commit id you used for your evaluation?
Great, it works. Now I can reproduce the values. Thanks a lot! |
Oh, then some bug must have been introduced... |
@nfyfamr Can you verify whether you ran mega_1500 or mega1500_poselib in your eval on the main branch? I saw that I accidentally pushed mega1500_poselib as the default. There might be some bug there. If that was the case, could you please rerun with mega_1500 and see if you reproduce the numbers? Thanks.
I should probably add some CI or similar to force tests to reproduce the reported results before pushing to main (something like the sketch below)... Lesson learned, maybe.
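A minimal pytest-style regression check in the spirit of that idea; the reference numbers come from this thread, while the results path and tolerance are assumptions:

```python
# Fail CI if MegaDepth-1500 AUCs drift beyond the eval's known stochastic noise.
import json
from pathlib import Path

REFERENCE = {"auc_5": 0.622, "auc_10": 0.765, "auc_20": 0.862}  # from this thread
TOLERANCE = 0.005  # the eval is stochastic; ~0.5% swings are reported above

def test_mega1500_no_regression():
    results = json.loads(Path("results/mega_1500.json").read_text())  # assumed path
    for key, ref in REFERENCE.items():
        assert abs(results[key] - ref) <= TOLERANCE, (
            f"{key} regressed: got {results[key]:.4f}, expected ~{ref:.3f}"
        )
```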
I have previously obtained results with commit 36389ef.
Hope this helps your work. |
@nfyfamr Could you rerun with the latest commit? I found that a recent PR had accidentally broken "upsample_preds", which is used for mega1500. As a bonus, I fixed the meshgrid params so you won't get annoying warnings about that anymore :p
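For context on that warning: recent PyTorch versions warn when torch.meshgrid is called without an explicit indexing argument, and passing it silences the deprecation notice:

```python
import torch

# Old style triggers "UserWarning: torch.meshgrid: in an upcoming release, it
# will be required to pass the indexing argument" on recent PyTorch versions:
# ys, xs = torch.meshgrid(torch.arange(4), torch.arange(6))

# Explicit indexing, no warning; "ij" keeps the old row-major behaviour:
ys, xs = torch.meshgrid(torch.arange(4), torch.arange(6), indexing="ij")
```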
I got these values for the latest commit (e658800). auc_5 looks to have dropped a little more than the paper, by about 0.6%, but I think this is due to the upsample resolution; the current default for it is (1344, 1344). Anyway, it now looks to work as expected. Thanks for the hot fix.
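If the drop really is the upsample resolution, forcing it back to the paper setting could look like this; the attribute name is an assumption, as above:

```python
# Override the default upsample resolution; `upsample_res` is assumed, not verified.
from roma import roma_outdoor

model = roma_outdoor(device="cuda")
model.upsample_res = (1152, 1152)  # thread suggests the latest default is (1344, 1344)
```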
Yes, this looks reasonable to me. |
Hi Author,
2. When prep_info's corresponding Undistorted_SfM has both dense0 and dense1, should dense0 or dense1 be chosen?