
Is it easy to support training on images with both (H, W) and (W, H) sizes in the dataset? #64

Open
tb2-sy opened this issue Mar 14, 2024 · 2 comments


tb2-sy commented Mar 14, 2024

[attached screenshot: example frames from the dataset, showing images in both (H, W) and (W, H) orientations]
Dear authors,
Thank you for the great work. I have a question about dataset training.
I tried to use the ActorHQ dataset for training with NeuS2, but the images in this dataset come in both (H, W) and (W, H) orientations, as shown in the picture above. Can your code support this data format for training? I ask because in the JSON example you provided, the image size is the same for every frame.
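For reference, a minimal sketch of what per-frame sizes could look like in a transform.json. The key names ("file_path", "w", "h", "transform_matrix") are assumed from the instant-ngp-style convention; whether NeuS2's loader actually reads per-frame sizes (rather than a single global resolution) is exactly the open question here:

```python
import json

# Hypothetical transform.json with per-frame image sizes (assumption: keys
# follow the instant-ngp-style convention; NeuS2 may expect a single global
# resolution and/or additional per-frame keys such as intrinsics).
identity = [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

transforms = {
    "frames": [
        # Landscape frame: 1920 x 1080.
        {"file_path": "images/cam_00.png", "w": 1920, "h": 1080,
         "transform_matrix": identity},
        # Portrait frame: 1080 x 1920.
        {"file_path": "images/cam_01.png", "w": 1080, "h": 1920,
         "transform_matrix": identity},
    ]
}

with open("transform.json", "w") as f:
    json.dump(transforms, f, indent=2)
```

If the loader only honors a single top-level resolution, per-frame values like these would be ignored or cause errors during data loading.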

@brianneoberson

Hi, were you able to run it on images with different sizes? I am also trying to test on a dataset whose images have different sizes; I specify the height and width per frame in the transform.json, but I am getting errors.


tb2-sy commented Jul 25, 2024

> Hi, were you able to run it on images with different sizes? I am also trying to test on a dataset whose images have different sizes; I specify the height and width per frame in the transform.json, but I am getting errors.

I didn't solve this problem and gave up experimenting with this method.
