
Does vos_inference.py support multi-GPU running? #228

Open
lzkzls opened this issue Aug 16, 2024 · 2 comments

Comments


lzkzls commented Aug 16, 2024

Thanks for open-sourcing this. I ran vos_inference.py with my own video dataset on a machine with four V100 GPUs (32 GB memory each), and I noticed the program only runs on one card. Does vos_inference.py support multi-GPU running? How should I modify the code?


ronghanghu commented Aug 17, 2024

Hi @lzkzls, currently vos_inference.py only runs on a single GPU. However, since it mostly just loops over the video list, you can split your video list into multiple chunks and use the --video_list_file parameter in https://github.com/facebookresearch/segment-anything-2/blob/7e1596c0b6462eb1d1ba7e1492430fed95023598/tools/vos_inference.py#L231 to perform inference on a separate chunk of the dataset on each GPU, saving the prediction PNG files from all processes into the same output directory.
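A minimal sketch of the chunking approach described above: split the video list round-robin into one file per GPU, then launch one vos_inference.py process per chunk. Only the --video_list_file flag is confirmed by the linked source; the file names, GPU count, and the launch command in the comment are illustrative assumptions.

```python
def split_list(items, num_chunks):
    """Split items round-robin into num_chunks roughly equal chunks."""
    chunks = [[] for _ in range(num_chunks)]
    for i, item in enumerate(items):
        chunks[i % num_chunks].append(item)
    return chunks


if __name__ == "__main__":
    num_gpus = 4  # e.g. four V100s

    # Read the full video list (one video name per line; path is an example).
    with open("video_list.txt") as f:
        videos = [line.strip() for line in f if line.strip()]

    for gpu_id, chunk in enumerate(split_list(videos, num_gpus)):
        chunk_file = f"video_list_chunk{gpu_id}.txt"
        with open(chunk_file, "w") as f:
            f.write("\n".join(chunk) + "\n")
        # Then launch one process per GPU, e.g. (other flags depend on
        # your setup; check vos_inference.py's argument parser):
        #   CUDA_VISIBLE_DEVICES=<gpu_id> python tools/vos_inference.py \
        #       --video_list_file <chunk_file> ... &
```

Since the script writes per-video PNG predictions, the processes do not conflict as long as each chunk contains distinct videos.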

@EricLina

Hello, this issue may be helpful.
