
RuntimeError: Invalid device string: 'cuda:None' #15

Open

MichaelMedek opened this issue Aug 8, 2024 · 0 comments

MichaelMedek commented Aug 8, 2024

During training on a Tesla V100-PCIE-16GB I get the following error:

Train:   0%|                                                                                                                                           | 0/10 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/anaconda/envs/rtfm/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/anaconda/envs/rtfm/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/dev-medekm-gpu/code/Users/michael.medek/rtfm/rtfm/finetune.py", line 451, in <module>
    main(
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/dev-medekm-gpu/code/Users/michael.medek/rtfm/rtfm/finetune.py", line 408, in main
    results = train(
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/dev-medekm-gpu/code/Users/michael.medek/rtfm/rtfm/train_utils.py", line 274, in train
    batch[key] = batch[key].to(f"cuda:{local_rank}")
RuntimeError: Invalid device string: 'cuda:None'
Train:   0%| 

This traces to the following line in rtfm/train_utils.py:

batch[key] = batch[key].to(f"cuda:{local_rank}")

where local_rank is None, hence the Invalid device string: 'cuda:None'. How is this supposed to work? The function's default is local_rank=None, which can never produce a valid device string, since the rank has to be an int, right? In evaluate() the parameter is annotated as local_rank: int.
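For what it's worth, the error comes straight from the device-string parsing, so it reproduces without any distributed setup or GPU at all (a minimal sketch, independent of the rtfm code):

import torch

local_rank = None  # what train() receives with the default argument
torch.device(f"cuda:{local_rank}")  # RuntimeError: Invalid device string: 'cuda:None'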

For now, I overwrite both values:

local_rank = 0
rank = 0
print("WARNING! Overwriting local_rank and rank to 0!")

which works around the issue on a single GPU.
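A more general fix might be to resolve the device from the LOCAL_RANK environment variable that torchrun and other torch.distributed launchers export, falling back to 0 for plain single-GPU runs. Below is a sketch of what that could look like; resolve_device is a hypothetical helper, not something from the rtfm codebase:

import os
import torch

def resolve_device(local_rank=None):
    """Hypothetical helper: pick a device even when local_rank is None."""
    if local_rank is None:
        # torchrun sets LOCAL_RANK; default to 0 for non-distributed runs.
        local_rank = int(os.environ.get("LOCAL_RANK", 0))
    if torch.cuda.is_available():
        return torch.device(f"cuda:{local_rank}")
    return torch.device("cpu")

# e.g. in train():
# batch[key] = batch[key].to(resolve_device(local_rank))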
