1. Support splitting weights on accelerators other than NVIDIA cards. The default tools/checkpoint_util.py involves NVIDIA-compiled logic, which other cards do not support.
2. Support multi-node distributed weight splitting. Some accelerator clusters have no shared storage, so copying the weights of a large model around is inconvenient; a multi-node splitting feature would help.
3. Reduce peak host memory. Host memory differs across machines: an NVIDIA machine with 1 TB of host memory can split on a single node, but hosts with less memory, e.g. 512 GB, hit OOM when splitting weights. A peak-memory-reduction feature would help, e.g. load one layer, then immediately save that layer.
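The third request (load one layer, save one layer) can be sketched roughly as below. This is a minimal illustration of the streaming idea, not Megatron-LM's actual converter: the per-layer file layout, the `split_layer`/`resplit_checkpoint` helpers, and the use of plain `pickle` lists in place of real tensors are all hypothetical, chosen only to show how peak host memory stays at about one layer instead of the whole model.

```python
# Hypothetical sketch of request 3: stream the checkpoint one layer at a
# time instead of loading the full model into host memory. File names,
# helpers, and the pickle-based format are illustrative assumptions.
import os
import pickle
import tempfile

def split_layer(weights, tp_size):
    """Split one layer's flat weight list into tp_size contiguous shards."""
    chunk = (len(weights) + tp_size - 1) // tp_size
    return [weights[i * chunk:(i + 1) * chunk] for i in range(tp_size)]

def resplit_checkpoint(src_dir, dst_dir, num_layers, tp_size):
    """Load one layer, split it, save its shards, then free it.

    Peak host memory is bounded by one layer, not the whole model.
    """
    os.makedirs(dst_dir, exist_ok=True)
    for layer in range(num_layers):
        with open(os.path.join(src_dir, f"layer_{layer}.pkl"), "rb") as f:
            weights = pickle.load(f)          # load a single layer
        for rank, shard in enumerate(split_layer(weights, tp_size)):
            path = os.path.join(dst_dir, f"layer_{layer}_tp{rank}.pkl")
            with open(path, "wb") as f:
                pickle.dump(shard, f)         # save its shards immediately
        del weights                           # drop it before the next layer

# Demo with a tiny two-layer "checkpoint"
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for layer in range(2):
    with open(os.path.join(src, f"layer_{layer}.pkl"), "wb") as f:
        pickle.dump([float(layer * 10 + i) for i in range(8)], f)

resplit_checkpoint(src, dst, num_layers=2, tp_size=2)
with open(os.path.join(dst, "layer_0_tp1.pkl"), "rb") as f:
    print(pickle.load(f))  # second tensor-parallel shard of layer 0
```

The same loop structure would also make request 2 easier: since each layer is an independent unit of work, layers could be assigned to different nodes and processed in parallel without shared storage.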
We have discovered a more effective and unified approach to this. This issue can be addressed in the future.