[Bug]: safetensors_rust.SafetensorError: device privateuseone:0 is invalid #559
Comments
Hey, with DirectML it's really hard to get SDXL/Pony models working, as it requires a lot more VRAM than usual.
Did everything as in the tutorial; now when I try to select the model, the entire WebUI crashes:

venv "D:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
To create a public link, set
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
Failed to create model quickly; will retry using slow method.
Hey, as you only have 16 GB of RAM, you need to increase the Windows pagefile.
I don't even have 24000 left on my SSD. I guess I'll just use another model.
Never mind, even Waifu Diffusion isn't loading:

venv "D:\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
To create a public link, set
Stable diffusion model failed to load
Stable diffusion model failed to load
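Before resizing the pagefile, it helps to confirm how much free space the target drive actually has. A minimal sketch using only Python's standard library (the path is a placeholder assumption; on Windows it would be the drive that will hold the pagefile, e.g. "D:\\"):

```python
import shutil

# Check free space on the drive that will hold the pagefile.
# "/" is a placeholder path; on Windows use the pagefile drive, e.g. "D:\\".
usage = shutil.disk_usage("/")
free_mb = usage.free // (1024 * 1024)
print(f"Free space: {free_mb} MB")
```

If the reported free space is below the pagefile size you intend to set, Windows will refuse the setting or the drive will fill up, so pick a smaller size or a different drive.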
Checklist
What happened?
When trying to run any model other than Stable Diffusion 1.5 (for example, Pony Diffusion V6 XL), I get a safetensors error.
Steps to reproduce the problem
What should have happened?
WebUI should load the model
What browsers do you use to access the UI?
Other
Sysinfo
sysinfo-2024-11-03-14-59.json
Console logs
Additional information
I have 16 GB of RAM and an RX 5700 with 8 GB of VRAM; I'm using DirectML.
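For context on why SDXL-class checkpoints such as Pony Diffusion V6 XL are so much heavier than SD 1.5: the SDXL base UNet has roughly 2.6B parameters versus roughly 0.86B for SD 1.5. A back-of-envelope fp16 weight estimate (the parameter counts are approximate public figures, not exact measurements):

```python
# Rough fp16 memory estimate for UNet weights alone (2 bytes per parameter).
# Parameter counts below are approximate public figures, not exact values.
BYTES_PER_PARAM_FP16 = 2

models = {
    "SD 1.5 UNet": 0.86e9,    # ~860M parameters
    "SDXL base UNet": 2.6e9,  # ~2.6B parameters
}

for name, params in models.items():
    gib = params * BYTES_PER_PARAM_FP16 / (1024 ** 3)
    print(f"{name}: ~{gib:.1f} GiB of fp16 weights")
```

The UNet weights are only part of the footprint; the VAE, text encoders, and activations add more on top, which is why an 8 GB card under DirectML (which tends to manage VRAM less efficiently than ROCm/CUDA) struggles with SDXL while handling SD 1.5 fine.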