
[Bug]: AMD 7900 XTX won't start #521

Open
3 of 6 tasks
zarigata opened this issue Aug 14, 2024 · 8 comments

@zarigata

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

It won't start, no matter what I do.

Steps to reproduce the problem

1. Download
2. Set COMMANDLINE_ARGS= --use-directml --medvram --opt-sub-quad-attention --opt-split-attention --no-half-vae --upcast-sampling

What should have happened?

It should start and run.

What browsers do you use to access the UI?

Mozilla Firefox, Google Chrome

Sysinfo

H:\AMD stable\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
Traceback (most recent call last):
File "H:\AMD stable\stable-diffusion-webui-amdgpu\launch.py", line 48, in <module>
main()
File "H:\AMD stable\stable-diffusion-webui-amdgpu\launch.py", line 29, in main
filename = launch_utils.dump_sysinfo()
File "H:\AMD stable\stable-diffusion-webui-amdgpu\modules\launch_utils.py", line 698, in dump_sysinfo
text = sysinfo.get()
File "H:\AMD stable\stable-diffusion-webui-amdgpu\modules\sysinfo.py", line 46, in get
res = get_dict()
File "H:\AMD stable\stable-diffusion-webui-amdgpu\modules\sysinfo.py", line 119, in get_dict
"Extensions": get_extensions(enabled=True, fallback_disabled_extensions=config.get('disabled_extensions', [])),
AttributeError: 'str' object has no attribute 'get'
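The AttributeError that aborts this sysinfo dump can be reproduced in isolation: `config` is evidently still a raw string at the point where the code expects a parsed dict. A minimal sketch of the failure mode (variable names here are illustrative assumptions, not the webui's actual code):

```python
import json

# The sysinfo code effectively calls config.get('disabled_extensions', []),
# which only works if config is a dict. If config is still an unparsed
# JSON string, .get() raises the AttributeError seen in the traceback.
raw_config = '{"disabled_extensions": []}'  # still a plain string

try:
    raw_config.get("disabled_extensions", [])
except AttributeError as exc:
    # 'str' object has no attribute 'get' -- matches the error above
    print(exc)

# Parsing the string first restores the expected dict interface.
config = json.loads(raw_config)
disabled = config.get("disabled_extensions", [])
```

This suggests the config file was read but never deserialized before being handed to `get_extensions`.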

Console logs

venv "H:\AMD stable\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-2-g395ce8dc
Commit hash: 395ce8dc2cb01282d48074a89a5e6cb3da4b59ab
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
H:\AMD stable\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-directml --medvram --opt-sub-quad-attention --opt-split-attention --no-half-vae --upcast-sampling
Traceback (most recent call last):
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\launch.py", line 48, in <module>
    main()
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\launch.py", line 44, in main
    start()
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\modules\launch_utils.py", line 686, in start
    import webui
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\webui.py", line 13, in <module>
    initialize.imports()
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\modules\initialize.py", line 35, in imports
    from modules import shared_init
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\modules\shared_init.py", line 8, in <module>
    from modules.zluda import initialize_zluda
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\modules\zluda.py", line 5, in <module>
    import onnxruntime as ort
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnxruntime\__init__.py", line 57, in <module>
    raise import_capi_exception
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnxruntime\__init__.py", line 23, in <module>
    from onnxruntime.capi._pybind_state import ExecutionMode  # noqa: F401
  File "H:\AMD stable\stable-diffusion-webui-amdgpu\venv\lib\site-packages\onnxruntime\capi\_pybind_state.py", line 32, in <module>
    from .onnxruntime_pybind11_state import *  # noqa
ImportError: DLL load failed while importing onnxruntime_pybind11_state: A dynamic link library (DLL) initialization routine failed.
Press any key to continue . . .
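Because this fork imports onnxruntime during ZLUDA initialization, a broken onnxruntime wheel aborts the launch before the UI ever loads. A quick way to isolate the problem from the webui is to probe the import directly (a sketch; it treats any ImportError, including the DLL-init failure above, as a broken install):

```python
def onnxruntime_importable() -> bool:
    """Return True if onnxruntime imports cleanly, independent of the webui.

    A DLL-initialization failure like the one in the log above surfaces
    here as an ImportError rather than a successful import.
    """
    try:
        import onnxruntime  # noqa: F401
        return True
    except ImportError:
        return False


if __name__ == "__main__":
    if not onnxruntime_importable():
        # A reinstall of the wheel is a reasonable first thing to try.
        print("onnxruntime is broken; try reinstalling it with pip")
```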

Additional information

No response

@CS1o

CS1o commented Aug 14, 2024

Hey, please don't use DirectML for this GPU.
You can run it with ZLUDA. It's much faster and has better VRAM management.
I have the same card and no issues.
You can find my AMD Automatic1111 with ZLUDA guide here:
https://github.com/CS1o/Stable-Diffusion-Info/wiki/Installation-Guides

@zarigata
Author

> Hey, please don't use DirectML for this GPU. You can run it with ZLUDA. It's much faster and has better VRAM management. I have the same card and no issues. You can find my AMD Automatic1111 with ZLUDA guide here: https://github.com/CS1o/Stable-Diffusion-Info/wiki/Installation-Guides

I've followed this guide and still no success... maybe there is a new update in ZLUDA?

@zarigata
Author

stderr: error: subprocess-exited-with-error

Preparing metadata (pyproject.toml) did not run successfully.
exit code: 1

[17 lines of output]

  • meson setup C:\Users\zarigata\AppData\Local\Temp\pip-install-rld2l6i6\scikit-image_1fead003d3cc4b6db02665cc03107916 C:\Users\zarigata\AppData\Local\Temp\pip-install-rld2l6i6\scikit-image_1fead003d3cc4b6db02665cc03107916.mesonpy-3ge7pess -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\zarigata\AppData\Local\Temp\pip-install-rld2l6i6\scikit-image_1fead003d3cc4b6db02665cc03107916.mesonpy-3ge7pess\meson-python-native-file.ini
    The Meson build system
    Version: 1.5.1
    Source dir: C:\Users\zarigata\AppData\Local\Temp\pip-install-rld2l6i6\scikit-image_1fead003d3cc4b6db02665cc03107916
    Build dir: C:\Users\zarigata\AppData\Local\Temp\pip-install-rld2l6i6\scikit-image_1fead003d3cc4b6db02665cc03107916.mesonpy-3ge7pess
    Build type: native build
    Project name: scikit-image
    Project version: 0.21.0

..\meson.build:1:0: ERROR: Unable to detect linker for compiler clang -Wl,--version
stdout:
stderr: clang: warning: unable to find a Visual Studio installation; try running Clang from a developer command prompt [-Wmsvc-not-found]
clang: error: unable to execute command: program not executable
clang: error: linker command failed with exit code 1 (use -v to see invocation)

A full log can be found at C:\Users\zarigata\AppData\Local\Temp\pip-install-rld2l6i6\scikit-image_1fead003d3cc4b6db02665cc03107916.mesonpy-3ge7pess\meson-logs\meson-log.txt
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

Encountered error while generating package metadata.

See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Press any key to continue . . .

@CS1o

CS1o commented Aug 16, 2024

Hey, please provide a full cmd log, then I can see where the issue might be.
Also open a cmd and run:
pip cache purge
Then delete the venv folder and relaunch webui-user.bat.
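The reset sequence above (purge pip's cache, delete the venv, relaunch) can be sketched as a small helper; the webui path is an assumption, and the same steps work just as well done by hand in cmd:

```python
import shutil
import subprocess
import sys
from pathlib import Path


def reset_webui_env(webui_dir: str) -> None:
    """Purge pip's download cache and delete the venv so that
    webui-user.bat rebuilds a clean environment on the next launch."""
    # Clear cached (possibly corrupted) wheels; ignore failures.
    subprocess.run([sys.executable, "-m", "pip", "cache", "purge"], check=False)

    venv = Path(webui_dir) / "venv"
    if venv.is_dir():
        shutil.rmtree(venv)  # forces a full dependency reinstall


# Example (the path is an assumption from the logs above):
# reset_webui_env(r"H:\AMD stable\stable-diffusion-webui-amdgpu")
```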

@zarigata
Author

Now it starts, but it crashes the computer on the last step... it goes VERY fast, but the last step crashes the PC.

@CS1o
Copy link

CS1o commented Aug 21, 2024

> Now it starts, but it crashes the computer on the last step... it goes VERY fast, but the last step crashes the PC.

What model, resolution, and other txt2img settings did you use when it crashed?
Please provide more information.

@zarigata
Author

> > Now it starts, but it crashes the computer on the last step... it goes VERY fast, but the last step crashes the PC.
>
> What model, resolution, and other txt2img settings did you use when it crashed? Please provide more information.

I've used all resolution sizes; for now the only one that doesn't hard-crash the PC is 512×512. I've used Stable Diffusion 1.5, OpenJourney, OpenDalle, SDXL OpenJourney, and some models from Civitai. The crashes are random. What I believe happens is that on the last step it hangs for a moment and the GPU goes to 100%, no matter the step count or size. Is there a correct sampling method (for AMD usage)?

venv "J:\Invoke\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-2-g395ce8dc
Commit hash: 395ce8dc2cb01282d48074a89a5e6cb3da4b59ab
Using ZLUDA in J:\Invoke\stable-diffusion-webui-amdgpu\.zluda
Skipping onnxruntime installation.
You are up to date with the most recent release.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
J:\Invoke\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --update-check --skip-ort
CivitAI Browser+: Aria2 RPC started
llama_cpp module not found
Loading weights [e80275529b] from J:\Invoke\stable-diffusion-webui-amdgpu\models\Stable-diffusion\jibMixRealisticXL_v140CrystalClarity.safetensors
Creating model from config: J:\Invoke\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
J:\Invoke\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 88.8s (prepare environment: 154.9s, initialize shared: 2.2s, other imports: 3.4s, load scripts: 1.4s, initialize extra networks: 0.1s, create ui: 1.7s, gradio launch: 0.4s).
Applying attention optimization: Doggettx... done.
Model loaded in 32.4s (load weights from disk: 0.9s, create model: 1.5s, apply weights to model: 28.4s, apply half(): 0.1s, move model to device: 0.2s, load textual inversion embeddings: 0.3s, calculate empty prompt: 0.9s).
Reusing loaded model jibMixRealisticXL_v140CrystalClarity.safetensors [e80275529b] to load openjourney-v4.ckpt [02e37aad9f]
Loading weights [02e37aad9f] from J:\Invoke\stable-diffusion-webui-amdgpu\models\Stable-diffusion\openjourney-v4.ckpt
Creating model from config: J:\Invoke\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
J:\Invoke\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
warnings.warn(
Applying attention optimization: Doggettx... done.
Model loaded in 2.1s (create model: 0.4s, apply weights to model: 1.4s, calculate empty prompt: 0.2s).

@CS1o

CS1o commented Sep 3, 2024

That could be a VAE-related issue; the VAE step is the last step.
Best is to install the "Tiled Diffusion & Tiled VAE" (MultiDiffusion) extension via the Extensions tab:
https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111

After the install, restart the webui completely.
Then enable only the Tiled VAE function in txt2img and try again.

Let me know if it worked.
