
[Bug]: Image fails to generate with ONNX enabled #531

Open · 3 of 6 tasks
Diridibindy opened this issue Sep 3, 2024 · 7 comments

@Diridibindy

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Image generation fails when ONNX is enabled.

Steps to reproduce the problem

  1. Enable ONNX
  2. Try to generate an image

What should have happened?

The image should have been generated.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2024-09-03-03-29.json

Console logs

venv "D:\Tools\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.10.1
Commit hash: d8b7380b18d044d2ee38695c58bae3a786689cf3
Installing torch and torchvision
Requirement already satisfied: torch==2.3.1 in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (2.3.1)
Requirement already satisfied: torchvision in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (0.18.1)
Requirement already satisfied: torch-directml in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (0.2.4.dev240815)
Requirement already satisfied: filelock in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from torch==2.3.1) (3.15.4)
Requirement already satisfied: typing-extensions>=4.8.0 in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from torch==2.3.1) (4.12.2)
Requirement already satisfied: sympy in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from torch==2.3.1) (1.13.2)
Requirement already satisfied: networkx in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from torch==2.3.1) (3.3)
Requirement already satisfied: jinja2 in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from torch==2.3.1) (3.1.4)
Requirement already satisfied: fsspec in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from torch==2.3.1) (2024.6.1)
Requirement already satisfied: mkl<=2021.4.0,>=2021.1.1 in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from torch==2.3.1) (2021.4.0)
Requirement already satisfied: numpy in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from torchvision) (1.26.2)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from torchvision) (9.5.0)
Requirement already satisfied: intel-openmp==2021.* in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch==2.3.1) (2021.4.0)
Requirement already satisfied: tbb==2021.* in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch==2.3.1) (2021.13.1)
Requirement already satisfied: MarkupSafe>=2.0 in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from jinja2->torch==2.3.1) (2.1.5)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in d:\tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages (from sympy->torch==2.3.1) (1.3.0)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --use-directml --reinstall-torch
ONNX: version=1.19.0 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Loading weights [821aa5537f] from D:\Tools\stable-diffusion-webui-amdgpu\models\Stable-diffusion\autismmixSDXL_autismmixPony(1).safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: D:\Tools\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
Startup time: 22.1s (prepare environment: 29.7s, initialize shared: 3.1s, load scripts: 1.0s, create ui: 0.5s, gradio launch: 1.1s).
creating model quickly: OSError
Traceback (most recent call last):
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 402, in cached_file
    resolved_file = hf_hub_download(
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1240, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1347, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1854, in _raise_on_head_call_error
    raise head_call_error
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1751, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1673, in get_hf_file_metadata
    r = _request_wrapper(
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 376, in _request_wrapper
    response = _request_wrapper(
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 400, in _request_wrapper
    hf_raise_for_status(response)
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 352, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-66d681b8-507b9ddc6d4a9dd716fdd90f;c380cdb6-456d-468e-9b18-a8588cb91577)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\Diri\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\Diri\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\Diri\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "D:\Tools\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "D:\Tools\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "D:\Tools\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "D:\Tools\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "D:\Tools\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3213, in from_pretrained
    resolved_config_file = cached_file(
  File "D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 425, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
Applying attention optimization: sub-quadratic... done.
Model loaded in 42.9s (load weights from disk: 1.2s, load config: 0.2s, create model: 16.9s, apply weights to model: 19.9s, apply half(): 0.6s, move model to device: 0.2s, hijack: 0.3s, calculate empty prompt: 3.5s).
*** Error completing request
*** Arguments: ('task(kzid83zm6p47i2t)', <gradio.routes.Request object at 0x000001DC0A18DED0>, 'cat. cute', '', [], 1, 1, 7, 1024, 800, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\Tools\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "D:\Tools\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "D:\Tools\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "D:\Tools\stable-diffusion-webui-amdgpu\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "D:\Tools\stable-diffusion-webui-amdgpu\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "D:\Tools\stable-diffusion-webui-amdgpu\modules\processing.py", line 917, in process_images_inner
        shared.sd_model.scheduler = sd_samplers.create_sampler(p.sampler_name, shared.sd_model)
      File "D:\Tools\stable-diffusion-webui-amdgpu\modules\sd_samplers.py", line 42, in create_sampler
        sampler = config.constructor.from_config(model.scheduler.config)
    AttributeError: 'function' object has no attribute 'from_config'

Additional information

No response
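
For context on the log above: the 401 / "Repository Not Found" block during "creating model quickly" is the fast-load path failing to reach huggingface.co and is non-fatal here, since the log then falls back to the slow method and reports "Model loaded in 42.9s". The actual failure is the final AttributeError in sd_samplers.create_sampler, where config.constructor appears to be a plain sampler function rather than a diffusers scheduler class, so it has no from_config. The snippet below is only a hypothetical sketch of that mismatch and a defensive check; make_sampler is an invented helper, not the webui's code.

# Hypothetical sketch, not the webui's actual code: illustrates why
# config.constructor.from_config(...) raises AttributeError when the sampler
# entry holds a plain function instead of a diffusers scheduler class.
from diffusers import PNDMScheduler

def make_sampler(constructor, scheduler_config):
    # Diffusers scheduler classes expose a from_config classmethod;
    # k-diffusion sampler entries are plain functions and do not.
    if not hasattr(constructor, "from_config"):
        raise TypeError(
            f"{constructor!r} has no from_config(); this sampler cannot be "
            "used on the ONNX/diffusers code path"
        )
    return constructor.from_config(scheduler_config)

# A scheduler class works:
scheduler = make_sampler(PNDMScheduler, PNDMScheduler().config)
# A bare function, as in the traceback, now fails with a clear TypeError:
# make_sampler(lambda *a, **k: None, {})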

Ming2k8-Coder commented Sep 3, 2024 via email

@Diridibindy (Author)

With a 1.5 model I get the following; the log in the original post is what happens if I try SDXL with ONNX.
venv "D:\Tools\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.10.1
Commit hash: d8b7380
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --lowvram --disable-nan-check --use-directml
ONNX: version=1.19.0 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 19.4s (prepare environment: 28.1s, initialize shared: 3.0s, load scripts: 1.0s, create ui: 0.6s, gradio launch: 0.3s).
Fetching 11 files: 100%|███████████████████████████████████████████████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 0%| | 0/6 [00:00<?, ?it/s]D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
Some weights of the model checkpoint were not used when initializing CLIPTextModel:
['text_model.embeddings.position_ids']
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:04<00:00, 1.28it/s]
Applying attention optimization: sub-quadratic... done.
WARNING: ONNX implementation works best with SD.Next. Please consider migrating to SD.Next.
D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_attn_mask_utils.py:86: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1 or self.sliding_window is not None:
D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_attn_mask_utils.py:162: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if past_key_values_length > 0:
D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\models\clip\modeling_clip.py:797: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
encoder_states = () if output_hidden_states else None
D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\models\clip\modeling_clip.py:802: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if output_hidden_states:
D:\Tools\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\models\clip\modeling_clip.py:825: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if output_hidden_states:
ONNX: Failed to convert model: model='mistoonAnime_v30.safetensors', error=z_(): incompatible function arguments. The following argument types are supported:
1. (self: torch._C.Node, arg0: str, arg1: torch.Tensor) -> torch._C.Node

Invoked with: %343 : Tensor = onnx::Constant(), scope: transformers.models.clip.modeling_clip.CLIPTextModel::/transformers.models.clip.modeling_clip.CLIPTextTransformer::text_model/transformers.models.clip.modeling_clip.CLIPEncoder::encoder/transformers.models.clip.modeling_clip.CLIPEncoderLayer::layers.0/transformers.models.clip.modeling_clip.CLIPSdpaAttention::self_attn
, 'value', 0.125
(Occurred when translating scaled_dot_product_attention).
Fetching 11 files: 100%|███████████████████████████████████████████████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 0%| | 0/6 [00:00<?, ?it/s]Some weights of the model checkpoint were not used when initializing CLIPTextModel:
['text_model.embeddings.position_ids']
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:03<00:00, 1.58it/s]
*** Error completing request
*** Arguments: ('task(t8onr13m4708nao)', <gradio.routes.Request object at 0x000001FEEFDF8E50>, 'cat', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'PNDM', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 53, in f
    res = func(*args, **kwargs)
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\processing.py", line 849, in process_images
    res = process_images_inner(p)
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\processing.py", line 917, in process_images_inner
    shared.sd_model.scheduler = sd_samplers.create_sampler(p.sampler_name, shared.sd_model)
  File "D:\Tools\stable-diffusion-webui-amdgpu\modules\sd_samplers.py", line 38, in create_sampler
    if model.is_sdxl and config.options.get("no_sdxl", False):
AttributeError: 'NoneType' object has no attribute 'get'
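
Two separate things go wrong in this run. First, the ONNX conversion of the CLIP text encoder fails while tracing scaled_dot_product_attention (the "z_(): incompatible function arguments" error); this is a known rough edge when exporting transformers' SDPA attention through torch.onnx.export, and loading the text encoder with eager attention before export is a common workaround. Second, create_sampler then fails because config.options is apparently None for the selected sampler entry; a guard such as (config.options or {}).get("no_sdxl", False) would avoid that particular crash, though the failed conversion is the more fundamental problem. The sketch below only illustrates the eager-attention export idea in isolation; the model id, output file name, and wrapper are placeholders, not the webui's conversion code.

# Standalone sketch (assumptions: "openai/clip-vit-large-patch14" and
# "clip_text.onnx" are placeholders; this is not the webui's conversion code).
# Loading the text encoder with eager attention sidesteps the
# scaled_dot_product_attention tracing failure shown in the log above.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(model_id)
# attn_implementation="eager" avoids the SDPA path the tracer choked on
# (requires transformers >= 4.36, which the CLIPSdpaAttention in the log implies).
text_encoder = CLIPTextModel.from_pretrained(model_id, attn_implementation="eager").eval()

class TextEncoderWrapper(torch.nn.Module):
    # Returns a plain tuple so the ONNX exporter does not have to deal with
    # transformers' ModelOutput dictionaries.
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder

    def forward(self, input_ids):
        last_hidden_state, pooled = self.encoder(input_ids=input_ids, return_dict=False)[:2]
        return last_hidden_state, pooled

tokens = tokenizer("a cat", padding="max_length",
                   max_length=tokenizer.model_max_length, return_tensors="pt")

torch.onnx.export(
    TextEncoderWrapper(text_encoder),
    (tokens.input_ids,),
    "clip_text.onnx",
    input_names=["input_ids"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={"input_ids": {0: "batch"}},
    opset_version=14,
)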


Ming2k8-Coder commented Sep 3, 2024 via email

@Diridibindy (Author)

I am using the integrated GPU of a Ryzen 5 5500U.

Ming2k8-Coder commented Sep 3, 2024 via email

@Diridibindy (Author)

I am using it just fine without ONNX. The issue only arises with ONNX.
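
Since generation works without ONNX and only fails with it, it may help to confirm the ONNX Runtime DirectML provider independently of the webui. A minimal check, assuming onnxruntime-directml is installed in the same venv:

# Run inside the webui venv; verifies that the DirectML execution provider
# reported in the startup log ("provider=DmlExecutionProvider") is available.
import onnxruntime as ort

print(ort.__version__)                # the log shows 1.19.0
print(ort.get_available_providers())  # expect DmlExecutionProvider and CPUExecutionProvider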

Ming2k8-Coder commented Sep 3, 2024 via email
