
🐛 [Bug] Expected input tensors to have type Half, found type float #2113

Open
thesword53 opened this issue Jul 13, 2023 · 12 comments
Labels: bug (Something isn't working)

@thesword53

Bug Description

TensorRT throws an error about float32 input tensors even though I am passing fp16 tensors as input.

I attached the file IFRNet.py, adapted from https://github.com/ltkong218/IFRNet/blob/main/models/IFRNet.py.

To Reproduce

Steps to reproduce the behavior:

  1. Compile model with fp16 inputs and fp16 dtype
  2. Infer model with fp16 tensors
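The two steps above can be sketched roughly as follows. Shapes and variable names are hypothetical, guessed from IFRNet's (img0, img1, embt) signature, and the compile call assumes a CUDA build of Torch-TensorRT is installed:

```python
import torch


def compile_ifrnet_fp16(scripted_model):
    """Step 1 (sketch): compile a scripted model with Half inputs and
    Half precision enabled. Requires torch_tensorrt with CUDA."""
    import torch_tensorrt  # lazy import so the sketch loads without it

    inputs = [
        torch_tensorrt.Input((1, 3, 256, 256), dtype=torch.half),  # img0
        torch_tensorrt.Input((1, 3, 256, 256), dtype=torch.half),  # img1
        torch_tensorrt.Input((1, 1, 1, 1), dtype=torch.half),      # embt
    ]
    return torch_tensorrt.compile(
        scripted_model,
        inputs=inputs,
        enabled_precisions={torch.half},  # allow fp16 engine kernels
    )


def make_fp16_inputs(device="cpu"):
    """Step 2 (sketch): build Half tensors for inference.
    In practice these should live on cuda:0, not the CPU."""
    img0 = torch.randn(1, 3, 256, 256, dtype=torch.half, device=device)
    img1 = torch.randn(1, 3, 256, 256, dtype=torch.half, device=device)
    embt = torch.full((1, 1, 1, 1), 0.5, dtype=torch.half, device=device)
    return img0, img1, embt
```

Despite all three tensors being `torch.half` here, the compiled engine raises the dtype error shown below.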

Expected behavior

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0): 1.4.0
  • PyTorch Version (e.g. 1.0): 3.
  • CPU Architecture: x86_64
  • OS (e.g., Linux): Arch Linux
  • How you installed PyTorch (conda, pip, libtorch, source): Arch Linux AUR
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version: 3.11.4
  • CUDA version: 12.2
  • GPU models and configuration: RTX 2080 SUPER
  • Any other relevant information:

Additional context

WARNING: [Torch-TensorRT] - For input embt.1, found user specified input dtype as Half. The compiler is going to use the user setting Half
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Trying to record the value 162 with the ITensor (Unnamed Layer* 79) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 185 with the ITensor (Unnamed Layer* 101) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Input 0 of engine __torch___wrappers_ifrnet_models_IFRNet_Model_trt_engine_0x5604f02a32e0 was found to be on cpu but should be on cuda:0. This tensor is being moved by the runtime but for performance considerations, ensure your inputs are all on GPU and open an issue here (https://github.com/pytorch/TensorRT/issues) if this warning persists.
WARNING: [Torch-TensorRT] - Input 1 of engine __torch___wrappers_ifrnet_models_IFRNet_Model_trt_engine_0x5604f02a32e0 was found to be on cpu but should be on cuda:0. This tensor is being moved by the runtime but for performance considerations, ensure your inputs are all on GPU and open an issue here (https://github.com/pytorch/TensorRT/issues) if this warning persists.
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: [Error thrown at /usr/src/debug/python-pytorch-tensorrt/TensorRT/core/runtime/execute_engine.cpp:136] Expected inputs[i].dtype() == expected_type to be true but got false
Expected input tensors to have type Half, found type float

IFRNet.py.gz

@thesword53 thesword53 added the bug Something isn't working label Jul 13, 2023
@narendasan (Collaborator)

I don't see the torch-tensorrt code in the link you shared.

@bowang007 Keep an eye on this, might be related to some of your PRs

@leizaf

leizaf commented Jul 25, 2023

I'm also having this issue.

@thesword53 (Author)

I also noticed that a simple sum of two fp16 tensors is implicitly cast to an fp32 tensor.
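For reference, in eager PyTorch the sum of two Half tensors stays Half, so the promotion described above seems to happen inside the converted graph rather than in ordinary tensor arithmetic. A minimal check:

```python
import torch

a = torch.ones(4, dtype=torch.half)
b = torch.ones(4, dtype=torch.half)

# Eager-mode type promotion: half + half stays half
print((a + b).dtype)  # torch.float16
```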

@JXQI

JXQI commented Sep 23, 2023

I'm also having this issue. How can it be solved?

@janblumenkamp

I am encountering the same issue.

@bowang007 (Collaborator)

This PR can help resolve the above issue. Thanks!

@Eliza-and-black

> This PR can help resolve the above issue. Thanks!

@bowang007 Is there any update on your commit? It seems to fail a few checks. Eagerly looking forward to your update.

@johnzlli

also having this issue!

@johnzlli

> This PR can help resolve the above issue. Thanks!

[image: screenshot of the new error]
There is a new error with this PR. Is there any update?

@bowang007 (Collaborator)

Hi @johnzlli, can you try using the Dynamo path instead?
We now support Dynamo, since the TorchScript path is being deprecated.
Thanks!

@johnzlli

> Hi @johnzlli, can you try using the Dynamo path instead? We now support Dynamo, since the TorchScript path is being deprecated. Thanks!

Thanks for your reply! Dynamo is great work, but there is no way to export the compiled model, so we still have to use TorchScript.

@aadishjoshi09

Having a similar issue! Please fix this.

9 participants