Error at export to ONNX-FINN (Caching of output shapes is required to export QuantConvNd) #424
Unanswered
balabengba asked this question in Q&A
Replies: 2 comments · 1 reply
-
Hi, did you find a solution?
1 reply
-
When I ran into this problem, my fix was to remove any "self.conv = qnn.QuantConv2d(...)" modules that are defined in the model but never used later in forward().
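To see why an unused layer triggers this error, here is a minimal stdlib-only sketch of the mechanism: during the export tracing pass each quantized layer caches its output shape in forward(), and the exporter refuses to export any layer whose cache was never filled. The names below (QuantConv2dStub, export_finn_onnx_stub) are illustrative stand-ins, not Brevitas APIs.

```python
class QuantConv2dStub:
    """Stand-in for a quantized conv that caches its output shape on forward."""
    def __init__(self):
        self.cached_output_shape = None  # filled on the first forward pass

    def forward(self, shape):
        # Pretend the conv preserves the input shape; record it in the cache.
        self.cached_output_shape = shape
        return shape

class Model:
    def __init__(self, include_unused=True):
        self.conv1 = QuantConv2dStub()
        # A layer that is defined but never called in forward():
        self.conv_unused = QuantConv2dStub() if include_unused else None

    def forward(self, shape):
        return self.conv1.forward(shape)  # conv_unused never runs

    def layers(self):
        return [l for l in (self.conv1, self.conv_unused) if l is not None]

def export_finn_onnx_stub(model, input_shape):
    model.forward(input_shape)  # one tracing pass to populate the caches
    for layer in model.layers():
        if layer.cached_output_shape is None:
            raise RuntimeError(
                "Caching of output shapes is required to export QuantConvNd")
    return "exported"

# The unused layer's cache stays empty, so export fails:
try:
    export_finn_onnx_stub(Model(include_unused=True), (1, 3, 512, 512))
except RuntimeError as e:
    print(e)  # -> Caching of output shapes is required to export QuantConvNd

# Removing the unused layer (the fix described above) makes export succeed:
print(export_finn_onnx_stub(Model(include_unused=False), (1, 3, 512, 512)))
# -> exported
```

So the practical takeaway is: delete (or actually use) every QuantConv2d you declare, since a declared-but-unused one never gets its shape cached during the export trace.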
-
Hi everyone, I'm trying to implement a segmentation network, as described in the FINN-R paper, with Brevitas and FINN. The network is built only from QuantConv2d layers.
When I export to ONNX-FINN, I get this error message:
RuntimeError: Caching of output shapes is required to export QuantConvNd
This is my export command:
bo.export_finn_onnx(model, export_path=onnx_filename, input_shape=(1, 3, 512, 512))
Could you help me with that?
torch: 1.10.0
brevitas: 0.7.0
Thanks in advance for your help.