Hi, does the FINN framework support ONNX files converted from a TFLite model using the tf2onnx converter?
Answered by heborras, Feb 28, 2022
Answer selected by
amroybd
Hi @amroybd,
currently FINN only supports quantized models that were trained and exported with Brevitas.
In the future, we are planning to add support for models trained with QKeras, in collaboration with our colleagues at hls4ml.
Using the tf2onnx converter as it stands will likely produce a network whose quantization differs from what FINN expects from Brevitas or QONNX.