
onnx/tensorflow-onnx and FINN framework #560

Answered by heborras
amroybd asked this question in Q&A

Hi @amroybd,
currently FINN only supports quantized models that were trained and exported with Brevitas.
In the future we plan to add support for models trained with QKeras, in collaboration with our colleagues at hls4ml.
Using the tf2onnx converter as it is today will likely produce a network whose quantization representation differs from what FINN expects from Brevitas or QONNX.
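For reference, the FINN-compatible path is to build the model from Brevitas quantized layers and export it to QONNX. Below is a minimal sketch of that flow; the toy model, bit widths, and file name are made up for illustration, and the export entry point has moved between Brevitas releases (`export_qonnx` is the name in recent ones), so check your installed version.

```python
import torch
from brevitas.nn import QuantLinear, QuantReLU
from brevitas.export import export_qonnx  # recent Brevitas; older releases exported via brevitas.onnx

# Hypothetical toy model built entirely from Brevitas quantized layers,
# so the exported graph carries the quantization info FINN expects.
class TinyQuantNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = QuantLinear(32, 16, bias=True, weight_bit_width=4)
        self.relu = QuantReLU(bit_width=4)
        self.fc2 = QuantLinear(16, 10, bias=True, weight_bit_width=4)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

model = TinyQuantNet().eval()

# Export to QONNX, the quantized-ONNX dialect that FINN ingests.
export_qonnx(model, args=torch.randn(1, 32), export_path="tiny_quant_net.onnx")
```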

Answer selected by amroybd