
Working with Custom model #73

Open
ppccww0201 opened this issue Jan 5, 2024 · 1 comment

ppccww0201 commented Jan 5, 2024

Hi.

I tried the vgg10-radioml example and synthesized the model with FINN using the command ./run-docker.sh build_custom $FINN_EXAMPLES/build/vgg10-radioml.

Ultimately, I want to export a custom model built with Brevitas (or another framework) to ONNX and synthesize it with FINN, rather than using the provided vgg10-radioml ONNX. I have a few questions, and I don't necessarily need answers to all of them; any advice that helps me reach that goal would be appreciated.

  1. After exporting my custom Brevitas model to ONNX and replacing the ONNX file in $FINN_EXAMPLES/build/vgg10-radioml, I encountered numerous errors. Is $FINN_EXAMPLES/build/vgg10-radioml/build.py written specifically for the vgg10-radioml model, or can the same build.py be used to synthesize my custom model?

  2. If the source code that defines the vgg10-radioml model before ONNX export is available, I think I could build my custom model on top of it and synthesize it with FINN more smoothly, since that code presumably already follows FINN's constraints. Could I obtain the vgg10-radioml model code (even if it is not based on Brevitas)?

Thank you.

fpjentzsch (Contributor) commented

Hi,
I think the FINN GitHub discussions might be a better place to discuss this: https://github.com/Xilinx/finn/discussions

The model should be based on this notebook and you can refer to this related thread for how to "pre-process" the exported model for FINN, although the export in the "FINN-ONNX" format has since been deprecated in favor of the QONNX export.
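
In case it helps, here is a minimal sketch of what the QONNX export and pre-processing can look like, assuming a recent Brevitas release that provides export_qonnx and an installed qonnx package; the toy model, input shape, and file names below are placeholders, not the notebook's code:

```python
# Minimal sketch: Brevitas -> QONNX export, then graph cleanup before FINN.
# TinyQuantNet, the input shape and the file names are placeholders.
import torch
import torch.nn as nn
from brevitas.nn import QuantConv1d, QuantIdentity, QuantReLU
from brevitas.export import export_qonnx          # QONNX export (recent Brevitas)
from qonnx.util.cleanup import cleanup            # shape inference, folding, tidy-up

class TinyQuantNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant_inp = QuantIdentity(bit_width=8, return_quant_tensor=True)
        self.conv = QuantConv1d(2, 16, kernel_size=3, weight_bit_width=4, bias=False)
        self.relu = QuantReLU(bit_width=4)

    def forward(self, x):
        return self.relu(self.conv(self.quant_inp(x)))

model = TinyQuantNet().eval()
dummy_input = torch.randn(1, 2, 1024)             # match your model's input shape

# Export in the QONNX format (the FINN-ONNX export path is deprecated).
export_qonnx(model, dummy_input, export_path="model_qonnx.onnx")

# Tidy the exported graph so the FINN build flow gets a clean starting point.
cleanup("model_qonnx.onnx", out_file="model_clean.onnx")
```

The cleaned-up file can then be handed to the FINN builder, whose qonnx_to_finn step takes over the conversion that the old FINN-ONNX pre-processing used to require.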

In general, you will most likely need to customize the build steps to fit your new model, as you can tell by the different "custom steps" that are used for the different finn-examples. For example, the VGG10 requires additional transformations to deal with the 1D convolution, while a ResNet will require additional care to streamline residual connections.
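
As a rough illustration of that kind of customization (explicitly not the actual vgg10-radioml build.py), a user-defined step can be slotted into the builder's step list. The step names, the transformation used, the board, and the performance targets below are assumptions and vary between FINN versions:

```python
# Rough sketch of a customized FINN dataflow build, mixing default step names
# with a user-defined step. Step names, the transformation used, the board and
# the performance targets are illustrative assumptions; they differ between
# FINN versions and are NOT the actual vgg10-radioml build.py.
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg
from qonnx.transformation.change_3d_to_4d import Change3DTo4DTensors

def step_pre_streamline(model, cfg):
    # Example custom step: lift 1D-convolution (3D tensor) graphs to the 4D
    # layout that FINN's streamlining and hardware conversion expect.
    model = model.transform(Change3DTo4DTensors())
    return model

cfg = build_cfg.DataflowBuildConfig(
    output_dir="output_custom_model",
    synth_clk_period_ns=10.0,
    target_fps=10000,
    board="ZCU104",                                  # placeholder board
    shell_flow_type=build_cfg.ShellFlowType.VIVADO_ZYNQ,
    generate_outputs=[build_cfg.DataflowOutputType.ESTIMATE_REPORTS],
    steps=[
        "step_qonnx_to_finn",        # convert the QONNX export into FINN's dialect
        "step_tidy_up",
        step_pre_streamline,         # custom step inserted into the flow
        "step_streamline",
        "step_convert_to_hw",        # "step_convert_to_hls" on older FINN versions
        "step_create_dataflow_partition",
        "step_specialize_layers",
        "step_target_fps_parallelization",
        "step_apply_folding_config",
        "step_generate_estimate_reports",
    ],
)

build.build_dataflow_cfg("model_clean.onnx", cfg)
```

For complete, working step lists per network, the build.py files under finn-examples/build are the best reference.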
