From 0d685df84e312ac58329292726bff5c81526ac70 Mon Sep 17 00:00:00 2001
From: auphelia
Date: Tue, 30 Apr 2024 12:42:10 +0100
Subject: [PATCH 1/2] [GTSRB] Add README and rename notebook

---
 build/gtsrb/README.md                         | 26 +++++++++++++++++++
 ...=> 7_traffic_sign_recognition_gtsrb.ipynb} |  0
 2 files changed, 26 insertions(+)
 create mode 100644 build/gtsrb/README.md
 rename finn_examples/notebooks/{6_traffic_sign_recognition_gtsrb.ipynb => 7_traffic_sign_recognition_gtsrb.ipynb} (100%)

diff --git a/build/gtsrb/README.md b/build/gtsrb/README.md
new file mode 100644
index 0000000..0d12d1b
--- /dev/null
+++ b/build/gtsrb/README.md
@@ -0,0 +1,26 @@
+# Brevitas GTSRB example
+
+This is the binarized CNV topology from the paper [FINN: A Framework for Fast, Scalable Binarized Neural Network Inference](https://arxiv.org/abs/1612.07119) which is trained
+on the [German Traffic Sign Recognition Benchmark (GTSRB)](https://benchmark.ini.rub.de/gtsrb_news.html) dataset.
+
+## Build bitfiles for GTSRB
+
+0. Ensure you have performed the *Setup* steps in the top-level README for setting up the FINN requirements and environment variables.
+
+1. Run the `download-model.sh` script under the `models` directory to download the pretrained QONNX model. You should have e.g. `gtsrb/models/cnv_1w1a_gtsrb.onnx` as a result.
+
+2. Launch the build as follows:
+```SHELL
+# update this according to where you cloned this repo:
+FINN_EXAMPLES=/path/to/finn-examples
+# cd into finn submodule
+cd $FINN_EXAMPLES/build/finn
+# launch the build on the gtsrb folder
+./run-docker.sh build_custom $FINN_EXAMPLES/build/gtsrb
+```
+
+5. The generated outputs will be under `gtsrb/output__`. You can find a description of the generated files [here](https://finn-dev.readthedocs.io/en/latest/command_line.html#simple-dataflow-build-mode).
+
+## Where did the ONNX model files come from?
+
+The model is part of the QONNX model zoo and gets directly downloaded from [here](https://github.com/fastmachinelearning/qonnx_model_zoo/tree/feature/gtsrb_cnv/models/GTSRB/Brevitas_CNV1W1A).
diff --git a/finn_examples/notebooks/6_traffic_sign_recognition_gtsrb.ipynb b/finn_examples/notebooks/7_traffic_sign_recognition_gtsrb.ipynb
similarity index 100%
rename from finn_examples/notebooks/6_traffic_sign_recognition_gtsrb.ipynb
rename to finn_examples/notebooks/7_traffic_sign_recognition_gtsrb.ipynb

From e0f4bb30ea58bdde8ac6e6c7699e80652a6a4f1e Mon Sep 17 00:00:00 2001
From: auphelia
Date: Fri, 3 May 2024 10:26:06 +0100
Subject: [PATCH 2/2] [GTSRB+VGG] Cleanup readme for build

---
 build/gtsrb/README.md         | 4 ++--
 build/vgg10-radioml/README.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/build/gtsrb/README.md b/build/gtsrb/README.md
index 0d12d1b..f5179a1 100644
--- a/build/gtsrb/README.md
+++ b/build/gtsrb/README.md
@@ -7,7 +7,7 @@ on the [German Traffic Sign Recognition Benchmark (GTSRB)](https://benchmark.ini
 
 0. Ensure you have performed the *Setup* steps in the top-level README for setting up the FINN requirements and environment variables.
 
-1. Run the `download-model.sh` script under the `models` directory to download the pretrained QONNX model. You should have e.g. `gtsrb/models/cnv_1w1a_gtsrb.onnx` as a result.
+1. Run the `download-model.sh` script under the `models` directory to download the pretrained QONNX model. You should have `gtsrb/models/cnv_1w1a_gtsrb.onnx` as a result.
 
 2. Launch the build as follows:
 ```SHELL
@@ -19,7 +19,7 @@ cd $FINN_EXAMPLES/build/finn
 ./run-docker.sh build_custom $FINN_EXAMPLES/build/gtsrb
 ```
 
-5. The generated outputs will be under `gtsrb/output__`. You can find a description of the generated files [here](https://finn-dev.readthedocs.io/en/latest/command_line.html#simple-dataflow-build-mode).
+3. The generated outputs will be under `gtsrb/output__`. You can find a description of the generated files [here](https://finn-dev.readthedocs.io/en/latest/command_line.html#simple-dataflow-build-mode).
 
 ## Where did the ONNX model files come from?
 
diff --git a/build/vgg10-radioml/README.md b/build/vgg10-radioml/README.md
index 17a4524..18df19d 100755
--- a/build/vgg10-radioml/README.md
+++ b/build/vgg10-radioml/README.md
@@ -12,7 +12,7 @@ Due to the 1-dimensional topology in VGG10 we use a specialized build script tha
 
 0. Ensure you have performed the *Setup* steps in the top-level README for setting up the FINN requirements and environment variables.
 
-1. Run the `download_vgg10.sh` script under the `models` directory to download the pretrained VGG10 ONNX model. You should have e.g. `vgg10-radioml/models/radioml_w4a4_small_tidy.onnx` as a result.
+1. Run the `download_vgg10.sh` script under the `models` directory to download the pretrained VGG10 ONNX model. You should have `vgg10-radioml/models/radioml_w4a4_small_tidy.onnx` as a result.
 
 2. Launch the build as follows:
 ```SHELL
@@ -24,7 +24,7 @@ cd $FINN_EXAMPLES/build/finn
 ./run-docker.sh build_custom $FINN_EXAMPLES/build/vgg10
 ```
 
-5. The generated outputs will be under `vgg10-radioml/output__`. You can find a description of the generated files [here](https://finn-dev.readthedocs.io/en/latest/command_line.html#simple-dataflow-build-mode).
+3. The generated outputs will be under `vgg10-radioml/output__`. You can find a description of the generated files [here](https://finn-dev.readthedocs.io/en/latest/command_line.html#simple-dataflow-build-mode).
 
 ## Where did the ONNX model files come from?