Merge pull request #88 from Xilinx/feature/gtsrb
[GTSRB] Add README and rename notebook
auphelia authored May 3, 2024
2 parents 050a5ec + e0f4bb3 commit e18241a
Showing 3 changed files with 28 additions and 2 deletions.
26 changes: 26 additions & 0 deletions build/gtsrb/README.md
@@ -0,0 +1,26 @@
# Brevitas GTSRB example

This example uses the binarized CNV topology from the paper [FINN: A Framework for Fast, Scalable Binarized Neural Network Inference](https://arxiv.org/abs/1612.07119), trained on the [German Traffic Sign Recognition Benchmark (GTSRB)](https://benchmark.ini.rub.de/gtsrb_news.html) dataset.

## Build bitfiles for GTSRB

0. Ensure you have performed the *Setup* steps in the top-level README for setting up the FINN requirements and environment variables.

1. Run the `download-model.sh` script under the `models` directory to download the pretrained QONNX model. You should have `gtsrb/models/cnv_1w1a_gtsrb.onnx` as a result.

2. Launch the build as follows:
```shell
# update this according to where you cloned this repo:
FINN_EXAMPLES=/path/to/finn-examples
# cd into finn submodule
cd $FINN_EXAMPLES/build/finn
# launch the build on the gtsrb folder
./run-docker.sh build_custom $FINN_EXAMPLES/build/gtsrb
```
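Under the hood, `build_custom` runs the `build.py` script in the target folder through FINN's dataflow builder. As a rough, hypothetical sketch of such a configuration (class and option names come from FINN's `finn.builder` package; the output directory name, clock period, board, and output list below are assumptions, and the actual `build.py` shipped with this example may set different options):

```python
# Hypothetical sketch of a build.py driven by FINN's dataflow builder.
# The concrete values here are assumptions, not the example's real settings.
from finn.builder.build_dataflow import build_dataflow_cfg
from finn.builder.build_dataflow_config import (
    DataflowBuildConfig,
    DataflowOutputType,
    ShellFlowType,
)

cfg = DataflowBuildConfig(
    output_dir="output_cnv_1w1a_gtsrb_Pynq-Z1",  # assumed naming
    synth_clk_period_ns=10.0,                    # 100 MHz target clock (assumption)
    board="Pynq-Z1",                             # assumed target board
    shell_flow_type=ShellFlowType.VIVADO_ZYNQ,
    generate_outputs=[
        DataflowOutputType.ESTIMATE_REPORTS,
        DataflowOutputType.BITFILE,
        DataflowOutputType.PYNQ_DRIVER,
        DataflowOutputType.DEPLOYMENT_PACKAGE,
    ],
)

build_dataflow_cfg("models/cnv_1w1a_gtsrb.onnx", cfg)
```

This is a configuration fragment meant to illustrate the shape of a FINN build script; it requires a working FINN environment (normally provided by `run-docker.sh`) to actually execute.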

3. The generated outputs will be under `gtsrb/output_<topology>_<board>`. You can find a description of the generated files [here](https://finn-dev.readthedocs.io/en/latest/command_line.html#simple-dataflow-build-mode).
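The `output_<topology>_<board>` naming convention can be split back into its parts with a small helper like the following (purely illustrative; the helper name and the directory names shown are assumptions, not part of the FINN tooling):

```python
import re

# Hypothetical helper: split an output directory name of the form
# "output_<topology>_<board>" into its topology and board parts.
# Assumes the board name contains no underscores.
def parse_output_dir(name):
    m = re.fullmatch(r"output_(?P<topology>.+)_(?P<board>[^_]+)", name)
    if m is None:
        raise ValueError(f"unexpected directory name: {name}")
    return m.group("topology"), m.group("board")

print(parse_output_dir("output_cnv-w1a1_Pynq-Z1"))  # → ('cnv-w1a1', 'Pynq-Z1')
```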

## Where did the ONNX model files come from?

The model is part of the QONNX model zoo and gets directly downloaded from [here](https://github.com/fastmachinelearning/qonnx_model_zoo/tree/feature/gtsrb_cnv/models/GTSRB/Brevitas_CNV1W1A).
4 changes: 2 additions & 2 deletions build/vgg10-radioml/README.md
@@ -12,7 +12,7 @@ Due to the 1-dimensional topology in VGG10 we use a specialized build script tha

0. Ensure you have performed the *Setup* steps in the top-level README for setting up the FINN requirements and environment variables.

1. Run the `download_vgg10.sh` script under the `models` directory to download the pretrained VGG10 ONNX model. You should have e.g. `vgg10-radioml/models/radioml_w4a4_small_tidy.onnx` as a result.
1. Run the `download_vgg10.sh` script under the `models` directory to download the pretrained VGG10 ONNX model. You should have `vgg10-radioml/models/radioml_w4a4_small_tidy.onnx` as a result.

2. Launch the build as follows:
```shell
@@ -24,7 +24,7 @@ cd $FINN_EXAMPLES/build/finn
./run-docker.sh build_custom $FINN_EXAMPLES/build/vgg10
```

5. The generated outputs will be under `vgg10-radioml/output_<topology>_<board>`. You can find a description of the generated files [here](https://finn-dev.readthedocs.io/en/latest/command_line.html#simple-dataflow-build-mode).
3. The generated outputs will be under `vgg10-radioml/output_<topology>_<board>`. You can find a description of the generated files [here](https://finn-dev.readthedocs.io/en/latest/command_line.html#simple-dataflow-build-mode).

## Where did the ONNX model files come from?

