Merge pull request #928 from Xilinx/refactor/rtl_integration
Refactoring of RTL/HLS component integration
auphelia authored Mar 27, 2024
2 parents 0ecbe6a + b9d4e62 commit 68b1a6d
Showing 225 changed files with 18,996 additions and 11,058 deletions.
6 changes: 6 additions & 0 deletions AUTHORS.rst
@@ -28,3 +28,9 @@ Contributors
* Matthias Gehre (@mgehre-amd)
* Hugo Le Blevec (@hleblevec)
* Patrick Geel (@patrickgeel)
* John Monks (@jmonks-amd)
* Tim Paine (@timkpaine)
* Linus Jungemann (@LinusJungemann)
* Shashwat Khandelwal (@shashwat1198)
* Ian Colbert (@i-colbert)
* Rachit Garg (@rstar900)
10 changes: 0 additions & 10 deletions CHANGELOG.rst

This file was deleted.

56 changes: 55 additions & 1 deletion CONTRIBUTING.md
@@ -29,6 +29,60 @@ Please follow the steps below and be sure that your contribution complies with our guidelines:
1. The <a href="https://github.com/Xilinx/finn" target="_blank">main branch</a> should always be treated as stable and clean. Only hot fixes may be pull-requested against it; a hot fix should be reserved for critical problems that would break many things if left unaddressed.
2. For new features, smaller bug fixes, doc updates, and many other fixes, users should pull request against the <a href="https://github.com/Xilinx/finn/tree/dev" target="_blank">development branch</a>.

-3. We will review your contribution and, if any additional fixes or modifications are
+3. Sign Your Work

Please use the *Signed-off-by* line at the end of your patch, which indicates that you accept the Developer Certificate of Origin (DCO) defined by https://developercertificate.org/, reproduced below:

```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```

You can enable Signed-off-by automatically by adding the `-s` flag to the `git commit` command.

Here is an example Signed-off-by line, which indicates that the contributor accepts the DCO:

```
This is my commit message
Signed-off-by: Jane Doe <jane.doe@example.com>
```
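If you would like to see how this fits into the commit command itself, here is a small illustrative sketch (the identity and message are placeholders; `git commit -s` takes the sign-off identity from your git config):

```
# configure your identity once (example values)
git config user.name "Jane Doe"
git config user.email "jane.doe@example.com"

# -s appends the Signed-off-by line automatically
git commit -s -m "Fix edge case in streamlining transform"
```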

4. We will review your contribution and, if any additional fixes or modifications are
necessary, may provide feedback to guide you. When accepted, your pull request will
be merged to the repository. If you have more questions please contact us.
3 changes: 2 additions & 1 deletion LICENSE.txt
@@ -1,4 +1,5 @@
-Copyright (c) 2020, Xilinx
+Copyright (C) 2020-2022, Xilinx, Inc.
+Copyright (C) 2022-2024, Advanced Micro Devices, Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
7 changes: 3 additions & 4 deletions README.md
@@ -2,13 +2,12 @@



<img align="left" src="https://raw.githubusercontent.com/Xilinx/finn/github-pages/docs/img/finn-stack.png" alt="drawing" style="margin-right: 20px" width="250"/>
<img align="left" src="https://raw.githubusercontent.com/Xilinx/finn/github-pages/docs/img/finn-stack.PNG" alt="drawing" style="margin-right: 20px" width="250"/>

[![GitHub Discussions](https://img.shields.io/badge/discussions-join-green)](https://github.com/Xilinx/finn/discussions)
[![ReadTheDocs](https://readthedocs.org/projects/finn/badge/?version=latest&style=plastic)](http://finn.readthedocs.io/)

-FINN is an experimental framework from Xilinx Research Labs to explore deep neural network
-inference on FPGAs.
+FINN is an experimental framework from the Integrated Communications and AI Lab of AMD Research & Advanced Development to explore deep neural network inference on FPGAs.
It specifically targets <a href="https://github.com/maltanar/qnn-inference-examples" target="_blank">quantized neural
networks</a>, with emphasis on
generating dataflow-style architectures customized for each network.
@@ -28,7 +27,7 @@

## Documentation

-You can view the documentation on [readthedocs](https://finn.readthedocs.io) or build them locally using `python setup.py doc` from inside the Docker container. Additionally, there is a series of [Jupyter notebook tutorials](https://github.com/Xilinx/finn/tree/main/notebooks), which we recommend running from inside Docker for a better experience.
+You can view the documentation on [readthedocs](https://finn.readthedocs.io). Additionally, there is a series of [Jupyter notebook tutorials](https://github.com/Xilinx/finn/tree/main/notebooks), which we recommend running from inside Docker for a better experience.
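For example, the notebooks can be served from inside the Docker container with the repo's launcher script (an illustrative invocation; `run-docker.sh` documents the full set of options):

```
./run-docker.sh notebook
```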

## Community

5 changes: 3 additions & 2 deletions docker/Dockerfile.finn
@@ -1,4 +1,5 @@
-# Copyright (c) 2021, Xilinx
+# Copyright (C) 2021-2022, Xilinx, Inc.
+# Copyright (C) 2022-2024, Advanced Micro Devices, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -27,7 +28,7 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

FROM ubuntu:jammy-20230126
LABEL maintainer="Yaman Umuroglu <yamanu@xilinx.com>"
LABEL maintainer="Jakoba Petri-Koenig <jakoba.petri-koenig@amd.com>, Yaman Umuroglu <yaman.umuroglu@amd.com>"

ARG XRT_DEB_VERSION="xrt_202220.2.14.354_22.04-amd64-xrt"

10 changes: 5 additions & 5 deletions docs/finn/brevitas_export.rst
@@ -8,11 +8,11 @@ Brevitas Export
:scale: 70%
:align: center

-FINN expects an ONNX model as input. This can be a model trained with `Brevitas <https://github.com/Xilinx/brevitas>`_. Brevitas is a PyTorch library for quantization-aware training and the FINN Docker image comes with several `example Brevitas networks <https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq>`_. Brevitas provides an export of a quantized network in ONNX representation in several flavors.
-Two of the Brevitas-exported ONNX variants can be ingested by FINN:
-
-* FINN-ONNX: Quantized weights exported as tensors with additional attributes to mark low-precision datatypes. Quantized activations exported as MultiThreshold nodes.
-* QONNX: All quantization is represented using Quant, BinaryQuant or Trunc nodes. QONNX must be converted into FINN-ONNX by :py:mod:`finn.transformation.qonnx.convert_qonnx_to_finn`
+FINN expects an ONNX model as input. This can be a model trained with `Brevitas <https://github.com/Xilinx/brevitas>`_. Brevitas is a PyTorch library for quantization-aware training and the FINN Docker image comes with several `example Brevitas networks <https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq>`_.
+Brevitas provides an export of a quantized network in QONNX representation, which is the format that can be ingested by FINN.
+In a QONNX graph, all quantization is represented using Quant, BinaryQuant or Trunc nodes.
+QONNX must be converted into FINN-ONNX by :py:mod:`finn.transformation.qonnx.convert_qonnx_to_finn`. FINN-ONNX is the intermediate representation (IR) FINN uses internally.
+In this IR, quantized weights are indicated through tensors with additional attributes to mark low-precision datatypes and quantized activations are expressed as MultiThreshold nodes.

To work with either type of ONNX model, it is loaded into a :ref:`modelwrapper` provided by FINN.
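As a minimal sketch of that conversion (the file names are placeholders; the transformation module is the one named above)::

    from qonnx.core.modelwrapper import ModelWrapper
    from finn.transformation.qonnx.convert_qonnx_to_finn import ConvertQONNXtoFINN

    # load the Brevitas-exported QONNX model into FINN's ModelWrapper
    model = ModelWrapper("model_qonnx.onnx")
    # rewrite Quant/BinaryQuant/Trunc nodes into the FINN-ONNX IR
    model = model.transform(ConvertQONNXtoFINN())
    model.save("model_finn.onnx")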

48 changes: 28 additions & 20 deletions docs/finn/command_line.rst
@@ -20,15 +20,15 @@ two command line entry points for productivity and ease-of-use:
Jupyter notebook as a starting point, visualizing the model at intermediate
steps and adding calls to new transformations as needed.
Once you have a working flow, you can implement a command line entry for this
by using the "advanced mode" described here.
by using the "advanced mode".


Simple dataflow build mode
--------------------------

This mode is intended for simpler networks whose topologies resemble the
FINN end-to-end examples.
-It runs a fixed build flow spanning tidy-up, streamlining, HLS conversion
+It runs a fixed build flow spanning tidy-up, streamlining, HW conversion
and hardware synthesis.
It can be configured to produce different outputs, including stitched IP for
integration in Vivado IPI as well as bitfiles.
@@ -43,7 +43,9 @@ To use it, first create a folder with the necessary configuration and model files:
3. Create a JSON file with the build configuration. It must be named ``dataflow_build_dir/dataflow_build_config.json``.
Read more about the build configuration options on :py:mod:`finn.builder.build_dataflow_config.DataflowBuildConfig`.
You can find an example .json file under ``src/finn/qnn-data/build_dataflow/dataflow_build_config.json``; a minimal sketch is also shown after this list.
-4. (Optional) create a JSON file with the folding configuration. It must be named ``dataflow_build_dir/folding_config.json``.
+4. (Optional) create a JSON file with the specialize layers configuration. It must be named ``dataflow_build_dir/specialize_layers_config.json``
+You can find an example .json file under ``src/finn/qnn-data/build_dataflow/specialize_layers_config.json``.
+5. (Optional) create a JSON file with the folding configuration. It must be named ``dataflow_build_dir/folding_config.json``.
You can find an example .json file under ``src/finn/qnn-data/build_dataflow/folding_config.json``.
Instead of specifying the folding configuration, you can use the `target_fps` option in the build configuration
to control the degree of parallelization for your network.
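For orientation, a minimal ``dataflow_build_config.json`` could look like the sketch below; the values are illustrative, and :py:mod:`finn.builder.build_dataflow_config.DataflowBuildConfig` remains the authoritative reference for the available fields::

    {
      "output_dir": "output_tfc_w1a1_Pynq-Z1",
      "synth_clk_period_ns": 10.0,
      "board": "Pynq-Z1",
      "shell_flow_type": "vivado_zynq",
      "target_fps": 100000,
      "generate_outputs": ["estimate_reports", "stitched_ip", "bitfile"]
    }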
@@ -59,25 +61,28 @@ as it goes through numerous steps:

.. code-block:: none
-Building dataflow accelerator from /home/maltanar/sandbox/build_dataflow/model.onnx
+Building dataflow accelerator from build_dataflow/model.onnx
Outputs will be generated at output_tfc_w1a1_Pynq-Z1
Build log is at output_tfc_w1a1_Pynq-Z1/build_dataflow.log
-Running step: step_tidy_up [1/16]
-Running step: step_streamline [2/16]
-Running step: step_convert_to_hls [3/16]
-Running step: step_create_dataflow_partition [4/16]
-Running step: step_target_fps_parallelization [5/16]
-Running step: step_apply_folding_config [6/16]
-Running step: step_generate_estimate_reports [7/16]
-Running step: step_hls_codegen [8/16]
-Running step: step_hls_ipgen [9/16]
-Running step: step_set_fifo_depths [10/16]
-Running step: step_create_stitched_ip [11/16]
-Running step: step_measure_rtlsim_performance [12/16]
-Running step: step_make_pynq_driver [13/16]
-Running step: step_out_of_context_synthesis [14/16]
-Running step: step_synthesize_bitfile [15/16]
-Running step: step_deployment_package [16/16]
+Running step: step_qonnx_to_finn [1/19]
+Running step: step_tidy_up [2/19]
+Running step: step_streamline [3/19]
+Running step: step_convert_to_hw [4/19]
+Running step: step_create_dataflow_partition [5/19]
+Running step: step_specialize_layers [6/19]
+Running step: step_target_fps_parallelization [7/19]
+Running step: step_apply_folding_config [8/19]
+Running step: step_minimize_bit_width [9/19]
+Running step: step_generate_estimate_reports [10/19]
+Running step: step_hw_codegen [11/19]
+Running step: step_hw_ipgen [12/19]
+Running step: step_set_fifo_depths [13/19]
+Running step: step_create_stitched_ip [14/19]
+Running step: step_measure_rtlsim_performance [15/19]
+Running step: step_out_of_context_synthesis [16/19]
+Running step: step_synthesize_bitfile [17/19]
+Running step: step_make_pynq_driver [18/19]
+Running step: step_deployment_package [19/19]
You can read a brief description of what each step does on
@@ -99,6 +104,7 @@ The following outputs will be generated regardless of which particular outputs are selected:
* ``build_dataflow.log`` is the build logfile that will contain any warnings/errors
* ``time_per_step.json`` will report the time (in seconds) each build step took
* ``final_hw_config.json`` will contain the final (after parallelization, FIFO sizing etc) hardware configuration for the build
* ``template_specialize_layers_config.json`` is an example json file that can be used to set the specialize layers config
* ``intermediate_models/`` will contain the ONNX file(s) produced after each build step


@@ -206,3 +212,5 @@ You can launch the desired custom build flow using:
This will mount the specified folder into the FINN Docker container and launch
the build flow. If ``<name-of-build-flow>`` is not specified it will default to ``build``
and thus execute ``build.py``. If it is specified, it will be ``<name-of-build-flow>.py``.

If you would like to learn more about advanced builder settings, please have a look at `our tutorial about this topic <https://github.com/Xilinx/finn/blob/main/notebooks/advanced/4_advanced_builder_settings.ipynb>`_.
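As a rough sketch, a custom ``build.py`` typically constructs a build configuration and hands it to the builder. The model path, board and output selection below are illustrative assumptions, not part of the shipped examples::

    import finn.builder.build_dataflow as build
    import finn.builder.build_dataflow_config as build_cfg

    # a minimal config; DataflowBuildConfig documents all available fields
    cfg = build_cfg.DataflowBuildConfig(
        output_dir="output_custom",
        synth_clk_period_ns=10.0,
        board="Pynq-Z1",
        shell_flow_type=build_cfg.ShellFlowType.VIVADO_ZYNQ,
        generate_outputs=[build_cfg.DataflowOutputType.ESTIMATE_REPORTS],
    )
    # run the dataflow build flow on the given model
    build.build_dataflow_cfg("model.onnx", cfg)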
2 changes: 1 addition & 1 deletion docs/finn/conf.py
@@ -19,7 +19,7 @@
# -- Project information -----------------------------------------------------

project = "FINN"
copyright = "2020, Xilinx"
copyright = "2020-2022, Xilinx, 2022-2024, AMD"
author = "Y. Umuroglu and J. Petri-Koenig"


31 changes: 12 additions & 19 deletions docs/finn/developers.rst
@@ -10,7 +10,7 @@ Power users may also find this information useful.
Prerequisites
================

-Before starting to do development on FINN it's a good idea to start
+Before starting to do development on FINN it is a good idea to start
with understanding the basics as a user. Going through all of the
:ref:`tutorials` is strongly recommended if you haven't already done so.
Additionally, please review the documentation available on :ref:`internals`.
@@ -61,7 +61,7 @@ further detailed below:
Docker images
===============

-If you want to add new dependencies (packages, repos) to FINN it's
+If you want to add new dependencies (packages, repos) to FINN it is
important to understand how we handle this in Docker.

The finn.dev image is built and launched as follows:
@@ -70,7 +70,7 @@

2. run-docker.sh launches the build of the Docker image with `docker build` (unless ``FINN_DOCKER_PREBUILT=1``). Docker image is built from docker/Dockerfile.finn using the following steps:

-* Base: PyTorch dev image
+* Base: Ubuntu 22.04 LTS image
* Set up apt dependencies: apt-get install a few packages for verilator and
* Set up pip dependencies: Python packages FINN depends on are listed in requirements.txt, which is copied into the container and pip-installed. Some additional packages (such as Jupyter and Netron) are also installed.
* Install XRT deps, if needed: For Vitis builds we need to install the extra dependencies for XRT. This is only triggered if the image is built with the INSTALL_XRT_DEPS=1 argument.
@@ -84,9 +84,9 @@

4. Entrypoint script (docker/finn_entrypoint.sh) upon launching container performs the following:

-* Source Vivado settings64.sh from specified path to make vivado and vivado_hls available.
-* Download PYNQ board files into the finn root directory, unless they already exist.
-* Source Vitits settings64.sh if Vitis is mounted.
+* Source Vivado settings64.sh from specified path to make vivado and vitis_hls available.
+* Download board files into the finn root directory, unless they already exist or ``FINN_SKIP_BOARD_FILES=1``.
+* Source Vitis settings64.sh if Vitis is mounted.

5. Depending on the arguments to run-docker.sh a different application is launched. run-docker.sh notebook launches a Jupyter server for the tutorials, whereas run-docker.sh build_custom and run-docker.sh build_dataflow trigger a dataflow build (see documentation). Running without arguments yields an interactive shell. See run-docker.sh for other options.
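For example (directory paths are placeholders)::

    bash ./run-docker.sh notebook                           # Jupyter server for the tutorials
    bash ./run-docker.sh build_dataflow /path/to/build_dir  # simple dataflow build mode
    bash ./run-docker.sh build_custom /path/to/build_dir    # custom build flow
    bash ./run-docker.sh                                    # interactive shell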

Expand All @@ -106,7 +106,7 @@ Linting
We use a pre-commit hook to auto-format Python code and check for issues.
See https://pre-commit.com/ for installation. Once you have pre-commit, you can install
the hooks into your local clone of the FINN repo.
-It's recommended to do this **on the host** and not inside the Docker container:
+It is recommended to do this **on the host** and not inside the Docker container:

::

@@ -119,7 +119,7 @@ you may have to fix it manually, then run `git commit` once again.
The checks are configured in .pre-commit-config.yaml under the repo root.

Testing
-=======
+========

Tests are vital to keep FINN running. All the FINN tests can be found at https://github.com/Xilinx/finn/tree/main/tests.
These tests can be roughly grouped into three categories:
@@ -132,7 +132,7 @@ These tests can be roughly grouped into three categories:

Additionally, qonnx, brevitas and finn-hlslib also include their own test suites.
The full FINN compiler test suite
-(which will take several hours to run and require a PYNQ board) can be executed
+(which will take several hours to run) can be executed
by:

::
@@ -146,7 +146,7 @@ requiring Vivado or as slow-running tests:

bash ./run-docker.sh quicktest

-When developing a new feature it's useful to be able to run just a single test,
+When developing a new feature it is useful to be able to run just a single test,
or a group of tests that e.g. share the same prefix.
You can do this inside the Docker container
from the FINN root directory as follows:
@@ -178,16 +178,9 @@ FINN provides two types of documentation:
* manually written documentation, like this page
* autogenerated API docs from Sphinx

-Everything is built using Sphinx, which is installed into the finn.dev
-Docker image. You can build the documentation locally by running the following
-inside the container:
-
-::
-
-  python setup.py docs
+Everything is built using Sphinx.

-You can view the generated documentation on build/html/index.html.
-The documentation is also built online by readthedocs:
+The documentation is built online by readthedocs:

* finn.readthedocs.io contains the docs from the master branch
* finn-dev.readthedocs.io contains the docs from the dev branch
6 changes: 5 additions & 1 deletion docs/finn/end_to_end_flow.rst
@@ -2,7 +2,11 @@
End-to-End Flow
***************

-The following image shows an example end-to-end flow in FINN, starting from a trained PyTorch/Brevitas network and going all the way to a running FPGA accelerator.
+The following image shows an example end-to-end flow in FINN for a PYNQ board.
+Please note that you can build an IP block for your neural network **for every AMD (Xilinx) FPGA**, but we only provide automatic system integration for a limited number of boards.
+However, you can use Vivado to integrate an IP block generated by FINN into your own design.
+
+The example flow in this image starts from a trained PyTorch/Brevitas network and goes all the way to a running FPGA accelerator.
As the picture shows, FINN is highly modular: the flow can be stopped at any point and the intermediate result can be used for further processing or other purposes. This enables a wide range of users to benefit from FINN, even if they do not use the whole flow.
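As a small illustration of this property (file names are placeholders), an intermediate model written out by one step can be reloaded, transformed further and saved for whatever comes next::

    from qonnx.core.modelwrapper import ModelWrapper
    from qonnx.transformation.infer_shapes import InferShapes

    # pick up an intermediate result of the flow
    model = ModelWrapper("intermediate_models/step_tidy_up.onnx")
    # apply one more transformation
    model = model.transform(InferShapes())
    # stop here, or hand the result to the next stage
    model.save("model_for_next_stage.onnx")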

.. image:: ../../notebooks/end2end_example/bnn-pynq/finn-design-flow-example.svg