Pyxsi enablement for alternative rtl simulation #1213

Draft · wants to merge 53 commits into base: dev

53 commits:
42cbf47
[Deps] add pyxsi fork and its deps
maltanar Aug 8, 2024
b6dba8a
[Util] add util to fetch Vivado path
maltanar Aug 8, 2024
1468dbe
[Sim] add a first draft of pyxsi-based simulation, untested
maltanar Aug 8, 2024
6907167
[Sim] try enabling pyxsi logging and tracing
maltanar Aug 8, 2024
5a5461c
[Op] experiment with enabling rtlsim with pyxsi
maltanar Aug 8, 2024
a50cdb6
[Deps] update pyxsi
maltanar Aug 14, 2024
77680dd
[Sim] update pyxsi API to include verilog spec, assume precompiled
maltanar Aug 14, 2024
1804c47
[Deps] ensure pyxsi is compiled + set envvars for its use
maltanar Aug 19, 2024
64e9fc1
[Sim] remove LD_LIBRARY_PATH setting from pyxsi util, doesn't work
maltanar Aug 19, 2024
d480e49
[Deps] update pyxsi
maltanar Aug 20, 2024
3560933
[AddStreams] set rtlsim_cycles for pyxsi prototyping
maltanar Aug 20, 2024
212927c
[Infra] only trigger pyxsi compile if Vivado detected
maltanar Aug 21, 2024
4fd5610
[Sim] move pyxsiutils to own repo, update deps, fix entrypoint chk
maltanar Aug 21, 2024
6e4d085
[pyxsi] trialling RPC server as LD_LIBRARY_PATH workaround
maltanar Sep 9, 2024
4558314
[Infra] remove global LD_LIBRARY_PATH setting from entrypoint
maltanar Oct 1, 2024
aff109f
[pyxsi] add license text to rpcserver&client
maltanar Oct 1, 2024
39ac22e
[Infra] start pyxsi rpc server in entrypoint
maltanar Oct 3, 2024
396cb8a
[Deps] update pyxsi
maltanar Oct 3, 2024
948b5b3
[Infra] redirect pyxsi rpcserver outputs to own logfile
maltanar Oct 3, 2024
159e228
[pyxsi] rework RPC interface to exclude rtlsim_multi_io
maltanar Oct 3, 2024
6250c8f
[Infra] don't launch pyxsi RPC server on startup (will be as needed)
maltanar Oct 4, 2024
42cbe0f
[pyxsi] start (and then terminate when done) one pyxsi RPC server per…
maltanar Oct 4, 2024
db2d9a1
Merge branch 'dev' into feature/pyxsi_integration
auphelia Oct 11, 2024
890ce81
[rtlsim] pyxsi for node-by-node rtlsim enablement via attribute
maltanar Oct 10, 2024
c04a4a8
[Deps] update pyxsi
maltanar Oct 10, 2024
b266a8a
[pyxsi] expose close_rtlsim() to exit cleanly
maltanar Oct 10, 2024
ab3c36f
[rtlsim] call close_rtlsim() to exit cleanly from pyxsi
auphelia Oct 11, 2024
fe7a942
[pyxsi] redirect rpcserver out to file for visible logging
maltanar Oct 10, 2024
655b904
[HLS] use subcore path only if it exists
maltanar Oct 10, 2024
acd88e5
[pyxsi] slightly more reliable start procedure for RPC server
maltanar Oct 10, 2024
c817adb
[prepare rtlsim] move rtlsim prep to rtlbackend for rtl layers
auphelia Oct 11, 2024
8d2daaf
[pyxsi] Make pyxsi import optional to allow GHA to work without insta…
auphelia Oct 14, 2024
1cc5f11
[PrepareRTLSim] Clean-up functions in rtl thresholding and move funct…
auphelia Oct 15, 2024
20e8b59
[rtlsim] enable stitched-IP rtlsim with pyxsi
maltanar Oct 11, 2024
cb6e08d
[Deps] update pyxsi
maltanar Oct 14, 2024
7f09b73
[Deps] update pyxsi
maltanar Oct 16, 2024
7018cfe
Closing the handle if the simulation times out
STFleming Oct 14, 2024
812bde4
[stitchedIP-rtlsim] Default rtlsim backend metadata prop to pyverilator
auphelia Oct 17, 2024
25443da
[prepare rtlsim] Clean-up functions in rtl mvu and vvu
auphelia Oct 22, 2024
afbf1fe
[execute node] switch to rtlsim multi io for all custom ops
auphelia Oct 22, 2024
de5cf61
[rtlsim] Delete obsolete rtlsim fct
auphelia Oct 22, 2024
743644b
Merge branch 'dev' into feature/pyxsi_integration
auphelia Oct 22, 2024
2437687
[prepare rtlsim] Clean up new rtl op fcts
auphelia Oct 25, 2024
cc30757
[AddStreams] Enable both rtlsim backends
auphelia Oct 25, 2024
e1a411b
[xsi] add a first C++ template draft for driving XSI rtlsim directly
maltanar Oct 21, 2024
600b999
[xsi] remodel C++ driver from rtlsim_multi_io in pyxsi
maltanar Oct 21, 2024
2223fed
[xsi] add comments to XSI C++ sim driver and Python fxn
maltanar Oct 22, 2024
02f9c4b
[xsi] fix missing template, parse and return results dict
maltanar Oct 22, 2024
2ae55c1
[xsi] util functions to read signals in C++ template
maltanar Oct 22, 2024
71a0964
[xsi] don't call prep_rtlsim_io_dict for dummy data
maltanar Oct 22, 2024
9dba7c2
[FIFO] introduce XSI-based FIFO sizing
maltanar Oct 22, 2024
69df427
[xsi] make tracing optional
maltanar Oct 22, 2024
eab118d
[FIFO] remove stitched IP and rtlsim metadata after FIFO sizing
maltanar Oct 22, 2024
10 changes: 8 additions & 2 deletions docker/Dockerfile.finn
@@ -65,12 +65,18 @@ RUN apt-get update && \
python-is-python3 \
python3-pip \
python3-setuptools-scm \
python3-venv
python3-venv \
pybind11-dev \
libfmt-dev \
libboost-dev \
libjansson-dev \
libgetdata-dev \
libtinfo5
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
RUN locale-gen "en_US.UTF-8"

# install Verilator from source to get the right version
RUN apt-get install -y git perl make autoconf g++ flex bison ccache libgoogle-perftools-dev numactl perl-doc libfl2 libfl-dev zlib1g zlib1g-dev
RUN apt-get install -y git perl make autoconf g++-10 flex bison ccache libgoogle-perftools-dev numactl perl-doc libfl2 libfl-dev zlib1g zlib1g-dev
RUN git clone https://github.com/verilator/verilator
RUN cd verilator && \
git checkout v4.224 && \
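The Dockerfile builds Verilator from source specifically to pin version v4.224 rather than taking whatever the distro ships. A hedged sketch of checking that the `verilator` binary on `PATH` actually matches that pin — the helper names and the exact `--version` output format are assumptions, not part of this PR:

```python
import re
import shutil
import subprocess

PINNED_VERILATOR_VERSION = "4.224"  # pin used in docker/Dockerfile.finn

def parse_verilator_version(version_output):
    """Extract the version number from `verilator --version` output,
    e.g. 'Verilator 4.224 2022-06-20' -> '4.224'; None if unparseable."""
    match = re.search(r"Verilator\s+(\d+\.\d+)", version_output)
    return match.group(1) if match else None

def check_verilator_pin():
    """Return True iff the verilator on PATH matches the pinned version."""
    if shutil.which("verilator") is None:
        return False
    out = subprocess.run(
        ["verilator", "--version"], capture_output=True, text=True
    ).stdout
    return parse_verilator_version(out) == PINNED_VERILATOR_VERSION
```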
16 changes: 16 additions & 0 deletions docker/finn_entrypoint.sh
@@ -105,6 +105,21 @@ else
fi
fi

if [ -z "${XILINX_VIVADO}" ]; then
yecho "pyxsi will be unavailable since Vivado was not found"
else
if [ -f "${FINN_ROOT}/deps/pyxsi/pyxsi.so" ]; then
gecho "Found pyxsi at ${FINN_ROOT}/deps/pyxsi/pyxsi.so"
else
OLDPWD=$(pwd)
cd ${FINN_ROOT}/deps/pyxsi
touch .dockerenv
make
cd $OLDPWD
fi
export PYTHONPATH=$PYTHONPATH:${FINN_ROOT}/deps/pyxsi:${FINN_ROOT}/deps/pyxsi/py
fi
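The entrypoint hunk above gates pyxsi on Vivado being present, compiles it lazily on first use, and extends `PYTHONPATH`. The same logic can be sketched in Python; `setup_pyxsi` and its return convention are illustrative assumptions mirroring the shell script, not part of the FINN codebase:

```python
import os
import subprocess

def setup_pyxsi(finn_root):
    """Mirror of the entrypoint logic: skip pyxsi when Vivado is absent,
    build it in place on first use, then extend PYTHONPATH.
    Returns True if pyxsi was made available."""
    if not os.environ.get("XILINX_VIVADO"):
        print("pyxsi will be unavailable since Vivado was not found")
        return False
    pyxsi_dir = os.path.join(finn_root, "deps", "pyxsi")
    if not os.path.isfile(os.path.join(pyxsi_dir, "pyxsi.so")):
        # first use inside the container: compile pyxsi in place
        subprocess.run(["make"], cwd=pyxsi_dir, check=True)
    extra = [pyxsi_dir, os.path.join(pyxsi_dir, "py")]
    os.environ["PYTHONPATH"] = os.pathsep.join(
        filter(None, [os.environ.get("PYTHONPATH", "")] + extra)
    )
    return True
```

Note that the shell version deliberately avoids touching `LD_LIBRARY_PATH`; per the commit history, setting it globally broke other tools, which is what motivated the RPC-server workaround.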

if [ -f "$HLS_PATH/settings64.sh" ];then
# source Vitis HLS env.vars
source $HLS_PATH/settings64.sh
@@ -129,6 +144,7 @@ if [ -d "$FINN_ROOT/.Xilinx" ]; then
mkdir "$HOME/.Xilinx/Vivado/"
cp "$FINN_ROOT/.Xilinx/Vivado/Vivado_init.tcl" "$HOME/.Xilinx/Vivado/"
gecho "Found Vivado_init.tcl and copied to $HOME/.Xilinx/Vivado/Vivado_init.tcl"

else
yecho "Unable to find $FINN_ROOT/.Xilinx/Vivado/Vivado_init.tcl"
fi
4 changes: 4 additions & 0 deletions fetch-repos.sh
@@ -39,6 +39,7 @@ XIL_BDF_COMMIT="8cf4bb674a919ac34e3d99d8d71a9e60af93d14e"
RFSOC4x2_BDF_COMMIT="13fb6f6c02c7dfd7e4b336b18b959ad5115db696"
KV260_BDF_COMMIT="98e0d3efc901f0b974006bc4370c2a7ad8856c79"
EXP_BOARD_FILES_MD5="226ca927a16ea4ce579f1332675e9e9a"
PYXSI_COMMIT="dc074bc1b3ecc2ab884531565d1aca6aa33ea5b9"

QONNX_URL="https://github.com/fastmachinelearning/qonnx.git"
FINN_EXP_URL="https://github.com/Xilinx/finn-experimental.git"
@@ -51,6 +52,7 @@ AVNET_BDF_URL="https://github.com/Avnet/bdf.git"
XIL_BDF_URL="https://github.com/Xilinx/XilinxBoardStore.git"
RFSOC4x2_BDF_URL="https://github.com/RealDigitalOrg/RFSoC4x2-BSP.git"
KV260_BDF_URL="https://github.com/Xilinx/XilinxBoardStore.git"
PYXSI_URL="https://github.com/maltanar/pyxsi.git"

QONNX_DIR="qonnx"
FINN_EXP_DIR="finn-experimental"
@@ -63,6 +65,7 @@ AVNET_BDF_DIR="avnet-bdf"
XIL_BDF_DIR="xil-bdf"
RFSOC4x2_BDF_DIR="rfsoc4x2-bdf"
KV260_SOM_BDF_DIR="kv260-som-bdf"
PYXSI_DIR="pyxsi"

# absolute path to this script, e.g. /home/user/bin/foo.sh
SCRIPT=$(readlink -f "$0")
@@ -126,6 +129,7 @@ fetch_repo $AVNET_BDF_URL $AVNET_BDF_COMMIT $AVNET_BDF_DIR
fetch_repo $XIL_BDF_URL $XIL_BDF_COMMIT $XIL_BDF_DIR
fetch_repo $RFSOC4x2_BDF_URL $RFSOC4x2_BDF_COMMIT $RFSOC4x2_BDF_DIR
fetch_repo $KV260_BDF_URL $KV260_BDF_COMMIT $KV260_SOM_BDF_DIR
fetch_repo $PYXSI_URL $PYXSI_COMMIT $PYXSI_DIR

# Can skip downloading of board files entirely if desired
if [ "$FINN_SKIP_BOARD_FILES" = "1" ]; then
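`fetch-repos.sh` pins every dependency — now including pyxsi — to an exact commit hash, so container builds stay reproducible regardless of upstream activity. A minimal Python sketch of the same pattern; the helper is an assumption, while the URL and commit hash come from the diff above:

```python
def fetch_repo_commands(url, commit, target_dir):
    """Sketch of the pinned-checkout pattern in fetch-repos.sh: clone the
    repo into target_dir, then check out the exact commit. Returns the
    git invocations as argument lists (not executed here)."""
    return [
        ["git", "clone", url, target_dir],
        ["git", "-C", target_dir, "checkout", commit],
    ]

# example: the pyxsi pin added by this PR
cmds = fetch_repo_commands(
    "https://github.com/maltanar/pyxsi.git",
    "dc074bc1b3ecc2ab884531565d1aca6aa33ea5b9",
    "deps/pyxsi",
)
```

Returning argument lists (rather than a shell string) keeps the sketch safe to pass to `subprocess.run` without quoting issues.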
@@ -404,6 +404,7 @@
"child_model = child_model.transform(CreateStitchedIP(test_fpga_part, target_clk_ns))\n",
"child_model = child_model.transform(PrepareRTLSim())\n",
"child_model.set_metadata_prop(\"exec_mode\",\"rtlsim\")\n",
"child_model.set_metadata_prop(\"rtlsim_backend\",\"pyverilator\")\n",
"child_model.save(build_dir + \"/tfc_w1_a1_dataflow_child.onnx\");"
]
},
2 changes: 2 additions & 0 deletions src/finn/builder/build_dataflow_steps.py
@@ -250,6 +250,8 @@ def prepare_for_stitched_ip_rtlsim(verify_model, cfg):
# set top-level prop for stitched-ip rtlsim and launch
verify_model.set_metadata_prop("exec_mode", "rtlsim")
# TODO make configurable
verify_model.set_metadata_prop("rtlsim_backend", "pyverilator")
# TODO make configurable
# verify_model.set_metadata_prop("rtlsim_trace", "trace.vcd")
return verify_model

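The hunk above hard-codes `pyverilator` as the stitched-IP rtlsim backend and leaves a `TODO make configurable`. One hedged way that TODO could be resolved — the helper name and the two-backend set are assumptions based on the backends this PR discusses, not an API the PR adds:

```python
VALID_RTLSIM_BACKENDS = {"pyverilator", "pyxsi"}  # assumed set per this PR

def select_rtlsim_backend(cfg_value=None, default="pyverilator"):
    """Hypothetical helper for the 'TODO make configurable' note: fall
    back to the pyverilator default when no backend is requested, and
    reject unknown names early with a clear error."""
    backend = cfg_value or default
    if backend not in VALID_RTLSIM_BACKENDS:
        raise ValueError(f"Unknown rtlsim backend: {backend}")
    return backend
```

The caller would then do `verify_model.set_metadata_prop("rtlsim_backend", select_rtlsim_backend(...))`, keeping the default behavior identical to the current hard-coded prop.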
64 changes: 29 additions & 35 deletions src/finn/core/onnx_exec.py
@@ -52,44 +52,38 @@ def execute_onnx(model, input_dict, return_full_exec_context=False, start_node=N
model_exec_mode = model.get_metadata_prop("exec_mode")
if (model_exec_mode is None) or (model_exec_mode == ""):
return execute_onnx_base(model, input_dict, return_full_exec_context, start_node, end_node)
elif model_exec_mode == "rtlsim":
# check sanity of model and then use stitched IP for rtlsim
if not model.check_all_tensor_shapes_specified():
raise Exception("Found unspecified tensor shapes, try infer_shapes")
ret = model.analysis(ta.nodes_topologically_sorted)
assert (
ret["nodes_topologically_sorted"] is True
), """Nodes must be
topologically sorted."""

if not model.check_all_tensor_shapes_specified():
raise Exception("Found unspecified tensor shapes, try infer_shapes")
ret = model.analysis(ta.nodes_topologically_sorted)
assert (
ret["nodes_topologically_sorted"] is True
), """Nodes must be
topologically sorted."""

graph = model.graph
# first, we need to make sure that every variable required by the graph has
# some buffer associated with it. this includes graph inputs (which includes
# the input data as well as the trained parameters) and the graph ValueInfo
# (intermediate tensors between layers)
# this is provided by the execution_context, which is a dict of np.ndarray
execution_context = model.make_empty_exec_context()
# fill in any inputs provided to this function
for inp_name in input_dict.keys():
if inp_name in execution_context:
if execution_context[inp_name].shape == input_dict[inp_name].shape:
execution_context[inp_name] = input_dict[inp_name]
else:
raise Exception(
"Shape mismatch for provided input %s: found %s expected %s "
% (
inp_name,
str(execution_context[inp_name].shape),
str(input_dict[inp_name].shape),
graph = model.graph
# first, we need to make sure that every variable required by the graph has
# some buffer associated with it. this includes graph inputs (which includes
# the input data as well as the trained parameters) and the graph ValueInfo
# (intermediate tensors between layers)
# this is provided by the execution_context, which is a dict of np.ndarray
execution_context = model.make_empty_exec_context()
# fill in any inputs provided to this function
for inp_name in input_dict.keys():
if inp_name in execution_context:
if execution_context[inp_name].shape == input_dict[inp_name].shape:
execution_context[inp_name] = input_dict[inp_name]
else:
raise Exception(
"Shape mismatch for provided input %s: found %s expected %s "
% (
inp_name,
str(execution_context[inp_name].shape),
str(input_dict[inp_name].shape),
)
)
)

# check if model has an execution mode set
# if None, execute model node by node using execute_node()
# if set to "rtlsim" execute model using pyverilator
model_exec_mode = model.get_metadata_prop("exec_mode")
if (model_exec_mode is None) or (model_exec_mode == ""):
return execute_onnx_base()
elif model_exec_mode == "rtlsim":
# use stitched IP for rtlsim
rtlsim_exec(model, execution_context)
else:
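The reworked `execute_onnx` first builds the execution context, then dispatches on the `exec_mode` metadata prop: empty or missing means node-by-node base execution, `"rtlsim"` means stitched-IP simulation. A minimal sketch of that dispatch logic, with the two executors as hypothetical callables standing in for `execute_onnx_base` and `rtlsim_exec`:

```python
def dispatch_execute(model_exec_mode, execute_base, execute_rtlsim):
    """Sketch of the dispatch in execute_onnx: an empty or missing
    exec_mode metadata prop selects node-by-node execution, 'rtlsim'
    selects stitched-IP simulation, anything else is rejected."""
    if not model_exec_mode:  # None or "" -> node-by-node execution
        return execute_base()
    if model_exec_mode == "rtlsim":
        return execute_rtlsim()
    raise ValueError(
        f"Metadata prop exec_mode={model_exec_mode!r} is not supported"
    )
```

Centralizing the dispatch like this also makes it clear where a per-backend branch (pyverilator vs. pyxsi) would slot in once the `rtlsim_backend` prop is consulted.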