[torch-frontend] refactor requirements.txt (#261)
* split `requirements.txt` into `build-requirements.txt`,
`test-requirements.txt`, `torch-cpu-requirements.txt`, and
`torch-cuda-requirements.txt`
qingyunqu authored Aug 23, 2024
1 parent b9a143b commit e02e4d6
Showing 13 changed files with 67 additions and 28 deletions.
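The net effect of the split is that a consumer now picks a torch flavor explicitly before installing build or test requirements. A minimal sketch of that choice (`pick_torch_reqs` is a hypothetical helper name, not part of this commit):

```shell
# Hypothetical helper mapping a torch flavor to the requirements file
# names introduced by this commit; not a function in the repository.
pick_torch_reqs() {
  case "$1" in
    cpu)  echo "torch-cpu-requirements.txt" ;;
    cuda) echo "torch-cuda-requirements.txt" ;;
    *)    echo "unknown torch flavor: $1" >&2; return 1 ;;
  esac
}

# Usage sketch, run from the byteir repo root:
#   python3 -m pip install -r "frontends/torch-frontend/$(pick_torch_reqs cuda)"
```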
2 changes: 1 addition & 1 deletion .github/workflows/torch-frontend-ci.yaml
@@ -41,5 +41,5 @@ jobs:
- name: Checkout byteir repo
uses: actions/checkout@v3
- name: Build and test TorchFrontend
run: ./frontends/torch-frontend/scripts/build_and_test.sh
run: python3 -m pip install -r ./frontends/torch-frontend/torch-cuda-requirements.txt && ./frontends/torch-frontend/scripts/build_and_test.sh
shell: bash
10 changes: 5 additions & 5 deletions frontends/README.md
@@ -2,21 +2,21 @@

ByteIR Frontends includes Tensorflow, PyTorch, and ONNX.

Each of them can generates mhlo dialects from the corresponding frontend.
Each of them can generate stablehlo dialects from the corresponding frontend.

Each frontend can be built independently with its corresponding requirements and dependencies.
Note that a frontend is not guaranteed to use the same versions of dependencies (e.g. LLVM) as the other frontends, for ease of development.

But each frontend will be guaranteed to generate compatible mhlo format with the ByteIR compiler.
But each frontend will be guaranteed to generate compatible stablehlo format with the ByteIR compiler.

## [TensorFlow](tf-frontend/README.md)
tf graph --> tf dialect --> mhlo dialect pipeline
tf graph --> tf dialect --> stablehlo dialect pipeline

## [PyTorch](torch-frontend/README.md)
PyTorch --> torch dialect --> mhlo dialect pipeline
PyTorch --> torch dialect --> stablehlo dialect pipeline

## [ONNX](onnx-frontend/README.md)
onnx graph --> onnx dialect --> mhlo dialect
onnx graph --> onnx dialect --> stablehlo dialect



2 changes: 1 addition & 1 deletion frontends/torch-frontend/README.md
@@ -1,5 +1,5 @@
# Torch Frontend
torch-frontend is a project to build customized torch model --> torch dialect --> mhlo dialect pipeline, where we could add extended dialect and passes.
torch-frontend is a project to build a customized torch model --> torch dialect --> stablehlo dialect pipeline, where we can add extended dialects and passes.


## Quick Start
@@ -1,20 +1,23 @@
# torch and torchvision
# cpu torch and torchvision
# --extra-index-url https://download.pytorch.org/whl/cpu
# --pre
# torch==2.1.0+cpu
# torchvision==0.16.0+cpu

# cuda torch and torchvision
# --extra-index-url https://download.pytorch.org/whl/cu118
# --pre
# torch==2.1.0+cu118
# torchvision==0.16.0+cu118

# cuda torch and torchvision nightly
# --extra-index-url https://download.pytorch.org/whl/nightly/cu118
# --pre
# torch==2.1.0.dev20230820+cu118
# torchvision==0.16.0.dev20230820+cu118

transformers==4.29.2


# The following copied from torch-mlir
numpy

# Build requirements.
pybind11
@@ -25,9 +28,3 @@ ninja
pyyaml
packaging

# Test Requirements
pillow
pytest==8.1.0
dill
multiprocess
expecttest
5 changes: 3 additions & 2 deletions frontends/torch-frontend/examples/demo/README.md
@@ -3,9 +3,10 @@
### Steps to run
1. Use docker with `debian>=11`, `python==3.9`, `cuda>=11.8` or build docker image with [Dockerfile](../../../../docker/Dockerfile).
2. Download ByteIR latest release and unzip it.
3. Install ByteIR components:
* python3 -m pip install -r ByteIR/requirements.txt
3. Install ByteIR components and dependencies:
* python3 -m pip install ByteIR/*.whl
* cd /path/to/demo
* python3 -m pip install -r requirements.txt
4. Run training demo:
* python3 main.py \<model-name\> <--flash>
* **model-name:** ["gpt2", "bloom-560m", "llama", "opt-1.3b", "nanogpt"]
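Step 3 above can be sketched as a small function; `install_demo_deps` and the `PIP` override are illustrative assumptions, not part of the demo (set `PIP=echo` to dry-run the commands):

```shell
# Sketch of step 3: install the ByteIR wheels, then the demo's own
# requirements. install_demo_deps is a hypothetical helper; PIP can be
# overridden (e.g. PIP=echo) to print the commands instead of running them.
install_demo_deps() {
  PIP="${PIP:-python3 -m pip}"
  $PIP install ByteIR/*.whl &&
  $PIP install -r requirements.txt   # run from /path/to/demo
}
```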
@@ -1,4 +1,3 @@
# cuda torch
--extra-index-url https://download.pytorch.org/whl/cu118
--pre
torch==2.1.0+cu118
27 changes: 26 additions & 1 deletion frontends/torch-frontend/scripts/build.sh
@@ -3,6 +3,26 @@
set -e
set -x

while [[ $# -gt 0 ]]; do
case $1 in
--disable-jit-ir)
TORCH_FRONTEND_ENABLE_JIT_IR_IMPORTER=OFF
shift
;;
--no-test)
TORCH_FRONTEND_TEST=OFF
shift
;;
*)
echo "Invalid option: $1"
exit 1
;;
esac
done

TORCH_FRONTEND_ENABLE_JIT_IR_IMPORTER=${TORCH_FRONTEND_ENABLE_JIT_IR_IMPORTER:-ON}
TORCH_FRONTEND_TEST=${TORCH_FRONTEND_TEST:-ON}

# path to script
CUR_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
# path to byteir root
@@ -21,11 +41,16 @@ cmake -S . \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=gcc \
-DCMAKE_CXX_COMPILER=g++ \
-DTORCH_FRONTEND_ENABLE_JIT_IR_IMPORTER=${TORCH_FRONTEND_ENABLE_JIT_IR_IMPORTER} \
-DCMAKE_CXX_FLAGS="-Wno-unused-but-set-parameter -Wno-unused-but-set-variable" \
-DPython3_EXECUTABLE=$(which python3)

cmake --build ./build --target all

PYTHONPATH=build/python_packages/:build/torch_mlir_build/python_packages/torch_mlir python3 -m pytest torch-frontend/python/test
if [[ $TORCH_FRONTEND_TEST == "ON" ]]; then
python3 -m pip install -r test-requirements.txt
install_mhlo_tools
PYTHONPATH=build/python_packages/:build/torch_mlir_build/python_packages/torch_mlir python3 -m pytest torch-frontend/python/test
fi

popd
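The option loop added to build.sh is a standard `while`/`case` pattern with `${VAR:-ON}` defaulting applied after the loop. The function below restates it in isolation so the defaulting behavior is visible (`parse_build_opts` is a hypothetical name, not a function in the script):

```shell
# Standalone restatement of build.sh's new option handling; parse_build_opts
# is illustrative only. Prints "<jit-ir> <test>" (ON/OFF) for the given flags.
parse_build_opts() {
  jit_ir="" run_test=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --disable-jit-ir) jit_ir=OFF; shift ;;
      --no-test)        run_test=OFF; shift ;;
      *) echo "Invalid option: $1" >&2; return 1 ;;
    esac
  done
  # Same ${VAR:-ON} defaulting the script applies after its loop.
  echo "${jit_ir:-ON} ${run_test:-ON}"
}
```

For example, `parse_build_opts --no-test` yields `ON OFF`, matching a build that skips the pytest stage but keeps the JIT IR importer.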
2 changes: 2 additions & 0 deletions frontends/torch-frontend/scripts/build_and_test.sh
@@ -49,6 +49,8 @@ cmake -S . \
cmake --build ./build --target all

if [[ $TORCH_FRONTEND_TEST == "ON" ]]; then
python3 -m pip install -r test-requirements.txt
install_mhlo_tools
PYTHONPATH=build/python_packages/:build/torch_mlir_build/python_packages/torch_mlir TORCH_DISABLE_NATIVE_FUNCOL=1 python3 -m pytest torch-frontend/python/test
fi

10 changes: 4 additions & 6 deletions frontends/torch-frontend/scripts/envsetup.sh
@@ -32,9 +32,8 @@ function apply_patches() {
function prepare_for_build_with_prebuilt() {
pushd ${PROJ_DIR}
# install requirements
python3 -m pip install -r requirements.txt -r torch-requirements.txt
python3 -m pip install --no-cache-dir torch==2.1.0+cu118 torchvision==0.16.0+cu118 -f https://download.pytorch.org/whl/torch_stable.html
install_mhlo_tools
python3 -m pip install -r build-requirements.txt
# python3 -m pip install --no-cache-dir torch==2.1.0+cu118 torchvision==0.16.0+cu118 -f https://download.pytorch.org/whl/torch_stable.html

# initialize submodule
git submodule update --init -f $TORCH_MLIR_ROOT
@@ -49,9 +48,8 @@ function prepare_for_build_with_prebuilt() {
function prepare_for_build() {
pushd ${PROJ_DIR}
# install requirements
python3 -m pip install -r requirements.txt -r torch-requirements.txt
python3 -m pip install --no-cache-dir torch==2.1.0+cu118 torchvision==0.16.0+cu118 -f https://download.pytorch.org/whl/torch_stable.html
install_mhlo_tools
python3 -m pip install -r build-requirements.txt
# python3 -m pip install --no-cache-dir torch==2.1.0+cu118 torchvision==0.16.0+cu118 -f https://download.pytorch.org/whl/torch_stable.html

# initialize submodule
git submodule update --init --recursive -f $TORCH_MLIR_ROOT
9 changes: 9 additions & 0 deletions frontends/torch-frontend/test-requirements.txt
@@ -0,0 +1,9 @@
numpy
transformers==4.29.2

# Test Requirements
pillow
pytest==8.1.0
dill
multiprocess
expecttest
4 changes: 4 additions & 0 deletions frontends/torch-frontend/torch-cpu-requirements.txt
@@ -0,0 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cpu
--pre
torch==2.1.0+cpu
torchvision==0.16.0+cpu
4 changes: 4 additions & 0 deletions frontends/torch-frontend/torch-cuda-requirements.txt
@@ -0,0 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cu118
--pre
torch==2.1.0+cu118
torchvision==0.16.0+cu118
2 changes: 1 addition & 1 deletion tests/build_and_test_e2e.sh
@@ -13,13 +13,13 @@ bash scripts/compiler/build_and_test.sh --no-test
# build runtime
bash scripts/runtime/build_and_test.sh --cuda --python --no-test
# build torch_frontend
pip3 install -r $ROOT_PROJ_DIR/frontends/torch-frontend/torch-cuda-requirements.txt
bash frontends/torch-frontend/scripts/build_and_test.sh --no-test

pip3 install $ROOT_PROJ_DIR/external/AITemplate/python/dist/*.whl
pip3 install $ROOT_PROJ_DIR/compiler/build/python/dist/*.whl
pip3 install $ROOT_PROJ_DIR/runtime/python/dist/*.whl
pip3 install $ROOT_PROJ_DIR/frontends/torch-frontend/build/torch-frontend/python/dist/*.whl
pip3 install -r $ROOT_PROJ_DIR/frontends/torch-frontend/torch-requirements.txt
pip3 install flash_attn==2.5.3
source scripts/prepare.sh
install_mhlo_tools
