[tests][aot] Add test for externalized parameters #202

Open — wants to merge 28 commits into base: main

Changes from all commits (28 commits):
0327398
Reorder iter args to match ordering of init args and outputs (#161)
harsh-nod Sep 24, 2024
4e95351
[ExportedProgram] Add mutable attribute to buffer (#123)
chrsmcgrr Sep 24, 2024
65eb532
[TKW] Add xfail decorator for unaligned shape (#163)
raikonenfnu Sep 24, 2024
909411a
[TKW] Fix indexing of Reduction and GetResult to enable post-tile op.…
raikonenfnu Sep 24, 2024
d37c6a4
Get GEMMs working without minimize_global_loads (#167)
harsh-nod Sep 26, 2024
04a4ba5
Add first draft of introduction (#168)
harsh-nod Sep 26, 2024
7686157
[TKW] igemm shared mem tests (#171)
Hardcode84 Sep 26, 2024
0e16d54
[TKW] Implement support for multiple iter args on Reduction (#166)
raikonenfnu Sep 27, 2024
192a786
Handle complex element type in torch.vtensor conversion (#175)
sogartar Sep 27, 2024
92ad900
[TKW] Rework vector mask generation (#172)
Hardcode84 Sep 30, 2024
621cbe1
Enable import_symbolic_shape_expressions in the FxImporter. (#179)
stellaraccident Sep 30, 2024
84320ea
Add code to construct pipelined loop from schedule (#160)
harsh-nod Oct 1, 2024
553e929
Add support for dynamic dims (#178)
harsh-nod Oct 1, 2024
9ed388a
[TKW] Fix sympy expr lowering and add some more igemm test shapes (#184)
Hardcode84 Oct 3, 2024
0f00c6d
Add benchmark support for e2e tests (#183)
erman-gurses Oct 3, 2024
e0a8fdf
[TKW] Thread Shape analysis (#186)
raikonenfnu Oct 3, 2024
d98e521
Disable benchmarking on all e2e tests for now (#189)
harsh-nod Oct 3, 2024
a04ea80
Set `fail-fast: false` (#190)
Hardcode84 Oct 3, 2024
207efd9
[TKW] IGEMM Benchmarking (#187)
Hardcode84 Oct 4, 2024
64b7d27
[TKW] Update IR interpreter (#182)
Hardcode84 Oct 4, 2024
7617c94
[TKW] Implement broadcastOp class, lowering and insertion (#176)
raikonenfnu Oct 4, 2024
83bbc40
Add ability to dump intermediates (#194)
harsh-nod Oct 4, 2024
39acab8
Split TK CI from main CI (#195)
Hardcode84 Oct 4, 2024
4fec47c
Add parameterization for benchmark flag (#192)
erman-gurses Oct 4, 2024
da3436d
Add padding to reduce shared memory bank conflicts (#193)
harsh-nod Oct 4, 2024
b0ef345
Rename `shark-turbine` -> `iree.turbine` (#197)
Hardcode84 Oct 5, 2024
796f3a5
[TKW] Minor bug fix expansion to handle reduction and MMA at same tim…
raikonenfnu Oct 5, 2024
47720f2
[tests][aot] Add test for externalized parameters
vinayakdsci Oct 7, 2024
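The head commit (47720f2) adds the new test itself, but the test file's diff is not included in the portion loaded below. As a rough sketch, an externalized-parameters AOT test using the renamed `iree.turbine.aot` API could look like the following — the module, shapes, and assertion string here are illustrative, not the PR's exact code:

import torch
import torch.nn as nn

import iree.turbine.aot as aot


class SimpleParams(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = nn.Linear(20, 30)

    def forward(self, x):
        return self.classifier(x)


def test_externalized_parameters():
    m = SimpleParams()
    # Mark the module's parameters as externally provided, so the exported
    # MLIR references them by name instead of inlining the weights.
    aot.externalize_module_parameters(m)
    exported = aot.export(m, args=(torch.empty(128, 20),))
    mlir_asm = str(exported.mlir_module)
    # Externalized weights should surface as named parameter attributes
    # rather than as inline constants.
    assert "#stream.parameter.named" in mlir_asm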
74 changes: 74 additions & 0 deletions .github/workflows/ci-tk.yaml
@@ -0,0 +1,74 @@
name: "TK CI"

on:
pull_request:
push:
branches:
- main

concurrency:
# A PR number if a pull request and otherwise the commit hash. This cancels
# queued and in-progress runs for the same PR (presubmit) or commit
# (postsubmit). The workflow name is prepended to avoid conflicts between
# different workflows.
group: ${{ github.workflow }}-${{ github.event.number || github.sha }}
cancel-in-progress: true

jobs:
test:
name: "Unit Tests and Type Checking"
strategy:
fail-fast: false
matrix:
version: [3.11]
os: [ubuntu-latest, nodai-amdgpu-mi300-x86-64]
runs-on: ${{matrix.os}}
env:
PIP_CACHE_DIR: "${{ github.workspace }}/.pip-cache"
steps:
- name: "Setting up Python"
id: setup_python
uses: actions/setup-python@v3
with:
python-version: ${{matrix.version}}

- name: "Checkout Code"
uses: actions/checkout@v3

- name: Cache Pip Packages
uses: actions/cache@v4
id: cache-pip
with:
path: ${{ env.PIP_CACHE_DIR }}
key: pip-${{ steps.setup_python.outputs.python-version }}-${{ hashFiles('*requirements.txt') }}

- name: Install pip deps
run: |
python -m pip install --no-compile --upgrade pip
# Note: We install in three steps in order to satisfy requirements
# from non default locations first. Installing the PyTorch CPU
# wheels saves multiple minutes and a lot of bandwidth on runner setup.
pip install --no-compile -r pytorch-cpu-requirements.txt
pip install --no-cache-dir -r iree-requirements-ci.txt
pip install -r requirements.txt -e .

- name: Run unit tests
if: ${{ !cancelled() }}
run: |
pytest -n 4 --capture=tee-sys -vv ./tests/kernel/wave/

- name: Run e2e tests on MI300
if: "contains(matrix.os, 'mi300') && !cancelled()"
run: |
export WAVE_RUN_E2E_TESTS=1
pytest -n 4 --capture=tee-sys -vv ./tests/kernel/wave/

- name: Run LIT tests
if: ${{ !cancelled() }}
run: |
lit lit_tests/ -v

- name: MyPy Type Checking
if: ${{ !cancelled() }}
run: |
mypy
12 changes: 3 additions & 9 deletions .github/workflows/ci.yaml
@@ -18,9 +18,10 @@ jobs:
   test:
     name: "Unit Tests and Type Checking"
     strategy:
+      fail-fast: false
       matrix:
         version: [3.11]
-        os: [ubuntu-latest, nodai-amdgpu-mi300-x86-64]
+        os: [ubuntu-latest]
     runs-on: ${{matrix.os}}
     env:
       PIP_CACHE_DIR: "${{ github.workspace }}/.pip-cache"
@@ -54,14 +55,7 @@ jobs:
       - name: Run unit tests
         if: ${{ !cancelled() }}
         run: |
-          pytest -n 4 .
-
-      - name: Run e2e tests on MI300
-        if: "contains(matrix.os, 'mi300') && !cancelled()"
-        run: |
-          export WAVE_RUN_E2E_TESTS=1
-          export TEST_PARAMS_PATH=./tests/kernel/wave/test_param.json
-          pytest -n 4 ./tests/kernel/wave/
+          pytest -n 4 --capture=tee-sys -vv .

       - name: Run LIT tests
         if: ${{ !cancelled() }}
18 changes: 4 additions & 14 deletions .github/workflows/perf.yaml
@@ -21,9 +21,10 @@ jobs:
   test:
     name: "Unit Tests and Type Checking"
     strategy:
+      fail-fast: false
       matrix:
         version: [3.11]
-        os: [ubuntu-latest, nodai-amdgpu-mi300-x86-64]
+        os: [nodai-amdgpu-mi300-x86-64]
     runs-on: ${{matrix.os}}
    env:
      PIP_CACHE_DIR: "${{ github.workspace }}/.pip-cache"
@@ -53,21 +54,10 @@ jobs:
           pip install --no-compile -r pytorch-cpu-requirements.txt
           pip install --no-cache-dir -r iree-requirements-ci.txt
           pip install -r requirements.txt -e .
-      - name: Run unit tests
-        if: ${{ !cancelled() }}
-        run: |
-          pytest -n 4 .

       - name: Run e2e tests on MI300
         if: "contains(matrix.os, 'mi300') && !cancelled()"
         run: |
           export WAVE_RUN_E2E_TESTS=1
-          export TEST_PARAMS_PATH="tests/kernel/wave/test_param.json"
-          pytest -n 1 ./tests/kernel/wave/
-      - name: Run LIT tests
-        if: ${{ !cancelled() }}
-        run: |
-          lit lit_tests/ -v
-      - name: MyPy Type Checking
-        if: ${{ !cancelled() }}
-        run: |
-          mypy
+          pytest -n 1 --capture=tee-sys -vv ./tests/kernel/wave/
1 change: 1 addition & 0 deletions .github/workflows/test_build_release.yml
@@ -19,6 +19,7 @@ jobs:
   test:
     name: "Test Build Release Process"
     strategy:
+      fail-fast: false
       matrix:
         version: [3.11]
         os: [ubuntu-latest]
2 changes: 1 addition & 1 deletion MANIFEST.in
@@ -2,4 +2,4 @@ include README.md
 include requirements.txt
 include pytorch-cpu-requirements.txt
 include version_info.json
-include shark_turbine/ops/templates/*.mlir
+include iree/turbine/ops/templates/*.mlir
2 changes: 1 addition & 1 deletion README.md
@@ -8,7 +8,7 @@ Turbine provides a collection of tools:

 * *AOT Export*: For compiling one or more `nn.Module`s to compiled, deployment
   ready artifacts. This operates via both a simple one-shot export API (Already upstreamed to [torch-mlir](https://github.com/llvm/torch-mlir/blob/main/python/torch_mlir/extras/fx_importer.py))
-  for simple models and an underlying [advanced API](shark_turbine/aot/compiled_module.py) for complicated models
+  for simple models and an underlying [advanced API](iree/turbine/aot/compiled_module.py) for complicated models
   and accessing the full features of the runtime.
 * *Eager Execution*: A `torch.compile` backend is provided and a Turbine Tensor/Device
   is available for more native, interactive use within a PyTorch session.
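To ground the one-shot API the README references, a minimal export under the renamed namespace might look like this — a sketch, with an illustrative model definition and abbreviated compile options, not the repo's exact example code:

import torch
import torch.nn as nn

import iree.turbine.aot as aot


class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 4)

    def forward(self, x):
        return torch.relu(self.layer(x))


# One-shot export: trace the nn.Module into MLIR (via the upstreamed
# FxImporter), then compile it into a deployable IREE artifact.
exported = aot.export(MLP(), args=(torch.empty(2, 8),))
exported.print_readable()                  # inspect the generated MLIR
compiled = exported.compile(save_to=None)  # in-memory compiled binary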
4 changes: 1 addition & 3 deletions build_tools/build_release.py
@@ -159,10 +159,8 @@ def main():
     print("Downloading remaining requirements")
     download_requirements(REPO_ROOT / "requirements.txt")

-    print("Building shark-turbine")
-    build_wheel(REPO_ROOT)
     print("Building iree-turbine")
-    build_wheel(REPO_ROOT, env={"TURBINE_PACKAGE_NAME": "iree-turbine"})
+    build_wheel(REPO_ROOT)


 if __name__ == "__main__":
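With the rename landed, the script builds a single wheel and no longer passes the package name explicitly; the deleted call relied on an environment override. For context, that override presumably lives in setup.py along these lines — an assumed sketch, not the file's verbatim contents:

import os

# Assumed mechanism: setup.py lets the environment override the wheel's
# distribution name, now defaulting to the renamed "iree-turbine" package.
TURBINE_PACKAGE_NAME = os.getenv("TURBINE_PACKAGE_NAME", "iree-turbine")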
2 changes: 1 addition & 1 deletion examples/aot_mlp/mlp_export_dynamic.py
@@ -12,7 +12,7 @@
 import torch
 import torch.nn as nn

-import shark_turbine.aot as aot
+import iree.turbine.aot as aot


 class MLP(nn.Module):
2 changes: 1 addition & 1 deletion examples/aot_mlp/mlp_export_simple.py
@@ -9,7 +9,7 @@
 import torch
 import torch.nn as nn

-import shark_turbine.aot as aot
+import iree.turbine.aot as aot


 class MLP(nn.Module):
47 changes: 0 additions & 47 deletions examples/llama2_inference/README.md

This file was deleted.
