
Fixes for LinalgExt website reference page. #2

Merged
1 commit merged on Feb 22, 2024

Conversation

ScottTodd

Do we want to call this "IREELinalgExt" or just "LinalgExt"? I like it without the "IREE" here 🤔

@hanhanW hanhanW merged commit eee50b0 into hanhanW:collapse-linalg-ext-v3 Feb 22, 2024
46 checks passed
@ScottTodd ScottTodd deleted the linalgext-docs branch February 22, 2024 00:02
hanhanW pushed a commit that referenced this pull request Sep 18, 2024
…#18302)

Reverts iree-org#17751. A few of the new tests are failing on various platforms:

* Timeouts (after 60 seconds) in `iree/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_large_llvm-cpu_local-task` on GitHub-hosted Windows and macOS runners:
  * https://github.com/iree-org/iree/actions/runs/10468974350/job/28990992473#step:8:2477
  * https://github.com/iree-org/iree/actions/runs/10468947894/job/28990909629#step:9:3076
    
    ```
    1529/1568 Test #969: iree/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_large_llvm-cpu_local-task .............................***Timeout 60.07 sec
    ---
    TEST[attention_2_2048_256_512_128_dtype_f16_f16_f16_f16_2_2048_256_512_128_256_1.0_0]
    ---
    Attention shape (BATCHxMxK1xK2xN): 2x2048x256x512x256x128
    ```

* Compilation error on arm64: https://github.com/iree-org/iree/actions/runs/10468944505/job/28990909321#step:4:9815 (an illustrative sketch of this class of diagnostic follows after the log):

    ```
    [415/1150] Generating /work/build-arm64/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.vmfb from e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.mlir
    FAILED: tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.vmfb /work/build-arm64/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.vmfb
    cd /work/build-arm64/tests/e2e/attention && /work/build-arm64/tools/iree-compile --output-format=vm-bytecode --mlir-print-op-on-diagnostic=false --iree-hal-target-backends=llvm-cpu /work/build-arm64/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.mlir -o /work/build-arm64/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.vmfb --iree-hal-executable-object-search-path=\"/work/build-arm64\" --iree-llvmcpu-embedded-linker-path=\"/work/build-arm64/llvm-project/bin/lld\" --iree-llvmcpu-wasm-linker-path=\"/work/build-arm64/llvm-project/bin/lld\"

    /work/build-arm64/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.mlir:4:14: error: Yield operand #2 is not equivalent to the corresponding iter bbArg
          %result1 = iree_linalg_ext.attention {
                     ^

    /work/build-arm64/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.mlir:1:1: note: called from
    func.func @attention_2_1024_128_256_64_dtype_f16_f16_f16_f16(%query: tensor<2x1024x128xf16>, %key: tensor<2x256x128xf16>, %value: tensor<2x256x64xf16>, %scale: f32) -> tensor<2x1024x64xf16> {
    ^

    /work/build-arm64/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.mlir:4:14: error: failed to run translation of source executable to target executable for backend #hal.executable.target<"llvm-cpu", "embedded-elf-arm_64", {cpu = "generic", cpu_features = "+reserve-x18", data_layout = "e-m:e-i8:8:32-i16:16:32-i64:64-i128:128-n32:64-S128-Fn32", native_vector_size = 16 : i64, target_triple = "aarch64-unknown-unknown-eabi-elf"}>
          %result1 = iree_linalg_ext.attention {
                     ^

    /work/build-arm64/tests/e2e/attention/e2e_attention_cpu_f16_f16_f16_medium_llvm-cpu_local-task_attention.mlir:1:1: note: called from
    func.func @attention_2_1024_128_256_64_dtype_f16_f16_f16_f16(%query: tensor<2x1024x128xf16>, %key: tensor<2x256x128xf16>, %value: tensor<2x256x64xf16>, %scale: f32) -> tensor<2x1024x64xf16> {
    ^
    failed to translate executables
    ```
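
For context on the diagnostic above: "Yield operand #N is not equivalent to the corresponding iter bbArg" is emitted by MLIR's one-shot bufferization when the tensor yielded from a loop cannot be shown to alias the loop's iteration argument. The snippet below is a minimal, hypothetical sketch of that error class only; it is not code from this PR or from the attention lowering, and it assumes a plain `scf.for` run through upstream `mlir-opt --one-shot-bufferize` rather than the IREE pipeline.

```
// Hypothetical reproducer of the same error class (assumed names; not from
// the failing test). The loop yields a tensor built from a fresh
// tensor.empty instead of its iter_arg, so bufferization cannot treat the
// yielded buffer as equivalent to the loop's bbArg and, with the default
// options that disallow returning new allocations from loops, is expected
// to report "Yield operand #0 is not equivalent to the corresponding iter bbArg".
func.func @yield_not_equivalent(%init: tensor<4xf32>) -> tensor<4xf32> {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %c4 = arith.constant 4 : index
  %cst = arith.constant 1.0 : f32
  %fresh = tensor.empty() : tensor<4xf32>
  %r = scf.for %i = %c0 to %c4 step %c1 iter_args(%arg = %init) -> (tensor<4xf32>) {
    // The write targets %fresh, not %arg, so the yielded value is a
    // different buffer than the iteration argument.
    %w = tensor.insert %cst into %fresh[%i] : tensor<4xf32>
    scf.yield %w : tensor<4xf32>
  }
  return %r : tensor<4xf32>
}
```

In this toy case the fix would be to write into the iter_arg (for example `tensor.insert %cst into %arg[%i]`) so the yielded value stays tied to the loop-carried tensor; the actual failure in the attention test presumably has a different root cause inside the IREE lowering of `iree_linalg_ext.attention`.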