
Integrate llvm-project at 063c42e919c0 #14725

Merged: 14 commits merged into main on Aug 23, 2023
Conversation

@hanhanW (Contributor) commented Aug 17, 2023

  • Reset third_party/llvm-project: 063c42e919c01d7e64c1af5a10898fc84b06dfe8 (2023-08-16 10:45:54 -0700): [clang-format] Handle NamespaceMacro string arg for FixNamespaceComments

Cherry-pick commits:

Revert commits:

@stellaraccident (Collaborator):

We've got a lot of commits landing today. Let's see how things look but probably better to leave this tomorrow.

@hanhanW (Contributor, author) commented Aug 17, 2023

> We've got a lot of commits landing today. Let's see how things look but probably better to leave this tomorrow.

okay, SG. I will rebase it tomorrow.

@hanhanW (Contributor, author) commented Aug 18, 2023

The integration tests are failing because tosa::ReduceMax should now take its axis as an i32-typed attribute. I've fixed some C++ files to use the i32 attribute type, but it does not help. Is it because we use a pre-built binary? How do I fix it to unblock the integrate?

@pzread @rsuderman could you help with this if you have context? Thanks

@hanhanW (Contributor, author) commented Aug 21, 2023

There are some failures because the TOSA axis attribute type should be i32 (not i64). @jpienaar do you know who can help with this?

Failed Tests (11):
  TENSORFLOW_TESTS :: iree_tfl_tests/east_text_detector.run
  TENSORFLOW_TESTS :: iree_tfl_tests/gpt2.run
  TENSORFLOW_TESTS :: iree_tfl_tests/llvmcpu_mobilenet_v1.run
  TENSORFLOW_TESTS :: iree_tfl_tests/llvmcpu_mobilenet_v3-large_uint8.run
  TENSORFLOW_TESTS :: iree_tfl_tests/llvmcpu_resnet_50_int8.run
  TENSORFLOW_TESTS :: iree_tfl_tests/mnasnet.run
  TENSORFLOW_TESTS :: iree_tfl_tests/mobilenet_v3.run
  TENSORFLOW_TESTS :: iree_tfl_tests/person_detect.run
  TENSORFLOW_TESTS :: iree_tfl_tests/vmvx_mobilebert_tf2_quant.run
  TENSORFLOW_TESTS :: iree_tfl_tests/vmvx_person_detect.run
  TENSORFLOW_TESTS :: iree_tfl_tests/vulkan_mobilenet_v1.run
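
For reference, the mismatch described above looks roughly like this in TOSA IR (a minimal hypothetical sketch; the value, shapes, and SSA names are made up, and the generic MLIR assembly form is used to stay version-agnostic):

```mlir
// After the upstream LLVM change, the tosa.reduce_max verifier expects
// an i32-typed axis attribute:
%new = "tosa.reduce_max"(%arg0) {axis = 1 : i32}
    : (tensor<2x3xf32>) -> tensor<2x1xf32>

// IR produced by an older importer (e.g. a pre-built TF/TFLite converter
// binary) still carries an i64-typed axis, which now fails verification:
%old = "tosa.reduce_max"(%arg0) {axis = 1 : i64}
    : (tensor<2x3xf32>) -> tensor<2x1xf32>
```

This is also why fixing the C++ in this repository alone does not help: the i64 attribute is baked into the test inputs generated by the pre-built converter, so the inputs themselves need to be regenerated.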

@jpienaar (Member):

> There are some failures about TOSA attribute type should be i32 (not i64). [failing-test list quoted above]

For these we'll need to bump the TF nightly version. @rsuderman was looking at doing it; I'm not sure how far along he is, or whether we can do this along with the integrate or need to do it in two steps.

@MaheshRavishankar (Contributor):

> There are some failures about TOSA attribute type should be i32 (not i64). [failing-test list quoted above]
>
> For these we'll need to bump the TF nightly version. [...]

If this is just a TOSA issue, maybe just mark these as XFAIL and file a bug with this list. When the TF nightly gets bumped we can enable them again.
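
If the XFAIL route is taken, and assuming these `.run` tests are driven by llvm-lit (a sketch, not a confirmed description of this test suite's harness), each failing file could carry a one-line expected-failure annotation:

```
# XFAIL: *
# RUN: ... (the file's existing RUN line stays unchanged)
```

lit then reports the test as XFAIL instead of FAIL while it is broken, and flags it as XPASS (unexpectedly passing) once the TF nightly bump fixes it, which is a natural reminder to remove the annotation.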

@hanhanW (Contributor, author) commented Aug 21, 2023

There are other similar failures in e2e model compilation.

@github-actions (bot) commented Aug 22, 2023

Abbreviated Benchmark Summary

@ commit 646fa07a5e43f6892f7cd2d91def16f9175f4e34 (no previous benchmark results to compare)

Raw Latencies

| Benchmark Name | Average Latency (ms) | Median Latency (ms) | Latency Std Dev (ms) |
| --- | --- | --- | --- |
| BertForMaskedLMTF(stablehlo) [cuda-sm_80-linux_gnu-cuda][default-flags] cuda(none)[full-inference,default-flags] with zeros @ a2-highgpu-1g[gpu] | 6.329 | 6.293 | 0.113 |
| BertLargeTF(stablehlo) [cuda-sm_80-linux_gnu-cuda][default-flags] cuda(none)[full-inference,default-flags] with zeros @ a2-highgpu-1g[gpu] | 10.523 | 10.511 | 0.036 |
| BertLargefp16PTBatch1(linalg) [cuda-sm_80-linux_gnu-cuda][default-flags] cuda(none)[full-inference,default-flags] with zeros @ a2-highgpu-1g[gpu] | 12.457 | 12.456 | 0.003 |

[Top 3 of 40 results shown]

No improved or regressed compilation metrics 🏖️

For more information:

Source Workflow Run

@MaheshRavishankar (Contributor):

Thanks a lot @hanhanW. This was quite a lift.

@stellaraccident maybe you should make a go/no-go call on this. Basically, TOSA models need to be regenerated after this integrate; I'm not sure what disruption it will cause downstream.

It looks good to me.

@hcindyl (Contributor) commented Aug 22, 2023

> Thanks a lot @hanhanW . This was quite a lift. [...] It looks good to me.

The breakage in TOSA also means the iree-samples tflitehub regression suite will be broken after this, as well as any other downstream project that uses TFLite as input.

@MaheshRavishankar (Contributor):

> The breakage in tosa also means iree-samples tflitehub regression will be broken after this, as well as any other downstream projects that uses tflite as input.

I have no idea how else to do this apart from landing this and having downstream projects also adapt here...

@benvanik (Collaborator):

SGTM - such breakages happen - at least there's no cycles :)

@hanhanW (Contributor, author) commented Aug 22, 2023

Other work seems to depend on this; should we land this and re-enable the rest as a second step?

@hanhanW (Contributor, author) commented Aug 22, 2023

I think this is ready to land if someone can stamp it.

hanhanW and others added 6 commits August 22, 2023 17:57
* Reset third_party/llvm-project: 063c42e919c01d7e64c1af5a10898fc84b06dfe8 (2023-08-16 10:45:54 -0700): [clang-format] Handle NamespaceMacro string arg for FixNamespaceComments
Co-authored-by: TatWai Chong <tatwai.chong@arm.com>
@hanhanW hanhanW merged commit 42e54ab into main Aug 23, 2023
55 checks passed
@hanhanW hanhanW deleted the bump-llvm-20230817 branch August 23, 2023 06:10
sleffler pushed a commit to AmbiML/sparrow-scripts-full that referenced this pull request Sep 6, 2023
To match iree-org/iree#14725

Change-Id: I504b1451a1a52b1a63c2aaffc5525e7b86d8ff05
GitOrigin-RevId: 977ca36f08aeb5de40e794a438403b43870ffc36
stellaraccident pushed a commit that referenced this pull request Sep 24, 2023
* iree: 42e54ab Integrate llvm-project at 063c42e919c0 (#14725) (Tue
Aug 22 23:10:14 2023 -0700)
* xla: ac612bfa4 Ensure that CompileOptions serializes
deterministically. (Wed Aug 23 11:36:04 2023 -0700)
* jax: d1547ca45 Ensure that CompileOptions serializes
deterministically. (Wed Aug 23 11:34:21 2023 -0700)
7 participants