forked from llvm/torch-mlir
[AutoBump] Merge with fixes of 77d7f644 (Jun 13, needs LLVM bump) (66) #303
Merged
Conversation
An upstream PR landed that moved away from a deprecated cast function. Updated the corresponding lines so they pass.
1. Support `conv_transpose1d` and `conv_transpose3d`.
2. Fix bugs in the `convertTransposedConv` function in `lib/Conversion/TorchToStablehlo/Linear.cpp`.
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
This addresses 7 of the model failures I'm seeing in the test suite. See [SHARK-Turbine issue llvm#566](nod-ai/SHARK-ModelDev#566). The op `linalg.conv_2d_ngchw_gfchw_q` needs to be added upstream before merging this; see [llvm-project PR #92136](llvm/llvm-project#92136). A small additional expansion to operand quantization is included in this patch to address a model failure that occurs when unblocking the quantized group convolutions in one of these ONNX models.
@kuhar mentioned in the previous PR that we should use ld.lld. I kept using ld because it worked with my LLD version; after updating to a newer LLD, switching to ld.lld became necessary.
Fixes nod-ai/SHARK-ModelDev#653.

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
Resolving `bool` literals can result in a type change to `uint8`, which must be converted back to the expected type before returning to the wrapped `torch` operators.
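The dtype round-trip described above can be sketched in plain numpy (an illustrative analogy, not the torch-mlir code path; `materialize_bool` is a hypothetical helper name):

```python
import numpy as np

def materialize_bool(value: bool) -> np.ndarray:
    # Mimic a lowering path that materializes a bool literal as uint8 ...
    lowered = np.asarray(value, dtype=np.uint8)
    # ... and restore the dtype the wrapped operator expects before returning.
    return lowered.astype(np.bool_)

result = materialize_bool(True)
```

Without the final cast, the caller would observe a `uint8` result where the wrapped `torch` operator expects `bool`.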
This patch registers two `memref` passes with `torch-mlir-opt` that already occur in the `torch-backend-to-linalg-on-tensors-backend-pipeline` pass pipeline. Additionally, the necessary op interface external models are included to address issue llvm#3352.
Support lowering unsigned integer types to StableHLO as discussed in llvm#2184. The things I do in this PR:
1. Create `setupBackendTypeConversionForStablehlo()`, `createFuncBackendTypeConversionForStablehloPass` and `createFinalizingBackendTypeConversionForStablehloPass`.
2. Remove `InferTypeOpInterface` from `torch_c.to_builtin_tensor`, because its result type differs between the Linalg and StableHLO backends:
```mlir
// linalg backend
func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
  %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xi8>
  %0 = tensor.empty() : tensor<3xf32>
  %1 = linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel"]}
      ins(%c : tensor<3xi8>) outs(%0 : tensor<3xf32>) {
  ^bb0(%in: i8, %out: f32):
    %2 = arith.uitofp %in : i8 to f32
    linalg.yield %2 : f32
  } -> tensor<3xf32>
  return %1 : tensor<3xf32>
}

// stablehlo backend
func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
  %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xui8>
  %0 = stablehlo.convert %c : (tensor<3xui8>) -> tensor<3xf32>
  return %0 : tensor<3xf32>
}
```
3. Fix the StableHLO and Linalg conversions accordingly.
llvm#3367 and llvm#3364 introduced new dependencies, causing the [Bazel workflow](https://github.com/llvm/torch-mlir/actions/workflows/bazelBuildAndTest.yml) to fail. These need to be fixed in Bazel.
This commit also adds the Torch declarations for the aten.max_unpool2d and aten.max_unpool3d ops. The TorchToLinalg lowering for these will be added in a follow-up commit.

Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
This commit also fixes the average pool op's test failure for the OnnxToLinalg lowering.

Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Set PyTorch and TorchVision version to nightly release 2024-05-14.

Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Inspired by PyTorch's decompositions.py; see https://github.com/pytorch/pytorch/blob/ec58f1f74ebcec744d2ab90ad34abd09c1018e92/torch/_decomp/decompositions.py#L3923-L4086. Only paddingMode=0 or 1 and interpolationMode=0 or 1 are supported.
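For intuition, the bilinear case (interpolationMode=0) with zero padding (paddingMode=0) can be sketched in numpy for a single-channel 2D input; this is an illustrative reference, not the actual decomposition code, and the function name and layout are assumptions:

```python
import numpy as np

def grid_sample_bilinear(img, grid):
    """Sample img at normalized grid coords in [-1, 1], bilinear, zero padding.

    img:  (H, W) float array.
    grid: (Ho, Wo, 2) array of (x, y) normalized coordinates.
    """
    H, W = img.shape
    out = np.zeros(grid.shape[:2])
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            # Unnormalize (align_corners=False convention).
            x = (grid[i, j, 0] + 1) * W / 2 - 0.5
            y = (grid[i, j, 1] + 1) * H / 2 - 0.5
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            val = 0.0
            # Accumulate the four bilinear taps, dropping taps that fall
            # outside the image (zero padding).
            for xx, yy, w in [
                (x0,     y0,     (x0 + 1 - x) * (y0 + 1 - y)),
                (x0 + 1, y0,     (x - x0)     * (y0 + 1 - y)),
                (x0,     y0 + 1, (x0 + 1 - x) * (y - y0)),
                (x0 + 1, y0 + 1, (x - x0)     * (y - y0)),
            ]:
                if 0 <= xx < W and 0 <= yy < H:
                    val += w * img[yy, xx]
            out[i, j] = val
    return out
```

A grid whose entries sit exactly at the input pixel centers reproduces the input, which is a useful sanity check for any grid_sample decomposition.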
Torch Dialect with symbolic shape expressions:
```ll
module {
  func.func @main(%arg0: !torch.vtensor<[?,?,3],f32>, %arg1: !torch.vtensor<[?,?,3],f32>) -> !torch.vtensor<[?,?,3],f32> {
    %0 = torch.symbolic_int "s0" {min_val = 5, max_val = 10} : !torch.int
    %1 = torch.symbolic_int "s1" {min_val = 0, max_val = 100} : !torch.int
    %2 = torch.symbolic_int "s3" {min_val = 0, max_val = 50} : !torch.int
    torch.bind_symbolic_shape %arg0, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %arg1, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>
    %3 = torch.aten.tanh %arg0 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %3, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>
    %4 = torch.aten.sigmoid %arg1 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %4, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>
    %5 = torch.prim.ListConstruct %3, %3, %4 : (!torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>) -> !torch.list<vtensor>
    %int1 = torch.constant.int 1
    %6 = torch.aten.cat %5, %int1 : !torch.list<vtensor>, !torch.int -> !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %6, [%0, %1, %2], #affine_map<()[s0, s1, s2] -> (s0, s1 * 2 + s2, 3)> : !torch.vtensor<[?,?,3],f32>
    return %6 : !torch.vtensor<[?,?,3],f32>
  }
}
```

For reference, this is the TorchDynamo exported program with symbolic shape expressions that the above Torch dialect program is imported from:

```py
ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, x: "f32[s0, s1, 3]", y: "f32[s0, s3, 3]"):
            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:31 in forward, code: a = torch.tanh(x)
            tanh: "f32[s0, s1, 3]" = torch.ops.aten.tanh.default(x);  x = None

            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:32 in forward, code: b = torch.sigmoid(y)
            sigmoid: "f32[s0, s3, 3]" = torch.ops.aten.sigmoid.default(y);  y = None

            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:33 in forward, code: return torch.cat((a, a, b), dim=1)
            cat: "f32[s0, 2*s1 + s3, 3]" = torch.ops.aten.cat.default([tanh, tanh, sigmoid], 1);  tanh = sigmoid = None
            return (cat,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='y'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='cat'), target=None)])
Range constraints: {s0: ValueRanges(lower=5, upper=10, is_bool=False), s1: ValueRanges(lower=0, upper=100, is_bool=False), s3: ValueRanges(lower=0, upper=50, is_bool=False)}
```

Huge credit to @stellaraccident for the inputs that helped evaluate the various design options and arrive at the representation of choice.

- [x] Op definitions for symbolic_int and bind_symbolic_shape ops
- [x] fx_importer updates to import range constraints + create symbolic_int ops
- [x] fx_importer changes for AffineMapAttr building + adding bind_symbolic_shape ops
- [x] custom printer/parser for inlined AffineMap expressions in mlir assembly
- [x] Dialect lit test
- [x] fx_importer python lit tests
- [ ] Cleanup pass to remove these ops (can add in a follow-on)
This is needed after llvm#3372.
Supports asymmetric padding by applying a torch.nn.functional.pad to the input before performing the convolution.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
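The pad-then-convolve trick can be illustrated in numpy (a minimal 1-D sketch of the idea, not the actual TOSA lowering; `conv1d_valid` is a hypothetical helper):

```python
import numpy as np

def conv1d_valid(x, k):
    """1-D correlation with no padding ("valid")."""
    return np.array([np.dot(x[i:i + len(k)], k)
                     for i in range(len(x) - len(k) + 1)])

x = np.array([1., 2., 3., 4.])
k = np.array([1., 1.])

# Asymmetric padding (1 element on the left, 0 on the right) becomes an
# explicit pad followed by an unpadded convolution.
padded = np.pad(x, (1, 0))
out = conv1d_valid(padded, k)
```

Folding the asymmetric part of the padding into an explicit pad op lets the convolution itself use only the symmetric padding the target dialect supports.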
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Adds the missing types for tracing float8 types.
The Linalg conversion requires a mapping for f8 types.
* Let `toBuiltinTensor()` reflect the original dtype of `!torch.vtensor`.
* Backends handle dtype conversion themselves.
This commit adds the lowering for the SequenceAt, SequenceEmpty, SequenceInsert, and SequenceErase ops.

Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Tests the basic constructs of registering a custom op and its abstract implementations (with FakeTensors) in Python, going through TorchDynamo export, followed by importing the shape expressions in the Torch dialect. Also fixes the importer, where previously the symbolic bind op insertion was not gated in one place.
This fixes the following issue: llvm#3418.
half_pixel is also the default mode used by ONNX; see https://onnx.ai/onnx/operators/onnx__Resize.html.
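Per the ONNX Resize spec, the half_pixel coordinate transformation maps an output coordinate back into input space as `(x_out + 0.5) / scale - 0.5`; a minimal sketch (the function name is illustrative):

```python
def half_pixel_to_input(x_out: float, scale: float) -> float:
    # ONNX Resize, coordinate_transformation_mode = "half_pixel":
    # shift to the pixel center, divide by the scale, shift back.
    return (x_out + 0.5) / scale - 0.5
```

With scale=1 this is the identity on pixel centers, which is why half_pixel is a natural default for Resize.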
mgehre-amd changed the title from "[AutoBump] Merge with fixes of 77d7f644 (Jun 13) (66)" to "[AutoBump] Merge with fixes of 77d7f644 (Jun 13, needs LLVM bump) (66)" on Sep 9, 2024.
mgehre-amd changed the base branch from `bump_to_ae6f5e82` to `feature/backport_ea1_ops` on September 11, 2024 at 11:27.
cferry-AMD approved these changes on Sep 11, 2024.
Requires Xilinx/llvm-project#330.