[AutoBump] Merge with fixes of 77d7f644 (Jun 13, needs LLVM bump) (66) #303
Commits on May 31, 2024
- 617b00b [NFC] Fix member cast change to global for landing collision (llvm#3407)
  A PR landed while this change was moving away from a deprecated cast function; the corresponding lines were updated so they pass.
Commits on Jun 3, 2024
- 23b5305 [Torch] Support conv_transpose1d and conv_transpose3d (llvm#3286) (Xinyu Yang)
  1. Support conv_transpose1d and conv_transpose3d.
  2. Fix bugs in the convertTransposedConv function in lib/Conversion/TorchToStablehlo/Linear.cpp.
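  For orientation, a minimal PyTorch usage of the newly supported 1-D transposed convolution; the shapes are illustrative only:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 16)   # (batch, in_channels, length)
w = torch.randn(4, 8, 3)    # (in_channels, out_channels, kernel_size)
# Transposed convolution upsamples: output length = (16 - 1) * stride + kernel_size = 33.
y = F.conv_transpose1d(x, w, stride=2)
print(y.shape)  # torch.Size([1, 8, 33])
```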
- 267052d
- 285b087
- 6382dbb [ONNX] Add OnnxToTorch lowering for SpaceToDepth op (llvm#3393)
  Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
- 8995c90 [TorchToLinalg] add support for quantized group conv (llvm#3341)
  This addresses 7 of the model failures seen in the test suite (see nod-ai/SHARK-ModelDev#566). The op `linalg.conv_2d_ngchw_gfchw_q` needs to be added upstream before merging this (see llvm/llvm-project#92136). A small additional expansion of operand quantization is included in this patch to address a model failure that occurs when unblocking the quantized group convolutions in one of these ONNX models.
- 948981a Update development.md to use ld.lld (llvm#3412)
  @kuhar mentioned in the previous PR that we should use ld.lld. I had kept using ld because it worked with my LLD version; after updating to a newer LLD, switching to ld.lld became necessary.
- 11c3281 Fix failing ReduceSum ONNX-to-linalg lowering lit test (llvm#3218)
  Fixes nod-ai/SHARK-ModelDev#653. Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
- 0a6861b Add conversion operation for bool resolved_literal (llvm#3410)
  Resolving `bool` literals can result in a type change to uint8. This needs to be converted back to the expected type before returning to the wrapped `torch` operators.
Commits on Jun 4, 2024
- 56d21cb Link necessary op interface implementations (llvm#3364)
  This patch adds two `memref` passes to `torch-mlir-opt`, which already occur in the pass pipeline `torch-backend-to-linalg-on-tensors-backend-pipeline`. Additionally, the necessary op interface external models are included to address issue llvm#3352.
- 50f7103 [Stablehlo] support uint8 (llvm#3367)
  Support lowering unsigned integer types to stablehlo, as discussed in llvm#2184. The changes in this PR:
  1. Create `setupBackendTypeConversionForStablehlo()`, `createFuncBackendTypeConversionForStablehloPass`, and `createFinalizingBackendTypeConversionForStablehloPass`.
  2. Remove `InferTypeOpInterface` from `torch_c.to_builtin_tensor`, because the result type differs between the linalg backend and the stablehlo backend:

```mlir
// linalg backend
func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
  %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xi8>
  %0 = tensor.empty() : tensor<3xf32>
  %1 = linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel"]} ins(%arg0 : tensor<3xi8>) outs(%0 : tensor<3xf32>) {
  ^bb0(%in: i8, %out: f32):
    %2 = arith.uitofp %in : i8 to f32
    linalg.yield %2 : f32
  } -> tensor<3xf32>
  return %1 : tensor<3xf32>
}

// stablehlo backend
func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
  %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xui8>
  %0 = stablehlo.convert %arg0 : (tensor<3xui8>) -> tensor<3xf32>
  return %0 : tensor<3xf32>
}
```

  3. Fix the stablehlo and linalg conversions accordingly.
- 89f7d24 [Bazel] Fix bazel deps (llvm#3414)
  llvm#3367 and llvm#3364 introduced new dependencies that caused the Bazel workflow (https://github.com/llvm/torch-mlir/actions/workflows/bazelBuildAndTest.yml) to fail. These need to be fixed in Bazel.
- 35dd8c5 [ONNX] Add OnnxToTorch Lowering for MaxUnpool op (llvm#3413)
  This commit also adds the Torch declarations for the aten.max_unpool2d and aten.max_unpool3d ops. The TorchToLinalg lowering for these will be added in a follow-up commit. Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
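  For orientation, max unpooling inverts a max pool using the indices that the pooling op records; a minimal PyTorch illustration with arbitrary shapes:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
pooled, indices = F.max_pool2d(x, kernel_size=2, return_indices=True)
# max_unpool2d scatters the pooled maxima back to the positions recorded
# in `indices` and fills the remaining positions with zeros.
unpooled = F.max_unpool2d(pooled, indices, kernel_size=2)
print(unpooled.shape)  # torch.Size([1, 1, 4, 4])
```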
- 661be2d [MLIR][Torch] Add TorchToLinalg lowering for AtenAvgPool3dOp (llvm#3030)
  This commit also fixes the average pool op's failing test for the OnnxToLinalg lowering. Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
- d59d0b6
Commits on Jun 6, 2024
- 72837fb build: manually update PyTorch version (llvm#3340)
  Set PyTorch and TorchVision versions to the nightly release 2024-05-14. Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Commits on Jun 7, 2024
- 431d98b [Stablehlo] Add lowering of GridSampler Op (llvm#3084) (Xinyu Yang)
  Inspired by PyTorch decompositions.py; see https://github.com/pytorch/pytorch/blob/ec58f1f74ebcec744d2ab90ad34abd09c1018e92/torch/_decomp/decompositions.py#L3923-L4086. Only paddingMode=0 or 1 and interpolationMode=0 or 1 are supported.
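  For context, a minimal PyTorch grid_sample call; mapping interpolationMode 0/1 to 'bilinear'/'nearest' and paddingMode 0/1 to 'zeros'/'border' is the usual aten convention and is assumed here:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
# Sampling grid of normalized coordinates in [-1, 1], shape (N, H_out, W_out, 2).
grid = torch.rand(1, 3, 3, 2) * 2 - 1
out = F.grid_sample(x, grid, mode="bilinear", padding_mode="zeros", align_corners=False)
print(out.shape)  # torch.Size([1, 1, 3, 3])
```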
- d0a818a Representing Symbolic Shape Expressions in Torch Dialect (llvm#3372)
  Torch dialect with symbolic shape expressions:

```mlir
module {
  func.func @main(%arg0: !torch.vtensor<[?,?,3],f32>, %arg1: !torch.vtensor<[?,?,3],f32>) -> !torch.vtensor<[?,?,3],f32> {
    %0 = torch.symbolic_int "s0" {min_val = 5, max_val = 10} : !torch.int
    %1 = torch.symbolic_int "s1" {min_val = 0, max_val = 100} : !torch.int
    %2 = torch.symbolic_int "s3" {min_val = 0, max_val = 50} : !torch.int
    torch.bind_symbolic_shape %arg0, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %arg1, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>
    %3 = torch.aten.tanh %arg0 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %3, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>
    %4 = torch.aten.sigmoid %arg1 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %4, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>
    %5 = torch.prim.ListConstruct %3, %3, %4 : (!torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>) -> !torch.list<vtensor>
    %int1 = torch.constant.int 1
    %6 = torch.aten.cat %5, %int1 : !torch.list<vtensor>, !torch.int -> !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %6, [%0, %1, %2], #affine_map<()[s0, s1, s2] -> (s0, s1 * 2 + s2, 3)> : !torch.vtensor<[?,?,3],f32>
    return %6 : !torch.vtensor<[?,?,3],f32>
  }
}
```

  For reference, this is the TorchDynamo exported program with symbolic shape expressions that the above Torch dialect program is imported from:

```py
ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, x: "f32[s0, s1, 3]", y: "f32[s0, s3, 3]"):
            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:31 in forward, code: a = torch.tanh(x)
            tanh: "f32[s0, s1, 3]" = torch.ops.aten.tanh.default(x);  x = None

            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:32 in forward, code: b = torch.sigmoid(y)
            sigmoid: "f32[s0, s3, 3]" = torch.ops.aten.sigmoid.default(y);  y = None

            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:33 in forward, code: return torch.cat((a, a, b), dim=1)
            cat: "f32[s0, 2*s1 + s3, 3]" = torch.ops.aten.cat.default([tanh, tanh, sigmoid], 1);  tanh = sigmoid = None
            return (cat,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='y'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='cat'), target=None)])
Range constraints: {s0: ValueRanges(lower=5, upper=10, is_bool=False), s1: ValueRanges(lower=0, upper=100, is_bool=False), s3: ValueRanges(lower=0, upper=50, is_bool=False)}
```

  Huge credit to @stellaraccident for the inputs that helped evaluate the various design options and arrive at the representation of choice.
  - [x] Op definitions for symbolic_int and bind_symbolic_shape ops
  - [x] fx_importer updates to import range constraints + create symbolic_int ops
  - [x] fx_importer changes for AffineMapAttr building + adding bind_symbolic_shape ops
  - [x] Custom printer/parser for inlined AffineMap expressions in mlir assembly
  - [x] Dialect lit test
  - [x] fx_importer python lit tests
  - [ ] Cleanup pass to remove these ops (can add in a follow-on)
- 94838ca [Bazel] Add BuiltinDialectTdFiles dep to MLIRTorchOpsIncGen (llvm#3430)
  This is needed after llvm#3372.
- 1c2778d [ONNX] Conv op adds support for asymmetric padding (llvm#3426)
  Supports asymmetric padding by performing a torch.nn.functional.pad on the input before performing the convolution. Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
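  A minimal PyTorch sketch of the same technique, explicit asymmetric padding followed by an unpadded convolution (shapes and values are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
# Asymmetric padding (left=0, right=1, top=0, bottom=1) applied explicitly,
# since the convolution op itself only accepts symmetric padding.
x_padded = F.pad(x, (0, 1, 0, 1))
y = F.conv2d(x_padded, w, padding=0)
print(y.shape)  # torch.Size([1, 8, 31, 31])
```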
- 1a9c0a3 [Onnx] Add Onnx->Torch lowering for Onnx.Shrink op (llvm#3385)
  Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
- f794582
- 7f188eb Add f8 types to fx importer (llvm#3434)
  Adds the dtype mappings that were missing when tracing float8 types.
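  For context, a minimal illustration of the PyTorch float8 dtypes involved; arithmetic support for them is limited, so values are normally upcast before computing (assumes a recent PyTorch that ships the float8 dtypes):

```python
import torch

x = torch.randn(4).to(torch.float8_e4m3fn)  # cast down to an f8 dtype
print(x.dtype)                              # torch.float8_e4m3fn
y = x.to(torch.float32)                     # upcast again for computation
```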
- 75af64f [torch] Add support for f8 types for linalg conversion (llvm#3436)
  The linalg conversion requires type mappings for the f8 types.
Commits on Jun 8, 2024
- 689efc8 [Torch] fix toBuiltinTensor() (llvm#3415)
  * Let `toBuiltinTensor()` reflect the original dtype of `!torch.vtensor`.
  * Backends handle the dtype conversion themselves.
- d35b6b4 [ONNX] Add OnnxToTorch Lowering for Sequence Ops (llvm#3425)
  This commit adds the lowering for the SequenceAt, SequenceEmpty, SequenceInsert, and SequenceErase ops. Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Commits on Jun 9, 2024
- 5bc6264 [ONNX] Lower Onnx.Concat lowering version (llvm#3437)
  Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
- 7e0e23c Test custom op import with symbolic shapes (llvm#3431)
  Tests the basic constructs of registering a custom op and its abstract implementations (with FakeTensors) in Python, going through TorchDynamo export, followed by importing the shape expressions into the Torch dialect. Also fixes the importer, where the symbolic bind op insertion was previously not gated in one place.
Commits on Jun 10, 2024
- d77bab3 [torch-mlir][sparse] re-enable all sparse tests (llvm#3444)
  This fixes llvm#3418.
- e07a0bf onnx.resize: Add support for coordTfMode "half_pixel" (llvm#3441)
  half_pixel is also the default coordinate transformation mode used by ONNX; see https://onnx.ai/onnx/operators/onnx__Resize.html
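  For reference, the half_pixel mode maps each output index back to a fractional input coordinate as (x_resized + 0.5) / scale - 0.5; a tiny sketch of that transform (the function name is illustrative):

```python
def half_pixel_source_coord(x_resized: float, scale: float) -> float:
    # ONNX Resize, coordinate_transformation_mode="half_pixel".
    return (x_resized + 0.5) / scale - 0.5

# Upscaling a length-4 axis to length-8 (scale = 2.0):
print(half_pixel_source_coord(0, 2.0))  # -0.25
print(half_pixel_source_coord(7, 2.0))  # 3.25
```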
Commits on Jun 12, 2024
- 7cd3368 [ONNX] Fix resize ceil numerics and add half_pixel_symmetric support (llvm#3443)
  This patch fixes several failing tests in our external test suite (https://github.com/nod-ai/SHARK-TestSuite/tree/main/iree_tests/onnx/node/generated) and addresses some of the issues discussed in llvm#3420.
- de28c85 [ONNX] add int16 quantization support (llvm#3446)
  There is currently no int16 quantization support in torch. This patch adds a new MLIR type corresponding to the missing "torch.qint16" type and enables lowering of quantization-related ONNX ops using int16 types. In follow-up patches, the custom quantization logic for ops like aten.matmul/aten.mm/aten.convolution may need to be revisited to allow support for qint16. The passes in FuseQuantizedOps.cpp may also need slight modifications.
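  For orientation only, the affine quantize/dequantize arithmetic that int16 quantization ultimately expresses; this is a generic sketch, not the code added in the patch, and the helper names are illustrative:

```python
import torch

def quantize_to_int16(x: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    # q = clamp(round(x / scale) + zero_point, int16 range)
    q = torch.round(x / scale) + zero_point
    return q.clamp(-32768, 32767).to(torch.int16)

def dequantize_from_int16(q: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    # Inverse affine mapping back to floating point.
    return (q.to(torch.float32) - zero_point) * scale
```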
- c0eb6d8 [ONNX] add some args to the onnx importer to assist shape_inference (llvm#3445)
  Adds the following arguments:
  - "--clear-domain": enabling this flag (default False) deletes the domain attribute from each node in the ONNX model before importing. Shape inference does not seem to work for ONNX ops in custom domains. In the rare case when these ops have a corresponding counterpart in base ONNX, enabling this flag might allow shape inference to work properly.
  - "--opset-version": allows setting the opset version manually. This causes the importer to attempt to update the opset_version of the ONNX model before importing. Newer opset versions sometimes have more robust shape inference patterns.
- 41d04a8 [onnx] Resize supports default-valued attributes (llvm#3450)
  Handles ONNX exporters emitting default-valued attributes. Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
- ae6f5e8 [ONNX] Fix AveragePool attributes support (llvm#3235)
  The issue was found in nod-ai/SHARK-ModelDev#643.
  - [ONNX] Fix padding attributes for onnx.AveragePool
  - [Linalg] Add countIncludePad=false support for AtenAvgPool1/2dOp
  - [Linalg] Add an avg_pool2d countIncludePad=false e2e test
  - [Linalg] Fix conflict with AtenAvgPool3dOp
  - [Linalg] Fix e2e crash with AtenAvgPool1dOp
  - [Linalg] Add dynamic dim support for AtenAvgPool2dOp
  - [Linalg] Fix AvgPool2dDivisorOverrideModule crash
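  For context, countIncludePad decides whether the zero padding counts toward each window's averaging divisor; a minimal PyTorch illustration:

```python
import torch
import torch.nn.functional as F

x = torch.ones(1, 1, 4, 4)
# With count_include_pad=True, border windows divide by the full kernel area (9),
# so padded positions drag the average below 1; with False they divide only by
# the number of real elements, keeping the average at 1.
y_incl = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, count_include_pad=True)
y_excl = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, count_include_pad=False)
print(y_incl[0, 0, 0, 0].item(), y_excl[0, 0, 0, 0].item())  # ~0.444, 1.0
```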
Commits on Jun 13, 2024
- 77d7f64 Update to llvm/llvm-project@27ac46e6bea2 (2024-06-12) (llvm#3454)
  This requires bumping stablehlo at the same time.
Commits on Aug 28, 2024
- 75c2a81
- c639e26
- 7b01213
- 029673a
- 4a5fdf3
- ad1facc
- e698f4a
- accf7f6
- ca733c5
- f724438
- a22c27c
- 0ef5530
- 56770da
Commits on Aug 29, 2024
- 977b3a7
Commits on Sep 6, 2024
- fbb1cca
- 813abc3
Commits on Sep 9, 2024
- 7c5a142
- 077a2ee
- 23b2b30
- 5f167e7
- 4ffe137
Commits on Sep 11, 2024
- 2b86be6