[AutoBump] Merge with fixes of 77d7f644 (Jun 13, needs LLVM bump) (66) #303

Merged 58 commits on Sep 11, 2024

Commits on May 31, 2024

  1. [NFC] Fix member cast change to global for landing collision (llvm#3407)

    A PR landed while the move away from a deprecated cast function was in
    flight, causing a collision. Updated the corresponding lines so they pass.
    rsuderman authored May 31, 2024
    Commit 617b00b

Commits on Jun 3, 2024

  1. [Torch]Support conv_transpose1d and conv_transpose3d (llvm#3286)

    1. Support conv_transpose1d and conv_transpose3d
    2. Fix bugs in the convertTransposedConv function in
    lib/Conversion/TorchToStablehlo/Linear.cpp
    Xinyu Yang authored Jun 3, 2024
    Commit 23b5305
  2. [Torch] decompose AtenLerpTensorOp (llvm#3251)

    as title
    Xinyu Yang authored Jun 3, 2024
    Commit 267052d
  3. [Torch] Emit rrelu and decompose it (llvm#3250)

    as title
    Xinyu Yang authored Jun 3, 2024
    Commit 285b087
  4. [ONNX] Add OnnxToTorch lowering for SpaceToDepth op (llvm#3393)

    Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
    vivekkhandelwal1 authored Jun 3, 2024
    Commit 6382dbb
  5. [TorchToLinalg] add support for quantized group conv (llvm#3341)

    This addresses 7 of the model failures I'm seeing in the test suite. See
    [Shark-Turbine issue
    llvm#566](nod-ai/SHARK-ModelDev#566).
    
    The op `linalg.conv_2d_ngchw_gfchw_q` needs to be added upstream before
    merging this. See [llvm-project PR #92136](llvm/llvm-project#92136).
    
    A small additional expansion to operand quantization is included in this
    patch to address a model failure that occurs when unblocking the
    quantized group convolutions in one of these onnx models.
    zjgarvey authored Jun 3, 2024
    Commit 8995c90
  6. Update development.md to use ld.lld (llvm#3412)

    @kuhar mentioned in the previous PR that we should use ld.lld. I kept
    using ld because it worked with my LLD version.

    After updating to a newer LLD version, switching to ld.lld became necessary.
    renxida authored Jun 3, 2024
    Commit 948981a
  7. Fix reducesum onnx lit test to linalg lowering fails (llvm#3218)

    fixes nod-ai/SHARK-ModelDev#653
    
    ---------
    
    Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
    renxida and Xida Ren authored Jun 3, 2024
    Commit 11c3281
  8. Add conversion operation for bool resolved_literal (llvm#3410)

    Resolving `bool` literals can result in a type change to uint8. This
    needs to be converted back to the expected type before returning to the
    wrapped `torch` operators.
    rsuderman authored Jun 3, 2024
    Commit 0a6861b

Commits on Jun 4, 2024

  1. Link necessary op interface implementations (llvm#3364)

    This patch adds two `memref` passes to `torch-mlir-opt`, which already
    occur in the pass pipeline
    `torch-backend-to-linalg-on-tensors-backend-pipeline`. Additionally,
    necessary op interface external models are included to address issue
    llvm#3352.
    zjgarvey authored Jun 4, 2024
    Commit 56d21cb
  2. [Stablehlo] support uint8 (llvm#3367)

    Support lowering unsigned integer type to stablehlo as discussed in
    llvm#2184.
    
    The things I do in this PR:
    1. create `setupBackendTypeConversionForStablehlo()`,
    `createFuncBackendTypeConversionForStablehloPass` and
    `createFinalizingBackendTypeConversionForStablehloPass`.
    2. remove `InferTypeOpInterface` from `torch_c.to_builtin_tensor`,
    because its result type differs between the linalg backend and the
    stablehlo backend:
    ```mlir
    // linalg backend
    func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
        %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xi8>
        %0 = tensor.empty() : tensor<3xf32>
        %1 = linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel"]} ins(%c : tensor<3xi8>) outs(%0 : tensor<3xf32>) {
        ^bb0(%in: i8, %out: f32):
          %2 = arith.uitofp %in : i8 to f32
          linalg.yield %2 : f32
        } -> tensor<3xf32>
        return %1 : tensor<3xf32>
    }
    // stablehlo backend
    func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
        %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xui8>
        %0 = stablehlo.convert %c : (tensor<3xui8>) -> tensor<3xf32>
        return %0 : tensor<3xf32>
    }
    ```
    3. fix the stablehlo and linalg conversions accordingly
    qingyunqu authored Jun 4, 2024
    Commit 50f7103
  3. [Bazel] Fix bazel deps (llvm#3414)

    llvm#3367 and llvm#3364 introduced new dependencies, causing the [Bazel
    workflow](https://github.com/llvm/torch-mlir/actions/workflows/bazelBuildAndTest.yml)
    to fail. These need to be fixed in Bazel.
    penguin-wwy authored Jun 4, 2024
    Commit 89f7d24
  4. [ONNX] Add OnnxToTorch Lowering for MaxUnpool op (llvm#3413)

    This commit also adds the Torch declaration for aten.max_unpool2d and
    aten.max_unpool3d op. The TorchToLinalg lowering for the same will be
    added in a follow-up commit.
    
    Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
    vivekkhandelwal1 authored Jun 4, 2024
    Commit 35dd8c5
  5. [MLIR][Torch] Add TorchToLinalg lowering for AtenAvgPool3dOp (llvm#3030)

    This commit also fixes the average pool op's test failing for the
    OnnxToLinalg lowering.
    
    Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
    vivekkhandelwal1 authored Jun 4, 2024
    Commit 661be2d
  6. Commit d59d0b6

Commits on Jun 6, 2024

  1. build: manually update PyTorch version (llvm#3340)

    Set PyTorch and TorchVision version to nightly release 2024-05-14.
    
    Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
    vivekkhandelwal1 authored Jun 6, 2024
    Commit 72837fb

Commits on Jun 7, 2024

  1. [Stablehlo] Add lowering of GridSampler Op (llvm#3084)

    Inspired by PyTorch decompositions.py.
    See
    https://github.com/pytorch/pytorch/blob/ec58f1f74ebcec744d2ab90ad34abd09c1018e92/torch/_decomp/decompositions.py#L3923-L4086
    Only supports paddingMode=0 or 1 and interpolationMode=0 or 1.
    Xinyu Yang authored Jun 7, 2024
    Commit 431d98b
  2. Representing Symbolic Shape Expressions in Torch Dialect (llvm#3372)

    Torch Dialect with symbolic shape expressions:
    ```ll
    module {                                                                                                                                                                                                     
      func.func @main(%arg0: !torch.vtensor<[?,?,3],f32>, %arg1: !torch.vtensor<[?,?,3],f32>) -> !torch.vtensor<[?,?,3],f32> {                                                                                   
        %0 = torch.symbolic_int "s0" {min_val = 5, max_val = 10} : !torch.int                                                                                                                                    
        %1 = torch.symbolic_int "s1" {min_val = 0, max_val = 100} : !torch.int                                                                                                                                   
        %2 = torch.symbolic_int "s3" {min_val = 0, max_val = 50} : !torch.int                                                                                                                                    
        
        torch.bind_symbolic_shape %arg0, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>                                                                                          
        torch.bind_symbolic_shape %arg1, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>                                                                                          
        
        %3 = torch.aten.tanh %arg0 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>                                                                                                                  
        torch.bind_symbolic_shape %3, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>                                                                                             
        
        %4 = torch.aten.sigmoid %arg1 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>                                                                                                               
        torch.bind_symbolic_shape %4, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>                                                                                             
        
        %5 = torch.prim.ListConstruct %3, %3, %4 : (!torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>) -> !torch.list<vtensor>                                               
        %int1 = torch.constant.int 1                                                                                                                                                                             
        %6 = torch.aten.cat %5, %int1 : !torch.list<vtensor>, !torch.int -> !torch.vtensor<[?,?,3],f32>                                                                                                          
        torch.bind_symbolic_shape %6, [%0, %1, %2], #affine_map<()[s0, s1, s2] -> (s0, s1 * 2 + s2, 3)> : !torch.vtensor<[?,?,3],f32>                                                                            
        
        return %6 : !torch.vtensor<[?,?,3],f32>                                                                                                                                                                  
      }                                                                                                                                                                                                          
    }              
    ```
    
    For reference, this is the TorchDynamo exported program with symbolic
    shape expressions that the above Torch dialect program is imported from:
    ```py
    ExportedProgram:                                                                                                                                                                                             
        class GraphModule(torch.nn.Module):                                                                                                                                                                      
            def forward(self, x: "f32[s0, s1, 3]", y: "f32[s0, s3, 3]"):                                                                                                                                         
                # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:31 in forward, code: a = torch.tanh(x)                                        
                tanh: "f32[s0, s1, 3]" = torch.ops.aten.tanh.default(x);  x = None                                                                                                                               
                                                                                                                                                                                                                 
                # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:32 in forward, code: b = torch.sigmoid(y)                                     
                sigmoid: "f32[s0, s3, 3]" = torch.ops.aten.sigmoid.default(y);  y = None                                                                                                                         
                                                                                                                                                                                                                 
                # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:33 in forward, code: return torch.cat((a, a, b), dim=1)                       
                cat: "f32[s0, 2*s1 + s3, 3]" = torch.ops.aten.cat.default([tanh, tanh, sigmoid], 1);  tanh = sigmoid = None                                                                                      
                return (cat,)                                                                                                                                                                                    
                                                                                                                                                                                                                 
    Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='y'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='cat'), target=None)])                                               
    Range constraints: {s0: ValueRanges(lower=5, upper=10, is_bool=False), s1: ValueRanges(lower=0, upper=100, is_bool=False), s3: ValueRanges(lower=0, upper=50, is_bool=False)} 
    ```
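    As a rough sketch of where such a program comes from, the module below is an
    assumption reconstructed from the inline `code:` comments and range
    constraints in the exported program above (the actual test lives in
    test/python/fx_importer/symbolic_shape_expr_test.py and may differ in
    details such as class name and example shapes):
    ```py
    import torch

    class TanhSigmoidCat(torch.nn.Module):
        def forward(self, x, y):
            a = torch.tanh(x)
            b = torch.sigmoid(y)
            return torch.cat((a, a, b), dim=1)

    # Symbolic dims chosen to mirror the range constraints above (s0 in [5, 10], etc.).
    s0 = torch.export.Dim("s0", min=5, max=10)
    s1 = torch.export.Dim("s1", max=100)
    s3 = torch.export.Dim("s3", max=50)
    prog = torch.export.export(
        TanhSigmoidCat(),
        (torch.randn(6, 20, 3), torch.randn(6, 10, 3)),
        dynamic_shapes={"x": {0: s0, 1: s1}, "y": {0: s0, 1: s3}},
    )
    print(prog)  # ExportedProgram with symbolic shape expressions as shown above
    ```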
    
    Huge credit to @stellaraccident for the inputs that helped evaluate the
    various design options and arrive at the representation of choice.
    
    
    - [x] Op definitions for symbolic_int and bind_symbolic_shape ops
    - [x] fx_importer updates to import range constraints + create
    symbolic_int ops
    - [x] fx_importer changes for AffineMapAttr building + adding
    bind_symbolic_shape ops
    - [x] custom printer/parser for inlined AffineMap expressions in mlir
    assembly
    - [x] Dialect lit test
    - [x] fx_importer python lit tests
    - [ ] Cleanup pass to remove these ops (can add in a follow-on)
    sjain-stanford authored Jun 7, 2024
    Commit d0a818a
  3. Commit 94838ca
  4. [ONNX] Conv op adds support for asymmetric padding. (llvm#3426)

    Supports asymmetric padding by performing a torch.nn.functional.pad on
    the input before performing the convolution.
    
    Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
    sjarus authored Jun 7, 2024
    Commit 1c2778d
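    To illustrate the pad-then-convolve approach described above, here is a
    minimal sketch (not the patch itself; the helper name is hypothetical, and
    it assumes ONNX-style 2-D `pads` ordered as [top, left, bottom, right]):
    ```py
    import torch
    import torch.nn.functional as F

    def conv2d_asymmetric(x, weight, bias, pads, stride=1):
        # Assumed ONNX-style pads: [top, left, bottom, right].
        top, left, bottom, right = pads
        # F.pad orders its 2-D spatial padding as (left, right, top, bottom).
        x = F.pad(x, (left, right, top, bottom))
        # With the input pre-padded, the convolution itself runs with zero padding.
        return F.conv2d(x, weight, bias, stride=stride, padding=0)
    ```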
  5. [Onnx] Add Onnx->Torch lowering for Onnx.Shrink Op (llvm#3385)

    Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
    vivekkhandelwal1 authored Jun 7, 2024
    Commit 1a9c0a3
  6. Commit f794582
  7. Add f8 types to fx importer (llvm#3434)

    Missing types for tracing float8 types.
    rsuderman authored Jun 7, 2024
    Commit 7f188eb
  8. [torch] Add support for f8 types for linalg conversion (llvm#3436)

    Linalg conversion requires mapping for f8 types
    rsuderman authored Jun 7, 2024
    Commit 75af64f

Commits on Jun 8, 2024

  1. [Torch] fix toBuiltinTensor() (llvm#3415)

    * Let `toBuiltinTensor()` reflect the original dtype of
    `!torch.vtensor`.
    * Backends handle dtype conversion themselves.
    qingyunqu authored Jun 8, 2024
    Commit 689efc8
  2. [ONNX] Add OnnxToTorch Lowering for Sequence Ops (llvm#3425)

    This commit adds the lowering for the SequenceAt, SequenceEmpty,
    SequenceInsert, and SequenceErase ops.
    
    Signed-Off By: Vivek Khandelwal<vivekkhandelwal1424@gmail.com>
    vivekkhandelwal1 authored Jun 8, 2024
    Commit d35b6b4

Commits on Jun 9, 2024

  1. [ONNX] Lower Onnx.Concat lowering version (llvm#3437)

    Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
    vivekkhandelwal1 authored Jun 9, 2024
    Commit 5bc6264
  2. Test custom op import with symbolic shapes (llvm#3431)

    Tests the basic constructs of registering a custom op and its abstract
    implementations (with FakeTensors) in python, going through TorchDynamo
    export, followed by importing the shape expressions in the Torch
    dialect.
    
    Also fixes the importer, where previously the symbolic bind op insertion
    was not gated in one place.
    sjain-stanford authored Jun 9, 2024
    Commit 7e0e23c

Commits on Jun 10, 2024

  1. [torch-mlir][sparse] re-enable all sparse tests (llvm#3444)

    This fixes the following issue: llvm#3418
    aartbik authored Jun 10, 2024
    Commit d77bab3
  2. onnx.resize: Add support for coordTfMode "half_pixel" (llvm#3441)

    half_pixel is also the default mode used by ONNX, see
    https://onnx.ai/onnx/operators/onnx__Resize.html
    mgehre-amd authored Jun 10, 2024
    Commit e07a0bf
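    For reference, a minimal sketch of how the "half_pixel" coordinate
    transformation maps an output index back to an input coordinate, following
    the ONNX Resize spec linked above (the function name is just illustrative):
    ```py
    def half_pixel_source_coord(x_resized: float, scale: float) -> float:
        # ONNX Resize, coordinate_transformation_mode = "half_pixel":
        #   x_original = (x_resized + 0.5) / scale - 0.5
        return (x_resized + 0.5) / scale - 0.5
    ```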

Commits on Jun 12, 2024

  1. [ONNX] Fix resize ceil numerics and add half_pixel_symmetric support (llvm#3443)

    This patch fixes several failing tests in our [external test
    suite](https://github.com/nod-ai/SHARK-TestSuite/tree/main/iree_tests/onnx/node/generated),
    and addresses some of the issues discussed in llvm#3420.
    zjgarvey authored Jun 12, 2024
    Commit 7cd3368
  2. [ONNX] add int16 quantization support (llvm#3446)

    There is currently no int16 quantization support in torch. This patch
    adds a new mlir type to correspond to the missing "torch.qint16" type,
    and enables lowering of quantization-related onnx ops using int16 types.
    
    In follow-up patches, custom quantization logic for ops like
    aten.matmul/aten.mm/aten.convolution may need to be revisited to allow
    support for qint16. The passes in FuseQuantizedOps.cpp may also need
    slight modifications.
    zjgarvey authored Jun 12, 2024
    Commit de28c85
  3. [ONNX] add some args to the onnx importer to assist shape_inference (llvm#3445)
    
    Adds the following arguments:
    - "--clear-domain": enabling this flag (default False) will delete the
    domain attribute from each node in the onnx model before importing.
    Shape inference does not seem to work for onnx ops in custom domains. In
    the rare case when these ops have a corresponding counterpart in base
    onnx, enabling this flag might allow shape inference to work properly.
    - "--opset-version": allows setting the opset version manually. This
    will cause the importer to attempt to update the opset_version of the
    onnx model before importing. Newer opset versions sometimes have more
    robust shape inference patterns.
    zjgarvey authored Jun 12, 2024
    Commit c0eb6d8
  4. [onnx] Resize supports default-valued attributes (llvm#3450)

    Handles onnx exporters emitting default-valued attributes.
    
    Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
    sjarus authored Jun 12, 2024
    Commit 41d04a8
  5. [ONNX] Fix AveragePool attributes support (llvm#3235)

    Issues were found here: nod-ai/SHARK-ModelDev#643
        - [ONNX] Fix padding attributes for onnx.AveragePool
        - [Linalg] Add countIncludePad false support for AtenAvgPool1/2dOp
        - [Linalg] Add an avg_pool2d countIncludePad False e2e tests
        - [Linalg] Fix conflict with AtenAvgPool3dOp
        - [Linalg] Fix e2e crash with AtenAvgPool1dOp
        - [Linalg] Add dynamic dim support for AtenAvgPool2dOp
        - [Linalg] Fix AvgPool2dDivisorOverrideModule crash
    AmosLewis authored Jun 12, 2024
    Commit ae6f5e8

Commits on Jun 13, 2024

  1. Update to llvm/llvm-project@27ac46e6bea2 (2024-6-12) (llvm#3454)

    This requires bumping stablehlo at the same time.
    antiagainst authored Jun 13, 2024
    Commit 77d7f64

Commits on Aug 28, 2024

  1. Commit 75c2a81
  2. Commit c639e26
  3. Update xfail

    mgehre-amd committed Aug 28, 2024
    Commit 7b01213
  4. Commit 029673a
  5. Commit 4a5fdf3
  6. Commit ad1facc
  7. Commit e698f4a
  8. Commit accf7f6
  9. Commit ca733c5
  10. Commit f724438
  11. Commit a22c27c
  12. Commit 0ef5530
  13. Commit 56770da

Commits on Aug 29, 2024

  1. Update LLVM

    mgehre-amd committed Aug 29, 2024
    Commit 977b3a7

Commits on Sep 6, 2024

  1. Commit fbb1cca
  2. Commit 813abc3

Commits on Sep 9, 2024

  1. Commit 7c5a142
  2. Commit 077a2ee
  3. Commit 23b2b30
  4. Commit 5f167e7
  5. Commit 4ffe137

Commits on Sep 11, 2024

  1. Commit 2b86be6