
[aievec] Add AIEml intrinsic for matmul ops #684

Merged
merged 1 commit into from
Nov 8, 2023

Conversation

jsetoain (Collaborator)
No description provided.

@david-vc (Collaborator) left a comment:

@jsetoain : I request to generalize the new Operation so that it can model all matmul configs that the architecture supports. Thanks.

Pure,
AllTypesMatch<["acc", "result"]>
]>,
Arguments<(ins VectorOfShapeAndType<[4, 8], BF16>:$lhs,
A collaborator commented:

@jsetoain : I request to generalize this so that it can model all matmul configurations that are not convolutions, element-wise, or sparse.
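A generalized op along these lines would admit several operand shapes under a single definition. A hypothetical usage sketch (the op name, operand names, and assembly format are assumptions for illustration, not taken from this PR):

```mlir
// Hypothetical: one matmul op accepting any supported shape/type config.
// The shapes shown (4x8 x 8x4 bf16 into 4x4 f32) match the bf16 config
// discussed in this PR; the syntax is illustrative.
%acc1 = aievec.matmul %lhs, %rhs, %acc0
    : vector<4x8xbf16>, vector<8x4xbf16> into vector<4x4xf32>
```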

@jsetoain jsetoain changed the title [aievec] Add AIEml intrinsic for bf16/f32 types [aievec] Add AIEml intrinsic for matmul ops Oct 19, 2023
@jsetoain jsetoain force-pushed the add-bf16-matmul-intrinsic branch 2 times, most recently from c349f56 to 837e08a Compare October 20, 2023 10:06
@stephenneuendorffer stephenneuendorffer changed the base branch from vitis2023_2 to main October 23, 2023 18:27
@david-vc (Collaborator) left a comment:

This is the first AIEVec op that requires its operands to be 2-dimensional. I think this direction is worth pursuing.

Review threads on test/dialect/AIEVec/roundtrip.mlir (outdated, resolved)
include "mlir/IR/BuiltinTypes.td"
include "mlir/IR/OpBase.td"

def I4 : I<4>;
A collaborator commented:

Just curious: does only I4 need to be defined here?

@jsetoain (Collaborator, author) replied:

Yes, I8/I16/I32/I64 are all defined in "mlir/IR/OpBase.td" (moved somewhere else in the latest MLIR, by the way).
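For context, a sketch of the upstream definitions being referred to (illustrative; the exact upstream file layout varies across MLIR versions):

```tablegen
// Upstream OpBase.td already provides the common widths, roughly:
//   def I8  : I<8>;
//   def I16 : I<16>;
//   def I32 : I<32>;
//   def I64 : I<64>;
// I4 is not among them, hence the local definition in this patch:
def I4 : I<4>;
```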

class IsValidAIE2MatMulShapeAndType<string lhs, string rhs, string acc> :
PredOpTrait<lhs # " x " # rhs # " = " # acc # " is a valid AIE2 " #
"matrix-multiply and accumulate op",
Or<[VectorTypesMatch<lhs, AIE2MatMulOperand_4x16xi8,
A collaborator commented:

These checks and constraints look so nice!

@jsetoain jsetoain force-pushed the add-bf16-matmul-intrinsic branch 2 times, most recently from fc2451c to 260edb9 Compare November 2, 2023 19:38
@jsetoain jsetoain force-pushed the add-bf16-matmul-intrinsic branch 3 times, most recently from a622e0d to 4dd1644 Compare November 7, 2023 14:35

// Notice: These predicate definitions assume that the type has been verified to
// be a ShapedType
def AIE2MatMulOperand_4x16xi8 : VectorOfShapeAndType<[4, 16], I8>;
A collaborator commented:
Down the road we may support multiple architecture variants, all in the family "AIE", but the numeric suffix may be different. So my suggestion is to drop AIE2 from the name.

A contributor commented:

Yea I agree, something like SmallMatrix... maybe.

@makslevental (Contributor) left a comment:
LGTM (just some curious questions).


// Notice: These predicate definitions assume that the type has been verified to
// be a ShapedType
def AIE2MatMulOperand_4x16xi8 : VectorOfShapeAndType<[4, 16], I8>;
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Yea I agree, something like SmallMatrix... maybe.

Comment on lines 54 to 57
def AIE2MatMulOperand_16x8xi4 : VectorOfShapeAndType<[16, 8], I4>;
def AIE2MatMulOperand_8x8xi8 : VectorOfShapeAndType<[8, 8], I8>;
def AIE2MatMulOperand_8x4xi8 : VectorOfShapeAndType<[8, 4], I8>;
def AIE2MatMulOperand_8x4xbf16 : VectorOfShapeAndType<[8, 4], BF16>;
A contributor commented:

Is there a reason not to just "forward" the VectorOfShapeAndType<...> to the use site? That is, couldn't you use VectorOfShapeAndType<[4, 8], BF16> directly in the type constraint below?

@jsetoain (Collaborator, author) replied (Nov 7, 2023):

Progressive refinement. I used to have them separated by lhs/rhs/acc even, but it's true that at this point I might as well remove the alias.
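Dropping the alias would mean spelling the constraint out at the use site, roughly like this (operand names and the shape/type choices here are illustrative):

```tablegen
// Inline the shape/type constraints instead of going through named
// AIE2MatMulOperand_* aliases:
Arguments<(ins VectorOfShapeAndType<[4, 8], BF16>:$lhs,
               VectorOfShapeAndType<[8, 4], BF16>:$rhs,
               VectorOfShapeAndType<[4, 4], F32>:$acc)>
```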

Comment on lines +103 to +81
class ShapeDimsMatch<string lhs, int ld, string rhs, int rd> :
CPred<Shape<lhs>.result # "[" # ld # "] == " #
Shape<rhs>.result # "[" # rd # "]">;
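As a usage sketch, the predicate above could tie a matmul's contraction dimensions together (the trait wording and operand names are assumptions for illustration):

```tablegen
// Require lhs.shape[1] == rhs.shape[0], i.e. the K dimensions of a
// matmul agree between the two operands:
def MatMulDimsAgree :
    PredOpTrait<"contraction dimensions of lhs and rhs match",
                ShapeDimsMatch<"lhs", 1, "rhs", 0>>;
```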
A contributor commented:

I thought this existed upstream but I can't find it - good stuff for upstreaming!

@jsetoain (Collaborator, author) replied:

Not this one, but you just reminded me that "VectorOfShape" does exist in newer versions of MLIR (we were stuck on an old one for quite some time); I should get rid of that one.

@@ -6,4 +6,4 @@
# (c) Copyright 2022 Xilinx Inc.

add_mlir_dialect(AIEVecOps aievec)
add_mlir_doc(AIEVecOps AIEVecDialect ./ -gen-dialect-doc)
add_mlir_doc(AIEVecOps AIEVecDialect ./ -gen-dialect-doc -dialect=aievec)
A contributor commented:

Gonna have a collision when you rebase :)

@jsetoain (Collaborator, author) replied (Nov 7, 2023):

I know, but it wouldn't pass CI otherwise :-)

@jsetoain jsetoain merged commit 7114fb1 into Xilinx:main Nov 8, 2023
6 checks passed

4 participants