Add lowerings for mma, register and allocate #86

Merged (2 commits into iree-org:main, Aug 19, 2024)
Conversation

harsh-nod (Contributor):

This PR adds an mma unit test which lowers to vector.loads/stores and amdgpu.mfmas. It also supports shared memory promotion.

Signed-off-by: Harsh Menon <harsh@nod-labs.com>
Resolved review threads (outdated):
- shark_turbine/kernel/wave/wave.py
- lit_tests/kernel/wave/codegen.py
- shark_turbine/kernel/wave/codegen.py (3 threads)
- shark_turbine/kernel/wave/constraints.py
Signed-off-by: Harsh Menon <harsh@nod-labs.com>
emitter.emit(graph.get_root_graph())
emitter.finish()

if kwargs.get("canonicalize", False):
Contributor:
When do we not want to canonicalize?

Contributor Author:
You don't want to canonicalize when you would lose all your IR on canonicalization (because the ops have no uses). This happens on some of the other tests (you can try it by setting canonicalize=True on them).
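To make the point above concrete, here is a toy model of the dead-code elimination that canonicalization performs (this is NOT the real MLIR canonicalizer, just an illustrative sketch): an op with no side effects whose result is never used gets deleted, so a lit test that only builds values without storing them can lose its entire body.

```python
# Toy dead-code elimination: repeatedly drop side-effect-free ops whose
# result is never used as an operand, until a fixed point is reached.
def canonicalize_dce(ops):
    while True:
        used = {operand for op in ops for operand in op["operands"]}
        kept = [op for op in ops if op["side_effects"] or op["result"] in used]
        if len(kept) == len(ops):
            return kept
        ops = kept

# IR that only builds values, like some of the tests mentioned above:
ir_no_store = [
    {"name": "vector.load", "result": "%0", "operands": [], "side_effects": False},
    {"name": "amdgpu.mfma", "result": "%1", "operands": ["%0"], "side_effects": False},
]
# The mfma result is unused, then the load becomes unused: everything dies.

# Adding a store (a side-effecting use) anchors the whole chain:
ir_with_store = ir_no_store + [
    {"name": "vector.store", "result": None, "operands": ["%1"], "side_effects": True},
]
```

Running `canonicalize_dce(ir_no_store)` erases the IR under test, while `ir_with_store` survives intact, which is exactly why canonicalize stays opt-in for those tests.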

named_sequence = transform_d.NamedSequenceOp(
"__transform_main", [any_op_t()], []
)
with InsertionPoint(named_sequence.body):
Contributor:

Just curious: is building a TD-like structure the best way to set up canonicalization? I'd assume upstream would offer better API support for invoking canonicalization and/or other standard pattern sets.

Contributor Author:

Good question, and I was thinking about this myself. I think we can avoid TD by using other pattern-application APIs, but there could be some advantages to using TD for now (as it has support for other transforms like LICM). If we discover that we don't need that, we can always rewrite this without TD.
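For context, the Python builder in the snippet above (a `transform_d.NamedSequenceOp` named `__transform_main`) corresponds roughly to a transform-dialect script like the following. This is a hedged sketch: op names and syntax vary across MLIR versions, and the match on `func.func` is an assumption about where the patterns are applied.

```mlir
module attributes {transform.with_named_sequence} {
  transform.named_sequence @__transform_main(%root: !transform.any_op {transform.readonly}) {
    // Find the function payload op and apply the standard
    // canonicalization pattern set to it.
    %f = transform.structured.match ops{["func.func"]} in %root
        : (!transform.any_op) -> !transform.any_op
    transform.apply_patterns to %f {
      transform.apply_patterns.canonicalization
    } : !transform.any_op
    transform.yield
  }
}
```

The TD route also makes it easy to append further steps (e.g. loop-invariant code motion) to the same sequence later, which is the advantage mentioned above.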

Hardcode84 (Contributor) left a comment:

I will need to look at indexing more later, but let's merge it for now so we can make progress.

raise CodegenError("No hardware constraints found.")

result = None
for constraint in hardware_constraints:
Contributor:

We probably should validate len(hardware_constraints) == 1 and get rid of this loop.

Contributor Author:

Sure, will add this in a follow-on PR.
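The suggested validation could look roughly like this sketch. The class names mirror the snippets above (`CodegenError`, a hardware-constraint type from shark_turbine/kernel/wave/constraints.py), but the stand-in definitions and the helper name `get_hardware_constraint` are hypothetical, not the actual codebase API.

```python
# Stand-ins for the real exception and constraint classes:
class CodegenError(Exception):
    pass

class HardwareConstraint:
    pass

def get_hardware_constraint(constraints):
    """Require exactly one hardware constraint instead of looping over many."""
    hw = [c for c in constraints if isinstance(c, HardwareConstraint)]
    if not hw:
        raise CodegenError("No hardware constraints found.")
    if len(hw) > 1:
        raise CodegenError(f"Expected exactly one hardware constraint, got {len(hw)}.")
    return hw[0]
```

With this, the `for constraint in hardware_constraints:` loop collapses to a single lookup, and a misconfigured kernel with zero or multiple hardware constraints fails loudly at codegen time.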

@harsh-nod harsh-nod merged commit 344c65d into iree-org:main Aug 19, 2024
5 checks passed