
[labs] lie_closure_dense using dense matrices #6371

Draft · wants to merge 41 commits into base: master
Conversation

Contributor

@Qottmann Qottmann commented Oct 9, 2024

We identified instances where computing commutators is significantly more efficient using their dense matrix representation. We thus provide a function to perform the Lie closure using the dense representation and make it an experimental pennylane.labs feature.

Computing the adjoint representation is to follow. From there the functionality is the same; it's just a different path to obtain the adjoint representation.
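As a rough sketch of the dense approach (illustrative only; the function and variable names here are assumptions, not the PR's actual API), pairwise commutators of batches of dense matrices vectorize nicely with numpy:

```python
import numpy as np

def batched_commutators(a, b):
    """All pairwise commutators [a_i, b_j] = a_i @ b_j - b_j @ a_i
    for two stacks of square matrices, shapes (n, d, d) and (m, d, d)."""
    ab = np.einsum("aij,bjk->abik", a, b)  # a_i @ b_j
    ba = np.einsum("bij,ajk->abik", b, a)  # b_j @ a_i
    return ab - ba  # shape (n, m, d, d)

# toy check with Pauli matrices: [X, Z] = -2iY and [Y, Z] = 2iX
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
comms = batched_commutators(np.stack([X, Y]), np.stack([Z]))
```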

edit: pending completion of the parent branch to incorporate pennylane.labs functionality

[sc-74994]

Contributor

github-actions bot commented Oct 9, 2024

Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@Qottmann Qottmann changed the base branch from master to labs_sandbox October 10, 2024 08:16

codecov bot commented Oct 10, 2024

Codecov Report

Attention: Patch coverage is 0% with 54 lines in your changes missing coverage. Please review.

Project coverage is 99.27%. Comparing base (209b24b) to head (88fdf20).

Files with missing lines Patch % Lines
pennylane/labs/lie/lie_closure.py 0.00% 53 Missing ⚠️
pennylane/labs/lie/__init__.py 0.00% 1 Missing ⚠️
Additional details and impacted files
@@               Coverage Diff                @@
##           labs_sandbox    #6371      +/-   ##
================================================
- Coverage         99.39%   99.27%   -0.13%     
================================================
  Files               447      449       +2     
  Lines             42365    42419      +54     
================================================
  Hits              42110    42110              
- Misses              255      309      +54     

☔ View full report in Codecov by Sentry.

Contributor

@dwierichs dwierichs left a comment


Here goes my nitpicking :)

pennylane/labs/lie/lie_closure.py (5 outdated review threads, resolved)
print(f"epoch {epoch+1} of lie_closure_dense, DLA size is {new_length}")

# compute all commutators
m0m1 = np.einsum("aij,bjk->abik", vspace, gens)
Contributor


Why do we compute the commutators between vspace and gens, and not between vspace and vspace? 🤔 gens should just be a redundant version of vspace, no?

I also was thinking: We're re-computing commutators if we take the full guys here, right?
The following skips over the pairs of commutators that we already computed earlier:

Suggested change
m0m1 = np.einsum("aij,bjk->abik", vspace, gens)
m0m1 = np.einsum("aij,bjk->abik", vspace, vspace[old_length:])

Contributor


Also, we should explore whether we want tensordot or einsum, I suppose :)
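For reference, the contraction in question can be written either way; a quick equivalence check (a sketch with random stand-ins for vspace and gens, not the PR's code):

```python
import numpy as np

rng = np.random.default_rng(0)
vspace = rng.standard_normal((3, 4, 4))
gens = rng.standard_normal((2, 4, 4))

# einsum: out[a, b, i, k] = sum_j vspace[a, i, j] * gens[b, j, k]
via_einsum = np.einsum("aij,bjk->abik", vspace, gens)

# tensordot contracts axis 2 of vspace with axis 1 of gens,
# yielding axis order (a, i, b, k); moveaxis restores (a, b, i, k)
via_tensordot = np.moveaxis(np.tensordot(vspace, gens, axes=([2], [1])), 1, 2)
```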

Contributor Author


I got defeated by tensordot and eventually caved to einsum 😢

I also was thinking: We're re-computing commutators if we take the full guys here, right?
The following skips over the pairs of commutators that we already computed earlier:

Oh yeah there definitely is redundancy, good point! In the old implementation we had combinations(vspace, 2).
I'm still tempted to do vspace[old_length:], gens because that "naturally" corresponds to the nesting depth, which enables a user to just do the closure until a certain level of nesting through max_iterations. What do you think? There is no redundancy but it could be made faster by including more combinations at once.

Contributor Author

@Qottmann Qottmann Oct 10, 2024


There is also redundancy in that we are computing both [A,B] and [B,A], but I couldn't figure out how to re-order things to compute each only once. I figured that's still outweighed by the ability to vectorize the commutators.
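To spell out the redundancy (an aside, not the PR's code): antisymmetry [A, B] = -[B, A] means the strict upper triangle of the pair grid already contains every distinct commutator:

```python
import numpy as np

rng = np.random.default_rng(1)
ops = rng.standard_normal((4, 3, 3))  # stand-in for a batch of dense operators

# full (a, b) grid of commutators [ops[a], ops[b]]
comms = (np.einsum("aij,bjk->abik", ops, ops)
         - np.einsum("bij,ajk->abik", ops, ops))

# antisymmetry: comms[a, b] == -comms[b, a], so only a < b is needed
iu = np.triu_indices(len(ops), k=1)
unique = comms[iu]  # shape (6, 3, 3): the 4*3/2 distinct commutators
```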

Contributor


I'm still tempted to do vspace[old_length:], gens because that "naturally" corresponds to the nesting depth, which enables a user to just do the closure until a certain level of nesting through max_iterations. What do you think? There is no redundancy but it could be made faster by including more combinations at once.

Ohhh, that is so neat, I had not thought of this.
Let's go for this and document that this might be a speedup left on the road for this feature.

Contributor


There is also redundancy in that we are computing both [A,B] and [B,A], but I couldn't figure out how to re-order things to compute each only once. I figured that's still outweighed by the ability to vectorize the commutators.

If we use the [old_length:] slicing I do not think that we have redundancy anymore :)

Contributor


Let's go for this and document that this might be a speedup left on the road for this feature.

@Qottmann Actually, could we do something like vspace[old_length:], vspace[:initial_length]? Then we keep the minimal basis for the initial set of operators but also the "nesting level <> iteration" correspondence

Contributor Author

@Qottmann Qottmann Oct 10, 2024


If we use the [old_length:] slicing I do not think that we have redundancy anymore :)

yes you are right!

could we do something like vspace[old_length:], vspace[:initial_length]? Then we keep the minimal basis for the initial set of operators but also the "nesting level <> iteration" correspondence

Is initial_length the length before the current "step" or the very first length? Can you elaborate a bit? I am not following yet.

Contributor


It's the very first length.
Your suggestion was to use gens, which have not run through _hermitian_basis yet, right?
So my suggestion is to use _hermitian_basis(gens), which is coincidentally stored in vspace[:initial_length] with initial_length=len(vspace) before the while loop
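Putting the two slices together, one epoch of the scheme discussed here might look like the following (a sketch following the discussion; initial_length and old_length are the names used above, everything else is an assumption, not the merged code):

```python
import numpy as np

def epoch_commutators(vspace, old_length, initial_length):
    """Commutators of the operators added in the last epoch against the
    hermitian basis of the generators, stored in vspace[:initial_length]."""
    new = vspace[old_length:]            # operators added in the last epoch
    base = vspace[:initial_length]       # orthogonalized initial generators
    nb = np.einsum("aij,bjk->abik", new, base)  # new_a @ base_b
    bn = np.einsum("bij,ajk->abik", base, new)  # base_b @ new_a
    d = vspace.shape[-1]
    return (nb - bn).reshape(-1, d, d)

# toy run: generator basis {X}, with Y and Z added in a previous epoch
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
out = epoch_commutators(np.stack([X, Y, Z]), old_length=1, initial_length=1)
```

Since each epoch brackets only the newly added operators against the initial basis, max_iterations then directly bounds the commutator nesting depth, as noted above.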

Contributor Author


ah gotcha, yes! makes sense

pennylane/labs/lie/lie_closure.py (3 outdated review threads, resolved)
@dwierichs dwierichs marked this pull request as draft October 10, 2024 14:57
Base automatically changed from labs_sandbox to master October 10, 2024 20:59