[labs] lie_closure_dense using dense matrices #6371
Conversation
Hello. You may have forgotten to update the changelog!
Codecov Report
Attention: Patch coverage is …

```
@@            Coverage Diff             @@
##        labs_sandbox    #6371      +/-   ##
================================================
- Coverage      99.39%   99.27%   -0.13%
================================================
  Files            447      449       +2
  Lines          42365    42419      +54
================================================
  Hits           42110    42110
- Misses           255      309      +54
```

☔ View full report in Codecov by Sentry.
Here goes my nitpicking :)
pennylane/labs/lie/lie_closure.py
print(f"epoch {epoch+1} of lie_closure_dense, DLA size is {new_length}") | ||
|
||
# compute all commutators | ||
m0m1 = np.einsum("aij,bjk->abik", vspace, gens) |
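For context, here is a minimal, self-contained sketch of what this contraction computes and how the batched products turn into commutators; the shapes and the Hermiticity of the inputs are my assumptions, not something the diff establishes:

```python
import numpy as np

# Sketch only: assume vspace and gens are stacks of Hermitian matrices
# with shapes (n, d, d) and (m, d, d).
rng = np.random.default_rng(42)

def random_hermitian(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

vspace = np.stack([random_hermitian(4) for _ in range(3)])
gens = vspace.copy()

# Batched products: m0m1[a, b] = vspace[a] @ gens[b]
m0m1 = np.einsum("aij,bjk->abik", vspace, gens)

# For Hermitian A and B, (AB)^dagger = BA, so a single conjugate
# transpose of the last two axes yields all commutators [A, B] = AB - BA:
commutators = m0m1 - m0m1.conj().transpose((0, 1, 3, 2))

assert np.allclose(commutators[0, 1], vspace[0] @ gens[1] - gens[1] @ vspace[0])
```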
Why do we compute the commutators between `vspace` and `gens`, and not between `vspace` and `vspace`? 🤔 `gens` should just be a redundant version of `vspace`, no?
I also was thinking: we're re-computing commutators if we take the full arrays here, right? The following skips the pairs of commutators that we already computed earlier:

```diff
- m0m1 = np.einsum("aij,bjk->abik", vspace, gens)
+ m0m1 = np.einsum("aij,bjk->abik", vspace, vspace[old_length:])
```
Also, we should explore whether we want `tensordot` or `einsum`, I suppose :)
I got defeated by `tensordot` and eventually caved to `einsum` 😢
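For the record, I believe the same contraction can be written with `tensordot`, though it leaves the batch axes in the wrong order and needs an extra transpose, which may be why it feels unwieldy here (an illustrative equivalence, not the PR's code):

```python
import numpy as np

vspace = np.random.rand(3, 4, 4)
gens = np.random.rand(5, 4, 4)

# einsum version: out[a, b] = vspace[a] @ gens[b]
via_einsum = np.einsum("aij,bjk->abik", vspace, gens)

# tensordot contracts axis 2 of vspace with axis 1 of gens, producing
# axes (a, i, b, k); a transpose restores the (a, b, i, k) layout.
via_tensordot = np.tensordot(vspace, gens, axes=([2], [1])).transpose((0, 2, 1, 3))

assert np.allclose(via_einsum, via_tensordot)
```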
> I also was thinking: we're re-computing commutators if we take the full arrays here, right? The following skips the pairs of commutators that we already computed earlier:

Oh yeah, there definitely is redundancy, good point! In the old implementation we had `combinations(vspace, 2)`.

I'm still tempted to do `vspace[old_length:], gens` because that "naturally" corresponds to the nesting depth, which enables a user to just run the closure up to a certain level of nesting through `max_iterations`. What do you think? There is no redundancy, but it could be made faster by including more combinations at once.
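To make the "nesting depth per iteration" point concrete, here is a rough, runnable sketch of the loop structure as I understand it; `_hermitian_basis` is replaced by a toy stand-in, and the exact signatures are assumptions:

```python
import numpy as np

def _hermitian_basis(stack, candidates, tol=1e-8):
    """Toy stand-in for the real helper: greedily keep candidates that are
    linearly independent of the current basis (the real code will differ)."""
    basis = list(stack)
    for c in candidates:
        mat = np.stack([b.reshape(-1) for b in basis] + [c.reshape(-1)])
        if np.linalg.matrix_rank(mat, tol=tol) == len(basis) + 1:
            basis.append(c)
    return np.stack(basis)

# su(2) toy example: closing {X, Y} under commutators should add Z.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
gens = np.stack([X, Y])
vspace = gens.copy()

max_iterations = 5
old_length, epoch = 0, 0
while old_length < len(vspace) and epoch < max_iterations:
    new_length = len(vspace)
    # Only commute operators added in the previous pass against gens, so
    # that epoch k produces commutators of nesting depth exactly k.
    m0m1 = np.einsum("aij,bjk->abik", vspace[old_length:], gens)
    # -i[A, B] keeps the candidates Hermitian when A and B are Hermitian.
    coms = -1j * (m0m1 - m0m1.conj().transpose((0, 1, 3, 2)))
    vspace = _hermitian_basis(vspace, coms.reshape(-1, 2, 2))
    old_length, epoch = new_length, epoch + 1

print(len(vspace))  # 3: the closure picked up (a multiple of) Z
```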
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
There is also redundancy in that we are computing both `[A,B]` and `[B,A]`, but I couldn't figure out how to re-order things so that each pair is only computed once. I figured that's still outweighed by the ability to vectorize the commutators.
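For reference, this redundancy is exactly the antisymmetry of the commutator, `[B, A] = -[A, B]`, so only unordered pairs carry information. One vectorized way to keep a single copy of each pair, sketched with `np.triu_indices` (just an illustration, not the PR's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
vspace = rng.normal(size=(5, 3, 3))

# Index the strict upper triangle of the pair grid: each unordered
# pair (a, b) with a < b appears exactly once.
rows, cols = np.triu_indices(len(vspace), k=1)
coms = (
    np.einsum("aij,ajk->aik", vspace[rows], vspace[cols])
    - np.einsum("aij,ajk->aik", vspace[cols], vspace[rows])
)

# Check against the full (redundant) pair grid
grid = np.einsum("aij,bjk->abik", vspace, vspace)
grid_coms = grid - grid.transpose((1, 0, 2, 3))
assert np.allclose(coms, grid_coms[rows, cols])
```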
> I'm still tempted to do `vspace[old_length:], gens` because that "naturally" corresponds to the nesting depth, which enables a user to just run the closure up to a certain level of nesting through `max_iterations`. What do you think? There is no redundancy, but it could be made faster by including more combinations at once.

Ohhh, that is so neat, I had not thought of this.
Let's go for this and document that this might be a speedup left on the road for this feature.
> There is also redundancy in that we are computing both `[A,B]` and `[B,A]`, but I couldn't figure out how to re-order things so that each pair is only computed once. I figured that's still outweighed by the ability to vectorize the commutators.

If we use the `[old_length:]` slicing, I do not think that we have redundancy anymore :)
> Let's go for this and document that this might be a speedup left on the road for this feature.

@Qottmann Actually, could we do something like `vspace[old_length:], vspace[:initial_length]`? Then we keep the minimal basis for the initial set of operators but also the "nesting level <> iteration" correspondence.
> If we use the `[old_length:]` slicing, I do not think that we have redundancy anymore :)

Yes, you are right!

> Could we do something like `vspace[old_length:], vspace[:initial_length]`? Then we keep the minimal basis for the initial set of operators but also the "nesting level <> iteration" correspondence.

Is `initial_length` the length before the current "step" or the very first length? Can you elaborate a bit, I am not following yet.
It's the very first length. Your suggestion was to use `gens`, which has not been run through `_hermitian_basis` yet, right? So my suggestion is to use `_hermitian_basis(gens)`, which is coincidentally stored in `vspace[:initial_length]`, with `initial_length = len(vspace)` recorded before the `while` loop.
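Concretely, the proposal would look something like this (a sketch with toy 2×2 matrices; the surrounding loop code is assumed, not taken from the PR):

```python
import numpy as np

# Hypothetical setup: vspace already equals _hermitian_basis(gens), i.e.
# a minimal Hermitian basis of the generators (toy 2x2 matrices here).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
vspace = np.stack([X, Y])

initial_length = len(vspace)  # recorded once, before the while loop
old_length = 0

# Inside each closure iteration: commute only the operators added in the
# previous pass against the minimal initial basis, instead of against the
# possibly redundant gens.
m0m1 = np.einsum("aij,bjk->abik", vspace[old_length:], vspace[:initial_length])
coms = m0m1 - m0m1.conj().transpose((0, 1, 3, 2))
print(coms.shape)  # (len(vspace) - old_length, initial_length, 2, 2)
```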
Ah gotcha, yes! Makes sense.
We identified instances where computing commutators is significantly more efficient using their dense matrix representation. We thus provide a function to perform the Lie closure using the dense representation and make it an experimental `pennylane.labs` feature.
Computing the adjoint representation is to follow. From there the functionality is the same; it's just a different path to obtain the adjoint representation.
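For a rough idea of that step: given a dense basis of the closure, the adjoint representation can be read off from structure constants via trace inner products. A minimal sketch, assuming an orthogonal basis under the trace inner product (an illustration, not the planned implementation):

```python
import numpy as np

# Pauli basis of su(2), orthogonal under the trace inner product
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
basis = np.stack([X, Y, Z])

# All pairwise commutators [g_a, g_b]
prods = np.einsum("aij,bjk->abik", basis, basis)
coms = prods - prods.transpose((1, 0, 2, 3))

# Structure constants f[c, a, b] = tr(g_c [g_a, g_b]) / tr(g_c g_c);
# the adjoint-representation matrices are assembled from this tensor.
norms = np.einsum("cij,cji->c", basis, basis)
f = np.einsum("cki,abik->cab", basis, coms) / norms[:, None, None]

print(np.round(f[2], 6))  # f^Z_{XY} = 2j, f^Z_{YX} = -2j for the Paulis
```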
edit: pending completion of the parent branch to incorporate `pennylane.labs` functionality. [sc-74994]