-
I would like to compute the gradient of certain circuits the way it's done in PennyLane. For instance, creating a circuit of strongly entangling layers and computing the expectation value of a local observable:
Analogously, with quimb:
Up to this point, everything is in order. I would then like to obtain the gradient. This can be done with the parameter-shift method as follows:
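The original snippet is gone; as a self-contained illustration of the parameter-shift rule, here applied to an analytic stand-in for the circuit expectation rather than either library:

```python
import numpy as np

def expval(params):
    # stand-in for a circuit expectation value:
    # <Z> after RY(a) then RX(b) on |0> is cos(a) * cos(b)
    a, b = params
    return np.cos(a) * np.cos(b)

def parameter_shift_grad(f, params, shift=np.pi / 2):
    # two circuit evaluations per parameter: f(.. + s ..) and f(.. - s ..)
    grad = np.zeros_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = (f(plus) - f(minus)) / (2 * np.sin(shift))
    return grad

p = np.array([0.3, 0.7])
print(parameter_shift_grad(expval, p))  # exact for rotation gates
```

The cost is the point of the question: it takes two full contractions per parameter, which is what makes it inefficient compared with a differentiable backend.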
which works but is inefficient. I would thus like to get the gradient using a differentiable backend like JAX. I have tried using …, but I cannot get the correct value (unless all input parameters are the same) or the gradient (in any case). I am thus wondering how to use …
-
I haven't looked over your examples in detail, but my first thought would be that the vectorizer that `quimb` uses internally has no fixed relation to `jparams.ravel()`, other than the number of parameters, since it treats the tags and tensor network structure as the 'base' data structure. The process it follows is something like:
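The list describing that process did not survive either; a hypothetical sketch of the ravel/unravel round trip (not quimb's actual code) is:

```python
import numpy as np

def pack(arrays):
    # concatenate each selected tensor's raveled data into one flat vector
    return np.concatenate([np.ravel(a) for a in arrays])

def unpack(vec, shapes):
    # slice the flat vector back into arrays of the stored shapes
    out, i = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        out.append(vec[i:i + size].reshape(shape))
        i += size
    return out

arrays = [np.arange(6.0).reshape(2, 3), np.arange(4.0)]
flat = pack(arrays)
restored = unpack(flat, [a.shape for a in arrays])
```

The concatenation order is whatever order the tagged tensors are selected in, which is why the flat vector has no fixed relation to the user's own parameter array.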
Generally the point of the `TNOptimizer` is to optimize the parameters inplace, so user access … The easiest thing to do is just use jax or torch normally, and treat quimb simply as a way to dispatch all the operations, which are then traced through. The function … If it was helpful, one might add a method to …
-
Thanks for responding so quickly! As you pointed out, using … Let's take JAX, for example, and the following simple circuit. It applies two gates to one qubit and evaluates the expectation value of PauliZ. The …
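The code block here was lost too; a pure-JAX reconstruction of such a circuit, assuming `RY` and `RX` as the two gates (the original gate choice did not survive extraction):

```python
import jax
import jax.numpy as jnp

def ry(t):
    c, s = jnp.cos(t / 2), jnp.sin(t / 2)
    return jnp.array([[c, -s], [s, c]], dtype=jnp.complex64)

def rx(t):
    c, s = jnp.cos(t / 2), jnp.sin(t / 2)
    return jnp.array([[c, -1j * s], [-1j * s, c]], dtype=jnp.complex64)

def expval(params):
    # RY(a) then RX(b) on |0>, then <Z>; analytically cos(a) * cos(b)
    psi = jnp.array([1.0, 0.0], dtype=jnp.complex64)
    psi = rx(params[1]) @ (ry(params[0]) @ psi)
    z = jnp.array([[1.0, 0.0], [0.0, -1.0]], dtype=jnp.complex64)
    return jnp.real(jnp.vdot(psi, z @ psi))

params = jnp.array([0.3, 0.7])
print(expval(params))           # ~ cos(0.3) * cos(0.7)
g = jax.grad(expval)(params)    # exact gradient via autodiff
print(g)
```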