Computing entropy of tensor networks containing classical probability distributions #195
-
Hi all, we are looking into using tensor networks to encode classical probability distributions. The goal is to build an inference engine that will replace the back-end of a machine diagnostics library. My question is: is there any clever way of computing the largest $k$ eigenvalues of a general non-negative tensor network? Does anybody have any ideas?
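(For concreteness, here is a minimal numpy sketch, with made-up shapes, of the kind of object meant here: a small network of non-negative factors whose full contraction is a joint distribution, with the Shannon entropy computed by brute force on a tiny instance.)

```python
import numpy as np

rng = np.random.default_rng(42)

# two non-negative pairwise factors over binary variables a-b-c,
# i.e. a tiny MPS-like chain encoding an (unnormalized) distribution
T_ab = rng.random((2, 2))
T_bc = rng.random((2, 2))

# brute-force contraction to the full joint p(a, b, c) - only
# feasible for very small networks
p = np.einsum("ab,bc->abc", T_ab, T_bc)
p /= p.sum()

# Shannon entropy S = -sum_i p_i log p_i
S = -np.sum(p * np.log(p))
print(S)
```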
-
Hi @apiedrafTNO, I am not very familiar with the language of inference and machine diagnostics, so maybe you could give some more specific details to narrow it down. There are possible methods for both the entropy and the largest eigenvalues which can get past computing the full output tensor, but non-linear quantities are trickier to scale up.
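(To illustrate the distinction, a toy numpy sketch with made-up factors: a linear quantity like a marginal contracts away the summed indices and never needs the full joint, whereas the entropy is non-linear in $p$ and in general touches every entry of the output tensor.)

```python
import numpy as np

rng = np.random.default_rng(0)

# a chain of non-negative pairwise factors over variables a-b-c-d
T_ab, T_bc, T_cd = (rng.random((2, 2)) for _ in range(3))

# a *linear* quantity such as the marginal p(d) never needs the full
# joint tensor: the summed-over indices are contracted away as you go
m_d = np.einsum("ab,bc,cd->d", T_ab, T_bc, T_cd)
m_d /= m_d.sum()

# whereas the entropy -sum_i p_i log p_i is non-linear in p, so in
# general it needs every entry of the full output tensor p(a, b, c, d)
p = np.einsum("ab,bc,cd->abcd", T_ab, T_bc, T_cd)
p /= p.sum()
S = -np.sum(p * np.log(p))
```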
So for classical tensor networks, I don't know of any really scalable methods, but you can push a bit further by contracting the output chunks lazily and computing $\sum_i p_i \log p_i$ on each of them separately - I added a cotengra example here https://cotengra.readthedocs.io/en/latest/examples/ex_large_output_lazy.html that shows this for a size $2^{36}$ marginal. Beyond that probably sampling or using a replica trick might be possible, but nothing non-trivial comes to mind!
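(A plain-numpy sketch of that chunked idea on a toy chain - not cotengra's actual interface, see the linked example for that: fix some of the output indices, contract only the remaining block, and accumulate the entropy chunk by chunk, so the full output tensor is never materialized at once.)

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# chain of non-negative pairwise factors over six binary variables
T_ab, T_bc, T_cd, T_de, T_ef = (rng.random((2, 2)) for _ in range(5))

# the normalization Z is a cheap fully-contracted (linear) quantity
Z = np.einsum("ab,bc,cd,de,ef->", T_ab, T_bc, T_cd, T_de, T_ef)

# accumulate S = -sum_i p_i log p_i over chunks: fix the output
# indices (a, b) and contract only the remaining block each time
S = 0.0
for a, b in product(range(2), repeat=2):
    chunk = T_ab[a, b] * np.einsum(
        "c,cd,de,ef->cdef", T_bc[b], T_cd, T_de, T_ef
    )
    p_chunk = chunk / Z
    S -= np.sum(p_chunk * np.log(p_chunk))
print(S)
```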