Expectation values of local MPO tensors #327
Thank you for the PR!
Some comments about the `MPOHamiltonian` case: we previously had similar functionality, but we deliberately removed it. The main issue is that a "local" expectation value is not well-defined in this context: if you encode your Hamiltonian into an MPO in different but equivalent ways, you will obtain different values.
For example, in the implementation you have here you choose to attribute to the local expectation value all of the interactions that end on a given site. You could just as well take the interactions that begin at a site, or the average, or ... Really, the only thing that is well-defined is the total sum, which is why we only expose that. If you want a "local" expectation value, you have to specify which operator you want and on which sites it acts, through `expectation_value(state, inds => local_operator)`.
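To spell out the ambiguity with a concrete (illustrative) convention: for a nearest-neighbour Hamiltonian $H = \sum_i h_{i,i+1}$, one could attribute to site $i$ the terms ending there or the terms starting there,

```math
E_i^{\mathrm{end}} = \langle h_{i-1,i} \rangle, \qquad E_i^{\mathrm{start}} = \langle h_{i,i+1} \rangle,
```

and these generally differ site by site, even though both conventions sum to the same total, $\sum_i E_i^{\mathrm{end}} = \sum_i E_i^{\mathrm{start}} = \langle H \rangle$.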
For the case you have here this doesn't apply, since the user provides the tensor being inserted, so I definitely agree that this is good functionality to have.
Considering the interface, we already have the following syntax for local (Hamiltonian) operators:

```julia
expectation_value(state, i => O_1, [envs])
expectation_value(state, (i, j, ...) => O_N, [envs])
```
I think I would prefer to adopt a similar approach here, to avoid introducing another function name. This is mostly because `expectation_value` already works for local things, so it might be strange to have to add `local_` only for the MPO case.
Nevertheless, simply copying that has two issues. On the one hand, there is the normalization issue you already mentioned: you no longer have access to the original MPO, and the environments are normalized to have a zero-site expectation value of 1 (instead of the single-site expectation value). On the other hand, I am not so fond of making `envs` a required argument, since in most of our cases it is mostly an implementation detail.
Therefore, after thinking about this for a bit, might I suggest either of the following signatures:

```julia
expectation_value(state, (mpo, site => mpotensor), envs = environments(state, mpo))
expectation_value(state, mpo, site => mpotensor, envs = environments(state, mpo))
```
The idea is to adhere to the same style as before, to avoid introducing another function, to have the environments as optional arguments, and to have enough information to properly normalize.
I think I slightly like the first form better since then the first argument is always the state and the second argument always the "operator", but I'm open to suggestions.
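Concretely, the first form might be used like this (a sketch only: `mpo`, `site`, and `mpotensor` are placeholders, and the method itself is what this PR would define, so nothing here is existing API beyond `environments`):

```julia
# Sketch assuming the first proposed signature is adopted.
envs = environments(state, mpo)  # computed once, reusable across sites
val = expectation_value(state, (mpo, site => mpotensor), envs)
```

Passing `envs` explicitly is only an optimization; omitting it would fall back to the `environments(state, mpo)` default, which also supplies the information needed for proper normalization.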
Obviously, I'm open to discussing this further, possibly involving more people who would want to use this.
As a final wild idea that I want to share, in principle what you are asking for is the expectation value of a window-mpo of length 1. We could consider making that the actual implementation, and working with datastructures like that. This has the advantage that it would also generalize to longer ranges, and could be implemented in a completely generic way.
I'm not actually suggesting to do that here, since I think this is a little bit of work that might not (yet) be worth it (for example our current window structures always start at site 1), but I wanted to share that idea as a possible thing to keep in mind in case someone at some point wants to do more advanced things with this.
I like your suggestion. For now, I will adopt that one, and people can discuss further.
This PR introduces the function `local_expectation_value`, a method to calculate the expectation value of a single MPO tensor. This was originally meant in the context of (infinite) partition functions and thus MPO tensors (see #320), but I extended the definition to MPO Hamiltonians.
The local expectation values are currently not normalised in the MPO case (as can be seen in the added tests). I don't know the cleanest way to do that, since the environments do not come from the MPO tensor itself. I'm open to suggestions. Maybe @VictorVanthilt can chime in, since I'm doing the complementary thing to TNR.
I didn't add anything for the finite case, partly because I don't need it, and mostly because it's annoying to make a test. If it's really wanted, I can look into it, but this won't happen any time soon 🫠.