
Conversation

@borisdevos (Member) commented on Sep 29, 2025

This PR introduces the function local_expectation_value, a method to calculate the expectation value of a single MPO tensor. This was originally meant for use in the context of (infinite) partition functions, and thus MPO tensors (see #320), but I extended the definition to MPO Hamiltonians.
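Roughly, the idea is to enable something like the following (the call shape here is purely illustrative, the actual signature is in the diff, and the spaces are just an example):

using MPSKit, TensorKit

# Illustrative sketch only: ψ plays the role of a boundary MPS of an infinite
# partition function, and O is the single MPO tensor to insert at a given site.
ψ = InfiniteMPS([ℂ^2], [ℂ^4])
O = TensorMap(randn, ComplexF64, ℂ^4 ⊗ ℂ^2, ℂ^2 ⊗ ℂ^4)  # left virtual ⊗ physical ← physical ⊗ right virtual
val = local_expectation_value(ψ, 1 => O)                 # hypothetical call shape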

The local expectation values are currently not normalised in the MPO case (as can be seen in the added tests). I'm not sure what the cleanest way to do that is, since the environments do not come from the MPO tensor itself. I'm open to suggestions. Maybe @VictorVanthilt can chime in, since I'm doing the complementary thing to TNR.

I didn't add anything for the finite case, partly because I don't need it, and mostly because it's annoying to write a test for. If it's really wanted, I can look into it, but that won't happen any time soon 🫠.

@codecov codecov bot commented on Sep 29, 2025

Codecov Report

❌ Patch coverage is 77.77778% with 2 lines in your changes missing coverage. Please review.

Files with missing lines    Patch %    Lines
src/algorithms/expval.jl    77.77%     2 Missing ⚠️

Files with missing lines    Coverage Δ
src/algorithms/expval.jl    89.53% <77.77%> (+3.59%) ⬆️

... and 15 files with indirect coverage changes


@lkdvos (Member) left a comment


Thank you for the PR!

Some comments about the MPOHamiltonian case: we previously had similar functionality, but we decided to remove it. The main issue is that a "local" expectation value is not actually well-defined in this context: if you encode your Hamiltonian into an MPO in different but equivalent ways, you will obtain different values.
For example, the implementation here chooses to count, towards the local expectation value, all of the interactions that end on a given site. You could just as well take the interactions that begin at a site, or the average, or ... Really, the only thing that is well-defined is the total sum, which is why we only expose that. If you want a "local" expectation value, you have to specify which operator you want and which sites it acts on, through expectation_value(state, inds => local_operator).
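For concreteness, a minimal sketch of that usage (the state, operator and spaces here are just an example):

using MPSKit, TensorKit

# A random infinite MPS and an explicit single-site operator: the well-defined
# "local" quantity is the expectation value of a given operator on given sites.
ψ = InfiniteMPS([ℂ^2], [ℂ^4])
Sz = TensorMap([0.5 0.0; 0.0 -0.5], ℂ^2, ℂ^2)
expectation_value(ψ, 1 => Sz)             # single-site operator on site 1
expectation_value(ψ, (1, 2) => Sz ⊗ Sz)   # two-site operator on sites 1 and 2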

For your case this doesn't apply, since the user provides the tensor being inserted, so I definitely agree that this is good functionality to have.
Considering the interface, we already have the following syntax for local (Hamiltonian) operators:

expectation_value(state, i => O_1, [envs])
expectation_value(state, (i, j, ...) => O_N, [envs])

I think I would prefer to adopt a similar approach here, to avoid introducing another function name. This is mostly because expectation_value already handles local things, so it would be strange to need the local_ prefix only for the MPO case.

Nevertheless, simply copying that approach has two issues. On the one hand, there is the normalization issue you already mentioned, since you no longer have access to the original MPO and the environments are normalized to have a zero-site expectation value of 1 (instead of the single-site expectation value). On the other hand, I am not so fond of making the envs a required argument, since in most of our cases this is mostly an implementation detail.
Therefore, after thinking about this for a bit, might I suggest either of the following signatures:

expectation_value(state, (mpo, site => mpotensor), envs = environments(state, mpo))
expectation_value(state, mpo, site => mpotensor, envs = environments(state, mpo))

The idea is to adhere to the same style as before: no new function to introduce, environments as optional arguments, and enough information to properly normalize.
I slightly prefer the first form, since then the first argument is always the state and the second argument is always the "operator", but I'm open to suggestions.
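To make the first form concrete, a call would look something like this (purely a sketch of a not-yet-implemented method; the InfiniteMPO construction is an assumption on my side):

# Sketch of the proposed interface: W builds the partition-function MPO and O is
# the (possibly different) tensor to insert at the chosen site.
W = TensorMap(randn, ComplexF64, ℂ^4 ⊗ ℂ^2, ℂ^2 ⊗ ℂ^4)
O = TensorMap(randn, ComplexF64, ℂ^4 ⊗ ℂ^2, ℂ^2 ⊗ ℂ^4)
mpo = InfiniteMPO([W])                     # assuming this constructor is available
ψ = InfiniteMPS([ℂ^2], [ℂ^4])

envs = environments(ψ, mpo)                # optional, computed by default
expectation_value(ψ, (mpo, 1 => O), envs)  # enough information to normalize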

Obviously, I'm open to discussing this further, possibly involving more people who would want to use this.


As a final wild idea that I want to share: in principle, what you are asking for is the expectation value of a window MPO of length 1. We could consider making that the actual implementation and working with data structures like that. This has the advantage that it would also generalize to longer ranges, and could be implemented in a completely generic way.
I'm not actually suggesting doing that here, since I think it is a bit of work that might not (yet) be worth it (for example, our current window structures always start at site 1), but I wanted to share the idea as something to keep in mind in case someone at some point wants to do more advanced things with this.

@borisdevos (Member, Author) commented

I like your suggestion. For now, I will adopt that one, and people can discuss further.

@borisdevos changed the title from "local_expectation_value function" to "Expectation values of local MPO tensors" on Sep 30, 2025