Conversation

@smondal13
Contributor

@smondal13 smondal13 commented Dec 16, 2025

Fixes

Summary/Motivation:

Under the old definition of A-optimality (referred to as pseudo A-optimality from now on), DoE computed the trace of the Fisher Information Matrix (FIM), following the few studies that used that definition. However, that definition is not correct: the current literature defines A-optimality as the trace of the inverse of the FIM. This PR adopts the corrected definition of A-optimality in DoE.

Changes proposed in this PR:

  • Rename the old A-optimality to pseudo A-optimality
  • Implement the corrected definition of A-optimality (= trace(inv(FIM)))
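The difference between the two objectives can be sketched with a small hypothetical FIM (numpy only; the matrix values are made up for illustration):

```python
import numpy as np

# Hypothetical 2x2 Fisher Information Matrix, for illustration only.
fim = np.array([[4.0, 1.0],
                [1.0, 3.0]])

# Old "pseudo" A-optimality objective: trace of the FIM itself.
pseudo_a_opt = np.trace(fim)          # 7.0

# Corrected A-optimality objective: trace of the inverse of the FIM,
# i.e. the sum of the diagonal of the parameter covariance matrix.
a_opt = np.trace(np.linalg.inv(fim))  # 7/11, roughly 0.636
```

Minimizing trace(inv(FIM)) minimizes the sum of the parameter variances, which is why the literature settled on this as the A-optimality criterion.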

Legal Acknowledgement

By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

  1. I agree my contributions are submitted under the BSD license.
  2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

@smondal13
Contributor Author

@adowling2 @blnicho This PR is ready for initial review.

smondal13 and others added 2 commits December 18, 2025 15:12
Replace `model` with `m` in `cholesky_LLinv_imp()`

Co-authored-by: Bethany Nicholson <[email protected]>
Member

@blnicho blnicho left a comment

@smondal13 thanks for addressing my initial review comments. I think these changes look reasonable so far, but I would like to see tests for the corrected A-optimality before I approve the PR.

@blnicho blnicho self-requested a review December 18, 2025 21:24
@smondal13 smondal13 moved this from Ready for design review to Ready for final review in ParmEst & Pyomo.DoE Development Jan 15, 2026
Member

@adowling2 adowling2 left a comment

Started to leave review comments. I will add more comments soon.

@@ -64,6 +64,7 @@
class ObjectiveLib(Enum):
    determinant = "determinant"
Member

Let's add the following comments after each line

# det(FIM), D-optimality
# trace(inv(FIM)), A-optimality
# trace(FIM), "pseudo"-A-optimality
# min(eig(FIM)), E-optimality
# cond(FIM), ME-optimality
# constant zero, useful for initialization and debugging
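Applied to the enum, the suggestion might look like the sketch below; the member names other than `determinant` are assumptions for illustration, not necessarily the names used in Pyomo.DoE:

```python
from enum import Enum

class ObjectiveLib(Enum):
    determinant = "determinant"            # det(FIM), D-optimality
    trace = "trace"                        # trace(inv(FIM)), A-optimality
    pseudo_trace = "pseudo_trace"          # trace(FIM), "pseudo"-A-optimality
    min_eigenvalue = "min_eigenvalue"      # min(eig(FIM)), E-optimality
    condition_number = "condition_number"  # cond(FIM), ME-optimality
    zero = "zero"                          # constant zero, useful for initialization and debugging
```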

Contributor Author

Added comments

Member

@adowling2 adowling2 left a comment

From a computational perspective, pseudo-A-optimality, i.e., trace(FIM), is the easiest objective to solve. I suggest we do the majority of the testing of permutations of DoE options with pseudo-A-optimality, and then run a smaller number of tests with D-optimality (both Cholesky and GreyBox) and A-optimality (both Cholesky and GreyBox).

@blnicho blnicho moved this from Todo to Review In Progress in Pyomo 6.10 Jan 21, 2026
Comment on lines 51 to 68
rooney_biegler_doe = DesignOfExperiments(
    experiment=rooney_biegler_experiment,
    objective_option="determinant",
    tee=tee,
    prior_FIM=FIM,
    improve_cholesky_roundoff_error=True,
)

if optimize_experiment_D:
    # D-Optimality
    rooney_biegler_doe_D = DesignOfExperiments(
        experiment=rooney_biegler_experiment,
        objective_option="determinant",
        tee=tee,
        prior_FIM=FIM,
        improve_cholesky_roundoff_error=True,
    )
    rooney_biegler_doe_D.run_doe()
Member

Why do you need both rooney_biegler_doe and rooney_biegler_doe_D? Seems like they are identical instances of the DesignOfExperiments class. Why not just call run_doe directly on rooney_biegler_doe when optimize_experiment_D is True?

Contributor Author

That was repeated. It's corrected now.

Comment on lines 110 to 111
opt_D = results_container["optimization"]["D"]
opt_A = results_container["optimization"]["A"]
Member

This will fail if either of optimize_experiment_A or optimize_experiment_D are set to False. You should either guard this code against different options sent to the run_rooney_biegler_doe function or remove the options and have this example always run DoE D/A optimization.
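One way to guard the access, assuming `results_container` only contains entries for the objectives that were actually run (a sketch, not the example's final code):

```python
# Hypothetical container mirroring the example's structure, with only the
# D-optimal run present.
results_container = {"optimization": {"D": {"value": 1.23}}}

# dict.get returns None for missing objectives instead of raising KeyError.
opt_D = results_container["optimization"].get("D")
opt_A = results_container["optimization"].get("A")

if opt_A is None:
    print("A-optimality was not run; skipping its plot")
```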

Contributor Author

I have removed the plotting function. It was only there to visually confirm the result and is not required.

Comment on lines 574 to 579
A_opt_value = run_rooney_biegler_doe(optimize_experiment_A=True)[
"optimization"
]["A"]["value"]
A_opt_design_value = run_rooney_biegler_doe(optimize_experiment_A=True)[
"optimization"
]["A"]["design"][0]
Member

Calling run_rooney_biegler_doe twice here seems inefficient. Why not save the return value in a local variable? Then you can extract any of the values from the saved results.

Contributor Author

Yeah, it was inefficient. I have saved the return as a variable and reused it.

Comment on lines 589 to 590
@unittest.skipUnless(pandas_available, "test requires pandas")
@unittest.skipUnless(matplotlib_available, "test requires matplotlib")
Member

The run_rooney_biegler_doe function has an option to plot or not. So why not have this test change the value of plot_optimal_design depending on the value of matplotlib_available? Then you can still test most of the example even when matplotlib isn't available.
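A minimal sketch of that pattern, checking for matplotlib without importing it; the `run_rooney_biegler_doe` call and its `plot_optimal_design` option are taken from this example and left commented out:

```python
import importlib.util

# True when matplotlib is installed; mirrors the intent of Pyomo's
# attempt_import-based availability flags without requiring Pyomo here.
matplotlib_available = importlib.util.find_spec("matplotlib") is not None

# Run the full example either way, and only plot when matplotlib exists:
# run_rooney_biegler_doe(plot_optimal_design=matplotlib_available)
```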

Contributor Author

Yes, I have done that.

Comment on lines 598 to 599
# Switch backend to 'Agg' to prevent GUI windows from popping up
plt.switch_backend('Agg')
Member

Is this still needed? Didn't you set the backend to 'Agg' up on L28?

Contributor Author

Not needed; I have removed it.

fd_method = "central"
obj_used = "trace"

experiment = run_rooney_biegler_doe()["experiment"]
Member

Could you rework this to use FullReactorExperiment like the other tests in this file?

Contributor Author

@smondal13 smondal13 Jan 21, 2026

Since we talked about using inexpensive tests during the IDAES summit in Pittsburgh, I have added the simpler rooney_biegler example here for the test instead of FullReactorExperiment, which is more complex and sometimes fails to converge. Eventually, we want to make all Pyomo.DoE tests inexpensive (or comparatively inexpensive). However, that does not mean all tests that use FullReactorExperiment are expensive. I can add a test using FullReactorExperiment; let me know whether I should.

Member

Can we do a lightweight version of FullReactorExperiment, perhaps with pseudo A-optimality?

Contributor Author

@smondal13 smondal13 Jan 30, 2026

@adowling2 @blnicho I have replaced all the tests in test_doe_errors.py with the rooney_biegler experiment in #3828. So we are good here, right?

Note: This branch is merged with #3828

params = list(model.parameter_names)

# Check cov_trace initialization
cov_trace_from_fim_inv = sum(pyo.value(model.fim_inv[j, j]) for j in params)
Member

The expected value in a test should be a hard-coded number, not a repeat of the implementation.
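The point can be illustrated with a hypothetical diagonal FIM: the expected value is written down as a literal rather than recomputed the same way the implementation does it:

```python
import numpy as np

# Hypothetical FIM whose covariance trace is easy to verify by hand.
fim = np.array([[2.0, 0.0],
                [0.0, 5.0]])
cov_trace_from_fim_inv = np.trace(np.linalg.inv(fim))

# Hard-coded expected value: 1/2 + 1/5 = 0.7 for this matrix.
assert abs(cov_trace_from_fim_inv - 0.7) < 1e-12
```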

Contributor Author

A hard-coded value is added.

Comment on lines 610 to 614
val = sum(
    pyo.value(model.L[c, params[k]])
    * pyo.value(model.L_inv[params[k], d])
    for k in range(len(params))
)
Member

Suggested change
val = sum(
    pyo.value(model.L[c, params[k]])
    * pyo.value(model.L_inv[params[k], d])
    for k in range(len(params))
)
val = pyo.value(sum(model.L[c, params[k]] * model.L_inv[params[k], d]
                    for k in range(len(params))))

I'm pretty sure you can simplify this slightly by just calling pyo.value once. (Note: if you accept this change through the GitHub GUI the line indentation is off)

Contributor Author

Added manually.
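The identity that loop checks (each entry of L @ L_inv matching the identity matrix) can be verified numerically with a standalone numpy sketch; the matrix here is hypothetical:

```python
import numpy as np

# Hypothetical symmetric positive-definite FIM.
fim = np.array([[4.0, 1.0],
                [1.0, 3.0]])
L = np.linalg.cholesky(fim)   # lower-triangular factor, fim = L @ L.T
L_inv = np.linalg.inv(L)

# The product of a matrix and its inverse is the identity.
assert np.allclose(L @ L_inv, np.eye(2))
```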

@mrmundt
Contributor

mrmundt commented Jan 21, 2026

@smondal13 - Gurobi just released a new version that supports Python 3.14, and we are seeing failures for our gurobi tests because of it. I cancelled your jobs because I have a fix already, just need to get it in.

TL;DR - if you see 3.14 failures in gurobi, not your fault, we're on it.

Member

@adowling2 adowling2 left a comment

I have some concerns about how this testing framework can scale to all of the variants of Pyomo.DoE that we want to try, including:

  • central finite difference versus symbolic differentiation
  • GreyBox: A, D, E, ME optimality
  • Cholesky: A, D

We do not need to add all of these variants in this PR. But I think we need to ensure this PR sets us up to easily add these extensions in other PRs. This is important because I could see multiple parallel PRs to add these features that will all be building on this testing foundation.


Parameters
----------
optimize_experiment_A : bool, optional
Member

This does not seem very general... instead, do we want to support A, D, E, ME, pseudo A, ... objectives?

Contributor Author

Do you mean that we pass an argument, e.g., objective="trace" / "pseudo_trace", etc., in run_rooney_biegler_doe()? I think that is a very good idea.

Member

Yes, unless there are significant benefits to doing it as a single function (sequentially). If that is the case, you could make objectives the input and support either a single string or a list of strings.
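Supporting either a single string or a list could be handled with a small normalization helper; `normalize_objectives` and the objective strings are hypothetical names for this sketch:

```python
def normalize_objectives(objectives):
    """Accept a single objective string or an iterable of them."""
    if isinstance(objectives, str):
        return [objectives]
    return list(objectives)

# A single string and a list both come back as a list:
single = normalize_objectives("trace")                    # ['trace']
several = normalize_objectives(["trace", "determinant"])  # ['trace', 'determinant']
```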

Contributor Author

@smondal13 smondal13 Jan 30, 2026

I have added the objective and the gradient/sensitivity computation option. Maybe the symbolic option can be added there once symbolic differentiation is merged. And solving the example for only one type of objective is fine.

def run_rooney_biegler_doe(
    sensitivity_formula="central",
    optimization_objective="determinant",
    improve_cholesky_roundoff_error=False,
    compute_FIM_full_factorial=False,
    draw_factorial_figure=False,
    design_range={'hour': [0, 10, 40]},
    tee=False,
    print_output=False,
):
