Conversation
Codecov Report
❌ Patch coverage is …
Additional details and impacted files

@@            Coverage Diff             @@
##           latest    #2518      +/-   ##
==========================================
+ Coverage   80.31%   80.46%   +0.14%
==========================================
  Files         348      348
  Lines       86095    86771     +676
==========================================
+ Hits        69149    69819     +670
- Misses      16946    16952       +6

☔ View full report in Codecov by Sentry.
fwesselm left a comment:
@Opt-Mucca, thank you for adding the flow cover cuts.
One general question (I have not thought this through): should new cut separators like the flow cover cuts live in the HighsCutGeneration class, or should they have their own (derived) classes?
highs/mip/HighsCutGeneration.cpp
Outdated
// col is in N- \ C-
if (snfr.flowCoverStatus[i] == -1 && snfr.coef[i] == -1) {
  assert(snfr.vubCoef[i] >= 0);
  if (snfr.vubCoef[i] > snfr.lambda + 1e-8) {
Maybe add a const variable to store the hard-coded 1e-8?
I've added const double vubEpsilon = 1e-8;
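Just to illustrate the shape of that change (a sketch against the quoted lines; only the named constant is new):

const double vubEpsilon = 1e-8;  // named tolerance replacing the hard-coded literal
if (snfr.vubCoef[i] > snfr.lambda + vubEpsilon) {  // was: snfr.lambda + 1e-8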
if (snfr.flowCoverStatus[i] == -1 && snfr.coef[i] == -1) {
  assert(snfr.vubCoef[i] >= 0);
  if (snfr.vubCoef[i] > snfr.lambda + 1e-8) {
    ld.m[ld.r] = snfr.vubCoef[i];
Would it be worth adding a mini lambda to add elements to ld.m?
As in, turn the entire loop into a lambda?
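One possible reading of the suggestion, sketched here with an invented helper name and assuming ld.r indexes the next free slot of ld.m:

// Hypothetical mini-lambda: append a value to ld.m and advance ld.r.
auto pushLdM = [&](double value) {
  ld.m[ld.r] = value;
  ++ld.r;
};
// inside the loop the quoted assignment would then become:
pushLdM(snfr.vubCoef[i]);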
if (vubcoef <= ld.M[i] - 1e-8) {
  assert(ld.M[i] < vubcoefpluslambda);
  alpha = 1;
  beta = -i * HighsCDouble(snfr.lambda) + ld.M[i];
It would be great to use static_cast<HighsCDouble>(...) instead of C-style casting.
Is this C-style casting? My IDE at least doesn't complain like it normally does for other conversions. It's rather a call to the constructor:
HighsCDouble(double val) : hi(val), lo(0.0) {} in HighsCDouble.h.
I'm happy to change it if this is still counted as bad practice.
The C style of cast, even (type)foo rather than type(foo), is everywhere in HiGHS and has never caused an issue.
Hence the static_cast (which is excessively verbose) is not necessary. We have greater priorities.
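For context, a minimal sketch of the three spellings being discussed; all of them end up invoking the HighsCDouble(double) constructor quoted above, so the choice is purely stylistic:

HighsCDouble a = HighsCDouble(snfr.lambda);               // functional-style (the form in the PR)
HighsCDouble b = (HighsCDouble)snfr.lambda;               // C-style cast
HighsCDouble c = static_cast<HighsCDouble>(snfr.lambda);  // static_cast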
highs/mip/HighsCutGeneration.cpp
Outdated
  liftedcoef -= ld.M[i];
  return static_cast<double>(liftedcoef);
}
if (i < ld.r) {
Is this actually an else if?
Yes, but as all the cases always return something there's no functional difference.
I've added the else to help with readability.
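Schematically (not the actual PR code), the point is that every branch returns, so the added else only documents the mutual exclusivity:

if (condA) {
  return valueA;        // control never falls through to the next test
} else if (condB) {     // previously a plain `if`; behaviour is identical
  return valueB;
}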
highs/mip/HighsCutGeneration.cpp
Outdated
if (snfr.vubCoef[i] > snfr.lambda + 1e-8) {
  if (snfr.origBinCols[i] != -1) {
    // col is in L-
    vals[rowlen] = -snfr.lambda;
Suggestion: Add another mini-lambda to add to vals, inds, update rowlen and tmpRhs. Just wondering if this may shorten this loop...
It would definitely shorten the loop. I should have added this earlier... Good suggestion!
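A sketch of what such a helper might look like, assuming inds/vals are parallel arrays indexed by rowlen and tmpRhs accumulates the right-hand side (the helper name is invented):

// Hypothetical mini-lambda: append one cut entry and shift the rhs.
auto addCutEntry = [&](HighsInt col, double coef, double rhsShift) {
  inds[rowlen] = col;
  vals[rowlen] = coef;
  ++rowlen;
  tmpRhs += rhsShift;
};
// e.g. for a column in L- the quoted assignment would become:
addCutEntry(snfr.origBinCols[i], -snfr.lambda, 0.0);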
@fwesselm For the current design they shouldn't. Anything that is going to call …
I can't comment on the technical details as they're way beyond me. However, it's the sort of stuff you understand well, and @fwesselm has been with you in the process.
My only observation is that
struct SNFRelaxation {
  HighsInt numNnzs;              // |N-| + |N+|
  std::vector coef;              // (+-1) coefficient of col in SNFR
  std::vector vubCoef;           // u_j in y'_j <= u_j x_j in SNFR
  std::vector binSolval;         // lp[x_j], y'_j <= u_j x_j in SNFR
  std::vector origBinCols;       // orig x_i, y'_j <= u_j x_j in SNFR
  std::vector origContCols;      // orig y_i used to make y'_j in SNFR
  std::vector aggrBinCoef;       // coef of x_i in y'_j aggregation
  std::vector aggrContCoef;      // coef of y_i in y'_j aggregation
  std::vector aggrConstant;      // constant shift in y'_j aggregation
  std::vector complementation;
with the HiGHS naming convention would be
struct SnfRelaxation {
  HighsInt num_n_nz;             // |N-| + |N+|
  std::vector coef;              // (+-1) coefficient of col in SNFR
  std::vector vub_coef;          // u_j in y'_j <= u_j x_j in SNFR
  std::vector bin_sol_val;       // lp[x_j], y'_j <= u_j x_j in SNFR
  std::vector orig_bin_cols;     // orig x_i, y'_j <= u_j x_j in SNFR
  std::vector orig_cont_cols;    // orig y_i used to make y'_j in SNFR
  std::vector aggr_bin_coef;     // coef of x_i in y'_j aggregation
  std::vector aggr_cont_coef;    // coef of y_i in y'_j aggregation
  std::vector aggr_constant;     // constant shift in y'_j aggregation
  std::vector complementation;
Don't feel you need to change it, though. I also acknowledge that Leona wasn't great at following the naming convention...
@jajhall Currently the big hurdle is getting this to be performance-improving on @fwesselm's cluster. (Last time it was measured, it was performance neutral.)
Hmm... Can you easily switch off the flow cover constraints by default? That way the code is merged, possibly avoiding conflicts that might occur if it's merged later. Probably a useless comment, but I have a memory of Philip Christophel saying that flow cover constraints weren't so useful until he revisited them...
I can add a parameter to turn them off before they get merged. In the long term I'd like to add some high-level cut parameters anyway (something like an emphasis setting), because it's clear some users want to test settings for their application. Currently I'm just trying to make it so that the behaviour when they're turned off is identical to before.
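For what it's worth, the kind of switch being discussed could look like the sketch below; the option name and separator call are placeholders, not existing HiGHS options or API:

// Hypothetical guard: with the (invented) bool option off, behaviour is unchanged.
if (options.use_flow_cover_cuts)  // invented option name, off by default for now
  generateFlowCoverCuts();        // placeholder for the new separator call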
This PR adds flow cover cuts to HiGHS. All logic related to the cut can be followed from the
HighsCutGeneration::tryGenerateFLowCoverCut function. Currently the cuts are only generated based on rows in the LP, i.e., "aggregations" of size 1. The performance results are extremely finicky, but seem promising, especially for some network-based energy problems I've been testing on. This shouldn't be merged before additional computational experiments are done.
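For background (standard flow cover theory, not quoted from the PR): the cuts are derived from a single-node flow relaxation. Given a flow cover $(C^+, C^-)$ with excess $\lambda = \sum_{j \in C^+} u_j - \sum_{j \in C^-} u_j - b > 0$, one common statement of the simple generalized flow cover inequality is

\sum_{j \in C^+} \bigl[ y_j + (u_j - \lambda)^+ (1 - x_j) \bigr] \;\le\; b + \sum_{j \in C^-} u_j + \lambda \sum_{j \in L^-} x_j + \sum_{j \in N^- \setminus (C^- \cup L^-)} y_j,

where $L^- \subseteq N^- \setminus C^-$ contains only columns with $u_j > \lambda$; this matches the "col is in L-" and vubCoef[i] > lambda checks in the snippets above.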
@galabovaa I changed the two tests because (1) the instance is now solved at the root node and therefore has the optimal solution but has status
interrupted, and (2) the primal heuristics now jump to the optimal solution more quickly, so I had to raise the objective limit to still catch the event.