[RFC/WIP] MOI wrapper #16
Conversation
Codecov Report

```diff
@@            Coverage Diff            @@
##           master      #16     +/-   ##
=========================================
+ Coverage   52.19%   69.90%   +17.70%
=========================================
  Files          15       15
  Lines        1002     1123      +121
=========================================
+ Hits          523      785      +262
+ Misses        479      338      -141
```

Continue to review full report at Codecov.
|
Great! I’ll have a look during the week
|
OK, so I like the idea of passing a … I'm more skeptical about the Infeasible/Unbounded case.

I looked at what the main solvers are doing: …

I would rather see a workflow like this (with one extra step for the user):

```julia
model = ...            # Your MOI model
presolved_model = ...  # Your favorite Model/Optimizer

postsolve = MOP.presolve!(presolved_model, model, T)  # run presolve
MOI.optimize!(presolved_model)

# assume no error so far
x_ = MOI.get.(presolved_model, MOI.VariablePrimal(), x)  # Exact definition of x is TBD
x_original = MOP.postsolve(x_)  # or something like this
```

The only difference between …
|
Interesting. Does that mean that there's no way to do postsolve through the Gurobi interface?

I think on the primal side you could have a nice interface like

```julia
postsolve, args... = MOP.presolve!(presolved_model, model, T)
MOI.optimize!(presolved_model)
x_ = MOI.get.(
    presolved_model,
    MOI.VariablePrimal(),
    MOI.get(presolved_model, MOI.ListOfVariableIndices())
)
x_original = postsolve(x_)
```

(At least I think this would work.) I was less clear on what to do for the duals. Do you want to return a …

(Ideally I would also like to get rid of the …) |
Not that I know of 🤷‍♂️ Given the absence of documentation, I don't think Gurobi intended many people to use it...
We can even say that …

Since in the second case, the resulting problem is essentially "empty", we could take the convention that … Then the interface could look like

```julia
status, postsolve, args... = MOP.presolve!(presolved_model, model, T)

if status == OPTIMIZE_NOT_CALLED
    # Proceed normally
    MOI.optimize!(presolved_model)
    x_ = MOI.get.(
        presolved_model,
        MOI.VariablePrimal(),
        MOI.get(presolved_model, MOI.ListOfVariableIndices())
    )
elseif status == OPTIMAL || status == INFEASIBLE || status == UNBOUNDED
    # No need to call optimize!; postsolve expects an empty vector
    x_ = T[]
else
    # Something went wrong
    error("Presolve exited with status $status")
end

x_original = postsolve(x_)
```

I'm purposefully ignoring the dual part for now. Oh, and stating the obvious: the ordering should follow

```julia
MOI.get(model, MOI.ListOfVariableIndices())  # <--- this is the original model
```

after appropriate "scalarization" of any non-scalar variable indices, if any. |
That sounds good to me. Using … |
I won't fight for using MOI status codes over MOP ones :) I thought not having to deal with both would be more user-friendly. |
I agree that it's easier, and think it outweighs the puniness. (I sketched out the postsolve functions. I haven't tested the code though, so they might not even compile...) |
We should be able (at least internally) to use the same … All the postsolve does is take an initial point or ray in the presolved space, and lift it to a point or ray in the original space.

MathOptPresolve.jl/src/solution.jl, Lines 5 to 8 in c317b25
As an illustration, the snippet below is an extract from these lines and these lines in Tulip:

```julia
st = presolve!(model.presolve_data)

if st == Trm_Optimal || st == Trm_PrimalInfeasible || st == Trm_DualInfeasible || st == Trm_PrimalDualInfeasible
    # Perform post-solve
    sol0 = Solution{T}(model.pbdata.ncon, model.pbdata.nvar)
    postsolve!(sol0, model.presolve_data.solution, model.presolve_data)
    model.solution = sol0
else
    # A call to optimize! will have produced a solution sol_inner
    sol_inner = ...  # <--- extract solution from presolved model
    sol_outer = Solution{T}(model.pbdata.ncon, model.pbdata.nvar)
    postsolve!(sol_outer, sol_inner, model.presolve_data)
    model.solution = sol_outer
end
```

In the first case, a solution is generated by … |
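To make the lifting step above concrete, here is a toy sketch (hypothetical names, not the actual MathOptPresolve or Tulip API) of a postsolve whose only recorded reduction is the removal of fixed variables; it re-inserts the fixed values at their original indices:

```julia
# Hypothetical sketch: postsolve data for a presolve that only removed
# fixed variables. Not the real MathOptPresolve data structure.
struct FixedVarPostsolve{T}
    n_original::Int            # number of variables in the original model
    fixed_values::Dict{Int,T}  # original index => value the variable was fixed to
end

# Lift a point from the reduced space back to the original space.
function postsolve_point(ps::FixedVarPostsolve{T}, x_reduced::Vector{T}) where {T}
    x = Vector{T}(undef, ps.n_original)
    k = 0  # cursor into the reduced vector
    for j in 1:ps.n_original
        if haskey(ps.fixed_values, j)
            x[j] = ps.fixed_values[j]  # restore the eliminated variable
        else
            k += 1
            x[j] = x_reduced[k]        # copy the surviving variable
        end
    end
    return x
end

ps = FixedVarPostsolve(4, Dict(2 => 1.5, 4 => 0.0))
postsolve_point(ps, [10.0, 20.0])  # -> [10.0, 1.5, 20.0, 0.0]
```

The real reverse transformations are richer (substituted variables, removed rows, dual recovery), but they all share this shape: a pure index-bookkeeping map from reduced to original space.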
@mtanneau see if you like this better |
@mtanneau: Added unit tests for each possible return case. I'm a bit confused by what's going on with unbounded models: I would think that you should be able to query an unbounded ray by passing in an empty … |
I checked my code in Tulip: I actually intercept the unbounded/infeasible case just after presolve (here), extract the solution from …

I guess that, when detecting unboundedness/infeasibility, we just don't reduce the problem further: there is no need to eliminate …

The tricky part would be that, in the infeasible (primal or dual) case, we have a non-trivial solution for the reduced problem (e.g. here), which is stored in …

Coming back to an earlier comment, maybe a more generic approach would be to have … That way we eliminate the need for edge cases in …

Would that make more sense? |
I'm a bit confused. Let's say that … |
Sorry, I was not clear about that. Here's my train of thought.

The last approach I mentioned essentially allows the following workflow to work all the time:

```julia
status, postsolve = MOP.presolve!(presolved_model, model, T)
MOI.optimize!(presolved_model)
x_ = MOI.get.(
    presolved_model,
    MOI.VariablePrimal(),
    MOI.get(presolved_model, MOI.ListOfVariableIndices())
)
x_original = postsolve(x_)
```

Of course, if the model was unbounded/infeasible at presolve, then the call to …

For the infeasible/unbounded case, this has the drawback of (i) having a different convention, and (ii) not being able to post-crush anything other than the certificate found at presolve. Since the post-crush is just a lifting from the reduced to the original space, it should work for any input vertex/ray of appropriate dimension. This is why I floated the idea of having a generic …
Either way, we run into the limitation of not exposing the underlying … |
What about a struct to encapsulate all this? E.g.

```julia
struct PresolveResult{T} ... end

presolve!(dest, src, T) --> PresolveResult

get_status(::PresolveResult) --> MOI.TerminationStatusCode
get_optimal_solution(::PresolveResult) --> Vector{T}       # fails if get_status != OPTIMAL
get_unbounded_ray(::PresolveResult) --> Vector{T}          # fails if get_status != DUAL_INFEASIBLE
get_infeasible_certificate(::PresolveResult) --> Vector{T} # fails if get_status != INFEASIBLE
post_crush(::PresolveResult, ::Vector{T}) --> Vector{T}
```

This would still populate … |
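A minimal, self-contained sketch of what that struct could look like (hypothetical field names and plain symbols in place of `MOI.TerminationStatusCode`, with fixed-variable removal standing in for the full set of reductions):

```julia
# Hypothetical sketch of the proposed PresolveResult interface.
struct PresolveResult{T}
    status::Symbol             # :OPTIMAL, :INFEASIBLE, :DUAL_INFEASIBLE, or :OPTIMIZE_NOT_CALLED
    solution::Vector{T}        # solution/certificate found at presolve, if any
    n_original::Int            # number of variables in the original model
    fixed_values::Dict{Int,T}  # stand-in for the recorded reductions
end

get_status(r::PresolveResult) = r.status

function get_optimal_solution(r::PresolveResult)
    r.status == :OPTIMAL || error("status is $(r.status), not OPTIMAL")
    return r.solution
end

function get_unbounded_ray(r::PresolveResult)
    r.status == :DUAL_INFEASIBLE || error("status is $(r.status), not DUAL_INFEASIBLE")
    return r.solution
end

function get_infeasible_certificate(r::PresolveResult)
    r.status == :INFEASIBLE || error("status is $(r.status), not INFEASIBLE")
    return r.solution
end

# Generic lifting from the reduced space to the original space.
function post_crush(r::PresolveResult{T}, x_reduced::Vector{T}) where {T}
    x = Vector{T}(undef, r.n_original)
    k = 0
    for j in 1:r.n_original
        if haskey(r.fixed_values, j)
            x[j] = r.fixed_values[j]
        else
            k += 1
            x[j] = x_reduced[k]
        end
    end
    return x
end

r = PresolveResult(:OPTIMAL, [1.0], 3, Dict(2 => 4.0))
post_crush(r, [5.0, 6.0])  # -> [5.0, 4.0, 6.0]
```

Calling `get_unbounded_ray(r)` on this `:OPTIMAL` result throws, which matches the "fails if get_status != ..." contract sketched above.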
I like that a lot 😄 |
Cool, take a look now then. |
Bump |
Aside from the post-crushing of unbounded rays, everything looks good
Comments addressed, will merge in a day or two ( I want to turn this on in Cerberus :) ) |
👍 Yes, it's about time this gets merged :) |
This is a very rough and incomplete draft of an MOI wrapper. I'm requesting feedback particularly on the postsolve interface (the incomplete part), but welcome feedback on it all. Once we converge on how we want this to look, I'll clean it up and add more tests.
The main entrypoint is `presolve!(dest::MOI.ModelLike, src::MOI.ModelLike, T::Type)`. You pass in a model `src` to be reduced, an empty model `dest` for the reduced problem to live in, and a coefficient type `T`. The modification happens in-place to `dest`, and the return value is a function which (somehow, TBD) returns the mapping between the original problem and the reduced one.

Rather than return a struct containing the termination results of the algorithm (e.g. reduced problem is infeasible, or optimal with cost X), I instead encode this in `dest` through trivial problems. For "optimal" reductions the problem has no variables/constraints, and a constant term in the objective equal to the optimal cost. I kind of like this since it means 1) you don't need to understand MOP-specific data structures to understand what happened, and 2) you can "solve" `dest` and immediately infer what's going on anyway. But, I recognize this might be a little cute.
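To illustrate the "trivial problems" encoding with a toy sketch (a made-up mini-model type, not MOI or the MathOptPresolve reductions): when presolve fixes every variable, the reduced problem ends up with no variables and an objective constant equal to the optimal cost.

```julia
# Hypothetical toy model: minimize c'x + obj_constant subject to l <= x <= u.
struct ToyModel{T}
    c::Vector{T}
    obj_constant::T
    l::Vector{T}
    u::Vector{T}
end

# Toy presolve: eliminate every variable whose bounds coincide, moving its
# objective contribution into the constant term. If all variables are fixed,
# the result is the "trivial" problem described above.
function toy_presolve(src::ToyModel{T}) where {T}
    keep = findall(j -> src.l[j] != src.u[j], eachindex(src.c))
    constant = src.obj_constant +
        sum((src.c[j] * src.l[j] for j in eachindex(src.c) if src.l[j] == src.u[j]);
            init = zero(T))
    return ToyModel(src.c[keep], constant, src.l[keep], src.u[keep])
end

src = ToyModel([2.0, 3.0], 1.0, [4.0, 5.0], [4.0, 5.0])  # both variables fixed
dest = toy_presolve(src)
isempty(dest.c)    # true: no variables remain
dest.obj_constant  # 1.0 + 2*4 + 3*5 = 24.0, the optimal cost
```

"Solving" the empty `dest` is then a no-op whose optimal value is just `dest.obj_constant`, which is the sense in which the termination result is readable directly off the reduced model.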