DNMY: Port over code from Tulip #1

Merged: 2 commits merged into mtanneau:master on Dec 8, 2020

Conversation

@joehuchette (Collaborator)

I ripped out the presolve code from Tulip. It passes the tests I brought along, but this was quick and dirty, so caveat emptor.

To excise the code, I needed to bring along the Tulip code for specifying problem data, solutions, and statuses. It is probably not a good idea to have this code duplicated in two places.

Thoughts on: de-duplication? Code organization? Further testing? Other things?

@mtanneau (Owner) commented Dec 7, 2020

TL;DR: IMO, the 2 points that need addressing now are data structures for storing problem data, and organization of the presolve.

Code duplication

I'm not worried about duplicating code. These components will be moved out of Tulip eventually.

  • ProblemData and Solution: this will underlie most of the code, so it deserves some thought. Changing course later will be a pain.
  • statuses: we only need 4: optimal, primal infeasible, dual infeasible, and unknown. I would say this is lower priority, so duplicating code is OK at this point.
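
Concretely, something as small as the following would cover those four cases (a rough sketch for illustration; the name and members are not Tulip's actual API):

```julia
# Hypothetical status enum, covering only the four cases listed above.
@enum TerminationStatus begin
    OPTIMAL            # optimal solution found
    PRIMAL_INFEASIBLE  # proven primal infeasible
    DUAL_INFEASIBLE    # proven dual infeasible
    UNKNOWN            # no conclusion reached
end
```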

Code organization

Pointers:

Looking into the code, both HiGHS and SCIP store a list of "presolver" objects, and the presolve loop applies each of these to the current problem.
Pros: it's easy to implement new presolve rules, and to activate/deactivate some of them.
Cons: we'd have to manipulate AbstractPresolver objects. There might be some thinking to do to make things Julia-friendly.
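
As a rough Julia sketch of that pattern (the type and function names below are made up for illustration, not the HiGHS/SCIP or Tulip API):

```julia
# Hypothetical sketch of the HiGHS/SCIP-style design: a list of presolver
# objects, each applied in turn until a full pass makes no further change.
abstract type AbstractPresolver end

struct EmptyRowPresolver <: AbstractPresolver end
struct EmptyColumnPresolver <: AbstractPresolver end

# Placeholder implementations: a real presolver would mutate `pb` and
# return `true` if it modified the problem.
apply!(::EmptyRowPresolver, pb) = false
apply!(::EmptyColumnPresolver, pb) = false

function presolve!(pb, presolvers::Vector{<:AbstractPresolver}; max_passes::Int = 10)
    for _ in 1:max_passes
        modified = false
        for p in presolvers
            modified |= apply!(p, pb)
        end
        modified || break   # stop once nothing changes
    end
    return pb
end

# Activating/deactivating a rule is just including/excluding it from the list:
# presolve!(pb, AbstractPresolver[EmptyRowPresolver(), EmptyColumnPresolver()])
```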

Testing

I wouldn't call the tests in Tulip first-class.

@joehuchette (Collaborator, Author)

Agreed on ProblemData and Statuses.

Re: Solution, I am trying to think through how tightly coupled solutions are to the presolve reductions. It seems that for at least some of the routines, like "empty column", the connection is quite tight.

I'm imagining a case where you have an LP with additional nonconvex quadratic constraints, but only want to apply presolve to the LP portion. In this case, you might still want to apply some problem reductions (e.g. drop empty rows), but returning solutions might not be meaningful (there may be additional constraints rendering that point infeasible). Likely the best way to handle this is just to be able to configure the presolve routines that are run...

Is it likely that folks would want to build their own presolve objects outside of this package? This is the only real advantage I see of following the design pattern of HiGHS and SCIP. Otherwise, I would be fine with a monolithic configuration object that you can pass in when you call presolve!.
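
For concreteness, here is the kind of monolithic configuration object I mean (a sketch; every field name is made up for illustration):

```julia
# Hypothetical all-in-one options struct passed to presolve!.
Base.@kwdef struct PresolveConfig
    remove_empty_rows::Bool       = true
    remove_empty_columns::Bool    = true
    dual_reductions::Bool         = true   # turn off when the LP is only part of a larger model
    return_crushed_solution::Bool = true   # turn off when a presolved-space point may be meaningless
    max_passes::Int               = 10
end

# e.g. presolve!(problem, PresolveConfig(dual_reductions = false, return_crushed_solution = false))
```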

@mtanneau (Owner) commented Dec 7, 2020

I am trying to think through how tightly coupled solutions are to the presolve reductions

The way I view it, we start from a problem formulation (e.g. standard form, canonical form, etc.).
This determines the form of the primal/dual solution. In the mixed-integer case, just drop the dual part.

For each reduction we need a corresponding pre-crush and post-crush procedure:

  • pre-crush: original -> presolve
  • post-crush: presolve -> original

This needs to be done for both primal and dual solutions (if there are integers, then primal only), hence the tight coupling with problem formulation.
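
To make this concrete, here is a sketch of what a single reduction might record, using "empty column" as the example (names and signatures are assumptions, not existing code; the dual side would need analogous treatment):

```julia
# Hypothetical record kept by the "empty column" reduction so that solutions
# can be mapped between the original and presolved spaces.
struct EmptyColumnReduction
    j::Int        # index of the removed column in the original problem
    xj::Float64   # value at which the variable was fixed
end

# pre-crush (original -> presolved): drop the removed coordinate.
precrush(x_original::Vector{Float64}, r::EmptyColumnReduction) =
    deleteat!(copy(x_original), r.j)

# post-crush (presolved -> original): re-insert the fixed variable.
postcrush(x_presolved::Vector{Float64}, r::EmptyColumnReduction) =
    vcat(x_presolved[1:r.j-1], r.xj, x_presolved[r.j:end])
```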

case where you have an LP with additional nonconvex quadratic constraints

[putting aside updating the problem formulation and data structures]
You can address that in (at least) 2 ways:

  • hard-code some conditions in each presolve reduction. For instance, the forcing-constraint reduction applies only to linear constraints, and does nothing if the constraint is non-linear. Obviously, you need a way to tell at runtime whether constraint i is linear or not.
  • disable some reductions altogether based on the problem class. For instance, dual reductions would be disabled for MIP models, because MIPs don't have a (nice) dual.

Most likely a mix of both, since the latter also allows the user to manually disable some components.
W.r.t. that, there's a PresolveOptions struct that I intended for this.
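
For the first option, the guard inside a reduction could look roughly like this (the is_linear predicate is a placeholder for whatever mechanism we settle on):

```julia
# Placeholder predicate: in practice this would query the problem data.
is_linear(pb, i::Int) = true

# Hypothetical forcing-constraint reduction, guarded so that it simply
# skips rows that are not purely linear.
function apply_forcing_constraint!(pb, i::Int)
    is_linear(pb, i) || return false   # do nothing for non-linear rows
    # ... forcing-constraint logic on row i would go here ...
    return true
end
```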

Is it likely that folks would want to build their own presolve objects outside of this package?

  • For (convex continuous) LP/QP -> unlikely, unless for special-structure problems (e.g., block-wise presolve or something). LP/QP presolve is pretty straightforward and well-understood.
  • For MIP -> more likely, given the breadth of presolve reductions and problem types

@joehuchette (Collaborator, Author)

For each reduction we need a corresponding pre-crush and post-crush procedure:

I see, this makes sense.

You can address that in (at least) 2 ways:

  • hard-code some conditions in each presolve reduction. For instance, the forcing-constraint reduction applies only to linear constraints, and does nothing if the constraint is non-linear. Obviously, you need a way to tell at runtime whether constraint i is linear or not.
  • disable some reductions altogether based on the problem class. For instance, dual reductions would be disabled for MIP models, because MIPs don't have a (nice) dual.
    Most likely a mix of both, since the latter also allows the user to manually disable some components.

In what I was proposing, I was thinking of mixing the two.

For MIP -> more likely, given the breadth of presolve reductions and problem types

This sounds correct. In that case, we can do something like introduce an AbstractPresolveRoutine type and an apply!(problem, routine) -> (precrush, postcrush) interface.
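
Roughly (just a sketch to make the proposal concrete; none of this is settled API):

```julia
# Hypothetical interface: each routine mutates the problem and returns the
# two crush functions for mapping solutions across the reduction.
abstract type AbstractPresolveRoutine end

struct EmptyRowRoutine <: AbstractPresolveRoutine end

function apply!(problem, routine::EmptyRowRoutine)
    # ... drop empty rows from `problem` here ...
    precrush  = sol -> sol   # original -> presolved (placeholder)
    postcrush = sol -> sol   # presolved -> original (placeholder)
    return (precrush, postcrush)
end
```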

How should we proceed? Should I try to hack things on top of this PR, or should we merge and then create separate PRs for the problem and routine interfaces? It will probably be hard to review changes on top of this current PR...

@mtanneau (Owner) left a comment

I only flagged parts that could be removed as is.
Let's merge this and make modifications from there.

@mtanneau (Owner) commented Dec 7, 2020

we can do something like introduce an AbstractPresolveRoutine type and an apply!(problem, routine) -> (precrush, postcrush) interface.

At the lower level (individual presolve routines), I prefer to keep a list of reductions and update it on the go. It makes it easy to inspect afterwards, and pre/post-crush is a simple for loop.
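
In rough Julia terms (all names below are illustrative, not settled API):

```julia
# Hypothetical presolve data: the reductions applied so far, kept in order.
abstract type AbstractReduction end

struct PresolveData
    reductions::Vector{AbstractReduction}
end

# Post-crush is just a loop over the recorded reductions, in reverse order
# of application; each concrete reduction defines its own postcrush! method.
function postcrush!(sol, ps::PresolveData)
    for r in reverse(ps.reductions)
        postcrush!(sol, r)
    end
    return sol
end
```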

At the higher level, I'd like to be able to access the presolve's internal data structures, in case someone wants extra information, e.g. dual bounds or conflict graph for MIP.
I've opened an issue (#2) to discuss this further.

@joehuchette joehuchette merged commit a070ad4 into mtanneau:master Dec 8, 2020