DNMY: Port over code from Tulip #1
Conversation
TL;DR: IMO, the two points that need addressing now are the data structures for storing problem data, and the organization of the presolve.

Code duplication: I'm not worried about duplicating code. These components will be moved out of Tulip eventually.

Code organization: Some pointers: looking into the code, both HiGHS and SCIP store a list of "presolver" objects, and the presolve loop applies each of these in turn to the current problem.

Testing: I wouldn't call the tests in Tulip first class.
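To make the HiGHS/SCIP pattern concrete, here is a minimal sketch of that design, a list of presolver objects applied in a fixed-point loop. All names are hypothetical, and this is written in Python for brevity even though the package itself is Julia:

```python
# Hypothetical sketch of the HiGHS/SCIP-style design described above:
# a list of "presolver" objects, each applied to the current problem,
# looping until no presolver makes a reduction.

class EmptyRowPresolver:
    def apply(self, problem):
        # Drop constraints with no nonzero coefficients; return True if
        # anything changed so the outer loop knows to keep iterating.
        before = len(problem["rows"])
        problem["rows"] = [r for r in problem["rows"] if r["coeffs"]]
        return len(problem["rows"]) < before

def presolve(problem, presolvers, max_passes=10):
    for _ in range(max_passes):
        changed = False
        for p in presolvers:
            # Apply every presolver each pass; record whether any fired.
            changed |= p.apply(problem)
        if not changed:
            break
    return problem
```

The advantage of this shape is that adding a new reduction means adding a new object to the list, without touching the driver loop.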
Agreed on ProblemData and Statuses. Re. Solution, I am trying to think through how tightly coupled solutions are to the presolve reductions. It seems that for at least some of the routines, like "empty column", the connection is quite tight. I'm imagining a case where you have an LP with additional nonconvex quadratic constraints, but only want to apply presolve to the LP portion. In this case, you might still want to apply some problem reductions (e.g. drop empty rows), but returning solutions might not be meaningful (there may be additional constraints rendering that point infeasible). Likely the best way to handle this is just to be able to configure the presolve routines that are run...

Is it likely that folks would want to build their own presolve objects outside of this package? This is the only real advantage I see in following the design pattern of HiGHS and SCIP. Otherwise, I would be fine with a monolithic configuration object that you can pass in at the call site.
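The monolithic configuration object floated above could look something like the following sketch (hypothetical names, Python for illustration): the caller toggles which routines run, so reductions whose solution mapping is not meaningful for their model (e.g. one with extra nonconvex constraints) can be skipped.

```python
from dataclasses import dataclass

# Hypothetical configuration object passed in at the presolve call site.
# Field names are illustrative, not an actual API.

@dataclass
class PresolveOptions:
    drop_empty_rows: bool = True
    drop_empty_columns: bool = True
    max_passes: int = 10

def enabled_routines(opts):
    # Translate the flags into the list of routines the driver will run.
    names = []
    if opts.drop_empty_rows:
        names.append("empty_row")
    if opts.drop_empty_columns:
        names.append("empty_column")
    return names
```

This keeps all knobs in one place, at the cost of the package having to enumerate every routine itself rather than letting users register their own presolver objects.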
The way I view it, we start from a problem formulation (e.g. standard form, canonical form, etc). For each reduction we need a corresponding pre-crush and post-crush procedure:
This needs to be done for both primal and dual solutions (if there are integers, then primal only), hence the tight coupling with problem formulation.
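A minimal sketch of the pre-crush/post-crush idea for a single reduction, again with hypothetical names and in Python rather than Julia: each reduction records enough information to map a solution of the reduced problem back to the original space.

```python
from dataclasses import dataclass

# Illustrative reduction record: fixing a variable at a value removes it
# from the reduced problem, and post-crush reinserts it into the solution.

@dataclass
class FixVariable:
    name: str
    value: float

    def postcrush_primal(self, solution):
        # Map a reduced-problem primal solution back to the original space.
        solution[self.name] = self.value
        return solution

def postcrush(solution, reductions):
    # Reductions were applied in order during presolve, so they are
    # undone in reverse order.
    for r in reversed(reductions):
        solution = r.postcrush_primal(solution)
    return solution
```

A dual post-crush method would sit alongside `postcrush_primal` for LPs; for problems with integers, only the primal mapping applies, as noted above.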
[putting aside updating the problem formulation and data structures]
Most likely a mix of both, since the latter also allows the user to manually disable some components.
I see, this makes sense.
In what I was proposing, I was thinking of mixing the two.
This sounds correct. In which case, we can do something like introduce a …

How should we proceed? Should I try to hack things on top of this PR, or should we merge and then create separate PRs for the problem and routine interfaces? It will probably be hard to review changes on top of this current PR...
I only flagged parts that could be removed as is.
Let's merge this and make modifications from there.
At the lower level (individual presolve routines), I prefer to keep a list of reductions and update it on the go. It makes it easy to inspect afterwards, and pre/post-crush is a simple …

At the higher level, I'd like to be able to access the presolve's internal data structures, in case someone wants extra information, e.g. dual bounds or a conflict graph for MIP.
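The lower-level design described here, a running list of reductions that is inspectable after the fact, might be sketched as follows (hypothetical names, Python for illustration):

```python
# Illustrative sketch: presolve appends reduction records "on the go";
# post-crush is then just a reverse loop over that list, and the object
# also exposes internal state (e.g. a dual bound) for inspection.

class PresolveLog:
    def __init__(self):
        self.reductions = []              # appended to as presolve runs
        self.dual_bound = float("-inf")   # example of exposed internal data

    def record(self, undo):
        # `undo` maps a reduced-space solution back one reduction.
        self.reductions.append(undo)

    def postcrush(self, solution):
        # Undo reductions most-recent-first to recover an original-space point.
        for undo in reversed(self.reductions):
            solution = undo(solution)
        return solution
```

Keeping the log as plain data makes "inspect afterwards" trivial: it is just a list you can print, filter, or count.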
I ripped out the presolve code from Tulip. It passes the tests I brought along, but this was quick and dirty, so caveat emptor.
To excise the code, I needed to bring along the Tulip code for specifying the problem data, solutions, and statuses. It is probably not a good idea to keep an exact duplicate of this code in two places.
Thoughts on: de-duplication? Code organization? Further testing? Other things?