
Solving an LP to initialize NLP's should not be default behavior #373

Closed
tkelman opened this issue Jan 29, 2015 · 4 comments · Fixed by #375

Comments

tkelman (Contributor) commented Jan 29, 2015

Yikes, I'm surprised I didn't notice this earlier, but https://github.com/JuliaOpt/JuMP.jl/blob/e97f9cbf77df71c2a510fa14f345554bf43bcaf4/src/nlp.jl#L486-L490 is not a good default. It can be circumvented right now by providing initial values for all variables, but otherwise you're imposing an auxiliary LP solve any time someone forgets to set an initial point. Testing OS on Travis with Ipopt and Cbc installed (and initially forgetting to provide a starting point), the tests were failing because the default LP solvers list that linprog tries doesn't include Cbc.

There are some situations where solving a simpler approximation of the problem is a good idea for initialization, but I'd make this kind of expensive choice opt-in. Users should be deciding when it makes sense to do this kind of thing. Maybe make the current behavior easily accessible (JuliaOpt/MathProgBase.jl#50 is somewhat related here), but the default behavior should be something simple and cheap. All zeros, or zeros projected into the variable bounds, or the midpoint of the bounds when both are finite, or other comparable strategies would be good. If user functions are poorly behaved at the default initialization point, then #320 is relevant to making that fact more easily discoverable. Since the current default initialization strategy just ignores nonlinear constraints, it's just as likely to hit numerical problems with arbitrary nonlinear functions as any other choice.
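A cheap default along the lines suggested above (zeros projected into the variable bounds, or the midpoint when both bounds are finite) is a one-liner. The sketch below is illustrative Python, not JuMP's actual implementation, and the function name `default_start` is made up for this example:

```python
import math

def default_start(lb, ub):
    """Cheap starting point: midpoint if both bounds are finite,
    otherwise 0.0 projected (clipped) into [lb, ub]."""
    if math.isfinite(lb) and math.isfinite(ub):
        return 0.5 * (lb + ub)
    # project 0 into the bounds: clip to lb from below, ub from above
    return min(max(0.0, lb), ub)

print(default_start(1.0, 3.0))              # midpoint: 2.0
print(default_start(2.0, math.inf))         # 0 projected up: 2.0
print(default_start(-math.inf, -1.0))       # 0 projected down: -1.0
print(default_start(-math.inf, math.inf))   # unbounded: 0.0
```

Unlike the LP presolve, this costs essentially nothing per variable and never requires an LP solver to be installed.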

mlubin (Member) commented Jan 29, 2015

Yeah, happy to change this. I was never too satisfied with the current behavior, and it seems like AMPL and friends don't try to do anything smart here.

tkelman added a commit to JuliaOpt/CoinOptServices.jl that referenced this issue Jan 29, 2015
tkelman (Contributor) commented Jan 29, 2015

Looking at OS' source code, I think it might be initializing everything to 0 by default with Couenne (Couenne probably handles initialization in its own way internally?), while for Bonmin and Ipopt it appears to project 1.7171 into the bounds as the default starting point. That's a somewhat arbitrary number, perhaps chosen because it's less likely to trigger division by zero than 0, 1, or other obvious choices. I'm not familiar with what AMPL does for [MI]NLP when not given an initial point.
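For reference, projecting a fixed constant like OS's apparent 1.7171 into the bounds is the same kind of clip operation. This is a hypothetical Python sketch of the observed behavior, not the actual OS C++ code:

```python
def os_style_start(lb, ub, magic=1.7171):
    # Clip a fixed "magic" constant into [lb, ub]. A value that is
    # neither 0 nor 1 is less likely to hit division-by-zero or
    # similar singularities in arbitrary user expressions.
    return min(max(magic, lb), ub)

print(os_style_start(-10.0, 10.0))  # inside bounds: 1.7171
print(os_style_start(5.0, 10.0))    # clipped up to lb: 5.0
print(os_style_start(-10.0, 0.0))   # clipped down to ub: 0.0
```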

mlubin (Member) commented Jan 29, 2015

IIRC AMPL uses all zeros by default.

tkelman (Contributor) commented Jan 29, 2015
