Provide diagnostics mode to pinpoint source of NaNs #320
Comments
Does GAMS (or AMPL) provide diagnostics like this?
I don't know
Depending on the solver, this is the kind of thing where I would just turn up the logging level, output to a file, and start digging through it. Hard to handle in a solver-independent way, unless you want to start getting into adding optional diagnostic features to generated callback functions. Somewhat related to jump-dev/NLopt.jl#16
I think at some point it would be good to have diagnostic features in the generated callback functions, so that is really the point of this issue. Right now it is almost impossible to debug those, because one also can't simply insert
Julia really really badly needs a debugger. That much is obvious. I don't know if there's any good solution to make that happen any faster aside from locking Keno in a room. At the JuMP level of solver independence, what diagnostics would even make sense beyond "print the callback function inputs" and "print the callback function outputs"? Any decent solver, assuming it was designed with optimization modeling languages in mind as most of them are, can already do that for you.
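The "print the callback inputs and outputs" diagnostic could be sketched as a wrapper around a generated callback. This is purely illustrative; `logged_callback` and `eval_f` are hypothetical names, not JuMP API:

```julia
# Hypothetical sketch (not JuMP API): wrap a generated callback so every
# non-finite output is reported together with the inputs that produced it.
function logged_callback(eval_f)
    return function (x)
        y = eval_f(x)
        if !isfinite(y)
            @warn "callback returned a non-finite value" x y
        end
        return y
    end
end

raw(x) = x[1] / x[2]          # toy objective standing in for a model expression
f = logged_callback(raw)
f([1.0, 2.0])                 # 0.5, no warning
f([0.0, 0.0])                 # NaN, logged along with the offending inputs
```

This only tells you *which evaluation* went bad, not which operation inside the expression did, which is the gap discussed below.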
I guess my main trouble is that even if I knew which input combination of values caused a NaN, it would still be a fair amount of work to trace that back to the precise expression that caused the NaN. I might have a complicated expression as my objective or constraint, and then I would have to code that expression as normal Julia code, run that under a debugger (once it exists), and then figure out which exact operation causes the NaN. I'm sure it would work, but I know it would be painful. If there was some way for JuMP to just automatically tell me "this particular operation in constraint X returned NaN for these precise input parameters" it would be super handy and save an awful lot of time.
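One way to get that "this particular operation returned NaN for these inputs" report is to evaluate the expression through checked versions of each primitive operation. A minimal sketch, assuming hand-written wrappers (nothing here is generated by JuMP; `checked`, `cdiv`, `cmul` are made-up names):

```julia
# Hypothetical sketch: wrap each primitive operation so the first one that
# produces a NaN reports its name and inputs. Not JuMP API.
function checked(op, name)
    return function (args...)
        r = op(args...)
        isnan(r) && error("NaN produced by $name with inputs $args")
        return r
    end
end

cdiv = checked(/, "division")
cmul = checked(*, "multiplication")

expr(a, b) = cmul(2.0, cdiv(a, b))   # stands in for a constraint expression
expr(1.0, 2.0)                       # 1.0, passes cleanly
# expr(0.0, 0.0)                     # errors: NaN produced by division with inputs (0.0, 0.0)
```

A diagnostic mode in the generated callbacks could in principle emit this checked variant alongside the fast one, paying the overhead only when the user asks for it.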
Maybe something to the effect of -
On part 3, I wonder if this is something where JuMP could insert extra compiler metadata into the debug version of the callback to help tie backtraces back to the corresponding
Yes, Tony's idea would be perfect.
This discussion is very old and I don't expect any progress on providing NaN diagnostics unless someone is willing to do the work. I'm tempted to close. |
This continues a discussion started in #318.
This model now solves, but we know there are iterations where it returns NaNs. I would like to be able to pinpoint the exact expression where the NaN is introduced, so that I can double-check whether this is due to an error in my model, or whether I'm OK with it.
This is obviously not trivial at all, because the NaN could also be generated in a derivative; in those cases, ideally the diagnostic would say something like "the NaN was introduced in part x of the gradient of expression y".
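To illustrate how a NaN can appear only in a derivative while the function value itself is clean, here is a minimal hand-rolled forward-mode dual number (for illustration only; this is not JuMP's actual AD machinery):

```julia
# Minimal forward-mode dual number, hand-rolled for illustration
# (not JuMP's AD machinery).
struct Dual
    val::Float64   # function value
    der::Float64   # derivative with respect to x1
end
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
Base.sqrt(d::Dual) = Dual(sqrt(d.val), d.der / (2 * sqrt(d.val)))

# d/dx1 of sqrt(x1 * x2) at (0, 0): the value is a clean 0.0,
# but the derivative is 0.0 / 0.0 = NaN
x1, x2 = Dual(0.0, 1.0), Dual(0.0, 0.0)
y = sqrt(x1 * x2)
(y.val, y.der)   # (0.0, NaN)
```

A diagnostic that only inspects function values would miss this case entirely, which is why the derivative side needs its own reporting.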