
Lp sensitivity summary #1917

Merged
merged 15 commits into from Apr 25, 2019
92 changes: 92 additions & 0 deletions docs/src/solutions.md
@@ -17,6 +17,7 @@ optimization step. Typical questions include:
- Do I have a solution to my problem?
- Is it optimal?
- Do I have a dual solution?
- How sensitive is the solution to data perturbations?

JuMP follows closely the concepts defined in [MathOptInterface (MOI)](https://github.com/JuliaOpt/MathOptInterface.jl)
to answer user questions about a finished call to `optimize!(model)`. There
@@ -116,6 +117,97 @@ end
# TODO: How to accurately measure the solve time.
```

## Sensitivity analysis for LP

Given an LP problem and an optimal solution corresponding to a basis, we can
ask how much an objective coefficient or a standard form rhs coefficient
(c.f., [`standard_form_rhs`](@ref)) can change without violating primal or dual
feasibility of the basic solution. Note that not all solvers compute the basis,
and the sensitivity analysis requires that the solver interface implements
`MOI.ConstraintBasisStatus`.
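
To make the question precise, consider an LP written with equality constraints
and nonnegative variables (an illustrative standard form; JuMP itself works
with more general constraint types):

```math
\min_{x \in \mathbb{R}^n} \; c^\top x \quad \text{subject to} \quad A x = b, \quad x \geq 0.
```

For a basic optimal solution with basis ``B``, perturbing a cost coefficient
``c_j`` keeps the basis optimal as long as all reduced costs retain their
signs, and perturbing a right-hand side ``b_i`` keeps it optimal as long as
``x_B = B^{-1} b`` remains nonnegative; the perturbation ranges discussed
below are precisely the intervals over which these conditions hold.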

Member commented:

Please add a note that this form of sensitivity analysis appears often in textbooks but has limited use in practice because optimal bases are often degenerate.

Member commented:

(If you object to this characterization, I'd be interested to hear the argument.)

Contributor (author) replied:

True, it depends quite a lot on the instance if these ranges are practically useful, I added a paragraph discussing briefly their limitations.

Given an LP optimal solution (where both [`has_values`](@ref) and
[`has_duals`](@ref) return `true`), [`lp_objective_perturbation_range`](@ref)
returns the range of allowed perturbations of the cost coefficient
corresponding to the input variable. Within this range the current primal
solution remains optimal, but the corresponding dual solution might change
since a cost coefficient is perturbed. Similarly,
[`lp_rhs_perturbation_range`](@ref) returns the range of allowed perturbations
of the rhs coefficient corresponding to the input constraint. Within this
range the current dual solution remains optimal, but the primal solution might
change since a rhs coefficient is perturbed.

However, if the problem is degenerate, there are multiple optimal bases, and
hence these ranges might not be as intuitive and may seem too narrow. For
example, a larger cost coefficient perturbation might not invalidate the
optimality of the current primal solution. Moreover, if a problem is
degenerate, then due to finite precision a perturbation may appear to
invalidate a basis even though it does not (again yielding too narrow ranges).
To guard against this, the `feasibility_tolerance` and `optimality_tolerance`
arguments are provided, which in turn might make the ranges too wide for
numerically challenging instances. Thus, do not blindly trust these ranges,
especially for highly degenerate or numerically unstable instances.
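
The degenerate case can be sketched as follows. This is an illustrative
example, not part of the documented API surface: it assumes a simplex-based
solver such as Clp is installed and that it reports
`MOI.ConstraintBasisStatus`; the exact range returned depends on which of the
optimal bases the solver happens to report.

```julia
using JuMP, Clp

# A degenerate LP: at the optimum x = (1, 0), three constraints are
# active in two dimensions, so several optimal bases exist.
model = Model(with_optimizer(Clp.Optimizer))
@variable(model, x[1:2] >= 0)
@constraint(model, c1, x[1] + x[2] <= 1)
@constraint(model, c2, x[1] <= 1)
@objective(model, Max, x[1])
optimize!(model)

# The reported range depends on which optimal basis the solver returns;
# it may be narrower than the true stability region of the primal solution.
lp_objective_perturbation_range(x[2])
```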

To give a simple example, we could analyze the sensitivity of the optimal
solution to the following (non-degenerate) LP problem:

```julia
julia> model = Model();

julia> @variable(model, x[1:2]);

julia> @constraint(model, c1, x[1] + x[2] <= 1);

julia> @constraint(model, c2, x[1] - x[2] <= 1);

julia> @constraint(model, c3, -0.5 <= x[2] <= 0.5);

julia> @objective(model, Max, x[1]);
```

```@meta
DocTestSetup = quote
using JuMP
model = Model(with_optimizer(MOIU.MockOptimizer, JuMP._MOIModel{Float64}(),
eval_variable_constraint_dual=true));
@variable(model, x[1:2]);
@constraint(model, c1, x[1] + x[2] <= 1);
@constraint(model, c2, x[1] - x[2] <= 1);
@constraint(model, c3, -0.5 <= x[2] <= 0.5);
@objective(model, Max, x[1]);
optimize!(model);
mock = backend(model).optimizer.model;
MOI.set(mock, MOI.TerminationStatus(), MOI.OPTIMAL);
MOI.set(mock, MOI.DualStatus(), MOI.FEASIBLE_POINT);
MOI.set(mock, MOI.VariablePrimal(), JuMP.optimizer_index(x[1]), 1.0);
MOI.set(mock, MOI.VariablePrimal(), JuMP.optimizer_index(x[2]), 0.0);
MOI.set(mock, MOI.ConstraintBasisStatus(), JuMP.optimizer_index(c1), MOI.NONBASIC);
MOI.set(mock, MOI.ConstraintBasisStatus(), JuMP.optimizer_index(c2), MOI.NONBASIC);
MOI.set(mock, MOI.ConstraintBasisStatus(), JuMP.optimizer_index(c3), MOI.BASIC);
MOI.set(mock, MOI.ConstraintDual(), JuMP.optimizer_index(c1), -0.5);
MOI.set(mock, MOI.ConstraintDual(), JuMP.optimizer_index(c2), -0.5);
MOI.set(mock, MOI.ConstraintDual(), JuMP.optimizer_index(c3), 0.0);
end
```

To analyze the sensitivity of the problem, we can query the allowed
perturbation ranges of, e.g., the cost coefficients and the rhs coefficient of
constraint `c1`:
```jldoctest
julia> optimize!(model);

julia> value.(x)
2-element Array{Float64,1}:
 1.0
 0.0

julia> lp_objective_perturbation_range(x[1])
(-1.0, Inf)

julia> lp_objective_perturbation_range(x[2])
(-1.0, 1.0)

julia> lp_rhs_perturbation_range(c1)
(-1.0, 1.0)
```

```@docs
lp_objective_perturbation_range
lp_rhs_perturbation_range
```

## Reference

```@docs
2 changes: 2 additions & 0 deletions src/JuMP.jl
@@ -716,6 +716,8 @@ include("macros.jl")
include("optimizer_interface.jl")
include("nlp.jl")
include("print.jl")
include("lp_sensitivity.jl")


# JuMP exports everything except internal symbols, which are defined as those
# whose name starts with an underscore. If you don't want all of these symbols