Lp sensitivity summary (#1917)
* Add range of feasibility and optimality, i.e., where the current lp-basis remain optimal.

* Add tests for range of feasibility

* Remove dependence of Lower/UpperBoundref for Range of optimality

* Add tests for range of optimality

* Code clean

* Minor simplification of _std_matrix

* Move lp sensitivity to new file

* Code cleaning

* Minor fix and clean in _std_matrix

* Revised tolerances used in lp sensitivity

* Clean up and fixes in lp sensitivity

* Tolerance as an optional argument in lp_sensitivity

* Fix some comments in lp_sensitivity

* Documentation of lp_sensitivity

* Clarifications in documentation of lp_sensitivity
EdvinAblad authored and mlubin committed Apr 25, 2019
1 parent fd5ea00 commit 94e2cbb
Showing 5 changed files with 664 additions and 0 deletions.
92 changes: 92 additions & 0 deletions docs/src/solutions.md
optimization step. Typical questions include:
- Do I have a solution to my problem?
- Is it optimal?
- Do I have a dual solution?
- How sensitive is the solution to data perturbations?

JuMP follows closely the concepts defined in [MathOptInterface (MOI)](https://github.com/JuliaOpt/MathOptInterface.jl)
to answer user questions about a finished call to `optimize!(model)`. There
end
# TODO: How to accurately measure the solve time.
```

## Sensitivity analysis for LP

Given an LP problem and an optimal solution corresponding to a basis, we can
ask how much an objective coefficient or standard-form rhs coefficient (cf.
[`standard_form_rhs`](@ref)) can change without violating primal or dual
feasibility of the basic solution. Note that not all solvers compute the basis,
and the sensitivity analysis requires that the solver interface implements
`MOI.ConstraintBasisStatus`.

Given an LP optimal solution (where both [`has_values`](@ref) and
[`has_duals`](@ref) return `true`), [`lp_objective_perturbation_range`](@ref)
returns the range over which the cost coefficient of the given variable may be
perturbed. Within this range the current primal solution remains optimal, but
the corresponding dual solution might change, since a cost coefficient is
perturbed. Similarly, [`lp_rhs_perturbation_range`](@ref) returns the range
over which the rhs coefficient of the given constraint may be perturbed; within
this range the current dual solution remains optimal, but the primal solution
might change, since a rhs coefficient is perturbed.
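In textbook terms, the two ranges correspond to preserving dual and primal
feasibility of the current basis. As a sketch (assuming a maximization problem
in equality standard form ``\max c^\top x`` subject to ``Ax = b``, ``x \ge 0``,
with basis matrix ``B``, nonbasic columns ``A_N``, and cost vector split into
``c_B`` and ``c_N``), the basis remains optimal while

```math
\underbrace{c_N - A_N^\top B^{-\top} c_B \le 0}_{\text{dual feasibility}}
\qquad \text{and} \qquad
\underbrace{B^{-1} b \ge 0}_{\text{primal feasibility}}.
```

Perturbing a cost coefficient affects only the left condition, so the primal
basic solution ``B^{-1}b`` is unchanged; perturbing a rhs coefficient affects
only the right condition, so the dual solution is unchanged.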

However, if the problem is degenerate, there are multiple optimal bases, and
these ranges might then be unintuitive and seem too narrow: for example, a
larger cost coefficient perturbation might not actually invalidate the
optimality of the current primal solution. Moreover, if a problem is
degenerate, finite precision can make a perturbation appear to invalidate a
basis even though it does not (again yielding too-narrow ranges). To guard
against this, the `feasibility_tolerance` and `optimality_tolerance` arguments
are introduced; these, in turn, might make the ranges too wide for numerically
challenging instances. Thus, do not blindly trust these ranges, especially for
highly degenerate or numerically unstable instances.
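For instance, the tolerances can be adjusted via keyword arguments. The
following is a hypothetical call for a solved model with a variable `x` and a
constraint `c`; the keyword names follow the discussion above, and the values
shown are illustrative rather than the implemented defaults:

```julia
# Hypothetical usage; `x` is a variable and `c` a constraint of a solved model.
# Keyword names correspond to the tolerances discussed above.
lp_objective_perturbation_range(x; optimality_tolerance = 1e-8)
lp_rhs_perturbation_range(c; feasibility_tolerance = 1e-8)
```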

To give a simple example, we could analyze the sensitivity of the optimal
solution to the following (non-degenerate) LP problem:

```julia
julia> model = Model();

julia> @variable(model, x[1:2]);

julia> @constraint(model, c1, x[1] + x[2] <= 1);

julia> @constraint(model, c2, x[1] - x[2] <= 1);

julia> @constraint(model, c3, -0.5 <= x[2] <= 0.5);

julia> @objective(model, Max, x[1]);
```

```@meta
DocTestSetup = quote
using JuMP
model = Model(with_optimizer(MOIU.MockOptimizer, JuMP._MOIModel{Float64}(),
eval_variable_constraint_dual=true));
@variable(model, x[1:2]);
@constraint(model, c1, x[1] + x[2] <= 1);
@constraint(model, c2, x[1] - x[2] <= 1);
@constraint(model, c3, -0.5 <= x[2] <= 0.5);
@objective(model, Max, x[1]);
optimize!(model);
mock = backend(model).optimizer.model;
MOI.set(mock, MOI.TerminationStatus(), MOI.OPTIMAL);
MOI.set(mock, MOI.DualStatus(), MOI.FEASIBLE_POINT);
MOI.set(mock, MOI.VariablePrimal(), JuMP.optimizer_index(x[1]), 1.0);
MOI.set(mock, MOI.VariablePrimal(), JuMP.optimizer_index(x[2]), 0.0);
MOI.set(mock, MOI.ConstraintBasisStatus(), JuMP.optimizer_index(c1), MOI.NONBASIC);
MOI.set(mock, MOI.ConstraintBasisStatus(), JuMP.optimizer_index(c2), MOI.NONBASIC);
MOI.set(mock, MOI.ConstraintBasisStatus(), JuMP.optimizer_index(c3), MOI.BASIC);
MOI.set(mock, MOI.ConstraintDual(), JuMP.optimizer_index(c1), -0.5);
MOI.set(mock, MOI.ConstraintDual(), JuMP.optimizer_index(c2), -0.5);
MOI.set(mock, MOI.ConstraintDual(), JuMP.optimizer_index(c3), 0.0);
end
```

To analyze the sensitivity of the problem we could check the allowed
perturbation ranges of, e.g., the cost coefficients and the rhs coefficient of
constraint `c1` as follows:
```jldoctest
julia> optimize!(model);

julia> value.(x)
2-element Array{Float64,1}:
 1.0
 0.0

julia> lp_objective_perturbation_range(x[1])
(-1.0, Inf)

julia> lp_objective_perturbation_range(x[2])
(-1.0, 1.0)

julia> lp_rhs_perturbation_range(c1)
(-1.0, 1.0)
```
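Outside of a doctest, the same analysis can be run against any simplex-based
solver whose MOI wrapper implements `MOI.ConstraintBasisStatus`. The following
is a sketch using GLPK; it assumes GLPK.jl is installed and reports basis
statuses:

```julia
using JuMP, GLPK

model = Model(with_optimizer(GLPK.Optimizer))
@variable(model, x[1:2])
@constraint(model, c1, x[1] + x[2] <= 1)
@constraint(model, c2, x[1] - x[2] <= 1)
@constraint(model, c3, -0.5 <= x[2] <= 0.5)
@objective(model, Max, x[1])
optimize!(model)

lp_objective_perturbation_range(x[1])  # (-1.0, Inf), as in the doctest above
lp_rhs_perturbation_range(c1)          # (-1.0, 1.0)
```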

```@docs
lp_objective_perturbation_range
lp_rhs_perturbation_range
```

## Reference

```@docs
Expand Down
2 changes: 2 additions & 0 deletions src/JuMP.jl
include("macros.jl")
include("optimizer_interface.jl")
include("nlp.jl")
include("print.jl")
include("lp_sensitivity.jl")


# JuMP exports everything except internal symbols, which are defined as those
# whose name starts with an underscore. If you don't want all of these symbols
