Commit

Clarify the documentation on user-defined gradients (#2639)
odow authored Jun 28, 2021
1 parent 2871a06 commit 7ef4be4
Showing 2 changed files with 11 additions and 9 deletions.
10 changes: 6 additions & 4 deletions docs/src/manual/nlp.md
@@ -204,7 +204,7 @@ julia> expr = @NLexpression(model, sin(x) + 1)

With the exception of the splatting syntax discussed below, all expressions
must be simple scalar operations. You cannot use `dot`, matrix-vector products,
vector slices, etc.
```jldoctest nlp_scalar_only; setup=:(model = Model(); @variable(model, x[1:2]); @variable(model, y); c = [1, 2])
julia> @NLobjective(model, Min, c' * x + 3y)
ERROR: Unexpected array [1 2] in nonlinear expression. Nonlinear expressions may contain only scalar expressions.
@@ -262,7 +262,7 @@ In addition to this list of functions, it is possible to register custom
a function is not needed. Two exceptions are if you want to provide custom
derivatives, or if the function is not available in the scope of the
nonlinear expression.

!!! warning
    User-defined functions must return a scalar output. For a work-around, see
    [User-defined functions with vector outputs](@ref).
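The scalar-only restriction discussed earlier has a simple workaround for fixed-size products: expand them into a sum of scalar terms, which the `@NL` macros accept. A minimal sketch, reusing the `model`, `x`, `y`, and `c` from the earlier example (not part of this diff):

```julia
using JuMP

model = Model()
@variable(model, x[1:2])
@variable(model, y)
c = [1, 2]

# `c' * x` is rejected, but the equivalent generator sum is purely scalar:
@NLobjective(model, Min, sum(c[i] * x[i] for i in 1:2) + 3y)
```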
@@ -395,7 +395,7 @@ vector as the first argument that is filled in-place:
```@example
using JuMP #hide
f(x, y) = (x - 1)^2 + (y - 2)^2
-function ∇f(g::Vector{T}, x::T, y::T) where {T}
+function ∇f(g::AbstractVector{T}, x::T, y::T) where {T}
    g[1] = 2 * (x - 1)
    g[2] = 2 * (y - 2)
    return
@@ -407,7 +407,9 @@ register(model, :my_square, 2, f, ∇f)
@NLobjective(model, Min, my_square(x[1], x[2]))
```

-Hessian information is not supported for multivariate functions.
+!!! warning
+    Make sure the first argument to `∇f` supports an `AbstractVector`, and do
+    not assume the input is `Float64`.
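To see why the looser signature matters, here is a minimal pure-Julia check (no JuMP required; it redefines the `f`/`∇f` gradient from the example above) showing that the `AbstractVector` version accepts a view and `Float32` inputs, which a `::Vector{Float64}`-style signature would reject:

```julia
# Same gradient as f(x, y) = (x - 1)^2 + (y - 2)^2 in the example above.
function ∇f(g::AbstractVector{T}, x::T, y::T) where {T}
    g[1] = 2 * (x - 1)
    g[2] = 2 * (y - 2)
    return
end

g64 = zeros(2)
∇f(g64, 3.0, 4.0)          # Float64 vector: g64 becomes [4.0, 4.0]

buffer = zeros(Float32, 4)
gview = view(buffer, 1:2)  # a SubArray, not a Vector
∇f(gview, 3.0f0, 4.0f0)    # would be a MethodError if g were `::Vector{T}`
```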

### Register a function, gradient, and Hessian

10 changes: 5 additions & 5 deletions src/nlp.jl
@@ -1861,16 +1861,16 @@ end
Register the user-defined function `f` that takes `dimension` arguments in
`model` as the symbol `s`. In addition, provide a gradient function `∇f`.
-The functions `f` and ∇f must support all subtypes of `Real` as arguments. Do not
-assume that the inputs are `Float64`.
+The functions `f` and `∇f` must support all subtypes of `Real` as arguments. Do
+not assume that the inputs are `Float64`.
## Notes
* If the function `f` is univariate (i.e., `dimension == 1`), `∇f` must return
  a number which represents the first-order derivative of the function `f`.
* If the function `f` is multi-variate, `∇f` must have a signature matching
-  `∇f(g::Vector{T}, args::T...) where {T<:Real}`, where the first argument is a
-  vector `g` that is modified in-place with the gradient.
+  `∇f(g::AbstractVector{T}, args::T...) where {T<:Real}`, where the first
+  argument is a vector `g` that is modified in-place with the gradient.
* If `autodiff = true` and `dimension == 1`, use automatic differentiation to
  compute the second-order derivative information. If `autodiff = false`, only
  first-order derivative information will be used.
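The two `∇f` shapes described in the notes can be sketched in plain Julia (the `_uni`/`_multi` names are illustrative, not part of the API):

```julia
# Univariate: ∇f returns the derivative as a number.
f_uni(x) = x^2
∇f_uni(x) = 2x

# Multivariate: ∇f fills a gradient vector in-place and returns nothing.
f_multi(x, y) = x * y
function ∇f_multi(g::AbstractVector{T}, x::T, y::T) where {T<:Real}
    g[1] = y
    g[2] = x
    return
end

∇f_uni(3.0)            # 6.0
g = zeros(2)
∇f_multi(g, 2.0, 5.0)  # g is now [5.0, 2.0]
```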
@@ -1892,7 +1892,7 @@ register(model, :foo, 1, f, ∇f; autodiff = true)
model = Model()
@variable(model, x[1:2])
g(x::T, y::T) where {T<:Real} = x * y
-function ∇g(g::Vector{T}, x::T, y::T) where {T<:Real}
+function ∇g(g::AbstractVector{T}, x::T, y::T) where {T<:Real}
    g[1] = y
    g[2] = x
    return
Expand Down
