
Hermite splines? #6

Open
tomasaschan opened this issue Nov 21, 2014 · 9 comments

@tomasaschan
Contributor

A recent post on julia-users made a case for cubic Hermite splines, which seem to have some properties that regular B-splines might not have (although I must admit that I'm not too familiar with the terminology used in the julia-users thread...).

Cubic Hermite splines are quite similar to B-splines, but instead of using several data points to construct an interpolating polynomial, they use the function values and derivatives at the data points. They readily generalize to irregular grids, which is nice, but I can't find any good resources on whether they also generalize nicely to higher dimensions.
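
For concreteness, single-interval evaluation looks roughly like this (a minimal sketch using the standard Hermite basis; the function name is made up for illustration, not taken from any package):

# Evaluate the cubic Hermite interpolant on [x1, x2], given values y1, y2 and
# derivatives d1, d2 at the endpoints, using the standard Hermite basis.
function hermite_eval(x, x1, x2, y1, y2, d1, d2)
    h = x2 - x1
    t = (x - x1) / h              # normalized coordinate in [0, 1]
    h00 = (1 + 2t) * (1 - t)^2    # weights y1; equals 1 at t = 0
    h10 = t * (1 - t)^2           # weights d1 (scaled by h)
    h01 = t^2 * (3 - 2t)          # weights y2; equals 1 at t = 1
    h11 = t^2 * (t - 1)           # weights d2 (scaled by h)
    h00 * y1 + h * h10 * d1 + h01 * y2 + h * h11 * d2
end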

@timholy
Member

timholy commented Nov 23, 2014

Seems quite reasonable. I don't have anything to add (either comments or experience). This could well be something we encourage others to contribute.

@lstagner

lstagner commented Jun 2, 2015

I recently had to code up Cubic Hermite Splines (including Monotone Cubic Splines). I also implemented Polyharmonic Splines, which work on N-dimensional, irregularly gridded data.

Here's the code:
Cubic Splines
Polyharmonic Splines

It probably needs a bit of cleanup, but it's a start.
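
For the monotone variant, the standard trick is the Fritsch–Carlson slope limiter. A minimal sketch of that idea (simplified, not the linked implementation; it omits the sign check on the secants):

# Fritsch–Carlson limiting: adjust endpoint derivatives so the cubic Hermite
# spline through (xs, ys) preserves monotonicity (simplified sketch).
function monotone_slopes(xs::Vector{Float64}, ys::Vector{Float64})
    n = length(xs)
    Δ = diff(ys) ./ diff(xs)                                  # secant slopes
    m = [Δ[1]; [(Δ[i-1] + Δ[i]) / 2 for i = 2:n-1]; Δ[n-1]]   # initial tangents
    for i = 1:n-1
        if Δ[i] == 0                     # flat segment: force flat tangents
            m[i] = 0.0
            m[i+1] = 0.0
        else
            α = m[i] / Δ[i]
            β = m[i+1] / Δ[i]
            if α^2 + β^2 > 9             # outside the monotonicity region:
                τ = 3 / sqrt(α^2 + β^2)  # rescale onto the circle of radius 3
                m[i] = τ * α * Δ[i]
                m[i+1] = τ * β * Δ[i]
            end
        end
    end
    m
end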

@cauribeteran

@lstagner, your code looks pretty good. Have you compared your functions in terms of efficiency with pchip or SimplePCHIP (https://github.com/slabanja/SimplePCHIP)? I'm solving a large-scale model and I'm having a hard time with efficiency. Using linear interpolation (Interpolations.jl), each iteration takes around 0.86 minutes. But that approximation is not good enough, and PCHIP is required. My Matlab code (enhanced with Matlab Coder running C++ code) uses the interp1(...,'pchip') function, and each iteration takes about 1.3 minutes. However, using SimplePCHIP in Julia, each iteration takes almost 13 minutes! Any recommendations?
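
(For making these timings comparable, BenchmarkTools.jl is the usual tool; a minimal sketch, where itp is just a stand-in for whichever interpolant is being measured:)

using BenchmarkTools

xs = 0.0:0.01:10.0
ys = sin.(xs)
itp = x -> sin(x)                    # placeholder: swap in a real interpolant

@btime $itp(3.14)                    # per-evaluation cost
@btime sum($itp(x) for x in $xs)     # bulk-evaluation cost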

@tomasaschan
Contributor Author

@cauribeteran Have you tried higher-order B-splines from Interpolations.jl? Are there any reasons you can't apply them to your use case?

The B-spline parts of Interpolations.jl have been very carefully crafted to be performant (most of the actual work in that area was done by Tim). For quadratic and higher-order B-splines there's an initial "setup" cost for solving a system of equations for the spline coefficients, but after that evaluation should be on the same scale as linear interpolation. If you do a lot of evaluations in each iteration before you change the underlying data (and with 0.86 minutes per iteration for linear splines, it sounds like you do), the "setup" cost might be negligible.
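
A minimal sketch of that pattern (the exact constructor syntax depends on your Interpolations.jl version, so treat this as illustrative):

using Interpolations

ys = sin.(0.0:0.1:10.0)      # data sampled on a uniform grid

# One-time "setup" cost: prefiltering solves for the spline coefficients.
itp = interpolate(ys, BSpline(Cubic(Line(OnGrid()))))

# After that, each evaluation is cheap, so many evaluations amortize the setup.
vals = [itp(x) for x in range(1, length(ys), length = 10_000)]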

@cauribeteran

cauribeteran commented Aug 1, 2017

@tlycken Thank you for your answer. I need to use monotone cubic splines because my functions have inflection points that I can't anticipate. Simple cubic splines would produce non-monotonicities that do not reflect the functions' properties.

Another issue is that within each iteration I need to set up the interpolant object, and this is a bottleneck.

I've been playing around with the scipy Python library, but it is still taking too long (specifically with PyCall and scipy.interpolate.PchipInterpolator).

Like I said in the previous post, my point of reference is Matlab code enhanced with Matlab Coder. Thus, I'm actually comparing my Julia code with C code.

Does anyone know of an interpolation library in C that includes a pchip function, and what would be the proper way to call it from Julia? I'm guessing I should use Julia's ccall, but I've been attempting this without success. Help would be much appreciated!
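
For reference, the general ccall pattern looks like the sketch below; libpchip and pchip_eval are hypothetical placeholder names standing in for whatever C library turns up, not a real package:

# Hypothetical: call a C function with signature
#   double pchip_eval(const double* xs, const double* ys, size_t n, double x);
# from a shared library "libpchip". Both names are placeholders.
function pchip_c(xs::Vector{Float64}, ys::Vector{Float64}, x::Float64)
    ccall((:pchip_eval, "libpchip"), Cdouble,
          (Ptr{Cdouble}, Ptr{Cdouble}, Csize_t, Cdouble),
          xs, ys, length(xs), x)
end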

@mschauer
Member

mschauer commented Aug 1, 2017

For comparison in tight loops, you can look at https://github.com/mschauer/Bridge.jl/blob/master/src/cspline.jl#L2

@NAThompson
Contributor

NAThompson commented Jan 2, 2022

I made a stab at this, but quickly realized I was out of my Julia depth, especially with respect to how many nice things this library is compatible with (units, genericity over types and vectors, vector-valued codomains, etc.). In any case, if anyone wants to pick up this ball, cubic Hermite interpolation is great for ODE solution skeletons, since y' = f(y, t) immediately makes derivatives available. Moreover, you immediately get a nice way to validate your solution, since ||u' - f(u(t), t)|| gives the backward error of your solve.

struct CubicHermite
    xs::Vector{Float64}
    ys::Vector{Float64}
    dydxs::Vector{Float64}
    function CubicHermite(xs, ys, dydxs)
        if length(xs) < 2
            throw(
                DomainError(
                    "There must be at least two samples in the domain of the CubicHermite interpolator.",
                ),
            )
        end
        x = xs[1]
        for i = 2:length(xs)
            if xs[i] <= x
                throw(DomainError("The list of abscissas must be strictly increasing."))
            end
            x = xs[i]
        end
        if (length(xs) != length(ys) || length(ys) != length(dydxs))
            throw(
                DomainError(
                    "There must be exactly the same number of abscissas as ordinates and derivatives.",
                ),
            )
        end
        new(xs, ys, dydxs)
    end
end

function (ch::CubicHermite)(x::Float64)::Float64
    if x <= ch.xs[1]
        if x == ch.xs[1]
            return ch.ys[1]
        end
        throw(
            DomainError(
                "Requested abscissas $x is less than the minimum value of the domain of the interpolator.",
            ),
        )
    end

    if x >= last(ch.xs)
        if x == last(ch.xs)
            return last(ch.ys)
        end
        throw(
            DomainError(
                "Requested abscissa $x is greater than the maximum value of the domain of the interpolator.",
            ),
        )
    end
    # searchsortedlast returns the index of the last abscissa ≤ x; since we have
    # asserted that the xs are strictly increasing, this is exactly the interval index.
    i = searchsortedlast(ch.xs, x)
    x1 = ch.xs[i]
    x2 = ch.xs[i+1]
    @assert x1 <= x <= x2
    h = x2 - x1
    y1 = ch.ys[i]
    y2 = ch.ys[i+1]

    dydx1 = ch.dydxs[i]
    dydx2 = ch.dydxs[i+1]
    # Corless, A Graduate Introduction to Numerical Methods, equation 14.10:
    c1 = (2x - 3x1 + x2) * (x - x2) * (x - x2) / (h * h * h)
    c2 = -(2x - 3x2 + x1) * (x - x1) * (x - x1) / (h * h * h)
    d1 = (x - x1) * (x - x2) * (x - x2) / (h * h)
    d2 = (x - x2) * (x - x1) * (x - x1) / (h * h)
    c1 * y1 + c2 * y2 + d1 * dydx1 + d2 * dydx2
end

function gradient(ch::CubicHermite, x::Float64)::Float64
    if x <= ch.xs[1]
        if x == ch.xs[1]
            return ch.dydxs[1]
        end
        throw(
            DomainError(
                "Requested abscissas $x is less than the minimum value of the domain of the interpolator.",
            ),
        )
    end

    if x >= last(ch.xs)
        if x == last(ch.xs)
            return last(ch.dydxs)
        end
        throw(
            DomainError(
                "Requested abscissa $x is greater than the maximum value of the domain of the interpolator.",
            ),
        )
    end
    i = searchsortedlast(ch.xs, x)
    x1 = ch.xs[i]
    x2 = ch.xs[i+1]
    @assert x1 <= x <= x2
    h = x2 - x1
    y1 = ch.ys[i]
    y2 = ch.ys[i+1]

    dydx1 = ch.dydxs[i]
    dydx2 = ch.dydxs[i+1]
    # Corless, A Graduate Introduction to Numerical Methods, equation 14.11:
    c1 = 6 * (x - x2) * (x - x1) / (h * h * h)
    d1 = (x - x2) * (3x - 2x1 - x2) / (h * h)
    d2 = (x - x1) * (3x - 2x2 - x1) / (h * h)
    c1 * (y1 - y2) + d1 * dydx1 + d2 * dydx2
end

function hessian(ch::CubicHermite, x::Float64)::Float64
    if x < ch.xs[1]
        throw(
            DomainError(
                "Requested abscissas $x is less than the minimum value of the domain of the interpolator.",
            ),
        )
    end

    if x >= last(ch.xs)
        if x == last(ch.xs)
            h = ch.xs[end] - ch.xs[end-1]
            c1 = 6 / (h * h)
            d1 = 2 / h
            d2 = 4 / h
            return c1 * (ch.ys[end-1] - ch.ys[end]) +
                   d1 * ch.dydxs[end-1] +
                   d2 * ch.dydxs[end]
        end
        throw(
            DomainError(
                "Requested abscissa $x is greater than the maximum value of the domain of the interpolator.",
            ),
        )
    end
    i = searchsortedlast(ch.xs, x)
    x1 = ch.xs[i]
    x2 = ch.xs[i+1]
    @assert x1 <= x <= x2
    h = x2 - x1
    y1 = ch.ys[i]
    y2 = ch.ys[i+1]

    dydx1 = ch.dydxs[i]
    dydx2 = ch.dydxs[i+1]
    # Corless, A Graduate Introduction to Numerical Methods, equation 14.12:
    c1 = 6 * (2x - x1 - x2) / (h * h * h)
    d1 = 2 * (3x - 2x2 - x1) / (h * h)
    d2 = 2 * (3x - 2x1 - x2) / (h * h)
    c1 * (y1 - y2) + d1 * dydx1 + d2 * dydx2
end


export CubicHermite
export gradient
export hessian

And tests:

using Random
using Test

function cubic_hermite_test()
    xs = ones(5)
    for i = 2:length(xs)
        xs[i] = i
    end
    ys = ones(5)
    dydxs = zeros(5)
    ch = CubicHermite(xs, ys, dydxs)
    for i = 1:length(xs)-1
        x = Float64(i)
        @test ch(x) == 1.0
        @test gradient(ch, x) == 0.0
        @test hessian(ch, x) == 0.0
        x = x + 0.25
        @test ch(x) == 1.0
        @test gradient(ch, x) == 0.0
        @test hessian(ch, x) == 0.0
        x = x + 0.25
        @test ch(x) == 1.0
        @test gradient(ch, x) == 0.0
        @test hessian(ch, x) == 0.0
        x = x + 0.25
        @test ch(x) == 1.0
        @test gradient(ch, x) == 0.0
        @test hessian(ch, x) == 0.0
    end
    # Ensure that the right endpoint query doesn't read past the end of the array:
    @test ch(5.0) == 1.0
    @test gradient(ch, 5.0) == 0.0
    @test hessian(ch, 5.0) == 0.0

    # Now linear functions:
    a = 7.2
    b = 9.6
    for i = 1:length(xs)
        ys[i] = a * xs[i] + b
        dydxs[i] = a
    end
    ch = CubicHermite(xs, ys, dydxs)
    for i = 1:length(xs)-1
        x = Float64(i)
        @test ch(x) ≈ a * x + b
        @test gradient(ch, x) ≈ a
        @test abs(hessian(ch, x)) < 3e-14
        x = x + 0.25
        @test ch(x) ≈ a * x + b
        @test gradient(ch, x) ≈ a
        @test abs(hessian(ch, x)) < 3e-14
        x = x + 0.25
        @test ch(x) ≈ a * x + b
        @test gradient(ch, x) ≈ a
        @test abs(hessian(ch, x)) < 3e-14
        x = x + 0.25
        @test ch(x) ≈ a * x + b
        @test gradient(ch, x) ≈ a
        @test abs(hessian(ch, x)) < 3e-14
    end
    @test ch(last(xs)) ≈ a * last(xs) + b
    @test gradient(ch, last(xs)) ≈ a
    @test abs(hessian(ch, last(xs))) < 3e-14

    # Now the interpolation condition:
    r = Random.Xoshiro(12345)
    xs = zeros(50)
    ys = zeros(50)
    dydxs = zeros(50)
    xs[1] = 0.0
    ys[1] = randn(r, Float64)
    dydxs[1] = randn(r, Float64)
    for i = 2:50
        xs[i] = xs[i-1] + rand(r) + 0.1
        ys[i] = randn(r, Float64)
        dydxs[i] = randn(r, Float64)
    end

    ch = CubicHermite(xs, ys, dydxs)

    for i = 1:50
        @test ch(xs[i]) ≈ ys[i]
        @test gradient(ch, xs[i]) ≈ dydxs[i]
    end

    # Now quadratics:
    a = 1.0 / 8
    b = -3
    c = -2
    for i = 1:50
        ys[i] = a * xs[i] * xs[i] + b * xs[i] + c
        dydxs[i] = 2 * a * xs[i] + b
    end
    ch = CubicHermite(xs, ys, dydxs)
    for i = 1:200
        x = rand(r) * last(xs)
        @test ch(x) ≈ a * x * x + b * x + c
        @test gradient(ch, x) ≈ 2 * a * x + b
        @test hessian(ch, x) ≈ 2 * a
    end
    x = last(xs)
    @test ch(x) ≈ a * x * x + b * x + c
    @test gradient(ch, x) ≈ 2 * x * a + b
    @test hessian(ch, x) ≈ 2 * a
end
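
A minimal usage sketch of that backward-error check (the ODE y' = -y here is made up purely for illustration):

# Build a Hermite "skeleton" from solution samples; the derivatives come
# directly from the right-hand side f, as noted above.
f(y, t) = -y
ts = collect(range(0.0, 2.0, length = 9))
ys = exp.(-ts)                 # exact solution of y' = -y, y(0) = 1
dydts = f.(ys, ts)
u = CubicHermite(ts, ys, dydts)

# Backward error max |u'(t) - f(u(t), t)| over a fine sampling of the domain.
err = maximum(abs(gradient(u, t) - f(u(t), t))
              for t in range(first(ts), last(ts), length = 101))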

@mkitti
Collaborator

mkitti commented Jan 2, 2022

@NAThompson could you resubmit that code as a pull request? We can work on it in the branch there.

@NAThompson
Contributor

@mkitti : Done. Clicked the "allow edits from maintainers" button as well.
