
Commit

[sample] add dog position example (#24)
FrancoisCarouge authored May 5, 2022
1 parent ac614f3 commit 4b1c9a5
Showing 9 changed files with 239 additions and 119 deletions.
54 changes: 27 additions & 27 deletions README.md
@@ -58,10 +58,10 @@ k.x(60.);
k.p(225.);
k.r(25.);

-k.observe(48.54);
+k.update(48.54);
```

-### Multi-Dimensional
+## Multi-Dimensional

Two states, one control, using Eigen3 support.

@@ -94,9 +94,9 @@ k.h(1, 0);
k.r(400);

k.predict(delta_time, gravitational_acceleration);
-k.observe(-32.40);
+k.update(-32.40);
k.predict(delta_time, 39.72);
-k.observe(-11.1);
+k.update(-11.1);
```

# Library
@@ -105,19 +105,19 @@ k.observe(-11.1);
- [Continuous Integration & Deployment Actions](#continuous-integration--deployment-actions)
- [Examples](#examples)
- [One-Dimensional](#one-dimensional)
- [Multi-Dimensional](#multi-dimensional)
- [Multi-Dimensional](#multi-dimensional)
- [Library](#library)
- [Motivation](#motivation)
- [Motivation](#motivation)
- [Class fcarouge::kalman](#class-fcarougekalman)
- [Template Parameters](#template-parameters)
- [Member Types](#member-types)
- [Member Functions](#member-functions)
- [Characteristics](#characteristics)
- [Modifiers](#modifiers)
- [Resources](#resources)
- [Class fcarouge::kalman](#class-fcarougekalman)
- [Template Parameters](#template-parameters)
- [Member Types](#member-types)
- [Member Functions](#member-functions)
- [Characteristics](#characteristics)
- [Modifiers](#modifiers)
- [License](#license)

-# Motivation
+## Motivation

Kalman filters can be difficult to learn, use, and implement. Users need a fair amount of algebra, domain, and software knowledge; gaps in any of these lead to incorrectness, underperformance, and a big ball of mud.

@@ -126,14 +126,7 @@ This package explores what could be a Kalman filter implementation a la standard
- Separation of the algebra implementation.
- Generalization of the support.

-# Resources
-
-Awesome resources to learn about Kalman filters:
-
-- [KalmanFilter.NET](https://www.kalmanfilter.net) by Alex Becker.
-- [Kalman and Bayesian Filters in Python](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python) by Roger Labbe.
-
-# Class fcarouge::kalman
+## Class fcarouge::kalman

Defined in header [fcarouge/kalman.hpp](include/fcarouge/kalman.hpp)

@@ -147,9 +140,9 @@ template <typename State, typename Output = State, typename Input = State,
class kalman
```
-## Template Parameters
+### Template Parameters
-## Member Types
+### Member Types
| Member Type | Definition |
| --- | --- |
@@ -163,15 +156,15 @@ class kalman
| `output_model` | Type of the observation transition matrix H, also known as C. |
| `input_control` | Type of the control transition matrix G, also known as B. |
-## Member Functions
+### Member Functions
| Member Function | Definition |
| --- | --- |
| `(constructor)` | Constructs the filter. |
| `(destructor)` | Destructs the filter. |
| `operator=` | Assigns values to the filter. |
-### Characteristics
+#### Characteristics
| Modifier | Definition |
| --- | --- |
@@ -185,13 +178,20 @@ class kalman
| `h` | Manages the observation transition matrix. |
| `g` | Manages the control transition matrix. |
-### Modifiers
+#### Modifiers
| Modifier | Definition |
| --- | --- |
-| `observe` | Updates the estimates with the outcome of a measurement. |
+| `update` | Updates the estimates with the outcome of a measurement. |
| `predict` | Produces estimates of the state variables and uncertainties. |
+# Resources
+
+Awesome resources to learn about Kalman filters:
+
+- [KalmanFilter.NET](https://www.kalmanfilter.net) by Alex Becker.
+- [Kalman and Bayesian Filters in Python](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python) by Roger Labbe.
# License
<img align="right" src="http://opensource.org/trademarks/opensource/OSI-Approved-License-100x137.png">
4 changes: 2 additions & 2 deletions include/fcarouge/internal/kalman.hpp
@@ -182,12 +182,12 @@ struct kalman {
//! @name Public Member Functions
//! @{

-  inline constexpr void observe(const auto &...output_z)
+  inline constexpr void update(const auto &...output_z)
{
h = transition_observation_h();
r = noise_observation_r();
const auto z{ output{ output_z... } };
-    internal::observe<Transpose, Symmetrize, Divide, Identity>(x, p, h, r, z);
+    internal::update<Transpose, Symmetrize, Divide, Identity>(x, p, h, r, z);
}

inline constexpr void predict(const PredictionArguments &...arguments,
4 changes: 2 additions & 2 deletions include/fcarouge/internal/kalman_equation.hpp
@@ -134,8 +134,8 @@ template <template <typename> typename Transpose,
template <typename> typename Symmetrize,
template <typename, typename> typename Divide,
template <typename> typename Identity>
-inline constexpr void observe(auto &x, auto &p, const auto &h, const auto &r,
-                              const auto &z)
+inline constexpr void update(auto &x, auto &p, const auto &h, const auto &r,
+                             const auto &z)
{
const auto k{ weight_gain<Transpose, Divide>(p, h, r) };

30 changes: 27 additions & 3 deletions include/fcarouge/kalman.hpp
@@ -50,6 +50,8 @@ namespace fcarouge
{
//! @brief Kalman filter.
//!
+//! @details Bayesian filter that uses Gaussians.
+//!
//! @tparam State The type template parameter of the state vector x.
//! @tparam Output The type template parameter of the measurement vector z.
//! @tparam Input The type template parameter of the control u.
@@ -261,6 +263,8 @@ class kalman

//! @brief Returns the observation, measurement noise covariance matrix R.
//!
+//! @details The amount of variance in each measurement.
+//!
//! @return The observation, measurement noise covariance matrix R.
//!
//! @complexity Constant.
@@ -354,12 +358,17 @@ class kalman

//! @brief Updates the estimates with the outcome of a measurement.
//!
+//! @details Implements Bayes' theorem: combines one measurement with the
+//! prior estimate.
+//!
//! @param output_z Observation parameters. Types must be compatible with the
//! `output` type.
-inline constexpr void observe(const auto &...output_z);
+inline constexpr void update(const auto &...output_z);

//! @brief Produces estimates of the state variables and uncertainties.
//!
+//! @details Implements the total probability theorem.
+//!
//! @param arguments Optional prediction parameters passed through for
//! computations of prediction matrices.
//! @param input_u Optional control parameters. Types must be compatible with
@@ -551,6 +560,21 @@ kalman<State, Output, Input, Transpose, Symmetrize, Divide, Identity,
// +reset function
}

+template <typename State, typename Output, typename Input,
+          template <typename> typename Transpose,
+          template <typename> typename Symmetrize,
+          template <typename, typename> typename Divide,
+          template <typename> typename Identity,
+          typename... PredictionArguments>
+[[nodiscard]] inline constexpr
+    typename kalman<State, Output, Input, Transpose, Symmetrize, Divide,
+                    Identity, PredictionArguments...>::input_control
+    kalman<State, Output, Input, Transpose, Symmetrize, Divide, Identity,
+           PredictionArguments...>::g() const
+{
+  return filter.g;
+}
+
template <typename State, typename Output, typename Input,
template <typename> typename Transpose,
template <typename> typename Symmetrize,
@@ -573,9 +597,9 @@ template <typename State, typename Output, typename Input,
typename... PredictionArguments>
inline constexpr void
kalman<State, Output, Input, Transpose, Symmetrize, Divide, Identity,
-       PredictionArguments...>::observe(const auto &...output_z)
+       PredictionArguments...>::update(const auto &...output_z)
{
-  filter.observe(output_z...);
+  filter.update(output_z...);
}

template <typename State, typename Output, typename Input,
20 changes: 10 additions & 10 deletions sample/building_height.cpp
@@ -51,18 +51,18 @@ namespace
// the measurement uncertainty is: r1 = 25.
k.r(25.);

-  k.observe(48.54);
+  k.update(48.54);

// And so on.
-  k.observe(47.11);
-  k.observe(55.01);
-  k.observe(55.15);
-  k.observe(49.89);
-  k.observe(40.85);
-  k.observe(46.72);
-  k.observe(50.05);
-  k.observe(51.27);
-  k.observe(49.95);
+  k.update(47.11);
+  k.update(55.01);
+  k.update(55.15);
+  k.update(49.89);
+  k.update(40.85);
+  k.update(46.72);
+  k.update(50.05);
+  k.update(51.27);
+  k.update(49.95);

// After 10 measurements the filter estimates the height of the building
// at 49.57m.
96 changes: 96 additions & 0 deletions sample/dog_position.cpp
@@ -0,0 +1,96 @@
#include "fcarouge/kalman.hpp"

#include <cassert>

namespace fcarouge::sample
{
namespace
{
//! @test Estimating the Position of a Dog
//!
//! @copyright This example is transcribed from Kalman and Bayesian Filters in
//! Python copyright Roger Labbe
//!
//! @see
//! https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/04-One-Dimensional-Kalman-Filters.ipynb
//!
//! @details Assume that in our latest hackathon someone created an RFID tracker
//! that provides a reasonably accurate position of the dog. The sensor returns
//! the distance of the dog from the left end of the hallway in meters. So, 23.4
//! would mean the dog is 23.4 meters from the left end of the hallway. The
//! sensor is not perfect. A reading of 23.4 could correspond to the dog being
//! at 23.7, or 23.0. However, it is very unlikely to correspond to a position
//! of 47.6. Testing during the hackathon confirmed this result - the sensor is
//! 'reasonably' accurate, and while it had errors, the errors are small.
//! Furthermore, the errors seemed to be evenly distributed on both sides of the
//! true position; a position of 23 m would equally likely be measured as 22.9
//! or 23.1. Perhaps we can model this with a Gaussian. We predict that the dog
//! is moving. This prediction is not perfect. Sometimes our prediction will
//! overshoot, sometimes it will undershoot. We are more likely to undershoot or
//! overshoot by a little than a lot. Perhaps we can also model this with a
//! Gaussian.
//!
//! @example dog_position.cpp
[[maybe_unused]] auto dog_position{ [] {
using kalman = kalman<double>;
kalman k;

// Initialization
// This is the dog's initial position expressed as a Gaussian. The position is
// 1 meter, and the variance is 400 m², which is a standard deviation of 20
// meters. You can think of this as saying "I believe with 99.7% accuracy the
// position is 1 plus or minus 60 meters". This is because with Gaussians
// ~99.7% of values fall within three standard deviations of the mean.
k.x(1.);
k.p(20 * 20.);

// Prediction
// Variance in the dog's movement. The process variance is how much error
// there is in the process model. Dogs rarely do what we expect, and things
// like hills or the whiff of a squirrel will change his progress.
k.q(1.);

// Measure and Update
// Variance in the sensor. The meaning of sensor variance should be clear - it
// is how much variance there is in each measurement.
k.r(2.);

// We are predicting that at each time step the dog moves forward one meter.
// This is the process model - the description of how we think the dog moves.
// How do I know the velocity? Magic? Consider it a prediction, or perhaps we
// have a secondary velocity sensor. Please accept this simplification for
// now.
k.g(1.);

k.predict(1.);
k.update(1.354);
k.predict(1.);
k.update(1.882);
k.predict(1.);
k.update(4.341);
k.predict(1.);
k.update(7.156);
k.predict(1.);
k.update(6.939);
k.predict(1.);
k.update(6.844);
k.predict(1.);
k.update(9.847);
k.predict(1.);
k.update(12.553);
k.predict(1.);
k.update(16.273);
k.predict(1.);
k.update(14.8);

assert(
15.053 - 0.001 < k.x() && k.x() < 15.053 + 0.001 &&
"Here we can see that the variance converges to 2.1623 in 9 steps. This "
"means that we have become very confident in our position estimate. It "
"is equal to meters. Contrast this to the sensor's meters.");

return 0;
}() };

} // namespace
} // namespace fcarouge::sample
20 changes: 10 additions & 10 deletions sample/liquid_temperature.cpp
@@ -66,27 +66,27 @@ namespace
// Celsius.
k.r(0.1 * 0.1);

-  k.observe(49.95);
+  k.update(49.95);

// And so on, every measurement period: Δt = 5s (constant).
k.predict();
-  k.observe(49.967);
+  k.update(49.967);
  k.predict();
-  k.observe(50.1);
+  k.update(50.1);
  k.predict();
-  k.observe(50.106);
+  k.update(50.106);
  k.predict();
-  k.observe(49.992);
+  k.update(49.992);
  k.predict();
-  k.observe(49.819);
+  k.update(49.819);
  k.predict();
-  k.observe(49.933);
+  k.update(49.933);
  k.predict();
-  k.observe(50.007);
+  k.update(50.007);
  k.predict();
-  k.observe(50.023);
+  k.update(50.023);
  k.predict();
-  k.observe(49.99);
+  k.update(49.99);

// The estimate uncertainty quickly goes down, after 10 measurements:
assert(0.0013 - 0.0001 < k.p() && k.p() < 0.0013 + 0.0001 &&
