Add liekf example for se2_localization #308

Open · wants to merge 6 commits into base: devel
Changes from 1 commit
2 changes: 2 additions & 0 deletions examples/CMakeLists.txt
@@ -4,6 +4,7 @@ add_executable(se2_average se2_average.cpp)
add_executable(se2_interpolation se2_interpolation.cpp)
add_executable(se2_decasteljau se2_DeCasteljau.cpp)
add_executable(se2_localization se2_localization.cpp)
add_executable(se2_localization_liekf se2_localization_liekf.cpp)
add_executable(se2_localization_ukfm se2_localization_ukfm.cpp)
add_executable(se2_sam se2_sam.cpp)

@@ -23,6 +24,7 @@ set(CXX_11_EXAMPLE_TARGETS
se2_decasteljau
se2_average
se2_localization
se2_localization_liekf
se2_sam

se2_localization_ukfm
282 changes: 282 additions & 0 deletions examples/se2_localization_liekf.cpp
@@ -0,0 +1,282 @@
/**
* \file se2_localization_liekf.cpp
*
* Created on: Jan 20, 2021
* \author: artivis
*
* Modified on: Aug 30, 2024
* \author: chris
*
* ---------------------------------------------------------
* This file is:
* (c) 2021 artivis
*
* This file is part of `manif`, a C++ template-only library
* for Lie theory targeted at estimation for robotics.
* Manif is:
* (c) 2021 artivis
* ---------------------------------------------------------
*
* ---------------------------------------------------------
* Demonstration example:
*
* 2D Robot localization based on position measurements (GPS-like)
* using the (Left) Invariant Extended Kalman Filter method.
*
* See se2_localization.cpp for the right invariant equivalent
* ---------------------------------------------------------
*
* This demo corresponds to the application in chapter 4.3 (left invariant)
* in the thesis
* 'Non-linear state error based extended Kalman filters with applications to navigation' A. Barrau.
*
* It is adapted after the sample problem presented in chapter 4.1 (left invariant)
* in the thesis
* 'Practical Considerations and Extensions of the Invariant Extended Kalman Filtering Framework' J. Arsenault
*
* Finally, it is ported from an example (a matlab code plus a problem
* formulation/explanation document) by Faraaz Ahmed and James Richard
* Forbes, McGill University.
*
* The following is an abstract of the content of the paper.
* Please consult the paper for better reference.
*
*
* We consider a robot in the plane.
* The robot receives control actions in the form of axial
* and angular velocities, and is able to measure its position
* using a GPS for instance.
*
* The robot pose X is in SE(2) and the GPS measurement y_k is in R^2,
*
* | cos th -sin th x |
* X = | sin th cos th y | // position and orientation
* | 0 0 1 |
*
* The control signal u is a twist in se(2) comprising longitudinal
* velocity v and angular velocity w, with no lateral velocity
* component, integrated over the sampling time dt.
*
* u = (v*dt, 0, w*dt)
*
* The control is corrupted by additive Gaussian noise u_noise,
* with covariance
*
* Q = diagonal(sigma_v^2, sigma_s^2, sigma_w^2).
*
* This noise accounts for possible lateral slippage u_s
* through a non-zero value of sigma_s.
*
* At the arrival of a control u, the robot pose is updated
* with X <-- X * Exp(u) = X + u.
*
* GPS measurements are put in Cartesian form for simplicity.
* Their noise n is zero mean Gaussian, and is specified
* with a covariance matrix R.
* We notice the rigid motion action y = h(X) = X * [0 0 1]^T + n,
*
* y_k = (x, y) // robot coordinates
*
* We define the pose to estimate as X in SE(2).
* The estimation error dx and its covariance P are expressed
* in the global frame, i.e. in the tangent space at the identity.
*
* All these variables are summarized again as follows
*
* X : robot pose, SE(2)
* u : robot control, (v*dt ; 0 ; w*dt) in se(2)
* Q : control perturbation covariance
* y : robot position measurement in global frame, R^2
* R : covariance of the measurement noise
*
* The motion and measurement models are
*
* X_(t+1) = f(X_t, u) = X_t * Exp ( u ) // motion equation
* y_k = h(X, b_k) = X * [0 0 1]^T // measurement equation
*
* The algorithm below comprises first a simulator to
* produce measurements, then uses these measurements
* to estimate the state, using a Lie-based
* Left Invariant Kalman filter.
*
* This file has plain code with only one main() function.
* There are no function calls other than those involving `manif`.
*
* Printing simulated state and estimated state together
* with an unfiltered state (i.e. without Kalman corrections)
* allows for evaluating the quality of the estimates.
*/

#include "manif/SE2.h"

#include <vector>

#include <iostream>
#include <iomanip>

using std::cout;
using std::endl;

using namespace Eigen;

typedef Array<double, 2, 1> Array2d;
typedef Array<double, 3, 1> Array3d;

int main()
{
std::srand((unsigned int) time(0));

// START CONFIGURATION
//
//
const Matrix3d I = Matrix3d::Identity();

// Define the robot pose element and its covariance
manif::SE2d X, X_simulation, X_unfiltered;
Matrix3d P, P_L;

X_simulation.setIdentity();
X.setIdentity();
X_unfiltered.setIdentity();
P.setZero();

// Define a control vector and its noise and covariance
manif::SE2Tangentd u_simu, u_est, u_unfilt;
Vector3d u_nom, u_noisy, u_noise;
Array3d u_sigmas;
Matrix3d U;

u_nom << 0.1, 0.0, 0.05;
u_sigmas << 0.1, 0.1, 0.1;
U = (u_sigmas * u_sigmas).matrix().asDiagonal();

// Declare the Jacobians of the motion wrt robot and control
manif::SE2d::Jacobian J_x, J_u, AdX, AdXinv;

// Define the gps measurements in R^2
Vector2d y, y_noise;
Array2d y_sigmas;
Matrix2d R;

y_sigmas << 0.1, 0.1;
R = (y_sigmas * y_sigmas).matrix().asDiagonal();

// Declare the Jacobian of the measurements wrt the robot pose
Matrix<double, 2, 3> H; // H = J_e_x

// Declare some temporaries
Vector2d e, z; // expectation, innovation
Matrix2d E, Z; // covariances of the above
Matrix<double, 3, 2> K; // Kalman gain
manif::SE2Tangentd dx; // optimal update step, or error-state

//
//
// CONFIGURATION DONE



// DEBUG
cout << std::fixed << std::setprecision(3) << std::showpos << endl;
cout << "X STATE : X Y THETA" << endl;
cout << "----------------------------------" << endl;
cout << "X initial : " << X_simulation.log().coeffs().transpose() << endl;
cout << "----------------------------------" << endl;
// END DEBUG




// START TEMPORAL LOOP
//
//

// Make 10 steps. Measure one GPS position each time.
for (int t = 0; t < 10; t++)
{
//// I. Simulation ###############################################################################

/// simulate noise
u_noise = u_sigmas * Array3d::Random(); // control noise
u_noisy = u_nom + u_noise; // noisy control

u_simu = u_nom;
u_est = u_noisy;
u_unfilt = u_noisy;

/// first we move - - - - - - - - - - - - - - - - - - - - - - - - - - - -
X_simulation = X_simulation + u_simu; // overloaded X.rplus(u) = X * exp(u)

/// then we receive noisy gps measurement - - - - - - - - - - - - - - - -
y_noise = y_sigmas * Array2d::Random(); // simulate measurement noise

y = X_simulation.translation(); // position measurement, before adding noise
y = y + y_noise; // position measurement, noisy




//// II. Estimation ###############################################################################

/// First we move - - - - - - - - - - - - - - - - - - - - - - - - - - - -

X = X.plus(u_est, J_x, J_u); // X * exp(u), with Jacobians

P = J_x * P * J_x.transpose() + J_u * U * J_u.transpose();


/// Then we correct using the gps position - - - - - - - - - - - - - - -

// transform covariance from right to left invariant
AdX = X.adj();
AdXinv = X.inverse().adj();
P_L = AdX * P * AdX.transpose(); // from eq. (54) in the paper

// expectation
e = X.translation(); // e = t, for X = (R,t).
H.topLeftCorner<2, 2>() = Matrix2d::Identity();
H.topRightCorner<2, 1>() = manif::skew(1.0) * X.translation();
@Chris7462 (Author) commented on Sep 1, 2024:
Since the expectation is $h(X)=t$, where $t$ is the translation, this is equivalent to $X$ acting on $0$. The Jacobian $H$ is derived as follows:

$$\begin{align} H & = \frac{\partial h}{\partial X}\\\ & = \lim_{\tau\rightarrow 0} \frac{h(\tau\oplus X)\ominus h(X)}{\tau}\\\ & = \lim_{\tau\rightarrow 0} \frac{(Exp(\tau)X)\cdot 0-X\cdot 0}{\tau}\\\ & = \lim_{\tau\rightarrow 0}\frac{\left(\begin{bmatrix} I+\phi^{\wedge} & \rho\\\ 0 & 1 \end{bmatrix}\begin{bmatrix} R & t\\\ 0 & 1 \end{bmatrix}\right) \cdot 0-\begin{bmatrix} R & t\\\ 0 & 1 \end{bmatrix}\cdot 0}{\tau}\\\ & = \lim_{\tau\rightarrow 0}\frac{\begin{bmatrix} (I+\phi^{\wedge})R & (I+\phi^{\wedge})t+\rho\\\ 0 & 1 \end{bmatrix}\cdot 0-\begin{bmatrix} R & t\\\ 0 & 1 \end{bmatrix}\cdot 0}{\tau}\\\ & = \lim_{\tau\rightarrow 0}\frac{(I+\phi^{\wedge})t+\rho-t}{\tau}\\\ & = \lim_{\tau\rightarrow 0}\frac{\phi^{\wedge}t+\rho}{\tau}\\\ & = \frac{\partial(\phi^{\wedge}t+\rho)}{\partial \tau}\Bigg|_{\tau=0}\\\ & = \left[\left.\frac{\partial(\phi^{\wedge}t+\rho)}{\partial\rho}\right|_{\rho=0},\left.\frac{\partial(\phi^{\wedge}t+\rho)}{\partial\phi}\right|_{\phi=0}\right]\\\ & = [I, 1^{\wedge}t] \end{align}$$

Collaborator:

This comment I guess will not appear in the final commits.

Author:

Yes, this comment will not appear in the commits.

Owner:

This matrix isn't right. See e.g. chapter "4 Simplified car example" in "The Invariant Extended Kalman filter as a stable observer", A. Barrau, S. Bonnabel.

Owner:

This measurement model is of the form y = x.b + v and is thus left invariant. If I'm not mistaken, H should then be derived with the right-plus (which corresponds to left invariance):

$$\begin{align} H & = \frac{\partial h}{\partial X}\\\ & = \lim_{\tau\rightarrow 0} \frac{h(X\oplus\tau)\ominus h(X)}{\tau}\\\ & = ... \end{align}$$

If you work the math you should find something along the lines of:

$$H = \begin{bmatrix} -{\bf I} & {\bf 0} \end{bmatrix}$$

Collaborator:

Ok this makes more sense

@joansola (Collaborator) commented on Sep 10, 2024:

Then you need the Jacobian of X.inv(y-h(X)), and not of h(X). OK, to find such Jacobian, just use the rules for right-jacobians in the paper, you certainly will find [-I, 0] I guess. Just need to know if you need to differentiate wrt. X, or wrt. \hat X -- I am unsure at the moment.

@Chris7462 (Author) commented on Sep 11, 2024:

I get the desired result from the Jacobian of $X^{-1}\cdot h(X)$ w.r.t $X$, where $h(X)=X\cdot 0 =t$

$$\begin{align} J^{X^{-1}\cdot h(X)}_{X} & = J^{X^{-1}\cdot t}_{X^{-1}}J^{X^{-1}}_{X}\\\ & = \begin{bmatrix} R^{T} & R^{T}[1]_{x}t \end{bmatrix}\begin{bmatrix} -R & [1]_{x}t\\\ 0 & -1 \end{bmatrix}\\\ & = \begin{bmatrix} -\mathbf{I} & \mathbf{0} \end{bmatrix} \end{align}$$

Am I doing this correctly? Does this make sense?

Collaborator:

looks good

@Chris7462 (Author) commented on Sep 24, 2024:

After reading the papers by Axel Barrau and Silvere Bonnabel (2017, 2018) and Jonathan Arsenault (2019), I believe I finally understand the concepts. Let me use this example to summarize my findings and compare the results with the manif paper.

Example: SE(2) with GPS-like measurement

Recap the setting:

We consider the robot pose $\chi$ in SE(2) and the GPS measurement $y$ is in $\mathbb{R}^{2}$,

$$\mathbf{\chi} = \begin{bmatrix} \mathbf{R} & \mathbf{t}\\\ \mathbf{0}^{T} & 1 \end{bmatrix}$$

The control signal $\mathbf{u}$ is in se(2) and is corrupted by additive Gaussian noise $\mathbf{\varepsilon}$, which has a mean of 0 and a covariance of $\mathbf{Q}$. Upon receiving a control $\mathbf{u}$, the robot pose is updated as follows:

$$\begin{align} \chi_{t} & = f(\chi_{t-1}, \mathbf{u}_{t},\mathbf{\varepsilon}_{t})\\\ & = \chi_{t-1}*\text{Exp}(\mathbf{u}_{t}+\mathbf{\varepsilon}_{t}). \end{align}$$

The GPS measurement $\mathbf{y}$ is in Cartesian form for simplicity, and the noise is denoted $\mathbf{\delta}$, which has a mean of 0 and a covariance of $\mathbf{R}$.

$$\begin{align} \mathbf{y}_{t} & = h(\chi_{t}, b) + \mathbf{\delta}_{t}\\\ & = \chi_{t} \cdot b+\mathbf{\delta}_{t} \end{align}$$

where $b=0$ for rigid motion action in this case.

Let $\bar{\chi}$ be the estimate of $\chi$. Therefore, we have $\bar{\chi}_{t}=f(\bar{\chi}_{t-1}, \mathbf{u}_{t}, 0)$. Now, we can begin working on the math and comparing the differences.

Predict step:

Axel and Silvere's approach:

They define the left-invariant error as $\eta_{t}=\chi^{-1}_{t}\bar{\chi}_{t}$.

$$\begin{align} \eta_{t}^{L} & = \chi^{-1}_{t}\bar{\chi}_{t}\\\ & = \left[\chi_{t-1}*\text{Exp}(\mathbf{u}_{t}+\mathbf{\varepsilon}_{t})\right]^{-1}\left[\bar{\chi}_{t-1}*\text{Exp}(\mathbf{u}_{t})\right]\\\ & = \text{Exp}(-\varepsilon_{t})\text{Exp}(\mathbf{u}_{t})^{-1}\eta_{t-1}^{L}\text{Exp}(\mathbf{u}_{t}) \end{align}$$

One can find a $\xi_{t}^{L}$ such that $\eta_{t}^{L} = \text{Exp}(\xi_{t}^{L})$. This allows us to rewrite the above equation as follows:

$$\begin{align} \Rightarrow \text{Exp}(\xi_{t}^{L}) &= \text{Exp}(-\varepsilon_{t})\text{Exp}(\mathbf{u}_{t})^{-1}\text{Exp}(\xi_{t}^{L})\text{Exp}(\mathbf{u}_{t})\\\ & = \text{Exp}(-\varepsilon_{t})\text{Exp}(\text{Ad}_{\text{Exp}(\mathbf{u}_{t})}^{-1}\xi_{t-1}^{L})\\\ \Rightarrow \xi_{t}^{L} &= \text{Ad}_{\text{Exp}(\mathbf{u}_{t})}^{-1}\xi_{t-1}^{L}-\varepsilon_{t}\\\ \Rightarrow \xi_{t}^{L} &= F_{t}\xi_{t-1}^{L}+W_{t}\varepsilon_{t} \end{align}$$

The Jacobians are given by

$$F_{t} = \text{Ad}_{\text{Exp}(\mathbf{u}_{t})}^{-1} \text{\hspace{3ex}and\hspace{3ex}} W_{t}=-\mathbf{I}.$$

Thus, the predicted state and error covariance are computed as follows:

$$\begin{align} \check{\chi}_{t} & = \bar{\chi}_{t}=\bar{\chi}_{t-1}\oplus \mathbf{u}_{t}\\\ \check{\Sigma}_{t} & = F_{t}\hat{\Sigma}_{t-1}F_{t}^{T}+W_{t}Q_{t}W_{t}^{T} \end{align}$$

manif approach:

Their approach involves defining the error on the tangent space and applying the error-state extended Kalman filter on the Lie Group. To facilitate this, they introduce the concepts of right-plus and right-minus operators as follows:

$$\begin{align} \text{right-}\oplus: & \hspace{1ex}\mathcal{Y}=\mathcal{X}\oplus {}^{\mathcal{x}}\tau =\mathcal{X}*\text{Exp}({}^{\mathcal{x}}\tau)\\\ \text{right-}\ominus: & \hspace{1ex} {}^{\mathcal{x}}\tau=\mathcal{Y}\ominus\mathcal{X}=\text{Log}(\mathcal{X}^{-1}*\mathcal{Y}) \end{align}$$

Next, using Taylor expansion of $\chi_{t-1}$ around $\bar{\chi}_{t-1}$, we have:

$$\begin{align} \chi_{t} &= f(\chi_{t-1},\mathbf{u}_{t}, \varepsilon)\\\ & \approx f(\bar{\chi}_{t-1},\mathbf{u}_{t},0)\oplus \left.\frac{\partial f}{\partial \chi}\right|_{\chi_{t-1}=\bar{\chi}_{t-1}}(\chi_{t-1}\ominus\bar{\chi}_{t-1})\oplus\left.\frac{\partial f}{\partial \varepsilon}\right|_{\varepsilon_{t}=0}(\varepsilon_{t}-0)\\\ & = \bar{\chi_{t}}\oplus F_{t}(\chi_{t-1}\ominus\bar{\chi}_{t-1})\oplus W_{t}\varepsilon_{t} \end{align}$$

Let ${}^{\mathcal{x}}\xi=\chi\ominus\bar{\chi}=\text{Log}(\bar{\chi}^{-1}*\chi).$ From this definition, we have $\text{Exp}({}^{\mathcal{x}}\xi)=\bar{\chi}^{-1}*\chi$. We note that ${}^{\mathcal{x}}\xi = -\xi^{L}$.

We can now rewrite the earlier equation as:

$${}^{\mathcal{x}}\xi_{t}=F_{t} {}^{\mathcal{x}}\xi_{t-1}+W_{t}\varepsilon_{t}$$

where the Jacobians are defined as follows:

For $F_{t}$:

$$\begin{align} F_{t} & = \left.\frac{\partial f}{\partial \chi}\right|_{\chi_{t-1}=\bar{\chi}_{t-1}}\\\ & = \lim_{\chi_{t-1}\rightarrow\bar{\chi}_{t-1}}\frac{f(\chi_{t-1},\mathbf{u}_{t},0)\ominus f(\bar{\chi}_{t-1},\mathbf{u}_{t},0)}{\chi_{t-1}\ominus\bar{\chi}_{t-1}}\\\ & = \lim_{{}^{\mathcal{x}}\xi_{t-1}\rightarrow 0}\frac{f(\chi_{t-1}\oplus{}^{\mathcal{x}}\xi_{t-1},\mathbf{u}_{t},0)\ominus f(\chi_{t-1},\mathbf{u}_{t},0)}{{}^{\mathcal{x}}\xi_{t-1}}\\\ & = \frac{Df(\chi_{t-1},\mathbf{u}_{t},0)}{D\chi_{t-1}}\\\ & = J_{\chi_{t-1}}^{f(\chi_{t-1},\mathbf{u}_{t},0)}\\\ & = J_{\chi_{t-1}}^{\chi_{t-1}\oplus\mathbf{u}_{t}}\\\ & = J_{\chi_{t-1}}^{\chi_{t-1}*\text{Exp}(\mathbf{u}_{t})}\\\ & = \text{Ad}_{\text{Exp}(\mathbf{u}_{t})}^{-1} \end{align}$$

For $W_{t}$:

$$\begin{align} W_{t} & = \left.\frac{\partial f}{\partial \varepsilon}\right|_{\varepsilon_{t}=0}\\\ & = \lim_{\varepsilon_{t}\rightarrow 0}\frac{f(\bar{\chi}_{t-1},\mathbf{u}_{t},\varepsilon_{t})\ominus f(\bar{\chi}_{t-1},\mathbf{u}_{t},0)}{\varepsilon_{t}}\\\ & = J^{f(\bar{\chi}_{t-1},\mathbf{u}_{t},\varepsilon_{t})}_{\varepsilon_{t}}\\\ & = J^{(\bar{\chi}_{t-1}\oplus\mathbf{u}_{t})\oplus\varepsilon_{t}}_{\varepsilon_{t}}\\\ & = J_{r}(\varepsilon_{t})\\\ & \approx \mathbf{I} \end{align}$$

Note that in the manif approach, $W_{t}$ is the negation of the $W_{t}$ in Axel and Silvere's approach. This difference arises from the definition of the error state ${}^{\mathcal{x}}\xi$, where $\text{Exp}({}^{\mathcal{x}}\xi)=\bar{\chi}^{-1}\chi$. If you substitute ${}^{\mathcal{x}}\xi$ with $-\xi^{L}$ in the error propagation equation, you will get the same $W_{t}$ as in Axel and Silvere's approach.

Therefore, the predicted state and error covariance are

$$\begin{align} \check{\chi}_{t} & = \bar{\chi}_{t}=\bar{\chi}_{t-1}\oplus \mathbf{u}_{t}\\\ \check{\Sigma}_{t} & =F_{t}\hat{\Sigma}_{t-1}F_{t}^{T}+W_{t}Q_{t}W_{t}^{T} \end{align}$$

which match exactly with the results in Axel and Silvere's approach.

Update step:

Axel and Silvere's approach:

$$\begin{align} y_{t} &= h(\chi_{t},b)+\delta_{t},\hspace{1ex}\text{with } b=0\\\ &=\chi_{t}\cdot b+\delta_{t}\\\ \Rightarrow \begin{bmatrix} y_{t}\\\ 0 \end{bmatrix} & = \chi_{t}\begin{bmatrix} b\\\ 1 \end{bmatrix}+\begin{bmatrix} \delta_{t}\\\ 0 \end{bmatrix} \end{align}$$ $$\begin{align} \bar{y}_{t} &=h(\bar{\chi}_{t},b)\\\ & = \bar{\chi}_{t}\cdot b \end{align}$$

Now, define the innovation $z_{t}$ such that

$$\begin{align} \begin{bmatrix} z_{t}\\\ 0 \end{bmatrix} &= \bar{\chi}_{t}^{-1}\left(\begin{bmatrix} y_{t}\\\ 0 \end{bmatrix}-\begin{bmatrix} \bar{y}_{t}\\\ 0 \end{bmatrix}\right)\\\ &= \bar{\chi}_{t}^{-1}\left( \chi_{t} \begin{bmatrix} b\\\ 1 \end{bmatrix}+ \begin{bmatrix} \delta_{t}\\\ 0 \end{bmatrix}-\bar{\chi}_{t}\begin{bmatrix} b\\\ 1 \end{bmatrix} \right)\\\ & = \eta_{t}^{L^{-1}} \begin{bmatrix} b\\\ 1 \end{bmatrix}+\bar{\chi}_{t}^{-1} \begin{bmatrix} \delta_{t}\\\ 0 \end{bmatrix}- \begin{bmatrix} b\\\ 1 \end{bmatrix}\\\ & = \text{Exp}(\xi_{t}^{L})^{-1} \begin{bmatrix} b\\\ 1 \end{bmatrix}+\bar{\chi}_{t}^{-1} \begin{bmatrix} \delta_{t}\\\ 0 \end{bmatrix}- \begin{bmatrix} b\\\ 1 \end{bmatrix}\\\ & = (\mathbf{I}-\xi_{t}^{L^{\wedge}}) \begin{bmatrix} b\\\ 1 \end{bmatrix}+\bar{\chi}_{t}^{-1} \begin{bmatrix} \delta_{t}\\\ 0 \end{bmatrix}- \begin{bmatrix} b\\\ 1 \end{bmatrix}\\\ & = -\xi_{t}^{L^{\wedge}} \begin{bmatrix} b\\\ 1 \end{bmatrix}+\bar{\chi}_{t}^{-1} \begin{bmatrix} \delta_{t}\\\ 0 \end{bmatrix}\\\ \Rightarrow z_{t} & = H_{t}\xi_{t}^{L}+V_{t}\delta_{t} \end{align}$$

where

$$H_{t} = \begin{bmatrix} -\mathbf{I} & 0 \end{bmatrix}\hspace{3ex}\text{and}\hspace{3ex}V_{t}=\bar{R}^{T}_{t}$$

You will obtain this result by removing the last row of zeros in these matrices.

In addition, we have $\text{Exp}(\xi_{t}^{L})=\eta_{t}^{L}=\chi_{t}^{-1}\bar{\chi}_{t}$, which implies that $\chi_{t}=\bar{\chi}_{t}\text{Exp}(-\xi_{t}^{L})$. This suggests the update of $\chi_{t}$ is

$$\hat{\chi}_{t}=\check{\chi}_{t}\text{Exp}(-K_{t}z_{t}),$$

where $K_{t}=\check{\Sigma}_{t}H_{t}^{T}S_{t}^{-1}$ and $S_{t}=H_{t}\check{\Sigma}_{t}H_{t}^{T}+V_{t}R_{t}V_{t}^{T}$.

Thus, the updated state and error covariance are

$$\begin{align} \hat{\chi}_{t} & = \check{\chi}_{t}\text{Exp}(-K_{t}z_{t})\\\ \hat{\Sigma}_{t} & = (\mathbf{I}-K_{t}H_{t})\check{\Sigma}_{t} \end{align}$$

manif approach:

Define the innovation $z_{t}$ such that

$$\begin{align} z_{t} & = \bar{\chi}_{t}^{-1}\cdot (y_{t}-\bar{y}_{t})\\\ & = \bar{\chi}_{t}^{-1}\cdot y_{t}-b \end{align}$$

Now let $h(\chi)=(\bar{\chi}_{t}^{-1}\chi_{t})\cdot b$ and note that

$$\begin{align} \bar{\chi}_{t}^{-1}\cdot y_{t} &= \bar{\chi}_{t}^{-1}(\chi\cdot b+\delta_{t})\\\ & = \bar{\chi}_{t}^{-1}\chi_{t}\cdot b+\bar{\chi}_{t}^{-1}\cdot \delta_{t}\\\ & = h(\bar{\chi}_{t})+\left.\frac{\partial h}{\partial \chi}\right|_{\chi_{t}=\bar{\chi}_{t}}(\chi_{t}\ominus\bar{\chi}_{t})+\bar{\chi}_{t}^{-1}\cdot \delta_{t}\\\ & = b+H_{t}{}^{\mathcal{x}}\xi_{t}+\bar{\chi}_{t}^{-1}\cdot \delta_{t}\\\ \end{align}$$

So, we can rewrite the innovation as

$$z_{t} = H_{t}{}^{\mathcal{x}}\xi_{t}+V_{t}\delta_{t}$$

where

$$\begin{align} H_{t} &=\left.\frac{\partial h}{\partial \chi}\right|_{\chi_{t}=\bar{\chi}_{t}}\\\ & = \lim_{\chi_{t}\rightarrow \bar{\chi}_{t}}\frac{h(\chi_{t})-h(\bar{\chi}_{t})}{\chi_{t}\ominus \bar{\chi}_{t}}\\\ & = \lim_{{}^{\mathcal{x}}\xi_{t}\rightarrow 0}\frac{\text{Exp}({}^{\mathcal{x}}\xi_{t})\cdot b-b}{{}^{\mathcal{x}}\xi_{t}}\\\ & = \lim_{{}^{\mathcal{x}}\xi_{t}\rightarrow 0}\frac{(\mathbf{I}+{}^{\mathcal{x}}\xi_{t}^{\wedge})\cdot b-b}{{}^{\mathcal{x}}\xi_{t}}\\\ & = \lim_{{}^{\mathcal{x}}\xi_{t}\rightarrow 0}\frac{\phi^{\wedge}b+\rho}{{}^{\mathcal{x}}\xi_{t}}\\\ & = \begin{bmatrix} \left.\frac{\partial \phi^{\wedge}b+\rho}{\partial\rho}\right|_{\rho=0} & \left.\frac{\partial \phi^{\wedge}b+\rho}{\partial\phi}\right|_{\phi=0} & \end{bmatrix}\\\ & = \begin{bmatrix} \mathbf{I} & 0 \end{bmatrix} \end{align}$$

and $V_{t}=\bar{R}_{t}^{T}$. Again, the $H_{t}$ in the manif approach is the negation of the $H_{t}$ in Axel and Silvere's approach. This difference arises from the definition of the error ${}^{\mathcal{x}}\xi_{t}$.

Furthermore, we have $\text{Exp}({}^{\mathcal{x}}\xi_{t})=\bar{\chi}_{t}^{-1}\chi_{t}$, which implies that $\chi_{t}=\bar{\chi}_{t}\text{Exp}({}^{\mathcal{x}}\xi_{t})$. This suggests the update of $\chi_{t}$ is

$$\hat{\chi}_{t}=\check{\chi}_{t}\text{Exp}(K_{t}z_{t}),$$

where $K_{t}=\check{\Sigma}_{t}H_{t}^{T}S_{t}^{-1}$ and $S_{t}=H_{t}\check{\Sigma}_{t}H_{t}^{T}+V_{t}R_{t}V_{t}^{T}$.

Thus, the updated state and error covariance are given by:

$$\begin{align} \hat{\chi}_{t} & = \check{\chi}_{t}\text{Exp}(K_{t}z_{t})\\\ \hat{\Sigma}_{t} & = (\mathbf{I}-K_{t}H_{t})\check{\Sigma}_{t} \end{align}$$

Summary:

| | Axel and Silvere | manif |
|---|---|---|
| Error | $\eta^{L}=\chi^{-1}\bar{\chi}$ <br> $\text{Exp}(\xi^{L})=\chi^{-1}\bar{\chi}$ | ${}^{\mathcal{x}}\xi=\chi\ominus\bar{\chi}$ <br> $\text{Exp}({}^{\mathcal{x}}\xi)=\bar{\chi}^{-1}\chi$ |
| Predict | $\check{\chi}_{t}=\hat{\chi}_{t-1}\oplus \mathbf{u}_{t}$ <br> $\check{\Sigma}_{t}=F_{t}\hat{\Sigma}_{t-1}F_{t}^{T}+W_{t}Q_{t}W_{t}^{T}$ | $\check{\chi}_{t}=\hat{\chi}_{t-1}\oplus \mathbf{u}_{t}$ <br> $\check{\Sigma}_{t}=F_{t}\hat{\Sigma}_{t-1}F_{t}^{T}+W_{t}Q_{t}W_{t}^{T}$ |
| Update | $z_{t}=\bar{\chi}_{t}^{-1}\cdot(y_{t}-\bar{y}_{t})$ <br> $S_{t}=H_{t}\check{\Sigma}_{t}H_{t}^{T}+V_{t}R_{t}V_{t}^{T}$ <br> $K_{t}=\check{\Sigma}_{t}H_{t}^{T}S_{t}^{-1}$ <br> $\hat{\chi}_{t}=\check{\chi}_{t}\text{Exp}(-K_{t}z_{t})$ <br> $\hat{\Sigma}_{t}=(\mathbf{I}-K_{t}H_{t})\check{\Sigma}_{t}$ | $z_{t}=\bar{\chi}_{t}^{-1}\cdot(y_{t}-\bar{y}_{t})$ <br> $S_{t}=H_{t}\check{\Sigma}_{t}H_{t}^{T}+V_{t}R_{t}V_{t}^{T}$ <br> $K_{t}=\check{\Sigma}_{t}H_{t}^{T}S_{t}^{-1}$ <br> $\hat{\chi}_{t}=\check{\chi}_{t}\text{Exp}(K_{t}z_{t})$ <br> $\hat{\Sigma}_{t}=(\mathbf{I}-K_{t}H_{t})\check{\Sigma}_{t}$ |

Noticing the sign difference in the update step, one should be very careful about how the error is defined when implementing the invariant Kalman filter.

Also, in this example, since the state error in both the predict and update steps is left-invariant, there is no need to perform any covariance transformation from left-invariant to right-invariant or vice versa.

E = H * P_L * H.transpose();

// innovation
z = y - e;
Z = E + R;

// Kalman gain
K = P_L * H.transpose() * Z.inverse();

// Correction step
dx = K * z;

// Update
X = X.lplus(dx);
Owner:

I believe this should be X.lplus(-dx).

Author:

Please see my comment here: #308 (comment)

P = AdXinv * ((I - K * H) * P_L) * AdXinv.transpose();




//// III. Unfiltered ##############################################################################

// move also an unfiltered version for comparison purposes
X_unfiltered = X_unfiltered + u_unfilt;




//// IV. Results ##############################################################################

// DEBUG
cout << "X simulated : " << X_simulation.log().coeffs().transpose() << endl;
cout << "X estimated : " << X.log().coeffs().transpose() << endl;
cout << "X unfiltered: " << X_unfiltered.log().coeffs().transpose() << endl;
cout << "----------------------------------" << endl;
// END DEBUG

}

//
//
// END OF TEMPORAL LOOP. DONE.

return 0;
}