Add liekf example for se2_localization #308
base: devel
Since the expected measurement is $h(X)=t$, where $t$ is the translation, this is equivalent to $X$ acting on $0$. The Jacobian $H$ is derived as follows:
This comment I guess will not appear in the final commits.
Yes, this comment will not appear in the commits.
This matrix isn't right. See e.g. chapter "4 Simplified car example" in "The Invariant Extended Kalman filter as a stable observer", A. Barrau, S. Bonnabel.
This measurement model is of the form
y = x.b + v
and is thus left invariant. If I'm not mistaken, H should then be derived with the right-plus (which corresponds to left invariance). If you work out the math you should find something along the lines of:
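The suggested matrix did not survive extraction; from the later discussion in this thread it was presumably along the lines of

$$H=\begin{bmatrix}-I_{2}&0\end{bmatrix}$$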
Ok this makes more sense
Then you need the Jacobian of X.inv(y-h(X)), and not of h(X). OK, to find such a Jacobian, just use the rules for right-Jacobians in the paper; you will certainly find [-I, 0], I guess. You just need to know whether to differentiate w.r.t. X or w.r.t. \hat X -- I am unsure at the moment.
I get the desired result from the Jacobian of $X^{-1}\cdot h(X)$ w.r.t. $X$, where $h(X)=X\cdot 0=t$.
Am I doing this correctly? Does this make sense?
looks good
After reading the papers by Axel Barrau and Silvere Bonnabel (2017, 2018) and Jonathan Arsenault (2019), I believe I finally understand the concepts. Let me use this example to summarize my findings and compare the results with the manif paper.
Example: SE(2) with GPS-like measurement
Recap the setting:
We consider the robot pose $\chi$ in SE(2), and the GPS measurement $y$ is in $\mathbb{R}^{2}$.
The control signal $\mathbf{u}$ is in se(2) and is corrupted by additive Gaussian noise $\mathbf{\varepsilon}$, which has a mean of 0 and a covariance of $\mathbf{Q}$. Upon receiving a control $\mathbf{u}$, the robot pose is updated as follows:
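The update equation itself did not survive extraction; from the definitions above, it is presumably the standard noisy plant on the group:

$$\chi_{t}=f(\chi_{t-1},\mathbf{u}_{t},\mathbf{\varepsilon}_{t})=\chi_{t-1}\,\text{Exp}(\mathbf{u}_{t}+\mathbf{\varepsilon}_{t})$$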
The GPS measurement $\mathbf{y}$ is in Cartesian form for simplicity, and the noise is denoted $\mathbf{\delta}$, which has a mean of 0 and a covariance of $\mathbf{R}$.
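The measurement equation (likely an image in the original) is presumably the group action on a known point $b$:

$$\mathbf{y}_{t}=\chi_{t}\cdot b+\mathbf{\delta}_{t}$$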
where $b=0$ for the rigid motion action in this case.
Let $\bar{\chi}$ be the estimate of $\chi$. Therefore, we have $\bar{\chi}_{t}=f(\bar{\chi}_{t-1}, \mathbf{u}_{t}, 0)$. Now we can begin working on the math and comparing the differences.
Predict step:
Axel and Silvere's approach:
They define the left-invariant error as $\eta_{t}=\chi^{-1}_{t}\bar{\chi}_{t}$.
One can find a $\xi_{t}^{L}$ such that $\eta_{t}^{L} = \text{Exp}(\xi_{t}^{L})$. This allows us to rewrite the above equation as follows:
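The rewritten equation is missing here; to first order in $\xi^{L}$ and $\mathbf{\varepsilon}$ it should read:

$$\xi_{t}^{L}=F_{t}\,\xi_{t-1}^{L}+W_{t}\,\mathbf{\varepsilon}_{t}$$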
The Jacobians are given by
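The Jacobians themselves did not survive extraction; working out the left-invariant error propagation for the plant above, one expects (a hedged reconstruction, where $J_{r}$ denotes the right Jacobian of SE(2)):

$$F_{t}=\text{Ad}_{\text{Exp}(\mathbf{u}_{t})}^{-1},\qquad W_{t}=-J_{r}(\mathbf{u}_{t})$$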
Thus, the predicted state and error covariance are computed as follows:
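The predict equations are missing; they are presumably the standard form:

$$\bar{\chi}_{t}=\bar{\chi}_{t-1}\,\text{Exp}(\mathbf{u}_{t}),\qquad \check{\Sigma}_{t}=F_{t}\Sigma_{t-1}F_{t}^{T}+W_{t}Q_{t}W_{t}^{T}$$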
manif approach:
Their approach involves defining the error on the tangent space and applying the error-state extended Kalman filter on the Lie Group. To facilitate this, they introduce the concepts of right-plus and right-minus operators as follows:
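The operator definitions are missing here; in the manif paper they are:

$$\chi\oplus\xi=\chi\,\text{Exp}(\xi),\qquad \chi_{2}\ominus\chi_{1}=\text{Log}(\chi_{1}^{-1}\chi_{2})$$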
Next, using a Taylor expansion of $\chi_{t-1}$ around $\bar{\chi}_{t-1}$, we have:
Let ${}^{\mathcal{x}}\xi=\chi\ominus\bar{\chi}=\text{Log}(\bar{\chi}^{-1}\chi)$. From this definition, we have $\text{Exp}({}^{\mathcal{x}}\xi)=\bar{\chi}^{-1}\chi$. We note that ${}^{\mathcal{x}}\xi = -\xi^{L}$.
We can now rewrite the earlier equation as:
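The rewritten equation is missing; it is presumably the linearized error dynamics:

$${}^{\mathcal{x}}\xi_{t}=F_{t}\,{}^{\mathcal{x}}\xi_{t-1}+W_{t}\,\mathbf{\varepsilon}_{t}$$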
where the Jacobians are defined as follows:
For $F_{t}$:
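The matrix itself is missing; for this plant one expects (the same $F_{t}$ as in Axel and Silvere's approach, since the text below notes only $W_{t}$ differs):

$$F_{t}=\text{Ad}_{\text{Exp}(\mathbf{u}_{t})}^{-1}$$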
For $W_{t}$:
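The matrix itself is missing; for the right-plus error one expects (hedged reconstruction, with $J_{r}$ the right Jacobian of SE(2)):

$$W_{t}=J_{r}(\mathbf{u}_{t})$$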
Note that in the manif approach, $W_{t}$ is the negation of $W_{t}$ in Axel and Silvere's approach. This difference arises from the definition of the error state ${}^{\mathcal{x}}\xi$, where $\text{Exp}({}^{\mathcal{x}}\xi)=\bar{\chi}^{-1}\chi$. If you substitute ${}^{\mathcal{x}}\xi$ with $-\xi^{L}$ in the error propagation equation, you will get the same $W_{t}$ as in Axel and Silvere's approach.
Therefore, the predicted state and error covariance are
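The equations are missing; presumably:

$$\bar{\chi}_{t}=\bar{\chi}_{t-1}\oplus\mathbf{u}_{t},\qquad \check{\Sigma}_{t}=F_{t}\Sigma_{t-1}F_{t}^{T}+W_{t}Q_{t}W_{t}^{T}$$

Note that the sign of $W_{t}$ does not affect the covariance, since $W_{t}$ enters quadratically.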
which match exactly with the results in Axel and Silvere's approach.
Update step:
Axel and Silvere's approach:
Now, define the innovation $z_{t}$ such that
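The defining equation is missing; the standard LIEKF innovation for this model (with $b=0$) is presumably:

$$z_{t}=\bar{\chi}_{t}^{-1}\cdot\mathbf{y}_{t}-b=\bar{R}_{t}^{T}(\mathbf{y}_{t}-\bar{t}_{t})$$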
where
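The matrices are missing; linearizing with the left-invariant error defined above, and consistent with the $[-I, 0]$ discussed earlier in the thread, one finds:

$$z_{t}=H_{t}\,\xi_{t}^{L}+V_{t}\mathbf{\delta}_{t},\qquad H_{t}=\begin{bmatrix}-I_{2}&0\end{bmatrix},\qquad V_{t}=\bar{R}_{t}^{T}$$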
You will obtain this result by removing the last row of zeros in these matrices.
In addition, we have $\text{Exp}(\xi_{t}^{L})=\eta_{t}^{L}=\chi_{t}^{-1}\bar{\chi}_{t}$, which implies that $\chi_{t}=\bar{\chi}_{t}\text{Exp}(-\xi_{t}^{L})$. This suggests the update of $\chi_{t}$ is
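The update equation is missing; substituting the estimated error $\hat{\xi}_{t}^{L}=K_{t}z_{t}$, it is presumably:

$$\hat{\chi}_{t}=\bar{\chi}_{t}\,\text{Exp}(-K_{t}z_{t})$$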
where $K_{t}=\check{\Sigma}_{t}H_{t}^{T}S_{t}^{-1}$ and $S_{t}=H_{t}\check{\Sigma}_{t}H_{t}^{T}+V_{t}R_{t}V_{t}^{T}$.
Thus, the updated state and error covariance are
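These equations are missing; they are presumably the standard Kalman correction written on the group:

$$\hat{\chi}_{t}=\bar{\chi}_{t}\,\text{Exp}(-K_{t}z_{t}),\qquad \hat{\Sigma}_{t}=(I-K_{t}H_{t})\check{\Sigma}_{t}$$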
manif approach:
Define the innovation $z_{t}$ such that
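The defining equation is missing; it is presumably the same innovation as in Axel and Silvere's approach:

$$z_{t}=\bar{\chi}_{t}^{-1}\cdot\mathbf{y}_{t}-b$$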
Now let $h(\chi)=(\bar{\chi}_{t}^{-1}\chi_{t})\cdot b$ and note that
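The noted identity is missing; presumably:

$$h(\chi_{t})=(\bar{\chi}_{t}^{-1}\chi_{t})\cdot b=\text{Exp}({}^{\mathcal{x}}\xi_{t})\cdot b$$

which for $b=0$ is, to first order, the translational part of ${}^{\mathcal{x}}\xi_{t}$.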
So, we can rewrite the innovation as
$$z_{t} = H_{t}\,{}^{\mathcal{x}}\xi_{t}+V_{t}\mathbf{\delta}_{t}$$

where
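The matrix is missing; given the negation noted below, it is presumably:

$$H_{t}=\begin{bmatrix}I_{2}&0\end{bmatrix}$$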
and $V_{t}=\bar{R}_{t}^{T}$. Again, the $H_{t}$ in the manif approach is the negation of $H_{t}$ in Axel and Silvere's approach. This difference arises from the definition of the error ${}^{\mathcal{x}}\xi_{t}$.
Furthermore, we have $\text{Exp}({}^{\mathcal{x}}\xi_{t})=\bar{\chi}_{t}^{-1}\chi_{t}$, which implies that $\chi_{t}=\bar{\chi}_{t}\text{Exp}({}^{\mathcal{x}}\xi_{t})$. This suggests the update of $\chi_{t}$ is
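The update equation is missing; substituting the estimated error ${}^{\mathcal{x}}\hat{\xi}_{t}=K_{t}z_{t}$, it is presumably:

$$\hat{\chi}_{t}=\bar{\chi}_{t}\,\text{Exp}(K_{t}z_{t})=\bar{\chi}_{t}\oplus K_{t}z_{t}$$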
where $K_{t}=\check{\Sigma}_{t}H_{t}^{T}S_{t}^{-1}$ and $S_{t}=H_{t}\check{\Sigma}_{t}H_{t}^{T}+V_{t}R_{t}V_{t}^{T}$.
Thus, the updated state and error covariance are given by:
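These equations are missing; presumably:

$$\hat{\chi}_{t}=\bar{\chi}_{t}\oplus K_{t}z_{t},\qquad \hat{\Sigma}_{t}=(I-K_{t}H_{t})\check{\Sigma}_{t}$$

Since both $H_{t}$ and hence $K_{t}$ flip sign relative to Axel and Silvere's approach, the updated state and covariance agree between the two.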
Summary:
Given the sign difference in the update step, one should be very careful about how the error is defined when implementing the invariant Kalman filter.
Also, in this example, since the state error in both the predict step and the update step is left-invariant, there is no need to perform any covariance transformation from left-invariant to right-invariant or vice versa.
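Since the whole discussion hinges on this sign flip, here is a small self-contained numerical check (a sketch independent of manif, assuming the conventions above: $\xi=(\rho_x,\rho_y,\theta)$, GPS model $y=t+\delta$, $b=0$). It differentiates the innovation $z=\bar{R}^{T}(t-\bar{t})$ numerically against each error definition:

```python
# Numerical check of the measurement Jacobian H under both error conventions.
# Assumption (not from the PR code): SE(2) stored as (R, t), xi = (rho_x, rho_y, theta).
import numpy as np

def exp_se2(xi):
    """Exponential map of SE(2) for xi = (rho_x, rho_y, theta)."""
    rho, th = xi[:2], xi[2]
    if abs(th) < 1e-9:
        V = np.eye(2)
    else:
        V = np.array([[np.sin(th), -(1.0 - np.cos(th))],
                      [1.0 - np.cos(th), np.sin(th)]]) / th
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return R, V @ rho

def compose(a, b):
    """Group composition (Ra, ta) * (Rb, tb)."""
    Ra, ta = a
    Rb, tb = b
    return Ra @ Rb, Ra @ tb + ta

def innovation(Xbar, X):
    """z = Rbar^T (t - tbar): the GPS innovation with b = 0 and no noise."""
    Rbar, tbar = Xbar
    _, t = X
    return Rbar.T @ (t - tbar)

# An arbitrary estimate Xbar.
Xbar = exp_se2(np.array([0.3, -0.2, 0.7]))

def num_jac(err_to_state):
    """Forward-difference Jacobian of the innovation w.r.t. a 3-vector error."""
    eps, J = 1e-6, np.zeros((2, 3))
    z0 = innovation(Xbar, err_to_state(np.zeros(3)))
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        J[:, i] = (innovation(Xbar, err_to_state(d)) - z0) / eps
    return J

# Axel/Silvere convention: Exp(xi_L) = X^{-1} Xbar, i.e. X = Xbar Exp(-xi_L).
H_bb = num_jac(lambda xi: compose(Xbar, exp_se2(-xi)))
# manif convention: Exp(xi) = Xbar^{-1} X, i.e. X = Xbar Exp(xi).
H_manif = num_jac(lambda xi: compose(Xbar, exp_se2(xi)))

print(np.round(H_bb, 4))     # ~ [[-1, 0, 0], [0, -1, 0]]
print(np.round(H_manif, 4))  # ~ [[ 1, 0, 0], [0,  1, 0]]
```

Both Jacobians come out as $[\pm I_{2}\;\;0]$, which supports the sign analysis above under the stated conventions.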
I believe this should be X.lplus(-dx).
Please see my comment here: #308 (comment)