Added single and double pass ComputeMeanAndCovariance functions, plus basic self-test #3
Conversation
👋 A couple of points to discuss.
|
Yes, I saw the original commits piling up and increasing, so thanks for the tip. I've gone through the steps you posted, but I see the commits are still there. Should they have disappeared if that process had worked?
Ah, I missed that, so thanks for pointing it out. I'll make the necessary changes and look at submitting another PR to the main PCL repo. |
I cloned Sergey's and your forks and the rebase is failing. You need to solve the rebase conflicts before pushing again.
Launch
$ git mergetool
and save the final version of what you want. Then:
$ git status
# if README.md appears in git status
$ git add README.md
$ git rebase --continue
# otherwise
$ git rebase --skip
Then apply the force push. |
Regarding the test functions: actually, I don't like the two that Sergio referenced; they go against the Google Test guidelines for writing custom test assertions. But below them there is an example of custom assertions done properly, so I'd propose taking those as an example and implementing something like:

template <typename VectorT>
::testing::AssertionResult VectorNear (const char* expr1,
                                       const char* expr2,
                                       const char* abs_error_expr,
                                       const VectorT& v1,
                                       const VectorT& v2,
                                       double abs_error)
{
  const VectorT diff = (v1 - v2).cwiseAbs ();
  if ((diff.array () < abs_error).all ())
    return ::testing::AssertionSuccess ();
  return ::testing::AssertionFailure ()
         << "Some of the element-wise differences exceed " << abs_error_expr
         << " (which evaluates to " << abs_error << ")" << std::endl
         << "Difference: " << diff.transpose () << std::endl
         << " Value of: " << expr2 << std::endl
         << "   Actual: " << v2.transpose () << std::endl
         << " Expected: " << expr1 << std::endl
         << " Which is: " << v1.transpose ();
}

/// Expect that the element-wise differences between two vectors
/// are each within abs_error.
#define EXPECT_VECTOR_NEAR(expected, actual, abs_error)    \
  EXPECT_PRED_FORMAT3 (::pcl::test::internal::VectorNear,  \
                       (expected), (actual), abs_error)
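Just to make the intent concrete, here is a minimal usage sketch of the proposed assertion, with hypothetical values and assuming the function and macro above end up in pcl_tests.h:

#include <gtest/gtest.h>
#include <Eigen/Core>
// ... plus pcl_tests.h, or wherever EXPECT_VECTOR_NEAR ends up living.

TEST (PclTests, ExpectVectorNearUsage)
{
  const Eigen::Vector4f expected (1.0f, 2.0f, 3.0f, 1.0f);
  // In the real self-test this value would come from the function under test;
  // here it is hand-written just to demonstrate the assertion.
  const Eigen::Vector4f actual (1.0f, 2.00001f, 3.0f, 1.0f);

  // Passes because every element-wise difference is below 1e-4.
  EXPECT_VECTOR_NEAR (expected, actual, 1e-4);
}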
|
I think I understand what you're asking for, but I don't seem to have any conflicts. Would you mind taking a look at this output and clarifying the situation? Thanks!
|
You're right in your assessment. It doesn't exhibit any conflict whatsoever. My only explanation is that your master branch is not in the exact same state as the one in Sergey's repo. Duplicate the directory somewhere and try this on the duplicate:
This should reset your master branch to the exact same state as Sergey's. Then give the rebase procedure another try. |
Force-pushed from f9f86f7 to f846e7c.
Ok, I copied everything into another folder and ran the commands on the duplicate.
Terminal output of commands (collapsible)
|
That was exactly the process. I have to admit that something looks weird with the master branch. @taketwo did you remove @msy22's previous commits from the master branch? I have the impression the first 4 commits were there before and now they're not. Anyway @msy22, you got the gist of it. The take-home lesson is: never touch the master branch of your fork unless you want to pull the latest changes from upstream. If you ever find yourself doing anything other than pulling from upstream on your master branch, stop for a second and reassure yourself of what you're doing. On to the next items on the agenda 👍 |
Cool, I appreciate the lesson and I'll keep that in mind for the future.
I've pushed more changes, adding the changes you suggested, and I've also added a boolean toggle. As for making the assertion changes to pcl_tests.h, @taketwo's suggestion is more thorough, but perhaps that needs its own discussion, and it's a little tangential to the normal/covariance accuracy stuff. |
On a side note, I'm also doing some more research into numerical problems and what the underlying mathematical reason for the incorrect normal calculation is (at this stage I think it's "catastrophic cancellation"). I see that the double-pass covariance calculation follows a pretty standard algorithm, but I'm not sure where the single-pass algorithm is from. Do either of you know where the original sources for this calculation are? |
Disclaimer: I'm claiming this based on a (potentially faulty) memory of having looked into it before. It is likely using the well-known covariance identity presented here (second expression). Be sure to have a read of that as well. |
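To make the distinction concrete, here is a minimal sketch of the two approaches, assuming plain Eigen::Vector3d points in a std::vector. It is purely illustrative of the identity Cov = E[p pᵀ] − μ μᵀ, not the actual PCL implementation:

#include <Eigen/Dense>
#include <vector>

// Two-pass: compute the mean first, then accumulate centered outer products.
Eigen::Matrix3d
covarianceTwoPass (const std::vector<Eigen::Vector3d>& pts)
{
  Eigen::Vector3d mean = Eigen::Vector3d::Zero ();
  for (const auto& p : pts)
    mean += p;
  mean /= static_cast<double> (pts.size ());

  Eigen::Matrix3d cov = Eigen::Matrix3d::Zero ();
  for (const auto& p : pts)
    cov += (p - mean) * (p - mean).transpose ();
  return cov / static_cast<double> (pts.size ());
}

// Single-pass: accumulate the raw sums in one loop and apply the identity
// Cov = E[p p^T] - mu mu^T. Mathematically identical to the two-pass form,
// but it can lose precision (catastrophic cancellation) when the points lie
// far from the origin relative to their spread.
Eigen::Matrix3d
covarianceSinglePass (const std::vector<Eigen::Vector3d>& pts)
{
  Eigen::Vector3d sum = Eigen::Vector3d::Zero ();
  Eigen::Matrix3d sum_outer = Eigen::Matrix3d::Zero ();
  for (const auto& p : pts)
  {
    sum += p;
    sum_outer += p * p.transpose ();
  }
  const double n = static_cast<double> (pts.size ());
  const Eigen::Vector3d mean = sum / n;
  return sum_outer / n - mean * mean.transpose ();
}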
Ah, I see what's happened: yes, I was reading those pages and looking at those equations, but an error at my end meant they didn't match what I had written on paper. Everything matches now, so I'll carry on. |
Ok, so now that I understand the covariance calculations better, I have a question regarding the difference between them. The single-pass method is mathematically the same as the double-pass method, except the equation has been expanded out. According to what I've read, this is ostensibly because the single-pass method only accesses the inputs once instead of twice; in terms of code, this just means the input is looped over once rather than twice.
To test this, I changed the benchmark call
From: pcl::experimental::Function <pcl::Type, Scalar> (*CloudIn ...
To: pcl::experimental::computeMeanAndCovarianceMatrixDoublePass <pcl::Type, Scalar> (*CloudIn ...
to force the benchmark to use the double-pass version instead of the single-pass version. I then ran the benchmark with a few different sample rates and iteration counts, but the results are pretty much the same. It's only when you get to 60000 iterations (roughly the number of points in a Velodyne HDL-32 scan) that you start seeing a difference, and even then it's pretty negligible.
Terminal output of `make benchmarks` (20 samples, 4 iterations)
Terminal output of `make benchmarks` (30 samples, 50 iterations)
Terminal output of `make benchmarks` (30 samples, 60000 iterations)
So have I interpreted this output correctly and the methods aren't really much faster than each other? Or is there something about my implementation that is wrong? On a side note, it appears that the output logged at the end to |
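For cross-checking those numbers, timing the two variants outside the existing benchmark harness could be done with something like the generic helper below. This is only a sketch: averageMicroseconds is a hypothetical helper, not part of the PR or of PCL's benchmark code.

#include <chrono>
#include <cstddef>

// Hypothetical helper: run a callable `iterations` times and return the
// average wall-clock time per call in microseconds.
template <typename Callable>
double
averageMicroseconds (Callable&& fn, std::size_t iterations)
{
  const auto start = std::chrono::steady_clock::now ();
  for (std::size_t i = 0; i < iterations; ++i)
    fn ();
  const auto stop = std::chrono::steady_clock::now ();
  const std::chrono::duration<double, std::micro> elapsed = stop - start;
  return elapsed.count () / static_cast<double> (iterations);
}

// Usage sketch: wrap each covariance variant in a lambda and compare, e.g.
//   averageMicroseconds ([&]{ covarianceSinglePass (pts); }, 60000);
//   averageMicroseconds ([&]{ covarianceTwoPass (pts); }, 60000);
// (In a real benchmark, accumulate the results somewhere so the compiler
// cannot optimise the calls away.)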
So how do these functions look for single- and double-pass ComputeMeanAndCovariance? I can add a toggle later so the user can turn de-meaning on or off, but currently both functions de-mean by default.
I've also added a very basic self-test for the functions to confirm they work. Regarding how we actually test the results, there are three options, which I've illustrated with the centroid:
So do these functions look ok? And which calculated vs expected comparison method do we prefer?
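As a concrete, purely hypothetical illustration of the calculated-vs-expected comparison, a centroid check on a tiny hand-computed cloud could look like the sketch below. It uses the existing pcl::computeMeanAndCovarianceMatrix for concreteness rather than the experimental functions from this PR, and contrasts per-element EXPECT_NEAR checks with the EXPECT_VECTOR_NEAR assertion proposed earlier:

#include <gtest/gtest.h>
#include <pcl/point_types.h>
#include <pcl/common/centroid.h>
// ... plus wherever EXPECT_VECTOR_NEAR is defined (e.g. pcl_tests.h).

TEST (Covariance, CentroidOfKnownCloud)
{
  // Four points whose centroid is (1, 1, 0) by construction.
  pcl::PointCloud<pcl::PointXYZ> cloud;
  cloud.push_back (pcl::PointXYZ (0.0f, 0.0f, 0.0f));
  cloud.push_back (pcl::PointXYZ (2.0f, 0.0f, 0.0f));
  cloud.push_back (pcl::PointXYZ (0.0f, 2.0f, 0.0f));
  cloud.push_back (pcl::PointXYZ (2.0f, 2.0f, 0.0f));

  Eigen::Matrix3f covariance;
  Eigen::Vector4f centroid;
  pcl::computeMeanAndCovarianceMatrix (cloud, covariance, centroid);

  // Element-by-element comparison against hand-computed values.
  EXPECT_NEAR (1.0f, centroid[0], 1e-6);
  EXPECT_NEAR (1.0f, centroid[1], 1e-6);
  EXPECT_NEAR (0.0f, centroid[2], 1e-6);

  // Whole-vector comparison using the proposed custom assertion.
  const Eigen::Vector3f expected (1.0f, 1.0f, 0.0f);
  const Eigen::Vector3f actual = centroid.head<3> ();
  EXPECT_VECTOR_NEAR (expected, actual, 1e-6);
}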