stateEstimation worldModeling A form of Iterative Closest Point (ICP) that models each point as a Gaussian distribution based on its local neighborhood.

How it works

Instead of computing a point-to-point distance error, we compute an extension of the Mahalanobis distance between a local distribution in the source and a local distribution in the target.

Distance between two distributions

People colloquially call this computation a Mahalanobis distance even though it's not the formal definition; it's more of an extension.

Given two corresponding points $a$ and $b$, each modeled by a local distribution: $a \sim \mathcal{N}(\hat{a}, C^A)$ in the source and $b \sim \mathcal{N}(\hat{b}, C^B)$ in the target.

Then the extended Mahalanobis distance between them is

$$d^2 = (b - a)^\top \left(C^A + C^B\right)^{-1} (b - a)$$

This no longer computes the number of standard deviations two points are from each other.

Instead, it's better to think of this as a likelihood that the two points are the same.
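A minimal numpy sketch of the distance above (the function name is mine, not from any library). The only difference from the classic Mahalanobis distance is that both points carry a covariance, and the two covariances are summed:

```python
import numpy as np

def extended_mahalanobis_sq(a, b, cov_a, cov_b):
    """Squared 'extended Mahalanobis distance' between two points that
    each carry a local covariance. Unlike the classic definition (one
    point, one distribution), both distributions contribute: their
    covariances are summed before inverting."""
    d = b - a
    fused = cov_a + cov_b                        # combined covariance
    return float(d @ np.linalg.solve(fused, d))  # d^T fused^-1 d

# Two points 1 m apart, each with isotropic covariance 0.5*I:
# fused covariance is the identity, so d^2 = 1.0
a = np.zeros(3)
b = np.array([1.0, 0.0, 0.0])
C = 0.5 * np.eye(3)
print(extended_mahalanobis_sq(a, b, C, C))  # → 1.0
```

Note that the more uncertain either neighborhood is, the larger the fused covariance and the smaller the resulting distance, which is exactly the "likelihood, not standard deviations" reading above.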

Probabilistic Interpretation of GICP

Say we have two local distributions: one in the source cloud, the other in the target cloud.

These distributions are estimated from the k nearest neighbors around each point.
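A sketch of that neighborhood estimate (function name is mine; the brute-force neighbor search stands in for the k-d tree a real implementation would use):

```python
import numpy as np

def local_covariance(cloud, idx, k=20):
    """Mean and covariance of the k-nearest-neighbor neighborhood of
    cloud[idx]. Brute-force O(N) search per query, purely for clarity."""
    dists = np.linalg.norm(cloud - cloud[idx], axis=1)
    nn = np.argsort(dists)[:k]      # k nearest neighbors, incl. the point itself
    pts = cloud[nn]
    return pts.mean(axis=0), np.cov(pts.T)  # 3-vector, 3x3 matrix

# On a noisy plane, the covariance should be nearly flat along z
rng = np.random.default_rng(0)
cloud = rng.uniform(-1, 1, size=(500, 3))
cloud[:, 2] *= 0.01                 # squash the cloud into the z≈0 plane
mean, cov = local_covariance(cloud, 0)
print(np.linalg.eigvalsh(cov))      # smallest eigenvalue ≪ the other two
```

That eigenvalue gap is what makes the Gaussian model informative: the distribution is tight along the surface normal and loose along the surface.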

We then model the difference between the two (transformed) points as

$$d = b - Ta$$

Assumption: They are the same point

If we assume them to share the same true point,

$$\hat{b} = T\hat{a} = \hat{p},$$

where $\hat{p}$ is the true value of the point, then we would expect the difference $d = b - Ta$ to have a mean of $0$ and a covariance equal to the sum of the covariances of the two distributions (with the source covariance rotated into the target frame).

This tells us that if the two points share the same true point, then their difference can be modeled by a Gaussian distribution with

$$\mu = 0 \qquad \text{and} \qquad \Sigma = C^B + R\,C^A R^\top$$

Which means we know that we've successfully transformed the source cloud onto the target when the likelihood of $d$ is at its maximum.
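A quick Monte Carlo sanity check of this assumption (covariance values are made up for illustration): sample two noisy observations of the *same* true point and difference them; the empirical result is zero-mean with covariance equal to the sum of the two covariances.

```python
import numpy as np

rng = np.random.default_rng(0)
true_point = np.array([1.0, 2.0, 3.0])
cov_a = np.diag([0.04, 0.01, 0.02])  # source measurement noise
cov_b = np.diag([0.02, 0.03, 0.01])  # target measurement noise

# Many noisy observations of the SAME underlying point
a = rng.multivariate_normal(true_point, cov_a, size=200_000)
b = rng.multivariate_normal(true_point, cov_b, size=200_000)
d = b - a  # the observed differences

# Empirical mean ≈ 0 and empirical covariance ≈ cov_a + cov_b
print(np.round(d.mean(axis=0), 3))
print(np.round(np.cov(d.T), 3))
```

This is the standard fact that the difference of independent Gaussians is Gaussian with summed covariances; the rotated-covariance term appears once a transform sits between the two clouds.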

What we are trying to find

Because the goal of GICP is to find a transform $T$ such that

$$T a_i = b_i \quad \text{for every correspondence } i,$$

we must find the transform on the source cloud that maximizes the likelihood that the observed differences can be modeled by

$$d_i \sim \mathcal{N}\!\left(0,\; C_i^B + R\,C_i^A R^\top\right)$$

where

$$d_i = b_i - T a_i$$

This is just the observed difference plugged into a Gaussian with mean $0$: we are asking how likely it is that the (transformed) observation belongs to the distribution we get by assuming the source and target share the same true point.

Because the likelihood $\prod_i p(d_i)$ is very expensive to calculate as a product, we can take the log-likelihood to turn it into a summation, which is a lot easier to optimize.

Taking the log of the probability,

$$\log \prod_i p(d_i) = -\frac{1}{2} \sum_i \left[\, d_i^\top \left(C_i^B + R\,C_i^A R^\top\right)^{-1} d_i + \log \det\!\left(2\pi \left(C_i^B + R\,C_i^A R^\top\right)\right) \right]$$

We can drop the constants because they don't depend on $T$, and GICP often drops the determinant term as well, leaving

$$T^* = \arg\min_T \sum_i d_i^\top \left(C_i^B + R\,C_i^A R^\top\right)^{-1} d_i$$

This is where we finally end up with the Mahalanobis-like distance calculation for GICP.

All this to tell us that we are actually calculating the likelihood that the two points share the same true point.
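Putting the pieces together, the objective above can be sketched as a plain loop over fixed correspondences (function name is mine; this is the cost, not the optimizer):

```python
import numpy as np

def gicp_cost(source, target, covs_a, covs_b, R, t):
    """Sum of squared extended-Mahalanobis residuals — the quantity
    GICP minimizes once the constants and determinant term are dropped."""
    cost = 0.0
    for a, b, Ca, Cb in zip(source, target, covs_a, covs_b):
        d = b - (R @ a + t)            # residual after transforming the source
        M = Cb + R @ Ca @ R.T          # fused covariance in the target frame
        cost += d @ np.linalg.solve(M, d)
    return cost

# A perfect alignment has zero cost
pts = np.random.default_rng(0).normal(size=(5, 3))
covs = [np.eye(3)] * 5
print(gicp_cost(pts, pts, covs, covs, np.eye(3), np.zeros(3)))  # → 0.0
```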

How do we actually get this transformation

We want to solve

$$T^* = \arg\min_T \sum_i d_i^\top \left(C_i^B + R\,C_i^A R^\top\right)^{-1} d_i$$

This is a non-linear least-squares problem, which can be solved by a class of iterative optimizers. There are many ways to solve for the transformation here; one is the Gauss-Newton method.
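A toy Gauss-Newton sketch under simplifying assumptions: correspondences are fixed and known (real ICP re-associates points every iteration), and the pose update is a 6-vector $[\omega, v]$ applied as a left perturbation. All function names are mine.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix so that skew(v) @ x == cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: rotation matrix from an axis-angle vector."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def gicp_gauss_newton(source, target, covs_a, covs_b, iters=20):
    """Gauss-Newton on the GICP cost with fixed correspondences."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        H, g = np.zeros((6, 6)), np.zeros(6)
        for a, b, Ca, Cb in zip(source, target, covs_a, covs_b):
            Ra = R @ a
            d = b - (Ra + t)                         # residual
            W = np.linalg.inv(Cb + R @ Ca @ R.T)     # fused information matrix
            J = np.hstack([skew(Ra), -np.eye(3)])    # 3x6 Jacobian of d
            H += J.T @ W @ J
            g += J.T @ W @ d
        dx = np.linalg.solve(H, -g)                  # update [w, v]
        R = exp_so3(dx[:3]) @ R
        t = t + dx[3:]
    return R, t

# Recover a known rigid transform from noiseless correspondences
rng = np.random.default_rng(1)
src = rng.normal(size=(10, 3))
R_true = exp_so3(np.array([0.1, -0.05, 0.2]))
t_true = np.array([0.3, -0.1, 0.5])
tgt = src @ R_true.T + t_true
covs = [0.01 * np.eye(3)] * 10
R_est, t_est = gicp_gauss_newton(src, tgt, covs, covs)  # ≈ (R_true, t_true)
```

Each iteration linearizes the residual around the current pose, accumulates the weighted normal equations, and applies the increment on the left; on noiseless data this converges to the true transform in a handful of steps.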

There are other methods, but I can't be bothered to go any further into this for now. small_gicp is fast not because of its optimizer, but because of its parallel processing architecture and a bunch of implementation optimizations.

Surprise! Under the hood, we are actually calculating the Mahalanobis-like distance and optimizing the transform with Lie algebra.