The goal of MAP is generally:
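In symbols (assuming $x$ denotes the full state trajectory, $v$ the inputs, and $y$ the measurements, matching the prose below):

$$\hat{x}_{\text{MAP}} = \arg\max_{x} \; p(x \mid v, y)$$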
This means we are trying to find the single best estimate of the system's state given our control inputs and measurements.
Using Bayes’ Theorem
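Applied to the posterior while keeping the inputs $v$ as a given, Bayes' Theorem reads (a sketch in the notation above):

$$p(x \mid v, y) = \frac{p(y \mid x, v)\, p(x \mid v)}{p(y \mid v)}$$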
Note that Bayes' Theorem can be applied while singling out specific variables and keeping the other givens in the conditioning: here the measurements $y$ are singled out, while the inputs $v$ remain on the right of the conditioning bar throughout.
We can drop the denominator $p(y \mid v)$ because it doesn't depend on $x$ (we are taking an argmax over $x$). We can also drop $v$ from $p(y \mid x, v)$, because $y$ doesn't depend on $v$ once $x$ is known (see the observation model in the LG Problem Statement).
Each measurement depends only on its own state and is conditionally independent of the other state–measurement pairs, so
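the measurement likelihood factors over time steps (a sketch, assuming time indices $k = 0, \ldots, K$):

$$p(y \mid x) = \prod_{k=0}^{K} p(y_k \mid x_k)$$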
Looking at our motion model
we see that each state $x_k$ depends only on its previous state $x_{k-1}$ and the input $v_k$. As a result, we can factor $p(x \mid v)$ as
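Under this Markov assumption (and assuming a prior $p(x_0)$ on the initial state), the factorization is:

$$p(x \mid v) = p(x_0) \prod_{k=1}^{K} p(x_k \mid x_{k-1}, v_k)$$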
This gives us
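the combined factorization of the MAP problem (sketch, same indexing assumptions as above):

$$\hat{x}_{\text{MAP}} = \arg\max_{x} \; p(x_0) \prod_{k=1}^{K} p(x_k \mid x_{k-1}, v_k) \prod_{k=0}^{K} p(y_k \mid x_k)$$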
From the motion and observation models in the LG Problem Statement, we can get that:
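If the LG Problem Statement uses the common linear-Gaussian notation $x_k = A_{k-1}x_{k-1} + v_k + w_k$ and $y_k = C_k x_k + n_k$, with $w_k \sim \mathcal{N}(0, Q_k)$ and $n_k \sim \mathcal{N}(0, R_k)$ (an assumption here), then each factor is a Gaussian:

$$p(x_k \mid x_{k-1}, v_k) = \mathcal{N}\!\left(A_{k-1}x_{k-1} + v_k,\; Q_k\right), \qquad p(y_k \mid x_k) = \mathcal{N}\!\left(C_k x_k,\; R_k\right)$$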
To make optimization easier, the logarithm of both sides is taken
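Taking logarithms turns the product of Gaussians into a sum (a sketch under the factorization derived above):

$$\ln p(x \mid v, y) = \ln p(x_0) + \sum_{k=1}^{K} \ln p(x_k \mid x_{k-1}, v_k) + \sum_{k=0}^{K} \ln p(y_k \mid x_k) + \text{const}$$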
where
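Assuming the linear-Gaussian models $x_k = A_{k-1}x_{k-1} + v_k + w_k$ and $y_k = C_k x_k + n_k$ with noise covariances $Q_k$ and $R_k$ (notation assumed here), the negative log-Gaussians give the quadratic error terms:

$$J_{v,k}(x) = \frac{1}{2}\left(x_k - A_{k-1}x_{k-1} - v_k\right)^{T} Q_k^{-1} \left(x_k - A_{k-1}x_{k-1} - v_k\right)$$

$$J_{y,k}(x) = \frac{1}{2}\left(y_k - C_k x_k\right)^{T} R_k^{-1} \left(y_k - C_k x_k\right)$$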
After dropping the terms that are independent of $x$, we can construct an objective function.
This objective function comes directly from the log-Gaussians derived above. The negative sign has been removed so that it becomes a minimization problem.
So we end up with a simplified, equivalent optimization problem
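Writing $J_{v,k}$ and $J_{y,k}$ for the quadratic motion and measurement error terms obtained from the log-Gaussians, the equivalent problem is:

$$\hat{x} = \arg\min_{x} \; J(x), \qquad J(x) = \sum_{k=1}^{K} J_{v,k}(x) + \sum_{k=0}^{K} J_{y,k}(x)$$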
We can convert our objective function into a cleaner matrix form by cleverly stacking our known data.
We then define the following block-matrix quantities:
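One common stacking is the following (a sketch; the exact block layout depends on the conventions in the LG Problem Statement, and a first block row is added if a prior on $x_0$ is included):

$$z = \begin{bmatrix} v_1 \\ \vdots \\ v_K \\ y_0 \\ \vdots \\ y_K \end{bmatrix}, \qquad
H = \begin{bmatrix} -A_0 & \mathbf{1} & & \\ & \ddots & \ddots & \\ & & -A_{K-1} & \mathbf{1} \\ C_0 & & & \\ & C_1 & & \\ & & \ddots & \\ & & & C_K \end{bmatrix}, \qquad
W = \operatorname{diag}\left(Q_1, \ldots, Q_K, R_0, \ldots, R_K\right)$$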
This lets us represent our objective function as:
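In terms of the stacked $z$, $H$, and $W$:

$$J(x) = \frac{1}{2}\left(z - Hx\right)^{T} W^{-1} \left(z - Hx\right)$$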
Since $J(x)$ is a paraboloid, there exists a closed-form solution: we set its partial derivative with respect to $x$ to $0$.
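Differentiating the quadratic $J(x) = \frac{1}{2}(z - Hx)^{T} W^{-1} (z - Hx)$ and setting the derivative to zero gives:

$$\left.\frac{\partial J}{\partial x^{T}}\right|_{\hat{x}} = -H^{T} W^{-1}\left(z - H\hat{x}\right) = 0 \quad\Longrightarrow\quad \left(H^{T} W^{-1} H\right)\hat{x} = H^{T} W^{-1} z$$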
Which is a Normal Equation!
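As a concrete sanity check, here is a small self-contained sketch that builds the stacked quantities and solves the normal equation with NumPy. The scenario is hypothetical (a 1-D random walk with direct measurements; all numbers, including the prior row on $x_0$, are assumptions, not from the text above):

```python
import numpy as np

# Toy batch linear-Gaussian MAP problem: scalar random walk
# x_k = x_{k-1} + v_k + w_k, direct measurements y_k = x_k + n_k.
K = 4                      # last time index; states x_0 .. x_K
Q, R, P0 = 0.1, 0.5, 1.0   # motion, measurement, prior variances (assumed)

rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.0, K)                      # known inputs v_1 .. v_K
x_true = np.concatenate(([0.0], np.cumsum(v)))   # noiseless trajectory
y = x_true + rng.normal(0.0, np.sqrt(R), K + 1)  # noisy measurements y_0 .. y_K

n = K + 1
# Stack z = [prior mean; inputs; measurements]
z = np.concatenate(([0.0], v, y))

# H: one prior row, K motion rows (x_k - x_{k-1} = v_k), n measurement rows
H = np.zeros((1 + K + n, n))
H[0, 0] = 1.0                         # prior on x_0
for k in range(1, K + 1):
    H[k, k], H[k, k - 1] = 1.0, -1.0  # motion row k
H[1 + K:, :] = np.eye(n)              # measurement rows

W = np.diag([P0] + [Q] * K + [R] * n)  # stacked noise covariances
Winv = np.linalg.inv(W)

# Normal equation: (H^T W^-1 H) x_hat = H^T W^-1 z
A = H.T @ Winv @ H
b = H.T @ Winv @ z
x_hat = np.linalg.solve(A, b)
```

In practice one would exploit the block-tridiagonal sparsity of $H^{T}W^{-1}H$ (e.g. with a sparse Cholesky factorization) rather than forming $W^{-1}$ densely as this toy example does.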
It is important to note here that Maximum A Posteriori optimizes to the mode of the posterior distribution!
