As opposed to Maximum A Posteriori, where we computed a single point estimate of the entire state trajectory by solving an optimization problem, Bayesian Inference aims to compute the full posterior (that is, it aims to calculate the full distribution over probable state trajectories).

We're not just calculating a single best guess; we are computing the distribution of possible guesses!

This is important because we get a measure of the uncertainty in our estimated state.

Priors

From the LG Problem Statement, we have a linear motion model driven by Gaussian noise, together with a Gaussian prior on the initial state.
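
For concreteness, here is a sketch of a standard discrete-time linear-Gaussian motion model; the symbols $\mathbf{A}_{k-1}$ (transition matrix), $\mathbf{v}_k$ (known input), $\mathbf{w}_k$ (process noise), and $\check{\mathbf{x}}_0, \check{\mathbf{P}}_0$ (initial-state prior) are assumed notation rather than anything fixed by this note:

$$
\mathbf{x}_k = \mathbf{A}_{k-1}\,\mathbf{x}_{k-1} + \mathbf{v}_k + \mathbf{w}_k, \qquad \mathbf{w}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}_k), \qquad k = 1, \dots, K,
$$
$$
\mathbf{x}_0 = \check{\mathbf{x}}_0 + \mathbf{w}_0, \qquad \mathbf{w}_0 \sim \mathcal{N}(\mathbf{0}, \check{\mathbf{P}}_0).
$$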

This can be written in lifted matrix form (see Matrix and Component Forms; the idea is to stack the component-form expressions for every timestep into one big matrix expression).
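
Using the notation assumed above, a sketch of the lifted form, with all states, inputs, and noise terms stacked into tall vectors:

$$
\mathbf{x} = \mathbf{A}(\mathbf{v} + \mathbf{w}),
$$

where $\mathbf{x} = (\mathbf{x}_0, \dots, \mathbf{x}_K)$, $\mathbf{v} = (\check{\mathbf{x}}_0, \mathbf{v}_1, \dots, \mathbf{v}_K)$, $\mathbf{w} = (\mathbf{w}_0, \dots, \mathbf{w}_K)$, and

$$
\mathbf{A} =
\begin{bmatrix}
\mathbf{1} & & & & \\
\mathbf{A}_0 & \mathbf{1} & & & \\
\mathbf{A}_1\mathbf{A}_0 & \mathbf{A}_1 & \mathbf{1} & & \\
\vdots & \vdots & & \ddots & \\
\mathbf{A}_{K-1}\cdots\mathbf{A}_0 & \mathbf{A}_{K-1}\cdots\mathbf{A}_1 & \cdots & \mathbf{A}_{K-1} & \mathbf{1}
\end{bmatrix}.
$$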

$\mathbf{A}$ is the lifted transition matrix, which, as shown, is lower-triangular. The lifted mean, $\check{\mathbf{x}}$, and covariance, $\check{\mathbf{P}}$, are the expectation and covariance of the lifted form, with the individual noise covariances stacked into one block-diagonal matrix.
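
A sketch of those quantities, still under the assumed notation, with $\mathbf{Q}$ the block-diagonal stacking of the initial-state and process-noise covariances:

$$
\check{\mathbf{x}} = E[\mathbf{x}] = \mathbf{A}\mathbf{v}, \qquad
\check{\mathbf{P}} = E\!\left[(\mathbf{x}-\check{\mathbf{x}})(\mathbf{x}-\check{\mathbf{x}})^T\right] = \mathbf{A}\mathbf{Q}\mathbf{A}^T, \qquad
\mathbf{Q} = \operatorname{diag}\!\left(\check{\mathbf{P}}_0, \mathbf{Q}_1, \dots, \mathbf{Q}_K\right).
$$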

We can see that our prior can be neatly represented as a single Gaussian over the entire trajectory.
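
Written out, this is just the Gaussian with the lifted mean and covariance from above (a sketch, not notation fixed by this note):

$$
p(\mathbf{x}) = \mathcal{N}\!\left(\check{\mathbf{x}},\, \check{\mathbf{P}}\right) = \mathcal{N}\!\left(\mathbf{A}\mathbf{v},\; \mathbf{A}\mathbf{Q}\mathbf{A}^T\right).
$$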

This gives us our prior.

Posterior

From the LG Problem Statement, we also have a linear observation model corrupted by Gaussian measurement noise.
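
As with the motion model, a sketch of the standard form, with $\mathbf{C}_k$ the measurement matrix and $\mathbf{n}_k$ the measurement noise (assumed notation):

$$
\mathbf{y}_k = \mathbf{C}_k\,\mathbf{x}_k + \mathbf{n}_k, \qquad \mathbf{n}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_k), \qquad k = 0, \dots, K.
$$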

This can likewise be lifted into stacked matrix form.
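
A sketch of the lifted observation model, with the measurement matrices and noise covariances stacked block-diagonally (same assumed notation):

$$
\mathbf{y} = \mathbf{C}\mathbf{x} + \mathbf{n}, \qquad
\mathbf{C} = \operatorname{diag}(\mathbf{C}_0, \dots, \mathbf{C}_K), \qquad
\mathbf{n} \sim \mathcal{N}(\mathbf{0}, \mathbf{R}), \qquad
\mathbf{R} = \operatorname{diag}(\mathbf{R}_0, \dots, \mathbf{R}_K).
$$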

We can then write the joint Gaussian PDF of the lifted state and measurements (see Joint Gaussian PDFs).
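
Under the assumptions above, the joint of the prior state and the measurements takes the usual block-Gaussian form; a sketch:

$$
p(\mathbf{x}, \mathbf{y}) = \mathcal{N}\!\left(
\begin{bmatrix} \check{\mathbf{x}} \\ \mathbf{C}\check{\mathbf{x}} \end{bmatrix},\;
\begin{bmatrix}
\check{\mathbf{P}} & \check{\mathbf{P}}\mathbf{C}^T \\
\mathbf{C}\check{\mathbf{P}} & \mathbf{C}\check{\mathbf{P}}\mathbf{C}^T + \mathbf{R}
\end{bmatrix}
\right).
$$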

We can factor the joint Gaussian into $p(\mathbf{x} \mid \mathbf{y})\,p(\mathbf{y})$.

We only care about the first factor, $p(\mathbf{x} \mid \mathbf{y})$, because that is the Bayesian posterior. We know how to get the parameters of this conditional normal distribution from the Joint Gaussian PDFs, and using that we get the posterior mean and covariance.
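
A sketch of the standard Gaussian conditioning result applied to the block Gaussian above, with $\hat{\mathbf{x}}$ and $\hat{\mathbf{P}}$ denoting the posterior mean and covariance (assumed names):

$$
p(\mathbf{x} \mid \mathbf{y}) = \mathcal{N}\!\left(\hat{\mathbf{x}}, \hat{\mathbf{P}}\right),
$$
$$
\hat{\mathbf{x}} = \check{\mathbf{x}} + \check{\mathbf{P}}\mathbf{C}^T\!\left(\mathbf{C}\check{\mathbf{P}}\mathbf{C}^T + \mathbf{R}\right)^{-1}\!\left(\mathbf{y} - \mathbf{C}\check{\mathbf{x}}\right), \qquad
\hat{\mathbf{P}} = \check{\mathbf{P}} - \check{\mathbf{P}}\mathbf{C}^T\!\left(\mathbf{C}\check{\mathbf{P}}\mathbf{C}^T + \mathbf{R}\right)^{-1}\mathbf{C}\check{\mathbf{P}}.
$$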

Using the SMW (Sherman-Morrison-Woodbury) identity…
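
The relevant instance of the identity, sketched in the same assumed notation:

$$
\check{\mathbf{P}} - \check{\mathbf{P}}\mathbf{C}^T\!\left(\mathbf{C}\check{\mathbf{P}}\mathbf{C}^T + \mathbf{R}\right)^{-1}\mathbf{C}\check{\mathbf{P}}
= \left(\check{\mathbf{P}}^{-1} + \mathbf{C}^T\mathbf{R}^{-1}\mathbf{C}\right)^{-1}.
$$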

From this we can get an expression for the posterior covariance, $\hat{\mathbf{P}}$.
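
Reading off the covariance from the conditioning result and applying the identity gives the information form (a sketch):

$$
\hat{\mathbf{P}} = \left(\check{\mathbf{P}}^{-1} + \mathbf{C}^T\mathbf{R}^{-1}\mathbf{C}\right)^{-1}.
$$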

Substituting what we had before into the expression for the posterior mean gives a linear system for $\hat{\mathbf{x}}$.
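
A sketch of that system under the notation assumed above:

$$
\left(\check{\mathbf{P}}^{-1} + \mathbf{C}^T\mathbf{R}^{-1}\mathbf{C}\right)\hat{\mathbf{x}}
= \check{\mathbf{P}}^{-1}\check{\mathbf{x}} + \mathbf{C}^T\mathbf{R}^{-1}\mathbf{y}.
$$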

Computing $\check{\mathbf{P}}^{-1}$ turns out to be pretty elegant.
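
This is because the lifted transition matrix has a very sparse inverse, so we never need to invert the dense $\check{\mathbf{P}}$ directly; a sketch in the assumed notation:

$$
\check{\mathbf{P}}^{-1} = \left(\mathbf{A}\mathbf{Q}\mathbf{A}^T\right)^{-1} = \mathbf{A}^{-T}\mathbf{Q}^{-1}\mathbf{A}^{-1}, \qquad
\mathbf{A}^{-1} =
\begin{bmatrix}
\mathbf{1} & & & & \\
-\mathbf{A}_0 & \mathbf{1} & & & \\
 & -\mathbf{A}_1 & \mathbf{1} & & \\
 & & \ddots & \ddots & \\
 & & & -\mathbf{A}_{K-1} & \mathbf{1}
\end{bmatrix},
$$

and since $\mathbf{Q}$ is block-diagonal, $\check{\mathbf{P}}^{-1}$ comes out block-tridiagonal.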

We can also restack our matrices in a way that makes for a nicer equation.
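
One such restacking collects the knowns, the model, and the noise covariances; the names $\mathbf{z}$, $\mathbf{H}$, and $\mathbf{W}$ below are assumptions rather than notation fixed by this note:

$$
\mathbf{z} = \begin{bmatrix} \mathbf{v} \\ \mathbf{y} \end{bmatrix}, \qquad
\mathbf{H} = \begin{bmatrix} \mathbf{A}^{-1} \\ \mathbf{C} \end{bmatrix}, \qquad
\mathbf{W} = \begin{bmatrix} \mathbf{Q} & \\ & \mathbf{R} \end{bmatrix}.
$$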

This simplifies the system of equations considerably.
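
With that restacking, the linear system above can be written compactly (a sketch, same assumed notation):

$$
\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)\hat{\mathbf{x}} = \mathbf{H}^T\mathbf{W}^{-1}\mathbf{z},
$$

since $\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H} = \mathbf{A}^{-T}\mathbf{Q}^{-1}\mathbf{A}^{-1} + \mathbf{C}^T\mathbf{R}^{-1}\mathbf{C} = \check{\mathbf{P}}^{-1} + \mathbf{C}^T\mathbf{R}^{-1}\mathbf{C}$ and $\mathbf{H}^T\mathbf{W}^{-1}\mathbf{z} = \check{\mathbf{P}}^{-1}\check{\mathbf{x}} + \mathbf{C}^T\mathbf{R}^{-1}\mathbf{y}$,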

which is exactly what we saw in Maximum A Posteriori! This is because we are working with Gaussians, whose mean and mode are the same.

It is important to note here that Bayesian Inference lets us retrieve the mean more easily.
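
To make the whole pipeline concrete, here is a small numerical sketch in NumPy. All model matrices, inputs, and measurements are made-up toy values (my own assumptions, not from this note); it builds the lifted quantities, then checks that conditioning the joint Gaussian and solving the restacked linear system give the same posterior.

```python
import numpy as np

# Toy batch linear-Gaussian estimation: all values below are made-up
# examples chosen only to exercise the lifted-form equations.
rng = np.random.default_rng(0)
K = 4          # timesteps after the initial state
n, m = 2, 1    # state and measurement dimensions
N = K + 1      # total number of states x_0 ... x_K

A_k = [np.array([[1.0, 0.1], [0.0, 1.0]]) for _ in range(K)]  # transition matrices
C_k = [np.array([[1.0, 0.0]]) for _ in range(N)]              # measurement matrices
Q_k = [0.01 * np.eye(n) for _ in range(N)]                    # Q_0 plays the role of P0_check
R_k = [0.10 * np.eye(m) for _ in range(N)]                    # measurement noise covariances

v = np.concatenate([np.array([0.0, 1.0])] + [np.zeros(n)] * K)  # (x0_check, v_1, ..., v_K)
y = rng.normal(size=N * m)                                       # fake measurements

# Lifted matrices: A^{-1} is block lower-bidiagonal, C/Q/R are block-diagonal.
A_inv = np.eye(N * n)
for k in range(1, N):
    A_inv[k*n:(k+1)*n, (k-1)*n:k*n] = -A_k[k-1]
A = np.linalg.inv(A_inv)                      # lifted (lower-triangular) transition matrix

C = np.zeros((N * m, N * n))
Q = np.zeros((N * n, N * n))
R = np.zeros((N * m, N * m))
for k in range(N):
    C[k*m:(k+1)*m, k*n:(k+1)*n] = C_k[k]
    Q[k*n:(k+1)*n, k*n:(k+1)*n] = Q_k[k]
    R[k*m:(k+1)*m, k*m:(k+1)*m] = R_k[k]

# Prior over the whole trajectory.
x_check = A @ v
P_check = A @ Q @ A.T

# Route 1: condition the joint Gaussian p(x, y) directly.
S = C @ P_check @ C.T + R
gain = P_check @ C.T @ np.linalg.inv(S)
x_hat_1 = x_check + gain @ (y - C @ x_check)
P_hat_1 = P_check - gain @ C @ P_check

# Route 2: the SMW-rearranged linear system (the MAP normal equations).
P_check_inv = A_inv.T @ np.linalg.inv(Q) @ A_inv        # elegant, sparse-friendly form
lhs = P_check_inv + C.T @ np.linalg.inv(R) @ C          # this is P_hat^{-1}
rhs = P_check_inv @ x_check + C.T @ np.linalg.inv(R) @ y
x_hat_2 = np.linalg.solve(lhs, rhs)
P_hat_2 = np.linalg.inv(lhs)

print(np.allclose(x_hat_1, x_hat_2))   # True: same posterior mean
print(np.allclose(P_hat_1, P_hat_2))   # True: same posterior covariance
```

The two routes agreeing numerically is exactly the point of the derivation: the mean from Bayesian inference solves the same normal equations as the MAP estimate, while the covariance comes along essentially for free.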