Once we understand how probability density functions (PDFs) work, we can extend that understanding to expectation.
Expectation
Expectation is essentially a weighted average: the familiar weighted average for discrete variables carries over to continuous distributions.
- If a random variable has discrete outcomes, each outcome is weighted by its probability and the weighted sum (average) is taken
- In the continuous case, this becomes an integral over each value of the random variable (x) weighted by its density (p(x)) (see the formulas written out below)
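Written out, the two definitions are:

$$
E[x] = \sum_i x_i\, p(x_i) \quad \text{(discrete)}, \qquad E[x] = \int_{-\infty}^{\infty} x\, p(x)\, dx \quad \text{(continuous)}
$$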
The above image shows some examples of how expectations involving alpha (a constant) and x (a random variable) are computed.
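For reference, two standard facts about a constant alpha inside an expectation (stated here because the properties below rely on them):

$$
E[\alpha] = \alpha, \qquad E[\alpha\, x] = \alpha\, E[x]
$$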
However, there are a few basic properties of expectation to understand for all the Kalman Filters and Particle Filters we will study:
- E[alpha + x] = alpha + E[x]
- E[x,y] (called Joint Expectation)
- E[x | y] (called Conditional Expectation)
- E[x + y] = E[x] + E[y] (derived below; a quick numerical check is also sketched after this list)
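As a quick numerical sanity check of the two equalities above (a minimal sketch; the distributions chosen for x and y below are arbitrary illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
alpha = 2.5

# x and y drawn from arbitrary illustrative distributions
x = rng.normal(loc=1.0, scale=2.0, size=n)
y = rng.exponential(scale=3.0, size=n)

# E[alpha + x] = alpha + E[x]
print(np.mean(alpha + x), alpha + np.mean(x))

# E[x + y] = E[x] + E[y]  (linearity holds even if x and y were dependent)
print(np.mean(x + y), np.mean(x) + np.mean(y))
```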
Correlation and Uncorrelation
Uncorrelation and Joint Expectation
Above we mentioned Joint Expectation as E[x,y]. x and y are said to be uncorrelated exactly when
E[x,y] = E[x]*E[y]
If x and y are independent random variables, this equality always holds, so independence implies uncorrelatedness.
However, the converse is not valid (i.e. if we only know they are uncorrelated, we cannot conclude independence from the equation above).
Example to show that uncorrelation does not mean independence
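One standard example (my illustration, not necessarily the one in the original notes): let x take the values -1, 0, 1 with probability 1/3 each, and let y = x^2. Then

$$
E[x] = 0, \qquad E[x\,y] = E[x^3] = 0 = E[x]\,E[y]
$$

so x and y are uncorrelated. But y is completely determined by x (for example, P(y = 0 | x = 0) = 1 while P(y = 0) = 1/3), so they are not independent.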
Connecting Variance to Expectation (I think it’s important!)
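The key identity here is the standard one expressing variance purely in terms of expectations:

$$
\mathrm{Var}(x) = E\big[(x - E[x])^2\big] = E[x^2] - 2\,E[x]\,E[x] + (E[x])^2 = E[x^2] - (E[x])^2
$$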
Covariances and Thinking in Vectors
If we have a simple vector equation of the form y = Ax + b, then we can find the covariance of y in terms of the covariance of x as:

$$
\Sigma_y = A\,\Sigma_x\,A^T
$$

(the constant offset b shifts the mean but leaves the covariance unchanged; a numerical sketch of this follows below).
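A minimal numerical check of this relationship (the matrices, means, and distribution below are illustrative assumptions, not values from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2-D example
Sigma_x = np.array([[2.0, 0.5],
                    [0.5, 1.0]])     # covariance of x
mu_x = np.array([1.0, -2.0])
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
b = np.array([0.5, 0.5])

# Sample x, transform to y = A x + b, and compare the sample covariance to A Sigma_x A^T
x = rng.multivariate_normal(mu_x, Sigma_x, size=200_000)
y = x @ A.T + b

print(np.cov(y, rowvar=False))       # empirical covariance of y
print(A @ Sigma_x @ A.T)             # analytical covariance: A Sigma_x A^T
```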
Projecting Multivariate Covariance
If z = f(x, y), then the covariance of z can be expressed (to first order, by linearizing f about the mean) as:

$$
\Sigma_z \approx J\,\Sigma\,J^T
$$

where J is the Jacobian of f evaluated at the mean and Σ is the joint covariance of x and y.
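A minimal sketch of this projection, using an assumed polar-to-Cartesian example z = f(r, theta) = (r cos(theta), r sin(theta)) and comparing the linearized covariance J Σ Jᵀ with a Monte Carlo estimate (all numbers below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([10.0, 0.5])                     # mean of [r, theta]
Sigma = np.array([[0.5**2, 0.0],
                  [0.0,    0.05**2]])          # joint covariance of r and theta

def f(r, theta):
    # Polar to Cartesian: z = (r cos(theta), r sin(theta))
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

# Jacobian of f evaluated at the mean
r0, t0 = mu
J = np.array([[np.cos(t0), -r0 * np.sin(t0)],
              [np.sin(t0),  r0 * np.cos(t0)]])

# First-order (linearized) projection of the covariance
Sigma_z_lin = J @ Sigma @ J.T

# Monte Carlo reference for comparison
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
z = f(samples[:, 0], samples[:, 1])
print(Sigma_z_lin)
print(np.cov(z, rowvar=False))
```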
Important Learning
Properties of Covariance Matrix
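As a small numerical illustration of two standard properties of a covariance matrix used in what follows, symmetry and positive semi-definiteness (the data below is arbitrary and only for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(10_000, 3))                    # arbitrary illustrative data
C = np.cov(X, rowvar=False)                         # sample covariance matrix

print(np.allclose(C, C.T))                          # symmetric
print((np.linalg.eigvalsh(C) >= -1e-12).all())      # nonnegative eigenvalues => PSD
```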
Properties of PSD
Note: In the second property below, he means that if A is positive semi-definite and B is positive definite, only then will the sum A + B be positive definite. (At least one of the two must be positive definite; the other only needs to be positive semi-definite.)
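A one-line justification of why this works: for any nonzero vector v,

$$
v^T (A + B)\, v = v^T A\, v + v^T B\, v \ge v^T B\, v > 0
$$

since A is PSD (the first term is at least 0) and B is PD (the second term is strictly positive), so A + B is positive definite.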