
This file gives elementary information on expectation, variance, and covariance for random variables.

Expectation, Variance, Covariance

Expectation

Definition

If X is a random variable with output space H and density f(x), then

E[X] = \int_H x\, f(x)\, dx
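As a quick numerical sketch of this definition (the uniform density on [0, 1] and the discretization step are arbitrary choices for illustration), the integral can be approximated by a Riemann sum:

```python
import numpy as np

# Hypothetical example: X uniform on H = [0, 1], so f(x) = 1 there.
dx = 1e-4
x = np.arange(0.0, 1.0, dx) + dx / 2   # midpoints of small cells covering H
f = np.ones_like(x)                     # uniform density f(x) = 1 on [0, 1]
expectation = np.sum(x * f) * dx        # Riemann sum approximating E[X] = 1/2
```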

Linearity

Sum of random variables

If X and Y are two random variables with respective output spaces H and K, respective marginal densities f and g, a joint density f(x,y), and finite expectations, then

E[X+Y] = E[X] + E[Y]

Idea of the proof

Let us consider the function q defined by q(x,y) = x + y, and set Z = q(X,Y).

E[q(X,Y)] = \int_H \int_K q(x,y)\, f(x,y)\, dx\, dy

E[Z] = \int_H \int_K (x+y)\, f(x,y)\, dx\, dy

E[Z] = \int_H \int_K x\, f(x,y)\, dx\, dy + \int_H \int_K y\, f(x,y)\, dx\, dy

E[Z] = \int_H x \left( \int_K f(x,y)\, dy \right) dx + \int_K y \left( \int_H f(x,y)\, dx \right) dy

E[Z] = \int_H x\, f(x)\, dx + \int_K y\, g(y)\, dy

E[Z] = E[X] + E[Y]
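The linearity property can be checked by Monte Carlo simulation (a sketch; the exponential and normal distributions, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200_000)   # E[X] = 2
y = rng.normal(loc=3.0, size=200_000)           # E[Y] = 3

# Sample mean of X + Y versus sum of the sample means: equal up to rounding.
lhs = np.mean(x + y)
rhs = np.mean(x) + np.mean(y)
```

Both quantities are close to E[X] + E[Y] = 5 up to Monte Carlo error.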

Product by a constant

We also have

E[aX]=aE[X]

Positivity

If X is a positive random variable, i.e. P(X \geq 0) = 1, then

E[X] \geq 0

Moreover, if X is a positive random variable and E[X] = 0, then

P(X = 0) = 1

Constant

E[a]=a

Covariance and Variance

Definition

Cov(X,Y) = E[(X - E[X])(Y - E[Y])]

Properties

Symmetry

Cov(X,Y)=Cov(Y,X)

Other expression

Sometimes more convenient

Cov(X,Y) = E[XY] - E[X]\, E[Y]

In particular

Var(X) = Cov(X,X) = E[X^2] - E[X]^2
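This moment identity can be verified numerically (a sketch; the normal distribution, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(loc=1.0, scale=2.0, size=500_000)   # Var(Z) = 4

var_direct = np.mean((z - np.mean(z)) ** 2)        # E[(Z - E[Z])^2]
var_moment = np.mean(z ** 2) - np.mean(z) ** 2     # E[Z^2] - E[Z]^2
```

The two expressions agree exactly (up to floating-point rounding) on any sample, and both approximate Var(Z) = 4.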

Linearity

Cov(aX+bY,Z)=aCov(X,Z)+bCov(Y,Z)

Variance and covariance

Cov(X,X) = E[(X - E[X])(X - E[X])] = Var(X)

Note that the variance is always nonnegative (it is the expectation of the square of a random variable).

Covariance between a variable and a constant

Cov(X,a)=0

Consequence :

Var(a)=0

Conversely, if a random variable has a variance equal to 0, then the variable is constant (with probability 1).

Variance of a linear combination

Var\left( \sum_{i=1}^n \lambda_i Z_i \right) = \sum_{i=1}^n \sum_{j=1}^n \lambda_i \lambda_j\, Cov(Z_i, Z_j)

Applications

Var(aX) = a^2\, Var(X)

Var(aX + bY) = a^2\, Cov(X,X) + 2ab\, Cov(X,Y) + b^2\, Cov(Y,Y)

Var(aX - bY) = a^2\, Cov(X,X) - 2ab\, Cov(X,Y) + b^2\, Cov(Y,Y)

Var(X+a)=Var(X)
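The expansion of Var(aX + bY) can be checked against sample statistics (a sketch; the distributions, coefficients, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=300_000)
y = 0.5 * x + rng.normal(size=300_000)   # make Y correlated with X
a, b = 2.0, -3.0

cov = np.cov(x, y)                        # 2x2 sample covariance matrix
lhs = np.var(a * x + b * y, ddof=1)       # direct sample variance
rhs = a**2 * cov[0, 0] + 2 * a * b * cov[0, 1] + b**2 * cov[1, 1]
```

Because the sample covariance is bilinear as well, the two sides agree exactly up to rounding.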

Covariance matrix

Consider a set of random variables Z_1, \dots, Z_n.

For each pair (k,l), denote c_{kl} = Cov(Z_k, Z_l).

We can store the c_{kl}'s in a matrix

\Sigma = \begin{bmatrix} c_{11} & \cdots & c_{1n} \\ c_{21} & \cdots & c_{2n} \\ \vdots & & \vdots \\ c_{n1} & \cdots & c_{nn} \end{bmatrix}

\Sigma is named the covariance matrix of the random vector Z = \begin{bmatrix} Z_1 \\ \vdots \\ Z_n \end{bmatrix}

Note that we can rewrite

Var\left( \sum_{i=1}^n \lambda_i Z_i \right) = \lambda^T \Sigma \lambda

where \lambda = \begin{bmatrix} \lambda_1 \\ \vdots \\ \lambda_n \end{bmatrix}

and {}^T designates the transposition:

\lambda^T = \begin{bmatrix} \lambda_1 & \cdots & \lambda_n \end{bmatrix}

Since a variance is always nonnegative, the variance of any linear combination has to be nonnegative. Therefore, a covariance matrix is always positive semi-definite, i.e.

for every \lambda, \quad \lambda^T \Sigma \lambda \geq 0
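This quadratic form can be evaluated numerically (a sketch; the number of variables, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.normal(size=(4, 10_000))   # four random variables, one sample row each
sigma = np.cov(z)                   # 4x4 sample covariance matrix
lam = rng.normal(size=4)            # an arbitrary coefficient vector lambda
quad = lam @ sigma @ lam            # lambda^T Sigma lambda
```

The quadratic form equals the sample variance of the linear combination lam @ z, so it is nonnegative for any choice of lam.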

Cross-covariance matrix

Let us consider two random vectors X = (X_1, \dots, X_n) and Y = (Y_1, \dots, Y_p).

We can consider the cross-covariance matrix Cov(X,Y) whose element in row i and column j is Cov(X_i, Y_j).

If A and B are matrices of constants, then

Cov(AX, BY) = A\, Cov(X,Y)\, B^T
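This transformation rule can be checked with sample cross-covariances (a sketch; the helper `cross_cov` and all the dimensions, matrices, and seed below are illustrative choices, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
x = rng.normal(size=(3, n))                        # random vector X (3 components)
y = 0.5 * x[:2] + rng.normal(size=(2, n))           # random vector Y (2 components)

def cross_cov(u, v):
    """Sample cross-covariance matrix: entry (i, j) estimates Cov(U_i, V_j)."""
    uc = u - u.mean(axis=1, keepdims=True)
    vc = v - v.mean(axis=1, keepdims=True)
    return uc @ vc.T / (u.shape[1] - 1)

A = rng.normal(size=(2, 3))    # arbitrary constant matrices
B = rng.normal(size=(3, 2))
lhs = cross_cov(A @ x, B @ y)             # Cov(AX, BY)
rhs = A @ cross_cov(x, y) @ B.T           # A Cov(X, Y) B^T
```

The identity holds exactly for the sample estimator as well, since it is bilinear in its arguments.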

Exercise

Suppose that we want to estimate a quantity modeled by a random variable Z_0 as a linear combination of known quantities Z_1, \dots, Z_n stored in a vector Z = \begin{bmatrix} Z_1 \\ \vdots \\ Z_n \end{bmatrix}.

We will denote \hat{Z}_0 = \sum_{i=1}^n \lambda_i Z_i = \lambda^T Z this (random) estimator.

We know the covariance matrix of the full vector (Z_0, Z_1, \dots, Z_n), which we write in blocks for convenience:

\begin{bmatrix} \sigma_0^2 & c_0^T \\ c_0 & C \end{bmatrix}

where

  • \sigma_0^2 = Var(Z_0)
  • c_0 = Cov(Z, Z_0)
  • C is the covariance matrix of Z.

Compute the variance of the error \hat{Z}_0 - Z_0.

Solution

Var(\hat{Z}_0 - Z_0) = Cov(\hat{Z}_0 - Z_0, \hat{Z}_0 - Z_0)

Var(\hat{Z}_0 - Z_0) = Var(\hat{Z}_0) - 2\, Cov(\hat{Z}_0, Z_0) + Var(Z_0)

Var(\hat{Z}_0 - Z_0) = Var(\lambda^T Z) - 2\, Cov(\lambda^T Z, Z_0) + \sigma_0^2

Var(\hat{Z}_0 - Z_0) = \lambda^T Var(Z) \lambda - 2\, \lambda^T Cov(Z, Z_0) + \sigma_0^2

Var(\hat{Z}_0 - Z_0) = \lambda^T C \lambda - 2\, \lambda^T c_0 + \sigma_0^2
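The final formula can be confirmed by simulation (a sketch; the construction of the target variable, the weights, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n_vars, n_samples = 3, 400_000
z = rng.normal(size=(n_vars, n_samples))            # known variables Z_1..Z_n
z0 = z.sum(axis=0) + rng.normal(size=n_samples)     # target Z_0, correlated with Z

full = np.cov(np.vstack([z0, z]))   # covariance of (Z_0, Z_1, ..., Z_n)
sigma2_0 = full[0, 0]               # Var(Z_0)
c0 = full[1:, 0]                    # Cov(Z, Z_0)
C = full[1:, 1:]                    # covariance matrix of Z

lam = rng.normal(size=n_vars)       # an arbitrary weight vector lambda
err_var_formula = lam @ C @ lam - 2 * lam @ c0 + sigma2_0
err_var_direct = np.var(lam @ z - z0, ddof=1)       # variance of the error
```

Both computations of the error variance agree exactly up to rounding, since the formula is an identity in the (sample) covariances.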

Correlation coefficient

The covariance is a measure of the link between two variables. However it depends on the scale of each variable. To have a similar measure which is invariant by rescaling, we can use the correlation coefficient:

\rho(X,Y) = \dfrac{Cov(X,Y)}{\sqrt{Var(X)\, Var(Y)}}

When the correlation coefficient is equal to 1 or -1, we have

Y = aX + b

with

  • a > 0 if \rho(X,Y) = 1
  • a < 0 if \rho(X,Y) = -1

Note that ρ(X,Y) can be equal to 0 even if the variables are strongly linked.

The usual example is a variable X with an even density (f(-x) = f(x)) and Y = X^2:

Cov(X,Y) = Cov(X, X^2) = E[X^3] - E[X]\, E[X^2] = E[X^3] = \int_{\mathbb{R}} x^3 f(x)\, dx = 0

(the term E[X]\, E[X^2] vanishes because E[X] = 0 for an even density).
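Both situations can be observed numerically (a sketch; the standard normal density is an arbitrary choice of even density, and the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=1_000_000)   # even (symmetric) density: f(-x) = f(x)
y = x ** 2                        # fully determined by x, yet uncorrelated

rho = np.corrcoef(x, y)[0, 1]                # ≈ 0 despite the strong link
rho_linear = np.corrcoef(x, 2 * x + 1)[0, 1] # linear relation with a > 0
```

The sample correlation between X and X^2 is close to 0, while the correlation with a linear function 2X + 1 is 1.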