Sheet 2

Prof. Leif Döring, Felix Benning
Course: Wahrscheinlichkeitstheorie 1
Semester: FSS 2022
Tutorial date: 28.02.2022
Due: 10:15 in the exercise on Monday, 28.02.2022
Exercise 1.

Calculate the following conditional expectations and determine the conditional distributions.

  1. (i)

    $\mathbb{E}[(Y-X)^+ \mid X]$, where $X, Y$ are independent uniform random variables on $[0,1]$.

    Solution.

    Using our kernel representation we get

    \begin{align*}
    \mathbb{E}[(Y-X)^+ \mid X]
    &= \int (y-X)^+ \, \mathbb{P}(Y \in dy \mid X) \\
    &\overset{\text{independent}}{=} \int (y-X)^+ \, \mathbb{P}(Y \in dy) \\
    &= \int_0^1 (y-X)^+ \, dy = \int_X^1 (y-X) \, dy = \int_0^{1-X} s \, ds \\
    &= \tfrac{1}{2}(1-X)^2
    \end{align*}
  2. (ii)

    $\mathbb{E}[X \mid Y]$, where $(X,Y)$ has a joint distribution with density:

    \[
      f(x,y) = 4y(x-y)e^{-(x+y)} \mathbb{1}_{0<y<x}.
    \]
    Solution.

    We know that

    \[
      \mathbb{E}[X \mid Y] = \int x \, \mathbb{P}(X \in dx \mid Y) = \int x f(x \mid Y) \, dx
    \]

    with

    \[
      f(x \mid y) = \frac{f(x,y)}{f_Y(y)},
    \]

    where

    \begin{align*}
    f_Y(y)
    &= \int f(x,y) \, dx = \int 4y(x-y)e^{-(x+y)} \mathbb{1}_{0<y<x} \, dx = \int_y^\infty 4y(x-y)e^{-(x+y)} \, dx \\
    &\overset{z=x-y}{=} \int_0^\infty 4yz e^{-(z+2y)} \, dz = 4y e^{-2y} \underbrace{\int_0^\infty z e^{-z} \, dz}_{=\Gamma(2)=1!} \\
    &= 4y e^{-2y}.
    \end{align*}

    Using

    \begin{align*}
    \int x f(x,y) \, dx
    &= \int_y^\infty 4xy(x-y)e^{-(x+y)} \, dx = \int_0^\infty 4(z+y)yz e^{-(z+2y)} \, dz \\
    &= 4y e^{-2y} \int_0^\infty (z+y)z e^{-z} \, dz = 4y e^{-2y}\bigl(\Gamma(3) + y\Gamma(2)\bigr) \\
    &= 4y e^{-2y}(2+y)
    \end{align*}

    we therefore get

    \[
      \mathbb{E}[X \mid Y] = \int x f(x \mid Y) \, dx = \frac{\int x f(x,Y) \, dx}{f_Y(Y)} = \frac{4Y e^{-2Y}(2+Y)}{4Y e^{-2Y}} = 2+Y.
    \]
  3. (iii)

    $\mathbb{E}[X \mid X+Y]$, where $X, Y$ are independent Poisson random variables with parameters $\lambda$ and $\mu$ respectively.

    Solution.

    Since $X$ and $Y$ are both Poisson distributed, we are in the discrete setting. So first, we want to compute the conditional probability:

    \[
      \mathbb{P}(X=x \mid X+Y=z) = \frac{\mathbb{P}(X=x,\, X+Y=z)}{\mathbb{P}(X+Y=z)}.
    \]

    Let us calculate each part separately. Starting with the numerator, we get:

    \begin{align*}
    \mathbb{P}(X=x,\, X+Y=z)
    &= \mathbb{P}(X=x,\, Y=z-x) \\
    &= \frac{\lambda^x e^{-\lambda}}{x!} \frac{\mu^{z-x} e^{-\mu}}{(z-x)!} \mathbb{1}_{x \leq z}
    \end{align*}

    For the denominator we can use the result from Stochastics 1 that $X+Y \sim \mathrm{Poi}(\lambda+\mu)$. In case you have not seen it yet, the following calculation shows this claim:

    \begin{align*}
    \mathbb{P}(X+Y=z)
    &= \sum_{x \geq 0} \mathbb{P}(X=x,\, Y=z-x) \\
    &= \sum_{x=0}^{z} \frac{\lambda^x e^{-\lambda}}{x!} \frac{\mu^{z-x} e^{-\mu}}{(z-x)!} \\
    &= \frac{e^{-(\lambda+\mu)}}{z!} \underbrace{\sum_{x=0}^{z} \frac{z!}{x!(z-x)!} \mu^{z-x} \lambda^x}_{=(\mu+\lambda)^z} \\
    &= \frac{(\mu+\lambda)^z}{z!} e^{-(\lambda+\mu)}
    \end{align*}

    Now we can calculate the conditional expectation $\mathbb{E}[X \mid X+Y]$ on the event $\{X+Y=z\}$:

    \begin{align*}
    \mathbb{E}[X \mid X+Y=z]
    &= \sum_{x \geq 0} x \, \mathbb{P}(X=x \mid X+Y=z) \\
    &= \sum_{x=0}^{z} x\, \frac{\frac{\lambda^x e^{-\lambda}}{x!} \frac{\mu^{z-x} e^{-\mu}}{(z-x)!}}{\frac{(\mu+\lambda)^z}{z!} e^{-(\lambda+\mu)}} \\
    &= \frac{\lambda}{(\mu+\lambda)^z} \sum_{x=0}^{z} \Bigl(\frac{d}{d\lambda} \lambda^x\Bigr) \mu^{z-x} \frac{z!}{x!(z-x)!} \\
    &= \frac{\lambda}{(\mu+\lambda)^z} \frac{d}{d\lambda} \underbrace{\sum_{x=0}^{z} \lambda^x \mu^{z-x} \binom{z}{x}}_{=(\lambda+\mu)^z} \\
    &= \frac{\lambda z}{\mu+\lambda}
    \end{align*}

    Thus, we can conclude that $\mathbb{E}[X \mid X+Y] = \frac{\lambda}{\lambda+\mu}(X+Y)$; a Monte Carlo sanity check for all three parts is sketched after this exercise. ∎
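The closed forms from all three parts can be sanity-checked by simulation. The following is a minimal Monte Carlo sketch, assuming numpy is available; the seed, bin widths and parameter values are ad-hoc choices. For part (ii) it samples from the joint density via $f_Y(y) = 4y e^{-2y}$ (a Gamma density with shape $2$ and scale $\tfrac{1}{2}$) and $f(x \mid y) = f(x,y)/f_Y(y) = (x-y)e^{-(x-y)}$ for $x > y$, i.e. $X = Y + Z$ with $Z$ Gamma distributed with shape $2$ and scale $1$, independent of $Y$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 10**6

# (i) E[(Y-X)^+ | X] = (1-X)^2 / 2: condition on X lying in a small bin around x0.
X, Y = rng.uniform(size=N), rng.uniform(size=N)
x0 = 0.3
mask = np.abs(X - x0) < 0.01
print(np.mean(np.maximum(Y[mask] - X[mask], 0.0)), (1 - x0) ** 2 / 2)

# (ii) E[X | Y] = 2 + Y: sample Y ~ Gamma(2, scale=1/2) and X = Y + Gamma(2, scale=1).
Yg = rng.gamma(shape=2, scale=0.5, size=N)
Xg = Yg + rng.gamma(shape=2, scale=1.0, size=N)
y0 = 1.0
mask = np.abs(Yg - y0) < 0.01
print(np.mean(Xg[mask]), 2 + y0)

# (iii) E[X | X+Y] = lambda/(lambda+mu) * (X+Y) for independent Poissons.
lam, mu = 2.0, 3.0
Xp, Yp = rng.poisson(lam, size=N), rng.poisson(mu, size=N)
z0 = 4
mask = (Xp + Yp) == z0
print(np.mean(Xp[mask]), lam / (lam + mu) * z0)
\end{verbatim}

Each pair of printed numbers should agree up to Monte Carlo and binning error.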

Exercise 2 ((Sub-/Super-)Martingales).
  1. (i)

    The (sub-/super-)martingale property holds over several time steps, i.e.

    \[
      \mathbb{E}[X_m \mid \mathcal{F}_n]
      \begin{cases}
        = X_n & \text{martingale} \\
        \leq X_n & \text{supermartingale} \\
        \geq X_n & \text{submartingale}
      \end{cases}
      \qquad \forall\, m \geq n
    \]
    Solution.

    We use induction over $m$ with induction start $m=n$, where the statement is trivial. Assuming the statement holds for $m$, we have for $m+1$

    \[
      \mathbb{E}[X_{m+1} \mid \mathcal{F}_n]
      = \mathbb{E}\bigl[\mathbb{E}[X_{m+1} \mid \mathcal{F}_m] \bigm| \mathcal{F}_n\bigr]
      \mathrel{\substack{= \\ \leq \\ \geq}}
      \mathbb{E}[X_m \mid \mathcal{F}_n]
      \mathrel{\substack{= \\ \leq \\ \geq}}
      X_n,
    \]

    where the first relation uses the (sub-/super-)martingale property of $X$ together with monotonicity of conditional expectation, and the second uses the induction hypothesis.
  2. (ii)

    The expectation stays constant/decreases/increases, i.e.

    \[
      \mathbb{E}[X_m]
      \begin{cases}
        = \mathbb{E}[X_n] & \text{martingale} \\
        \leq \mathbb{E}[X_n] & \text{supermartingale} \\
        \geq \mathbb{E}[X_n] & \text{submartingale}
      \end{cases}
      \qquad \forall\, m \geq n
    \]
    Solution.

    Simply taking the expectation of the previous result and using the tower property $\mathbb{E}\bigl[\mathbb{E}[X_m \mid \mathcal{F}_n]\bigr] = \mathbb{E}[X_m]$ yields the claim; a small numerical illustration follows after this exercise. ∎

  3. (iii)

    X is a supermartingale iff -X is a submartingale.

    Solution.

    Adaptedness and $\mathbb{E}[|X_n|] < \infty$ hold for $X$ if and only if they hold for $-X$. So let $X$ be a supermartingale, then we have

    \[
      \mathbb{E}[-X_{n+1} \mid \mathcal{F}_n] = -\underbrace{\mathbb{E}[X_{n+1} \mid \mathcal{F}_n]}_{\leq X_n} \geq -X_n,
    \]

    therefore -X is a submartingale. The opposite direction is analogous. ∎
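As a concrete illustration of (ii), here is a minimal simulation sketch (assuming numpy; the simple symmetric random walk used here is a standard example of a martingale, not part of the exercise): the empirical mean of the walk stays essentially constant, namely near $\mathbb{E}[X_0] = 0$, across all time steps.

\begin{verbatim}
import numpy as np

# Simple symmetric random walk X_n = sum of n i.i.d. +-1 steps (a martingale),
# so E[X_m] = E[X_0] = 0 for every m.
rng = np.random.default_rng(1)
steps = rng.choice([-1, 1], size=(100_000, 20))  # 100000 paths, 20 time steps
paths = np.cumsum(steps, axis=1)
print(paths.mean(axis=0))  # every entry is close to 0
\end{verbatim}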

Exercise 3 (Markov Inequality for Conditional Expectation).

Prove that for two random variables $X, Y$ and an increasing function $h\colon [0,\infty) \to [0,\infty)$ we have for any $\epsilon > 0$

\[
  \mathbb{P}(|X| \geq \epsilon \mid Y) \leq \frac{\mathbb{E}[h(|X|) \mid Y]}{h(\epsilon)} \quad \text{a.s.}
\]
Remark.

The proof works the same when one conditions on a sigma algebra instead of a random variable Y.

Solution.

Simply using monotonicity of conditional expectation we get

\begin{align*}
\mathbb{P}(|X| \geq \epsilon \mid Y)
&= \mathbb{E}[\mathbb{1}_{|X| \geq \epsilon} \mid Y]
\leq \mathbb{E}[\mathbb{1}_{h(|X|) \geq h(\epsilon)} \mid Y]
\leq \mathbb{E}\Bigl[\tfrac{h(|X|)}{h(\epsilon)} \mathbb{1}_{h(|X|) \geq h(\epsilon)} \Bigm| Y\Bigr] \\
&\leq \frac{\mathbb{E}[h(|X|) \mid Y]}{h(\epsilon)}.
\end{align*}
∎
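As a quick numerical check of the inequality, here is a minimal sketch (assuming numpy; the joint law of $(X,Y)$ and the choice $h(t) = t^2$ are hypothetical examples, not part of the exercise): conditionally on each value of a discrete $Y$, the estimated left-hand side stays below the estimated right-hand side.

\begin{verbatim}
import numpy as np

# Check P(|X| >= eps | Y = y) <= E[h(|X|) | Y = y] / h(eps) with h(t) = t^2
# for the toy model Y ~ Uniform({0,1}) and X | Y = y ~ N(0, 1 + y).
rng = np.random.default_rng(2)
N = 10**6
Y = rng.integers(0, 2, size=N)
X = rng.normal(size=N) * np.sqrt(1 + Y)
eps = 1.5
h = lambda t: t ** 2
for y in (0, 1):
    m = Y == y
    lhs = np.mean(np.abs(X[m]) >= eps)       # P(|X| >= eps | Y = y)
    rhs = np.mean(h(np.abs(X[m]))) / h(eps)  # E[h(|X|) | Y = y] / h(eps)
    print(y, lhs, rhs, lhs <= rhs)
\end{verbatim}
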
Exercise 4.

Let $X, Y$ be independent with standard normal distribution $\mathcal{N}(0,1)$. Show that, for any $z$, the conditional distribution of $X$ given $X+Y=z$ is $\mathcal{N}(z/2, 1/2)$.

Solution.

We know that

\[
  \begin{pmatrix} X \\ X+Y \end{pmatrix}
  = \underbrace{\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}}_{=:A}
  \begin{pmatrix} X \\ Y \end{pmatrix}
  \sim \mathcal{N}(0, \Sigma),
\]

with

\[
  \Sigma = A A^T = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}.
\]

With

\[
  \det(\Sigma) = 2 - 1 = 1
  \quad\text{and}\quad
  \Sigma^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}
\]

we can write the density of this vector explicitly

\begin{align*}
f(x,z)
&= \det(2\pi\Sigma)^{-\frac{1}{2}}
   \exp\biggl(-\frac{1}{2} \begin{pmatrix} x & z \end{pmatrix} \Sigma^{-1} \begin{pmatrix} x \\ z \end{pmatrix}\biggr) \\
&= \bigl[(2\pi)^2 \det(\Sigma)\bigr]^{-\frac{1}{2}}
   \exp\biggl(-\frac{1}{2} \begin{pmatrix} 2x - z & -x + z \end{pmatrix} \begin{pmatrix} x \\ z \end{pmatrix}\biggr) \\
&= \frac{1}{2\pi} \exp\Bigl(-\frac{1}{2}\bigl(2x^2 - zx - xz + z^2\bigr)\Bigr).
\end{align*}

Lastly we have $X+Y \sim \mathcal{N}(0,2)$ and therefore the marginal density is

\[
  f_{X+Y}(z) = \frac{1}{\sqrt{4\pi}} \exp\Bigl(-\frac{z^2}{4}\Bigr).
\]

This allows us to calculate the conditional density

\begin{align*}
f_{X \mid X+Y}(x \mid z)
&= \frac{f(x,z)}{f_{X+Y}(z)}
 = \frac{\sqrt{4\pi}}{2\pi} \exp\Bigl(-\frac{1}{2}\bigl(2x^2 - 2zx + z^2\bigr) + \frac{z^2}{4}\Bigr) \\
&= \frac{1}{\sqrt{\pi}} \exp\Bigl(-\frac{1}{2}\bigl(2x^2 - 2zx + \tfrac{1}{2}z^2\bigr)\Bigr) \\
&= \frac{1}{\sqrt{\pi}} \exp\Bigl(-\bigl(x^2 - 2\tfrac{z}{2}x + \tfrac{1}{4}z^2\bigr)\Bigr) \\
&= \frac{1}{\sqrt{\pi}} \exp\Bigl(-\bigl(x - \tfrac{z}{2}\bigr)^2\Bigr).
\end{align*}

This is the density of $\mathcal{N}\bigl(\tfrac{z}{2}, \tfrac{1}{2}\bigr)$, which finally implies that the conditional distribution of $X$ given $X+Y=z$ is $\mathcal{N}\bigl(\tfrac{z}{2}, \tfrac{1}{2}\bigr)$. ∎
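The result can also be checked by simulation. This is a minimal sketch (assuming numpy; the value $z = 1$ and the window width $0.01$ are ad-hoc choices): among the samples for which $X+Y$ lands near $z$, the empirical mean and variance of $X$ should be close to $z/2$ and $1/2$.

\begin{verbatim}
import numpy as np

# Sample (X, Y) i.i.d. N(0,1), keep the samples with X + Y close to z,
# and compare the conditional mean/variance of X with z/2 and 1/2.
rng = np.random.default_rng(3)
N = 10**6
X, Y = rng.normal(size=N), rng.normal(size=N)
z = 1.0
mask = np.abs(X + Y - z) < 0.01
print(X[mask].mean(), z / 2)  # approx 0.5
print(X[mask].var(), 1 / 2)   # approx 0.5
\end{verbatim}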

Exercise 5 (Stopping Times).
  1. (i)

    Suppose $T_1, T_2, \dots$ are $(\mathcal{F}_n)$-stopping times. Prove that $\inf_k T_k$, $\sup_k T_k$, $\liminf_k T_k$ and $\limsup_k T_k$ are stopping times.

    Solution.

    $S$ is a stopping time iff $\{S \leq n\} \in \mathcal{F}_n$ for all $n$. So we just need to check this definition in every case:

    1. (a)

      \[
        \{\inf_k T_k \leq n\} = \bigcup_k \{T_k \leq n\} \in \mathcal{F}_n
      \]

    2. (b)

      \[
        \{\sup_k T_k \leq n\} = \bigcap_k \{T_k \leq n\} \in \mathcal{F}_n
      \]

    3. (c)

      \[
        \{\liminf_k T_k \leq n\}
        = \{\lim_k \inf_{j \geq k} T_j \leq n\}
        = \bigcap_k \underbrace{\{\inf_{j \geq k} T_j \leq n\}}_{\in \mathcal{F}_n \text{ by (a)}}
        \in \mathcal{F}_n,
      \]

      since $\inf_{j \geq k} T_j$ increases to $\liminf_k T_k$.

    4. (d)

      \[
        \{\limsup_k T_k \leq n\}
        = \{\lim_k \sup_{j \geq k} T_j \leq n\}
        = \bigcup_k \underbrace{\{\sup_{j \geq k} T_j \leq n\}}_{\in \mathcal{F}_n \text{ by (b)}}
        \in \mathcal{F}_n,
      \]

      where the second equality holds because the non-increasing, integer-valued sequence $\sup_{j \geq k} T_j$ attains its limit, so the limit is $\leq n$ iff some term is $\leq n$.

  2. (ii)

    Suppose $T, S$ are $\mathcal{F}_n$-stopping times. Prove $\{S \leq T\} \in \mathcal{F}_{S \wedge T}$ and $\{S = T\} \in \mathcal{F}_{S \wedge T}$.

    Solution.

    To prove $A \in \mathcal{F}_{S \wedge T}$ one needs to prove $A \cap \{S \wedge T \leq n\} \in \mathcal{F}_n$ for all $n$, so

    \begin{align*}
    \{S \leq T\} \cap \{S \wedge T \leq n\}
    &= \bigl[\{S \leq n,\, S \leq T\} \cup \{S > n,\, S \leq T\}\bigr] \cap \{S \wedge T \leq n\} \\
    &= \Bigl[\bigcup_{k=0}^{n} \{S \leq k \leq T\} \cup \{S > n,\, S \leq T\}\Bigr] \cap \{S \wedge T \leq n\} \\
    &= \bigcup_{k=0}^{n} \{S \leq k \leq T\} \cup \underbrace{\{S > n,\, S \leq T,\, S \wedge T \leq n\}}_{=\emptyset} \\
    &= \bigcup_{k=0}^{n} \bigl[\underbrace{\{S \leq k\}}_{\in \mathcal{F}_k \subseteq \mathcal{F}_n} \cap \underbrace{\{k \leq T\}}_{\in \mathcal{F}_k \subseteq \mathcal{F}_n}\bigr] \in \mathcal{F}_n
    \end{align*}

    Similarly we have $\{T \leq S\} \in \mathcal{F}_{S \wedge T}$, therefore their intersection $\{S = T\}$ is also in $\mathcal{F}_{S \wedge T}$. ∎

  3. (iii)

    (The Last Hitting Time is not a Stopping Time in General) Consider the stochastic process $X$ defined by $X_0 = 0$, $X_1 \sim \mathcal{U}(\{0,1\})$ and $X_k = 1$ for all $k > 1$. Let $\mathcal{F}_n = \sigma(X_1, \dots, X_n)$ be the natural filtration generated by $X$. Show that the last hitting time of $0$

    \[
      L := \sup\{k \geq 0 : X_k = 0\}
    \]

    is not a stopping time.

    Solution.

    We have $\mathcal{F}_0 = \{\emptyset, \Omega\}$ and

    \[
      L = \begin{cases} 0 & X_1 = 1 \\ 1 & X_1 = 0 \end{cases}
    \]

    therefore L is not a stopping time since

    \[
      \{L \leq 0\} = \{X_1 = 1\} \notin \mathcal{F}_0.
    \]
    ∎
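A tiny simulation sketch (assuming numpy; purely illustrative) makes the obstruction concrete: $L$ is a deterministic function of $X_1$, so deciding the event $\{L \leq 0\}$ at time $0$ would require information that $\mathcal{F}_0 = \{\emptyset, \Omega\}$ does not contain.

\begin{verbatim}
import numpy as np

# X_0 = 0, X_1 ~ Uniform({0,1}), X_k = 1 for k > 1, so the last hitting time of 0
# is L = 1 if X_1 = 0 and L = 0 if X_1 = 1. The event {L <= 0} = {X_1 = 1} is
# decided by the coin flip X_1, which is not measurable at time 0.
rng = np.random.default_rng(4)
X1 = rng.integers(0, 2, size=10)
L = np.where(X1 == 0, 1, 0)
print(list(zip(X1.tolist(), L.tolist())))
\end{verbatim}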