Sheet 9
Course: Wahrscheinlichkeitstheorie 1
Semester: FSS 2022
Tutorial date: 02.05.2022
Due: 10:15 in the exercise on Monday, 02.05.2022

Exercise 1 (Calculating Moments).
Using the characteristic function, calculate the first three moments of
-
(i)
Solution.
We have by Example 7.4.12 of the lecture, so by Theorem 7.6.1
-
(ii)
Solution.
We have by Example 7.4.12 of the lecture, so by Theorem 7.6.1
-
(iii)
Solution.
We have by Example 7.4.10 of the lecture, so by Theorem 7.6.1
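All three parts rest on the same moment formula from Theorem 7.6.1; a sketch of the mechanism, writing \(\varphi_X\) for the characteristic function of \(X\) (notation assumed):

```latex
% Theorem 7.6.1 (moment formula): if \mathbb{E}|X|^k < \infty, then
% \varphi_X is k times differentiable at 0 and
\varphi_X^{(k)}(0) = i^{k}\,\mathbb{E}[X^{k}],
\qquad\text{hence}\qquad
\mathbb{E}[X^{k}] = i^{-k}\,\varphi_X^{(k)}(0), \quad k = 1,2,3.
```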
Exercise 2 (Gaussian Vectors 2).
In Sheet 8 Exercise 4 we defined Gaussian vectors slightly differently than in the lecture. Here we will show that the two definitions are equivalent.
So let be a Gaussian vector as defined in the lecture with independent . Since we have talked about centered Gaussian vectors before, let us first deal with the expectation.
-
(i)
Prove that , i.e.
Solution.
Let be the -th row of . Then
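A sketch of the computation, assuming (as in the lecture's definition) \(X = AZ + \mu\) with \(Z\) a vector of independent standard normal random variables (notation is ours):

```latex
\mathbb{E}[X_i]
= \mathbb{E}\Big[\sum_{j} a_{ij} Z_j + \mu_i\Big]
= \sum_{j} a_{ij}\,\mathbb{E}[Z_j] + \mu_i
= \mu_i,
\qquad\text{so}\qquad \mathbb{E}[X] = \mu.
```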
-
(ii)
Prove that for any random vector (not just Gaussian) the characteristic function of is simply
Solution.
We can interpret as a random variable and use 7.4.9 (iv) to prove that the characteristic function of the sum is the product of the individual characteristic functions (because a constant is independent of any random variable). So we only need to calculate the characteristic function of a constant
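The shift argument can be written out as follows, with \(Y\) the random vector and \(c\) the constant shift (notation is ours):

```latex
\varphi_{Y+c}(t)
= \mathbb{E}\big[e^{i\langle t,\,Y+c\rangle}\big]
= e^{i\langle t,\,c\rangle}\,\mathbb{E}\big[e^{i\langle t,\,Y\rangle}\big]
= e^{i\langle t,\,c\rangle}\,\varphi_Y(t).
```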
Now let us continue on to the (co-)variance and characteristic function.
-
(iii)
Prove that for .
Solution.
We can assume without loss of generality since we are subtracting from in the covariance anyway, so
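With the centering assumption \(\mu = 0\), i.e. \(X = AZ\), the covariance computation can be sketched as follows (using \(\mathbb{E}[Z_k Z_l] = \delta_{kl}\); notation is ours):

```latex
\operatorname{Cov}(X_i, X_j)
= \mathbb{E}\Big[\sum_{k} a_{ik} Z_k \sum_{l} a_{jl} Z_l\Big]
= \sum_{k,l} a_{ik}\,a_{jl}\,\mathbb{E}[Z_k Z_l]
= \sum_{k} a_{ik}\,a_{jk}
= (AA^{T})_{ij}.
```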
-
(iv)
Prove that . (In particular every linear combination is a Gaussian random variable).
Solution.
We have
The first part is a constant, while the second part is a linear combination of independent centered normal random variables. Therefore it is normal with expected value . Its variance is
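In our notation the claim reads as follows (a sketch, assuming \(X = AZ + \mu\)):

```latex
\langle t, X\rangle
= \langle t, \mu\rangle + \sum_{j} (A^{T} t)_j\, Z_j,
\qquad
\operatorname{Var}\,\langle t, X\rangle
= \sum_{j} (A^{T} t)_j^{2} = \|A^{T} t\|^{2} = t^{T} A A^{T} t,
% so that
\langle t, X\rangle \sim \mathcal{N}\big(\langle t, \mu\rangle,\; t^{T} A A^{T} t\big).
```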
-
(v)
Show that the characteristic function of is equal to
Solution.
Since we know the distribution of and the characteristic function of normally distributed random variables in one dimension, we have
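The computation can be sketched as follows, writing \(\Sigma = AA^{T}\) and applying the one-dimensional formula \(\varphi(s) = e^{ims - \sigma^{2} s^{2}/2}\) to \(\langle t, X\rangle\) from (iv) at \(s = 1\) (notation is ours):

```latex
\varphi_X(t)
= \mathbb{E}\big[e^{i\langle t, X\rangle}\big]
= \exp\Big(i\,\langle t, \mu\rangle - \tfrac{1}{2}\, t^{T} \Sigma\, t\Big),
\qquad \Sigma = AA^{T}.
```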
Due to the uniqueness of characteristic functions, we have now shown that the two definitions are equivalent. Lastly,
-
(vi)
prove that the density of is given by
You may assume to be invertible.
Hint.
Transformation Theorem.
Solution.
Let be any measurable function. Then
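Under the invertibility assumption, the substitution argument can be sketched as follows (\(x = Az + \mu\), \(dz = |\det A|^{-1}\,dx\), \(\Sigma = AA^{T}\), \(\det\Sigma = (\det A)^{2}\); notation is ours):

```latex
\mathbb{E}[f(X)]
= \int f(Az + \mu)\,\frac{e^{-\|z\|^{2}/2}}{(2\pi)^{d/2}}\,dz
= \int f(x)\,
  \frac{\exp\!\big(-\tfrac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)\big)}
       {(2\pi)^{d/2}\sqrt{\det \Sigma}}\,dx,
```

so the claimed density can be read off from the integrand.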
Exercise 3 (Binomial against Poisson).
Let be a sequence of real numbers in . Suppose that . Using Lévy’s continuity theorem, show that
Solution.
From Example 7.4.10 we know the characteristic functions of Bernoulli and Poisson random variables. So let and ; then
as for . Hence, by Lévy's continuity theorem 7.5.3 the distributions converge weakly against each other. For
we want to apply the Portmanteau theorem. But (v) is not applicable, because the boundary of the point set is in general not of probability zero. Instead we view our probability measures as probability measures on . Using -balls it becomes obvious that every set in this metric space is open, and therefore every set is also closed. This allows us to use (iii) and (iv) of the Portmanteau theorem for the same result, finishing our proof. ∎
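The convergence proved here can also be checked numerically; a small sanity check (function names are our own) comparing the pmf of \(\mathrm{Bin}(n, \lambda/n)\) with that of \(\mathrm{Poi}(\lambda)\) as \(n\) grows:

```python
import math

def binom_pmf(n, p, k):
    """P(Bin(n, p) = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    """P(Poi(lam) = k)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 3.0
for n in (10, 100, 10_000):
    p_n = lam / n  # chosen so that n * p_n -> lam
    # Maximum pointwise distance between the two pmfs on {0, ..., 9}
    dist = max(abs(binom_pmf(n, p_n, k) - poisson_pmf(lam, k)) for k in range(10))
    print(f"n={n:>6}: max pointwise pmf difference = {dist:.6f}")
```

The printed differences shrink toward zero, matching the distributional convergence.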
Exercise 4 (Inverse Fourier).
(Optional - Bonus) Assume the measure has density ; then
for the characteristic function of .
Hint.
Simply plugging in the definition of the characteristic function and using Fubini won't work. The result would be
but would need to be a Dirac measure at to get us back from this point. Fubini is only allowed for measurable functions, and this is not a measurable function: it is zero at all values except , so it is almost surely zero, but at the same time it is supposed to integrate to one.
So we have to work around this infinitely high function at one point. The idea is to approximate this "delta function" by Gauß kernels
So you show
Here you can plug in definitions and use Fubini freely. Then use the fact that the Fourier transform of the normal distribution is essentially the density of a normal distribution again to iteratively get rid of all the transforms.
Solution.
We are using a Gauß kernel
to smudge the equation
Here we can see that another interpretation of the smudging is that we are using a regularized version of the Fourier transform on . Now if is well defined (i.e. if is integrable), then we have for by the dominated convergence theorem. Continuing on, we get
What is left to do is taking the limit on this side as well. This is a bit tricky. We are going to show that
(1) |
while the other limit was a pointwise limit! But this is okay due to Fatou’s Lemma
which implies Lebesgue almost everywhere. But densities are only unique up to Lebesgue null sets, so we are done.
Okay so let us show (1): Since is a density, we can write
where we used the substitution in the last equation. Note that the normalizing constant of the normal distribution absorbs the from the substitution . Therefore we have
Writing for the translation by , we have
because a shift does not change the integral over . With this upper bound integrable against , we can use the DCT to move the limit into the integral
where , because this holds for . And since is dense in (we can approximate any measurable function by linear combinations of indicators, which can in turn be approximated by continuous functions), the triangle inequality extends this to all . ∎
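For reference, the statement being proved can be written out as follows in one dimension, assuming both the density \(f\) and the characteristic function \(\varphi\) of the measure are integrable (notation is ours):

```latex
\varphi(t) = \int_{\mathbb{R}} e^{itx} f(x)\,dx,
\qquad
f(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itx}\,\varphi(t)\,dt
     = \lim_{\sigma \to 0} \frac{1}{2\pi} \int_{\mathbb{R}}
       e^{-itx}\, e^{-\sigma^{2} t^{2}/2}\,\varphi(t)\,dt,
```

where the regularized middle expression is the "smudged" version of the inversion integral.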
Exercise 5 (Pólya’s Theorem).
-
(i)
Let be independently distributed. For prove that the density is
and the characteristic function is
Hint.
Lemma 7.4.9. And use the double angle formula
Solution.
The density of a sum of independent random variables is the convolution of their densities:
For the characteristic function we use Lemma 7.4.9 (iv) to get
where we have used the trigonometric double angle formula
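A sketch of both claims, under the assumption that \(U_1, U_2\) are i.i.d. uniform on \([-1/2, 1/2]\) (the interval is our guess, chosen to match the tent functions appearing later in the exercise):

```latex
% Convolution of the two uniform densities gives the triangular density
f_{U_1+U_2}(x) = \big(1 - |x|\big)^{+},
% and with \varphi_{U_1}(t) = \frac{2\sin(t/2)}{t}, Lemma 7.4.9 (iv) gives
\varphi_{U_1+U_2}(t) = \varphi_{U_1}(t)^{2}
= \frac{4\sin^{2}(t/2)}{t^{2}}
= \frac{2\,(1-\cos t)}{t^{2}},
% using the double angle formula 1 - \cos t = 2\sin^{2}(t/2).
```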
-
(ii)
Prove that a random variable with density
has the characteristic function
Deduce the characteristic function of .
Hint.
Use (i) and Fourier inversion.
Solution.
Using symmetry of the density, we can substitute with to get
where we have used Fourier inversion in the last equality, with the definition of from (i) except using in place of . This is a probability measure. By linearity of the Fourier transform we have
∎
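The duality with (i) can be sketched as follows, assuming the density/characteristic-function pair from (i) is \(f_S(x) = (1-|x|)^{+}\) with \(\varphi_S(t) = 2(1-\cos t)/t^{2}\) (our reading of that part):

```latex
% Fourier inversion applied to the pair from (i):
(1 - |x|)^{+} = f_S(x)
= \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itx}\,\frac{2(1-\cos t)}{t^{2}}\,dt
= \int_{\mathbb{R}} e^{-itx}\,\frac{1-\cos t}{\pi t^{2}}\,dt,
% so, by symmetry of (1-\cos t)/t^2 and swapping the roles of x and t,
% the density g(x) = \frac{1-\cos x}{\pi x^{2}} has characteristic
% function \hat{g}(t) = (1-|t|)^{+}.
```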
-
(iii)
Assuming , calculate the slope of on . Prove that for given slopes we can select with , such that has slope on the interval with .
Solution.
For we can drop the absolute value to get
So for we have
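In our reading of (ii), with tent characteristic functions \(\varphi_a(t) = (1 - |t|/a)^{+}\) (an assumption), the slope computation would read:

```latex
\varphi_a(t) = \Big(1 - \frac{t}{a}\Big)^{+} \quad\text{for } t \ge 0,
\qquad
\frac{d}{dt}\,\varphi_a(t) = -\frac{1}{a} \quad\text{on } (0, a),
% and a convex combination \sum_k p_k \varphi_{a_k} with a_1 < \dots < a_n
% has slope -\sum_{k \ge j} p_k / a_k on (a_{j-1}, a_j), so prescribed
% slopes can be matched by solving this triangular system for the p_k.
```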
-
(iv)
Prove Pólya’s Theorem using interpolation points , previous results and Lévy’s Continuity Theorem.
Theorem (Pólya).
Let be a function which is continuous, even, convex on , and satisfies and . Then is the characteristic function of some probability measure.
Solution.
As is convex on , the negative slopes between the interpolation points are increasing (decreasing in absolute value). Therefore the are positive, so the resulting is a measure. If we can show
then we get for free that is a probability measure, as then . And by increasing the number of interpolation points we get convergence against , so is a characteristic function by Lévy's continuity theorem. So let us try to prove this equality
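A sketch of the interpolation argument in our notation, with tents \(\varphi_a(t) = (1 - |t|/a)^{+}\) (an assumption consistent with the earlier parts):

```latex
% Interpolate \varphi piecewise linearly at points 0 = t_0 < t_1 < \dots < t_n.
% By (iii) there are weights p_k \ge 0 with
\varphi_n(t) = \sum_{k} p_k\,\varphi_{t_k}(t),
\qquad
\varphi_n(0) = \sum_{k} p_k = \varphi(0) = 1,
% so each \varphi_n is the characteristic function of a probability measure.
% Refining the interpolation points gives \varphi_n \to \varphi pointwise,
% hence \varphi is a characteristic function by L\'evy's continuity theorem 7.5.3.
```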