Hello, this week we'll discuss some important properties of stochastic processes, namely ergodicity, differentiability and continuity. Let me start with ergodicity. This notion is motivated by the law of large numbers, one of the most important theorems in probability theory. The law of large numbers basically tells us that if psi 1, psi 2, and so on is a sequence of independent, identically distributed random variables, then the mean value of the psi's, (1/N) times the sum of psi n for n from 1 to N, converges to the mathematical expectation of psi 1 as N goes to infinity. Here there are two main questions. The first one: which assumptions on the psi's guarantee that this convergence holds? And the second question: in which sense should we understand this convergence? There are various forms of the law of large numbers. The first form, the so-called classical form, tells us that if the mathematical expectation of psi 1 squared is finite, then this convergence holds in probability. Another form, the so-called Khinchin form, tells us that the existence of the first moment of psi 1 is already enough for this convergence to hold in probability. And there is also the so-called strong law of large numbers, which yields that if the mathematical expectation of the absolute value of psi 1 is finite, then this convergence can be understood almost surely. I think these distinctions are not crucial at the moment, because this theorem can in any case be used in various situations. For instance, it is widely used in mathematical statistics, because it yields that the sample mean converges to the mathematical expectation; therefore the sample mean is a consistent estimator of the mathematical expectation. Okay, and ergodicity is an attempt to extend this idea to the case of stochastic processes. In the context of stochastic processes, you have dependent, non-identically distributed observations.
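The statement above can be illustrated numerically. Here is a minimal sketch of the law of large numbers: the choice of an exponential distribution with mean 2 is purely illustrative (any distribution with a finite first moment would do).

```python
import random
import statistics

random.seed(0)

# Simulate the sample mean (1/N) * sum psi_n for i.i.d. draws.
# Here psi_n ~ Exponential with mean 2 -- a hypothetical choice;
# any distribution with a finite first moment works.
true_mean = 2.0
for n in (10, 1_000, 100_000):
    sample = [random.expovariate(1.0 / true_mean) for _ in range(n)]
    print(n, statistics.fmean(sample))  # approaches E[psi_1] = 2 as n grows
```

Running this for growing N, the printed sample means cluster more and more tightly around the true expectation 2.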
For instance, if you have a stochastic process Xt with discrete time, where time runs over 0, 1, 2 and so on, then one can ask: what are the properties of the following sum, (1/T) times the sum of Xt for t from 1 to T, as T goes to infinity? In the context of stochastic processes, this value T, the maximal index, is known as the horizon. Well, the question is: what are the conditions which guarantee that this sum divided by T converges to some constant? Here the situation is much more difficult than in classical probability theory, and there are no simple conditions like the ones above which guarantee this convergence. Basically, the notion of ergodicity is exactly given by this line: we say that the process X is ergodic if there is a constant C such that the mean value of X1, ..., XT converges to C. And here it will be very important in which sense we should understand this convergence. It turns out that the most convenient way is to understand it in the sense of convergence in probability, and let me explain why this type of convergence is the most convenient. Let me first of all recall the various types of convergence in probability theory. There are four essential types. The first type is called convergence almost surely: a sequence psi n converges to psi almost surely if the probability of the set of all omegas such that psi n of omega converges to psi of omega is equal to 1. The second type is convergence in the mean squared sense; it is very close to what is called convergence in norm in functional analysis. This convergence means that the mathematical expectation of (psi n minus psi) squared tends to 0 as n goes to infinity. The third type of convergence, convergence in probability, means that for any epsilon larger than zero, the probability of the event that the difference psi n minus psi is larger than epsilon in absolute value tends to 0 as n goes to infinity.
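The time average (1/T) * sum X_t from the definition of ergodicity can be computed directly in simulation. A minimal sketch, assuming the simplest possible process, an i.i.d. standard Gaussian sequence, for which the law of large numbers already guarantees ergodicity with C = 0:

```python
import random
import statistics

random.seed(1)

def time_average(T):
    """Time average (1/T) * sum_{t=1}^{T} X_t for X_t i.i.d. N(0, 1)."""
    return statistics.fmean(random.gauss(0.0, 1.0) for _ in range(T))

# For this i.i.d. process the time average settles near the constant C = 0,
# so the process is ergodic; the spread of the values shrinks as T grows.
for T in (10, 1_000, 100_000):
    print(T, time_average(T))
```

For the genuinely dependent processes discussed below, no such general guarantee exists, which is exactly why ergodicity has to be checked case by case.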
And finally, there is a type of convergence known as convergence in law, or convergence in distribution, which is defined as follows. We say that psi n converges to psi in distribution if the distribution function of psi n converges to the distribution function of psi at any real x which is a point of continuity of the distribution function of psi. These four types of convergence are related to each other as follows. If psi n converges to psi almost surely, then it also converges to psi in probability. If psi n converges to psi in the mean squared sense, then it also converges to psi in probability. On the other side, if psi n converges to psi in probability, then it converges to psi in distribution. And if it converges to psi in distribution, and the limit psi is almost surely constant, then psi n converges to psi also in probability. So this diagram is very important, both for probability theory and for the theory of stochastic processes. Let me comment on what this diagram yields in the context of the notion of ergodicity. Basically, we have defined ergodicity via convergence in probability. Looking at this diagram, we immediately realize that there are four ways one can prove this convergence. First of all, one can show the convergence in probability directly. Secondly, one can show that the mean value converges almost surely; also, in the mean squared sense; and also that it converges to a constant in distribution. So there are four ways to show this convergence, and in various situations one can use various types of these proofs. Now let me give a couple of examples showing how one can prove that a process is ergodic or non-ergodic. The first example is quite trivial. Let me consider the process Xt which is equal, for all t, to one and the same random variable psi having standard normal distribution. The trajectory of this process looks as follows.
So you choose once some value for psi, and afterwards the process Xt is equal to this value for all t. The mathematical expectation of the process Xt is equal to 0, and the covariance function is equal to the variance of psi, that is, equal to 1. So the process is stationary, both in the strict and weak senses. Let me write down this outcome. But for ergodicity, we should consider the following sum: (1/T) times the sum of Xt, which in our case is simply equal to psi. And since psi is not a constant, this object doesn't converge to any constant when T goes to infinity. Therefore the process is stationary but non-ergodic. A very interesting situation: a quite simple process, which is definitely stationary, but this simple property is not fulfilled. We know that the law of large numbers holds for i.i.d. sequences of random variables, so that assumption is very, very mild. But here you see a very simple example of a process which is not ergodic. Okay, let me consider a further example. The second example comes from econometrics. Let me consider the process Xt which is equal to epsilon t + a cosine(pi t / 6). Here a is some constant which is not equal to 0, and epsilon 1, epsilon 2, etc. is a sequence of independent, identically distributed standard normal random variables. The plot of the deterministic part of the trajectory of Xt looks as follows: it starts from a, then it goes down, then up, and it has a period equal to 12. In this respect it is very natural to interpret the time index as a number of months. Now you add the random noise epsilon t, which means that the process Xt fluctuates near this curve; it looks approximately like some, let me say, dancing near this curve. Okay, now let me analyze the process Xt. First of all, let me think about stationarity. Of course, Xt is not stationary, in either the weak or the strict sense.
This is just because its mathematical expectation is equal to a cosine(pi t / 6), which is not equal to a constant, and therefore the process is not stationary. As for ergodicity, let me consider the sum (1/T) times the sum of Xt for t from 1 to T. This sum has a normal distribution with mean value equal to (a/T) times the sum of cosine(pi t / 6) for t from 1 to T, and variance equal to 1/T. In probability theory there is a fact that if a sequence of normally distributed random variables converges in distribution, then the limit is also normally distributed, the mean of the limiting random variable is equal to the limit of the corresponding means, and the variance is equal to the limit of the corresponding variances. And here we have exactly this situation. The variance 1/T converges to 0 when T goes to infinity. As for the mean, we should look more precisely at this sum of cosines. What do we basically have here? The first two elements of this sum are positive numbers (here we assume that a is positive). The element number 3 is equal to zero, and afterwards come some negative values: elements number 4, 5 and 6. If you look attentively at this picture, you come to the following conclusion. The sum of the second element and the fourth is equal to 0, just because these elements have the same absolute value but opposite signs. The same is true for the elements number 1 and 5, and so on. So the sum of the first five elements is equal to 0. And the same is true if we consider further elements of this sum. For instance, take the three negative elements number 6, 7 and 8, and then the three positive elements number 10, 11 and 12 (element 9 is itself zero): the sum of these six elements is also equal to 0.
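This cancellation over one full period can be verified numerically with a short check:

```python
import math

# The terms cos(pi*t/6) over one full period t = 1..12 cancel exactly:
# pairs like t = 2 and t = 4, or t = 1 and t = 5, have the same absolute
# value but opposite signs, and so do the blocks 6,7,8 and 10,11,12.
terms = [math.cos(math.pi * t / 6) for t in range(1, 13)]
print([round(c, 3) for c in terms])
print(sum(terms))  # essentially zero, up to floating-point rounding
```

So the sum of cosines over every complete period of 12 elements vanishes, and only the leftover incomplete period can contribute to the mean.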
Therefore, since the cosines over complete periods cancel, the absolute value of (a/T) times the sum of cosine(pi t / 6) is bounded by (a/T) times the sum of the few leftover elements, and each of these elements is less than or equal to 1 in absolute value; in fact, the running sum of cosines never exceeds 3 in absolute value. So this object is less than or equal to 3a/T, and therefore it tends to 0 as T goes to infinity. So in this situation the process is ergodic. We now have two examples: the first, a stationary but non-ergodic process, and the second, a non-stationary but ergodic process. Therefore the notions of stationarity and ergodicity are essentially different. Nevertheless, if a process is stationary in the weak sense, then there exist sufficient conditions which guarantee that the process is ergodic, and one can simply check these conditions. In the next section, we will study exactly the theory of ergodicity for stationary processes.
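Both examples from this section can be checked by simulation. A minimal sketch (the amplitude a = 1 is an illustrative choice; any nonzero a works the same way):

```python
import math
import random

random.seed(2)
a = 1.0  # illustrative nonzero amplitude from the second example

def time_average_ex2(T):
    """(1/T) * sum X_t with X_t = eps_t + a*cos(pi*t/6), eps_t i.i.d. N(0,1)."""
    return sum(random.gauss(0.0, 1.0) + a * math.cos(math.pi * t / 6)
               for t in range(1, T + 1)) / T

# Example 2 (non-stationary but ergodic): the time average tends to C = 0,
# since the cosine part averages out and the noise part obeys the LLN.
for T in (12, 1_200, 120_000):
    print(T, time_average_ex2(T))

# Example 1 (stationary but non-ergodic): X_t = psi for all t, so the
# time average equals psi itself -- a random value, not a constant.
psi = random.gauss(0.0, 1.0)
print("example 1 time average:", psi)  # differs from run to run
```

Rerunning with different seeds, the example 2 averages shrink toward 0 as T grows, while the example 1 "average" is a fresh random number every time, exactly the distinction between ergodic and non-ergodic behaviour.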