[SOUND] So there is this very efficient method to compute transient probabilities in CTMCs, and it is called uniformization. How does it work? Well, first we have to step from the CTMC to the uniformized DTMC. And I've already explained that in the previous lecture: U equals the identity matrix plus the generator matrix capital Q, divided by the so-called uniformization constant small q. And the uniformization constant has to be at least as large as the maximum absolute value of the diagonal entries of Q. Each time step in the uniformized DTMC then corresponds to one exponentially distributed delay with rate q. So in case the exit rate E(s) of a state s equals q, the transition probabilities are as in the embedded DTMC. However, if E(s) is smaller than q, we have to add a self-loop with probability 1 - E(s)/q, and this is to make up for the extra time that we would spend in that state in the corresponding CTMC. So here is an example with a three-state CTMC, with lambda and mu on the edges. If I compute the uniformized DTMC for you now, first I have to choose the uniformization constant. Well, this state has the maximum exit rate, lambda plus mu, and this is also what I choose as the uniformization constant. Now I give you the uniformized DTMC, and what you see is that we add self-loops to these states. They have to be there such that the transition probabilities of the outgoing transitions sum up to one. So mu divided by q plus lambda divided by q nicely sums up to one. And additionally, as I said, it makes up for the extra time that we would spend in that state. So let's do this together. Again, a very small example, two states, for which you can easily derive the rate matrix R and the generator matrix Q. And then we need the uniformization matrix U for a uniformization constant, which I choose to be three here; three is also the maximum absolute value on the diagonal of Q. So then let's derive U.
It is just the identity matrix plus the Q matrix divided by the uniformization constant. So Q divided by q is -1, 1, two-thirds and minus two-thirds, and hence U is 0, 1, two-thirds, one-third. Now, with this uniformization matrix U, I can of course also give you a visualization of the uniformized DTMC, and again here you see the self-loop. So how do we proceed once we have the uniformized DTMC? Well, first let's do some rearranging. If U is equal to I plus Q divided by q, I can also derive that capital Q is equal to the uniformization constant q times (U - I). And I'm going to use that now. Here we have the transient probabilities in the CTMC, and they are given by the initial probability distribution p(0) times e to the power Qt. You've seen that before. Now, instead of Q, I write what I've derived here: q times (U - I). That I can split up, and that gives me here e to the power qtU and here e to the power -qIt. The identity matrix I can simply omit, so the second factor becomes the scalar e to the power -qt. Then I can use a Taylor series expansion to expand the first term, because the other is just a constant. If I do this, I obtain that the transient probability is again given by the initial probability distribution times this constant e to the power -qt, times now the infinite summation over i from 0 to infinity of (qtU) to the power i divided by the factorial of i. And U behaves nicely, it is a stochastic matrix, so this is already quite a good result, but we can rewrite it slightly more. Here I've written p-hat(i), and this is the transient probability distribution in the uniformized DTMC after i steps, so this is the initial vector times U to the power i. And this part here, e to the power -qt times (qt) to the power i divided by i factorial, I will refer to as the Poisson probabilities, and I'll explain to you why on the next slide. So this result you've seen. Now, the Poisson probabilities give me the probability to take exactly k steps in the uniformized DTMC before time t. So I'll refer to this later as gamma(qt; k), for k steps and uniformization constant q times time t.
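The construction U = I + Q/q can be sketched in a few lines of plain Python; the lecture gives no code, so the helper name `uniformize` and the list-of-lists representation are my own choices. The two-state example from the lecture (exit rates 3 and 2, so q = 3) serves as a check:

```python
def uniformize(Q, q=None):
    """Return (U, q) with U = I + Q/q, where q is the uniformization constant."""
    n = len(Q)
    if q is None:
        # q must be at least the maximum exit rate, i.e. max |diagonal entry| of Q
        q = max(-Q[i][i] for i in range(n))
    U = [[(1.0 if i == j else 0.0) + Q[i][j] / q
          for j in range(n)] for i in range(n)]
    return U, q

# Generator matrix of the lecture's two-state example.
Q = [[-3.0, 3.0],
     [ 2.0, -2.0]]
U, q = uniformize(Q)
# q = 3, and U = [[0, 1], [2/3, 1/3]]: state s1 gets the self-loop
# with probability 1 - E(s1)/q = 1 - 2/3 = 1/3.
```

Each row of U sums to one, so U is indeed a stochastic matrix, as the derivation requires.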
So after k steps, the transient distribution in the uniformized DTMC is given by p-hat(k), and this is just p(0) times U to the power k. So what we do here is a combination of the transient probabilities in the uniformized DTMC, that's this part, weighted with the probability to take exactly i steps before time t. This helps us to lift the transient probabilities in the DTMC back up to the level of the CTMC. Of course, the number of steps can be as small as zero or as big as infinity, and for the CTMC we take the weighted average over all possible step numbers. Well, clearly we have to cut this infinite sum at some point, and this will introduce an error. That is the bad news. The good news is that we can bound this error a priori. How do we do that? Well, we truncate the infinite summation. This part is what I would like to compute: the sum from i = 0 to infinity. This part is what I can compute: the sum from i = 0 to some constant K_epsilon. So if I subtract one from the other, it leaves me with the difference: the sum from i = K_epsilon + 1 to infinity. Now let's take a look at what's inside this summation: each entry of this vector is a probability, so each entry has to be smaller than or equal to one. And this I can now use to choose my truncation constant K_epsilon such that the sum of the Poisson probabilities from i = K_epsilon + 1 to infinity, so the part that I cannot compute, which equals 1 minus what I can compute, is smaller than or equal to epsilon. So basically what you do is you iteratively keep extending this sum, take 1 minus what you have computed so far, and check the result: is it smaller than or equal to some predefined epsilon, which is normally something like ten to the power of minus six? Once you have reached this bound you can stop; then you have found your K_epsilon, and your result is accurate up to this epsilon which you have predefined. Now let's do this. Again, a simple example with two states, s0 and s1.
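The iterative search for the truncation point can be sketched as follows; this is a minimal illustration in plain Python, and the name `truncation_point` is mine, not from the lecture. It keeps accumulating Poisson mass until 1 minus the accumulated sum drops below epsilon:

```python
import math

def truncation_point(qt, eps=1e-6):
    """Smallest K_eps with 1 - sum_{k=0}^{K_eps} gamma(qt; k) <= eps,
    where gamma(qt; k) = e^(-qt) * (qt)^k / k! are the Poisson probabilities."""
    weight = math.exp(-qt)   # gamma(qt; 0)
    acc = weight             # Poisson mass accumulated so far
    k = 0
    while 1.0 - acc > eps:
        k += 1
        weight *= qt / k     # gamma(qt; k) from gamma(qt; k-1), no factorials needed
        acc += weight
    return k

# For the lecture's example, q*t = 3; with eps = 1e-6 the cutoff is in the mid-teens.
K = truncation_point(3.0)
```

Note that each term is obtained from the previous one by multiplying with qt/k, which avoids computing large factorials explicitly.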
You have already computed the R matrix, the Q matrix and the U matrix. What we still need is the initial distribution: we start in state s0 with probability 1, and accordingly we are in state s1 with probability 0. Now what I want is to compute the transient probability distribution for time t = 1. And this is the formula that we use: the infinite summation over i from 0 to infinity of these Poisson probabilities times the initial distribution times U to the power i. So let's do that. For the first term we obtain the Poisson probability for step 0, and U to the power 0 is just the identity matrix. For the next term, I need gamma(3; 1) times p(0) times U to the power 1, and then plus gamma(3; 2) times p(0) times U to the power 2, and so you keep going and keep going. Now I have computed this result for you. >> Let me clean it up first. The transient probability distribution at time 1 is given by 0.404 and so on for state s0, and 0.5959 and so on for state s1. Now, clearly I cannot give you all the information regarding uniformization or maybe other iterative methods that you can use for CTMCs. And this is why I would recommend to you this book, Performance of Computer Communication Systems: A Model-Based Approach by Boudewijn R. Haverkort. It has a whole chapter on numerical methods, for example [INAUDIBLE], that you can use to compute the steady-state probabilities of very large CTMCs, and it also has a whole subchapter on uniformization. [MUSIC]
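Putting the pieces together, the whole transient computation from the lecture can be sketched in plain Python. This is a minimal illustration, not production code; the function name and structure are my own, but the example matrices, initial distribution, and time point are exactly those of the lecture, so the result should come out near (0.404, 0.596):

```python
import math

def transient(p0, Q, t, eps=1e-6):
    """Transient distribution p(t) = sum_k gamma(qt; k) * p0 * U^k via uniformization,
    truncated once the accumulated Poisson mass is within eps of 1."""
    n = len(Q)
    q = max(-Q[i][i] for i in range(n))            # uniformization constant
    U = [[(1.0 if i == j else 0.0) + Q[i][j] / q   # U = I + Q/q
          for j in range(n)] for i in range(n)]
    p = list(p0)                                   # holds p0 * U^k, updated each round
    result = [0.0] * n
    weight = math.exp(-q * t)                      # Poisson probability gamma(qt; 0)
    acc = 0.0
    k = 0
    while True:
        for i in range(n):
            result[i] += weight * p[i]             # add gamma(qt; k) * p0 * U^k
        acc += weight
        if 1.0 - acc <= eps:                       # a-priori error bound reached
            break
        k += 1
        weight *= q * t / k                        # gamma(qt; k) from gamma(qt; k-1)
        p = [sum(p[i] * U[i][j] for i in range(n)) for j in range(n)]
    return result

# The lecture's example: two states, q = 3, start in s0, time t = 1.
Q = [[-3.0, 3.0],
     [ 2.0, -2.0]]
pt = transient([1.0, 0.0], Q, 1.0)
# pt is approximately [0.404, 0.596], matching the lecture's result.
```

Note that only vector-matrix products are needed: p0 * U^k is carried forward from one term to the next, so the full matrix power U^k is never formed.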