Let's see how we can formulate an iterative way of solving a system of linear equations. Here is my system. As usual, A is a non-singular m by m matrix, x is the vector of unknowns, and b is a constant vector of constraints. The first step is to rewrite the system identically, in a form similar to what we've seen when looking at iterative methods for solving non-linear equations. Remember fixed-point iteration? That's what I'm trying to do: rewrite my equation in a similar form. Here, B is some matrix related to my original A, and c is a vector related to b. There are multiple ways of doing so. One simple way is to separate out the diagonal part of the matrix A. So I'll define D as the diagonal matrix and separate it from A. Graphically, it looks like this: here is my matrix D, which only has non-zero elements on the diagonal, and the second part has a zero diagonal and non-zero elements everywhere else. Again, this is just one possible way. Having separated the diagonal part, I can rewrite the original system in a form similar to a fixed-point method, where the matrix B involves the inverse of the diagonal part of A, and this is c. We immediately see that for this to work, we need non-zero elements on the diagonal of our original matrix. The matrix elements of B are then these ratios outside of the main diagonal, and zeros on the diagonal. So, having rewritten our original equation, did we achieve anything? What is it useful for? How do we actually proceed to finding the solution of our linear system? Here is one method, called Jacobi iteration. Jacobi iteration is actually quite simple: you start from some initial estimate of the solution, and then essentially do a fixed-point iteration.
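The diagonal splitting and the resulting fixed-point step can be sketched in a few lines of NumPy. This is a minimal illustration, not a production solver; the function name `jacobi` and the fixed iteration count are my own choices:

```python
import numpy as np

def jacobi(A, b, x0=None, n_iter=50):
    """Jacobi iteration for A x = b via the diagonal split.

    Rewrites the system as x = B x + c with B = -D^{-1}(A - D) and
    c = D^{-1} b, then applies fixed-point iteration. Requires all
    diagonal entries of A to be non-zero.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(np.diag(A))           # diagonal part of A
    B = -np.linalg.inv(D) @ (A - D)   # off-diagonal ratios, zero diagonal
    c = np.linalg.inv(D) @ b
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = B @ x + c                 # the fixed-point step
    return x
```

In practice one would use a stopping criterion instead of a fixed number of iterations; we return to that question below.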
So you plug your current estimate into the right-hand side of the equation, call the result the next iterate, and proceed. You proceed until convergence, or until what you hope is convergence. This obviously raises quite a few questions. Does this actually converge? If it does, what is the rate of convergence? Can we estimate some error bounds? Can we somehow decide whether we're close enough to convergence while calculating? Let's tackle these questions one by one. First, let's address the question of convergence, and I'll formulate a sufficient condition. Let's call the true solution of our equation x-hat, and suppose that the norm of the matrix B is less than one. Then I'm claiming that the Jacobi iteration converges as the iteration number goes to infinity, and that we can formulate an error bound: the deviation from the true solution at the current iteration is bounded by the initial distance to the solution times the norm of B to the power n, where n is the iteration number. Again, this norm of B is less than one by assumption. The proof goes very similarly to what we've seen when analyzing the convergence of fixed-point iteration for non-linear equations, so let's quickly zoom through it. We take the current iterate as a function of the previous one, subtract the true solution, and what we get is a bound on the distance to the true solution at a given step in terms of the distance at the previous step. Then we play the same game with the previous iteration, and we find that the bound at the current step is related to the bound two steps back times the norm of B squared. We carry on doing this n times until we reach the initial point, and by doing this we've proven statement two from the previous slide. So we have an error bound.
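We can check this a priori bound numerically. The sketch below runs the iteration on a small diagonally dominant system (my own example) and verifies at every step that the distance to the true solution stays below the norm of B to the power n times the initial distance:

```python
import numpy as np

# A small diagonally dominant system (illustrative choice).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_hat = np.linalg.solve(A, b)         # "true" solution for comparison

# Diagonal split: x = B x + c with B = -D^{-1}(A - D), c = D^{-1} b.
D = np.diag(np.diag(A))
B = -np.linalg.inv(D) @ (A - D)
c = np.linalg.inv(D) @ b

norm_B = np.linalg.norm(B, ord=np.inf)
assert norm_B < 1                     # sufficient condition for convergence

x = np.zeros(2)                       # initial estimate x_0
err0 = np.linalg.norm(x - x_hat, ord=np.inf)
for n in range(1, 21):
    x = B @ x + c
    err = np.linalg.norm(x - x_hat, ord=np.inf)
    # A priori bound: ||x_n - x_hat|| <= ||B||^n * ||x_0 - x_hat||
    assert err <= norm_B**n * err0 + 1e-12
```

The bound uses an induced matrix norm (here the infinity norm), so the inequality follows directly from submultiplicativity.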
Now, from this error bound, convergence follows immediately because the norm of B is less than one by assumption. So we have convergence, and we have the error bound, provided that the norm of the matrix B is less than one. This is an a priori error bound. As usual, what we actually want is an a posteriori error bound, so that we can check convergence while doing the calculation. In fact, it's again very easy to find, and the calculation is very similar to the case of the fixed-point method for a single non-linear equation. The statement I'm making is this: if the norm of B is less than one, then the distance to the solution at the nth step is bounded by the step size times a constant, which I'll call q, related to the norm of my matrix B. The norm of B is less than one, so this q is between zero and one. How do we prove it? It's very easy. We consider the distance to the solution at the current iteration, related to what happens at the previous iteration; this is x_n, and here is x_{n-1}. We add and subtract the value of the solution at the current iteration, so that the sum of these two terms is zero, and then we take the norm of both the right-hand side and the left-hand side. On the right-hand side we have the norm of a sum, which by the triangle inequality is bounded from above by the sum of the norms, so we get two terms. Each of those terms contains the norm of a product, which is bounded by the product of the norms, and what we get is this combination: the distance to the solution at the current iteration on the left-hand side, and the same distance at the current iteration on the right-hand side. We combine them, and with a little bit of algebra we get exactly the estimate from the previous slide which we were trying to prove. So we have the distance to the solution at the current iteration related to the step size between the previous iteration and the current one.
How tight the estimate is depends on the norm of the matrix B.
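The a posteriori bound gives us a practical stopping criterion: we can stop as soon as the step size, multiplied by the constant q, falls below our tolerance. The sketch below uses q = ||B|| / (1 - ||B||), which is the form the derivation above produces; the example system and names are my own:

```python
import numpy as np

# Same illustrative system and diagonal split as before.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
D = np.diag(np.diag(A))
B = -np.linalg.inv(D) @ (A - D)
c = np.linalg.inv(D) @ b

norm_B = np.linalg.norm(B, ord=np.inf)
q = norm_B / (1.0 - norm_B)      # constant in the a posteriori bound

tol = 1e-10
x = np.zeros(2)
while True:
    x_new = B @ x + c
    step = np.linalg.norm(x_new - x, ord=np.inf)   # computable on the fly
    x = x_new
    if q * step < tol:           # a posteriori: ||x_n - x_hat|| <= q * step
        break

# The bound guarantees the true error is below the tolerance.
x_hat = np.linalg.solve(A, b)
assert np.linalg.norm(x - x_hat, ord=np.inf) <= tol
```

Unlike the a priori bound, this criterion needs no knowledge of the true solution, which is exactly what makes it useful while iterating.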