Welcome. In this video, you will see several useful properties of matrices: transposition, trace, rank, and inverse. In matrix multiplications, it can be handy or necessary to switch the rows and columns of a matrix. Switching rows and columns is called transposition. The transpose of the five by two matrix A is a two by five matrix. We denote it by A-prime. The transpose of a five by one column vector y is a one by five row vector, y-prime. When A is a p by q matrix, the matrix B that is equal to the transpose of A has dimensions q by p. Transposition exchanges the elements below and above the diagonal. It means that the element in row i, column j of matrix A moves to row j, column i of matrix B. The matrix C that results from transposing the matrix B has each element C i j equal to B j i, which is again equal to A i j. We have just proved that the transpose of the transpose of A is A itself. A matrix is called symmetric if it is equal to its transpose. This means that each element A i j is equal to A j i. Because transposition exchanges rows and columns, a symmetric matrix is square. We can see a scalar c as a one by one matrix. It is symmetric by construction, so c-prime is always equal to c. This relation will be handy later on. Next, let's consider how transposition combines with addition. The transpose of the sum of two matrices is the sum of their transposes. We prove this by working out the sum for each matrix element on the left hand side and on the right hand side of the equation, as you can see on the slide. The transpose of the product of A and B is equal to B-prime times A-prime. We have to change the order of the multiplication. We prove this result on the slides by applying the definition of matrix multiplication. Step four is the crucial part. Transposition exchanges rows and columns. So element j k of B-prime corresponds with element k j of B. Element k i of A-prime corresponds with element i k of A. 
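The transposition rules above can be checked numerically. This is a small NumPy sketch with made-up matrices (not the ones from the slides): transposing twice recovers the original, the transpose of a sum is the sum of the transposes, and the transpose of a product reverses the order.

```python
import numpy as np

# A made-up 5-by-2 matrix A; its transpose A' is 2-by-5.
A = np.arange(10.0).reshape(5, 2)
B = np.arange(10.0, 20.0).reshape(5, 2)
C = np.arange(6.0).reshape(2, 3)

# Element (i, j) of A equals element (j, i) of A'.
assert A[3, 1] == A.T[1, 3]

# Transposing twice returns the original matrix: (A')' = A.
assert np.array_equal(A.T.T, A)

# The transpose of a sum is the sum of the transposes: (A + B)' = A' + B'.
assert np.array_equal((A + B).T, A.T + B.T)

# The transpose of a product reverses the order: (AC)' = C'A'.
assert np.allclose((A @ C).T, C.T @ A.T)
```

Running the block raises no assertion errors, confirming each rule on these example matrices.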
This result shows that each element E j i is equal to C i j. So, because row j of B-prime corresponds with column j of B, and column i of A-prime corresponds with row i of A, B-prime has to come first and A-prime second. Now a question for you. In a linear model y = Xb + e, the vector e captures the unexplained part, also called the residuals. We calculate them as y minus X times b. Derive an expression for the sum of squared residuals e-prime e. To find the solution, we first substitute y minus X times b for e. Then we apply the transpose to the first part, and then to the product Xb. Next, we multiply out the parentheses. In our last step, we simplify the expression. The third term b-prime X-prime y is a scalar, so we can also use its transpose, which equals y-prime times X times b, and that is equal to the second term. The sum of squared residuals plays a central role in econometrics, and you will use expressions for it, like the ones you see on the slide, a lot. The next topic of this lecture is the trace of a square p by p matrix A, which is defined as the sum of its diagonal elements. We denote the trace by the letters tr. The trace of the transpose of the square matrix A is equal to the trace of A, because transposition does not affect the diagonal. When A and B are both p by p matrices, the trace of the sum of A and B is the sum of the traces of A and B. The proof on the slide uses that we can exchange the order of sums. When A is a p by q matrix and B is a q by p matrix, the trace of the product AB is equal to the trace of the product BA. This is a remarkable result, because the first product has dimensions p by p while the second has dimensions q by q. This equality arises because the trace operation only involves the diagonal. You can see the proof on the slide. I first apply the definitions of matrix multiplication and trace. In the step marked with a star, I use the fact that we can exchange the order of multiplication and of summation when we deal with scalars. 
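Both the residual expansion and the trace rules lend themselves to a quick numerical check. The sketch below uses made-up data (not the slide's example) to verify that e-prime e equals y-prime y minus two b-prime X-prime y plus b-prime X-prime X b, and that the trace is invariant under transposition and under reversing a product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data for a linear model y = Xb + e (illustrative only).
X = rng.normal(size=(6, 2))
y = rng.normal(size=6)
b = rng.normal(size=2)

e = y - X @ b          # residuals
sse = e @ e            # sum of squared residuals e'e

# The expanded form derived in the lecture: y'y - 2 b'X'y + b'X'Xb.
expanded = y @ y - 2 * b @ X.T @ y + b @ X.T @ X @ b
assert np.isclose(sse, expanded)

# Trace properties, with A p-by-q and B q-by-p: tr(AB) = tr(BA),
# even though AB is p-by-p and BA is q-by-q.
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 3))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# Transposition does not affect the diagonal: tr(S') = tr(S).
S = rng.normal(size=(3, 3))
assert np.isclose(np.trace(S.T), np.trace(S))
```

The scalar identity from the lecture is what lets the two cross terms in the expansion collapse into the single term minus two b-prime X-prime y.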
An important concept for matrices is linear independence. You can see a matrix as a set of row or column vectors. We say that the columns of a matrix are linearly independent when none of the columns is a linear combination of the others. The columns of matrix A are linearly independent, because column two is not a multiple of column one. In matrix B, column two is a multiple of column one, and column four is the sum of columns two and three, so the columns in matrix B are not linearly independent. Now, the column rank is the maximum number of linearly independent columns in a matrix. For matrix A the column rank equals 2. For matrix B, it's also 2, as you can form columns two and four out of columns one and three. The row rank gives the number of linearly independent rows. A question for you: what is the row rank of matrix B? The answer is 2. You can form the third row as row two minus two times row one. For matrix B the column rank and the row rank are the same. This is not a coincidence, because for any matrix the row and column ranks are the same. I do not discuss the proof, but if you are interested you can consult a book on linear algebra. For this reason we drop the words row and column and simply use the term rank. Because the row and column ranks are equal, the rank can never exceed the number of rows or the number of columns. So the rank of a p by q matrix A is less than or equal to the minimum of p and q. A p by q matrix A with rank equal to q has full column rank. When its rank is equal to p, it has full row rank. When a square p by p matrix has rank p, we say that it has full rank. Transposition exchanges rows and columns. Because row and column rank are always equal, the rank of the transpose of A is equal to the rank of A. We use the matrix rank in solving systems of linear equations. We write the system as A times c equals d, with A and d given and c unknown. We are interested in the solution of the system when d equals zero. 
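Rank can be computed with NumPy. The matrices below are hypothetical stand-ins built to match the description in the lecture (the slides' actual numbers are not shown here): A has two independent columns, while in B column two is three times column one, column four is the sum of columns two and three, and row three equals row two minus two times row one.

```python
import numpy as np

# A hypothetical full-column-rank matrix A:
# column two is not a multiple of column one.
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0]])

# A hypothetical B in the spirit of the slides:
# column 2 = 3 * column 1, and column 4 = column 2 + column 3.
c1 = np.array([1.0, 2.0, 0.0])
c3 = np.array([0.0, 1.0, 1.0])
B = np.column_stack([c1, 3 * c1, c3, 3 * c1 + c3])

assert np.linalg.matrix_rank(A) == 2   # full column rank
assert np.linalg.matrix_rank(B) == 2   # only two independent columns

# Row rank equals column rank, so the transpose has the same rank.
assert np.linalg.matrix_rank(B.T) == np.linalg.matrix_rank(B)

# Row three of this B is indeed row two minus two times row one.
assert np.allclose(B[2], B[1] - 2 * B[0])
```

Note that `matrix_rank` works numerically via singular values, so for very ill-conditioned matrices the reported rank depends on a tolerance.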
When the rank of A equals q, only c equal to zero solves the system, because all q columns in A are linearly independent. When the rank of A is smaller than q, there are also other solutions to the system than the zero vector. Please take a moment to consider the proofs on the slides. Let's apply this result to the two matrices A and B we looked at before. Matrix A has full column rank. The theorem on the previous slide implies that no non-zero vector c exists such that Ac is zero. You can check yourself that this is true. Matrix B also has rank two, but it has four columns. So there should be non-zero vectors c such that Bc equals zero. We use our earlier result that column two is three times column one, and that column four is the sum of columns two and three, to construct two vectors c, as you can see on the slide. These vectors c solve the system. Moreover, any combination of these two vectors solves the system too. Next, let's consider the rank of a matrix product. The rank of AB is at most equal to the minimum of the rank of A and the rank of B. A very useful result in econometrics is the following relation: the rank of the product A-prime A is equal to the rank of A. We have seen an expression of the form A-prime A in the sum of squared residuals of the linear model. I do not show a proof of either statement. If you are interested, you can consult a book on linear algebra. The final topic of this lecture is the inverse of a matrix. Matrix B is the inverse of A if the product of B and A and the product of A and B both yield the identity matrix, that is, the matrix with ones on the diagonal and zeros elsewhere. We denote A-inverse by A to the power minus one. You see an example on the slide. Though it is possible to calculate a matrix inverse by hand, we mostly use computer software. We call matrices that are not invertible singular. A square matrix that is invertible has full rank. Moreover, the reverse is also true: a square matrix that has full rank is invertible. 
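The construction of the non-zero solutions can be sketched numerically. Using a hypothetical B that matches the stated column relations (column two equals three times column one, column four equals column two plus column three; the slides' own numbers are not reproduced here), the two relations directly give two vectors c with Bc = 0.

```python
import numpy as np

# Hypothetical B with the stated column relations.
c1 = np.array([1.0, 2.0, 0.0])
c3 = np.array([0.0, 1.0, 1.0])
B = np.column_stack([c1, 3 * c1, c3, 3 * c1 + c3])

# Read the solutions off the column relations:
# 3*col1 - col2 = 0   and   col2 + col3 - col4 = 0.
c_a = np.array([3.0, -1.0, 0.0, 0.0])
c_b = np.array([0.0, 1.0, 1.0, -1.0])
assert np.allclose(B @ c_a, 0)
assert np.allclose(B @ c_b, 0)

# Any linear combination of the two vectors solves the system too.
assert np.allclose(B @ (2 * c_a - 5 * c_b), 0)

# For a full-column-rank A, rank(A'A) = rank(A).
A = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
assert np.linalg.matrix_rank(A.T @ A) == np.linalg.matrix_rank(A)
```

The same logic applies to any matrix: each linear relation among the columns yields one non-zero vector in the null space.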
For a proof, I refer again to books on linear algebra. Now let's consider some properties of the inverse. If B is the inverse of A, it follows directly from the definition of the inverse that A is the inverse of B. So the inverse of the inverse of A yields the original matrix A. The inverse of the transpose of A is equal to the transpose of the inverse. This result follows from applying transposition to the definition of the inverse, and using that the matrix I is symmetric. If A and C are both invertible p by p matrices, the inverse of the product AC is equal to C-inverse times A-inverse. Similar to transposition, the sequence is exchanged, with C-inverse coming first. We use the definition of the inverse to prove this result, as you can check on the slide. If A is invertible, then Ab equals c implies that b equals A-inverse times c. The proof follows easily from multiplying both sides by A-inverse and simplifying the result. It means that we can solve systems of linear equations by using the inverse. The solution only works when A is invertible, so when it has full rank. Now a question for you. Let A be a p by q matrix with rank equal to q. What properties does the matrix C equal to A-prime A have? First, C is symmetric, since the transpose of A-prime A is equal to itself. Second, C has full rank and is invertible. C has dimensions q by q, and by an earlier result, the rank of C is equal to the rank of A, which is q. We have now shown that C-inverse exists. C-inverse is symmetric too: the transpose of C-inverse is equal to the inverse of C-prime, which is C-inverse because C is symmetric. So the inverse of a symmetric invertible matrix is symmetric too. This quiz concludes this lecture on special matrix operations. I invite you to make the training exercise to practice the topics of this lecture. You can find this exercise on the website.
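The inverse properties from this part of the lecture can also be verified numerically. This sketch uses random matrices (invertible with probability one for a fixed seed, an assumption of the example, not something guaranteed in general):

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.normal(size=(3, 3))
C = rng.normal(size=(3, 3))

# The inverse of the inverse returns the original matrix.
assert np.allclose(np.linalg.inv(np.linalg.inv(A)), A)

# The inverse of the transpose is the transpose of the inverse.
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)

# (AC)^-1 = C^-1 A^-1: the order is exchanged, as with transposition.
assert np.allclose(np.linalg.inv(A @ C),
                   np.linalg.inv(C) @ np.linalg.inv(A))

# Solving Ab = c via the inverse (in practice np.linalg.solve
# is preferred for numerical stability).
c = rng.normal(size=3)
b = np.linalg.inv(A) @ c
assert np.allclose(A @ b, c)

# For a full-column-rank M, G = M'M is symmetric and invertible,
# and its inverse is symmetric too.
M = rng.normal(size=(5, 2))
G = M.T @ M
assert np.allclose(G, G.T)
Ginv = np.linalg.inv(G)
assert np.allclose(Ginv, Ginv.T)
```

The last block mirrors the quiz: G plays the role of C = A-prime A, and the symmetry of its inverse follows from the two properties proved in the lecture.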