So we've learned how to develop finite-difference operators for the wave equation. We applied them to a one-dimensional case and a two-dimensional case, and if we write the equation as we usually have, then on the left-hand side we have the second time derivative of the pressure field, for example, and on the right-hand side we have the Laplace operator acting on the pressure field, plus a source term. That means so far we have basically only looked at the right-hand side and how to improve the accuracy of these finite-difference operators in space. But what about the left-hand side, the time extrapolation term? Actually, that will remain essentially unchanged for all the other methods that we discuss in this course. And of course, the primary goal is to introduce and compare these various numerical methods applied to this equation and to the one-dimensional elastic wave equation. So I would like to briefly show you some concepts for improving the time extrapolation scheme, which then apply equally to all the other schemes, but this will be relatively short. Let's illustrate that with an example. Let's take the simplest possible partial differential equation, the advection equation. On the left-hand side, we have the first derivative of q with respect to time, and on the right-hand side we have c, a propagation velocity, times the first derivative of q with respect to space. We assume here that the advection is activated by an initial condition, so there is no source term. Now, how would we usually go about solving that equation, at least in terms of the time extrapolation? Well, we use the Euler method. We replace the left-hand side with a simple difference scheme, so (q(x, t + dt) - q(x, t)) / dt, where the space dependence is implicit, is equal, on the right-hand side, to c times the space derivative of q with respect to x. So that's the Euler scheme.
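As a minimal sketch of this Euler time extrapolation, here is one way it might look in Python. The Gaussian initial condition, grid size, and periodic boundaries are illustrative assumptions, not values given in the lecture:

```python
import numpy as np

# 1D advection equation: dq/dt = c * dq/dx, activated by an
# initial condition (no source term).
nx, dx = 200, 1.0        # number of grid points, grid spacing (assumed)
c, dt = 1.0, 0.5         # advection velocity, time step (assumed)
x = np.arange(nx) * dx

# Initial condition: a Gaussian pulse (illustrative choice)
q = np.exp(-0.01 * (x - nx * dx / 2) ** 2)

def space_derivative(q, dx):
    """Centered first derivative in space, periodic boundaries."""
    return (np.roll(q, -1) - np.roll(q, 1)) / (2 * dx)

def euler_step(q, c, dt, dx):
    """One Euler step: q(x, t + dt) = q(x, t) + dt * c * dq/dx."""
    return q + dt * c * space_derivative(q, dx)

q_next = euler_step(q, c, dt, dx)
```

This is the scheme in its simplest form; as the lecture notes next, it performs poorly for advection problems.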
This allows us to extrapolate to q at the time t + dt; let's call this q+, we will need it later. That's actually a very bad, very diffusive scheme for an advection problem, and it would never be used in a real problem because it simply leads to an inaccurate result. A much better approach is the so-called predictor-corrector scheme. It is also called Heun's method, and it belongs to the family of Runge-Kutta methods. Let's illustrate how this works. We introduce the notion of a linear operator L(q, t); that's the right-hand side of the advection equation. We also call this k1, and in this predictor-corrector scheme, that's the predictor. That's the normal Euler scheme. Now, that allows us to extrapolate, calculating q+ = q(x, t) + dt k1, and we can now plug this back into our linear operator, so we solve the forward problem, the right-hand side of the equation, once more. The result is called k2, and that's our corrector: k2 = L(q+, t + dt). All of this is put together in the final extrapolation scheme: q(x, t + dt) = q(x, t) + 1/2 dt (k1 + k2). So that's the predictor-corrector scheme, which is far better than the original Euler scheme. But there's no free lunch, as we know. The price is that we've solved the forward problem once more, the forward problem here in the sense of the right-hand side, the spatial part of the advection equation, which of course can be very expensive if you have a full three-dimensional problem. But the solution in the end will be much more accurate. So there is a tradeoff, and you have to evaluate whether you want to take this step and have a better extrapolation scheme. That scheme is actually much used in methods like the finite volume method or the discontinuous Galerkin method. So that's an important point.
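The predictor-corrector steps above can be sketched directly in code. This is a minimal illustration, again assuming a centered spatial derivative with periodic boundaries and illustrative parameter values:

```python
import numpy as np

nx, dx = 200, 1.0        # grid parameters (assumed)
c, dt = 0.5, 0.5         # advection velocity, time step (assumed)
x = np.arange(nx) * dx
q = np.exp(-0.01 * (x - nx * dx / 2) ** 2)   # Gaussian initial condition

def L(q, c, dx):
    """Right-hand side of the advection equation: c * dq/dx
    (centered difference, periodic boundaries)."""
    return c * (np.roll(q, -1) - np.roll(q, 1)) / (2 * dx)

def heun_step(q, c, dt, dx):
    """One predictor-corrector (Heun) step."""
    k1 = L(q, c, dx)            # predictor: Euler slope at time t
    q_plus = q + dt * k1        # extrapolated field q+
    k2 = L(q_plus, c, dx)       # corrector: slope at the predicted state
    return q + 0.5 * dt * (k1 + k2)

q_next = heun_step(q, c, dt, dx)
```

Note the tradeoff mentioned in the lecture: L is evaluated twice per time step, so the spatial part of the problem is solved twice.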
Usually, and we don't focus on this in this course, the time extrapolation is also done using a high-order scheme for realistic simulations, because otherwise you would not be efficient in achieving a high accuracy for the solution, at least for a realistic problem. Another powerful concept is the so-called Lax-Wendroff scheme, also called the Cauchy-Kovalewski procedure. To understand it, we first look simply at the Taylor expansion of our field, which in this case is q(x, t + dt) = q(x, t) plus all the terms leading to arbitrarily high order, which can also be expressed using the sum sign. So, in principle, if you sum up to infinity, you should be extremely accurate or even exact. Now, at the same time, if we look again at the equation and take further time derivatives to make it general, on the left-hand side we get the (j+1)-th time derivative, and on the right-hand side we have a space derivative applied to the j-th time derivative. If you look at this equation carefully, it actually means we can calculate arbitrary time derivatives, so the n-th derivative in time, by applying space derivatives to the field at the previous derivative level. That's very powerful, because this is an iterative procedure, and that means we can, in principle, estimate arbitrarily high-order terms in this Taylor expansion and make the time extrapolation as accurate as we want. Actually, there are schemes that are called high-order or arbitrarily high-order in space and time, and they use this approach to have the same order of accuracy in space and time, to make solutions of complex partial differential equations very, very accurate. So this was another example of a way to improve the extrapolation scheme on the left-hand side, as we usually wrote it, of the advection equation in this case, or of any other time-dependent partial differential equation.
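For the advection equation, the iterative replacement of time derivatives by space derivatives means d^j q / dt^j = c^j d^j q / dx^j, so the Taylor extrapolation can be built up term by term. A minimal sketch, with an assumed truncation order and centered, periodic spatial derivatives:

```python
import numpy as np
from math import factorial

def dqdx(q, dx):
    """Centered first derivative in space, periodic boundaries."""
    return (np.roll(q, -1) - np.roll(q, 1)) / (2 * dx)

def lax_wendroff_step(q, c, dt, dx, order=3):
    """Cauchy-Kovalewski time extrapolation for dq/dt = c dq/dx:
    q(x, t + dt) = sum_j dt^j / j! * d^j q/dt^j,
    where each time derivative is converted iteratively into one
    more application of the spatial derivative times c."""
    q_new = q.copy()
    term = q.copy()
    for j in range(1, order + 1):
        term = c * dqdx(term, dx)              # one more "time" derivative
        q_new += dt ** j / factorial(j) * term # j-th Taylor term
    return q_new
```

The truncation order (here 3) controls the accuracy of the time extrapolation; summing more terms approaches the exact Taylor series, as described above.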
There are other approaches that go beyond the scope of this course, for example the Newmark scheme, the Crank-Nicolson scheme, or high-order Runge-Kutta methods. Before we conclude this relatively long part on the finite-difference method, I would like to show once more the simplest operators that we have used, in explicit form as shown here. At the top you see the time stencil, with time in the vertical direction, and at the bottom you see the space stencil, the classic operators for the second time or space derivative. Now, this is called explicit because, remember, we can express the future as a function of everything at the present and the immediate past. Actually, this can be relaxed, and this has been tried in several examples; I just want to show one. These are the so-called optimal operators, where in principle you try to make up for the error you commit in space by compensating it with some terms in time, to come up with a solution that is accurate to a certain order. An example is shown here. Suddenly this so-called stencil, a 3-by-3 matrix in time and space, is full. First of all, that means the scheme is implicit. If you think about it, the future depends on the future: the point x in the middle at t + dt also depends on the neighboring points at t + dt. That may seem bizarre and weird, but this can be solved with some specific techniques. We're not going to go into details here; you can look at this further in the additional material that is available. This actually leads to very, very powerful and very accurate schemes. And I just want to show you an example, which you can also see in our Python notebooks. There is a notebook with this code, and it shows the acoustic wave equation in 1D.
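For reference, the classic explicit stencils recalled above can be written out in a few lines. This is a sketch of the standard scheme, assuming periodic boundaries and constant velocity; it is the baseline that the optimal operators improve upon:

```python
import numpy as np

def second_derivative(p, dx):
    """Classic three-point space stencil [1, -2, 1] / dx^2
    (periodic boundaries)."""
    return (np.roll(p, -1) - 2 * p + np.roll(p, 1)) / dx ** 2

def explicit_step(p, p_old, c, dt, dx):
    """Explicit time stencil for the 1D acoustic wave equation:
    the future is expressed purely from the present and the
    immediate past:
    p(t + dt) = 2 p(t) - p(t - dt) + dt^2 c^2 d2p/dx2."""
    return 2 * p - p_old + dt ** 2 * c ** 2 * second_derivative(p, dx)
```

In contrast, an implicit scheme like the optimal-operator stencil couples the unknown future values at neighboring points, so it cannot be written as a single update like this; it requires solving a system at each step.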
It compares various solutions, among them the optimal-operator solution with the stencils just shown. You can see the wave in the background as it propagates, and we compare a standard finite-difference scheme with the optimal operator. You can actually appreciate that this optimal operator leads to much, much more accurate results. These schemes can be fairly complicated to implement in 2D or 3D, which is maybe one of the reasons why they have not been used as widely as perhaps they should be. But the message here is that even though many other methods, like the spectral element method, for example, might be considered more modern and more powerful, I think it's fair to say that, at least for wave propagation problems without too much geometric complexity, you can come up with very powerful, high-performing finite-difference schemes that basically provide you with a solution as accurate as you want.