In this final segment, we show the formulation of some of the recovery problems we mentioned at the beginning of the class last week. More specifically, we show the formulations of the image super-resolution, video super-resolution, pansharpening, and dual-exposure problems. It should then be clear that these are indeed inverse problems, and in principle we could use any of the techniques we covered in the last two weeks to provide solutions to them. So, let's have a closer look.

We will show next the formulation of these four recovery problems, namely image and video super-resolution, pansharpening, and the dual-exposure problem. We will talk about inpainting, another one of the recovery problems I mentioned earlier, in the last week of classes, when we talk about sparsity. And for some of the other recovery problems we need material that we will start covering next week, namely image and video compression.

We explain here the basic idea behind super-resolution, or so-called geometric super-resolution, in a somewhat idealized way. In the first scenario shown, a single camera takes multiple pictures of the same scene. The assumption is that there is a sub-pixel shift between the acquired images. Each acquisition is a low-resolution image. In the second scenario, multiple cameras take a picture of the same scene. The same underlying assumption holds: there is a sub-pixel shift among the images. In the third scenario, a video camera is recording a dynamic scene. Due to the motion of the objects in the scene, the objects in each frame appear with sub-pixel shifts with respect to their locations in a neighboring frame.

So, in all three scenarios, low-resolution still images or video frames are acquired, as shown here. These frames are related by sub-pixel shifts: globally, as in the first two scenarios, or locally, due to the motion of the objects, as in the last scenario. If we display the low-resolution images on a common grid, for the first two cases of a global shift, the picture shown here results. This, for example, is a full-pixel shift, while this is a sub-pixel shift. So, conceptually, each low-resolution image samples the continuous two-dimensional scene at different points. Therefore, if we estimate the sub-pixel shifts, we can utilize all these observations to construct an image frame at a higher resolution. This is the objective of super-resolution: to estimate the global or local sub-pixel shifts and then generate a regular grid, either a still image or a video frame, at high resolution.

Based on the discussion of the previous slide, we show here the image super-resolution acquisition model, or degradation model. x is the high-resolution image, and the y_k, say y_1, y_2, ..., y_5, are the low-resolution observed images. According to this model, the high-resolution image is first motion-warped. This could mean just a sub-pixel shift, or a rotation component might be present, or additional motion information might be included in this motion-warping operator, which is described by the parameters s_k. The motion-warped high-resolution image is then blurred by a system with point spread function H_k, and this blurred, warped high-resolution image is finally downsampled according to the operator A to generate the low-resolution image. All these operators can be combined into a single operator, as summarized below.
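In compact form, the acquisition model just described can be written as follows. This is a sketch in the lecture's notation; the additive noise term n_k and the number of observations K are assumed from the surrounding discussion rather than read off the slide.

```latex
% Still-image super-resolution acquisition model:
%   x      : high-resolution image
%   C(s_k) : motion-warping operator with parameters s_k
%   H_k    : blurring operator with point spread function H_k
%   A      : downsampling operator
%   n_k    : additive acquisition noise (assumed)
\[
  y_k = A \, H_k \, C(s_k) \, x + n_k
      = B_k \, x + n_k , \qquad k = 1, \dots, K ,
\]
% where B_k = A H_k C(s_k) is the combined degradation operator.
```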
So, the problem is: given the low-resolution observations y_k, k = 1, ..., 5 for this particular example, and knowledge of B_k and s_k, we try to find an estimate of the high-resolution image, given also, perhaps, some knowledge about the noise. Now, the motion parameters need to be estimated. The blur may or may not be known; if it is not known, then we have a blind image super-resolution problem. So it is clearly an inverse problem, a recovery problem, and any of the approaches we have covered so far could potentially be applied to solving it. One could follow a sequential approach, according to which the motion parameters are estimated first, maybe followed by the blur, and then, after both of them are estimated, they are utilized to define B_k and solve this inverse problem. But ideally, as was argued at an earlier point, we would like to estimate all the unknown parameters along with the high-resolution image simultaneously, so that errors incurred during the estimation of, say, the motion parameters are taken into account when estimating the high-resolution image.

We show here the acquisition model for the video super-resolution problem. As you will see, it is very similar to the acquisition model for the still-image super-resolution problem. According to it, each of the high-resolution frames, denoted by x, is blurred by B and downsampled by D, and noise is added to generate the sequence of low-resolution frames y. So the high-resolution frame x at time k is blurred, downsampled, and noise is added, to generate the low-resolution frame y at time k. Now, the high-resolution frame x at time k can also generate additional low-resolution observations at different time instances, according to this formula. That is, the high-resolution frame x at time k is motion-compensated, or motion-warped, and mapped to time i; it is then downsampled, noise is added, and the low-resolution frame at time i is generated. So we see that one high-resolution frame at time k can generate as many low-resolution frames as we would like, at different time instances, based on the motion field. It is exactly the same formulation as in the previous case. The main difference is that C(d_ik), shown here, does not refer to a global motion, a global warping of the frame x_k, but instead to a local one. So we need to find the motion of the moving objects in the scene; that is, we need to find the motion of each and every pixel in the frame at time k with respect to the frame at time i, and this motion field performs the mapping between the two frames. The motion field can be a purely translational field, or it can also include rotation or any other transformation that seems necessary to model the particular data we are working with. A small simulation sketch of this warp-blur-downsample pipeline follows.
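To make the pipeline concrete, here is a minimal Python sketch, using NumPy and SciPy, that simulates low-resolution observations from a high-resolution frame. The purely translational warp, the Gaussian point spread function, and all parameter values are illustrative assumptions, not choices made in the lecture.

```python
import numpy as np
from scipy import ndimage

def observe_lowres(x_hr, shift_yx=(0.5, -0.25), psf_sigma=1.0,
                   factor=2, noise_std=0.01, rng=None):
    """Simulate one low-resolution observation from a high-resolution
    frame: sub-pixel warp -> blur -> downsample -> additive noise."""
    rng = np.random.default_rng() if rng is None else rng
    warped = ndimage.shift(x_hr, shift_yx, order=3, mode='reflect')  # warp C(d) x
    blurred = ndimage.gaussian_filter(warped, sigma=psf_sigma)       # blur B
    lowres = blurred[::factor, ::factor]                             # downsample D
    return lowres + rng.normal(0.0, noise_std, lowres.shape)         # add noise n

# Example: generate five observations with different sub-pixel shifts,
# as in the still-image scenario with global translational motion.
x = np.zeros((64, 64))
x[24:40, 24:40] = 1.0                       # toy high-resolution scene
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5), (0.25, 0.75)]
ys = [observe_lowres(x, s) for s in shifts]
```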
An interesting and more challenging video super-resolution problem occurs when the observed video is compressed. This is indeed the case with many low-end video cameras, which do not make available the original, or source, data, but only a compressed version of them. Now, we have not talked about video compression yet; we will actually start doing so next week, but for the purposes of this discussion we only need the general idea behind it. The notation here is different from the previous slide, but I will not be writing equations; I just want to describe the concept. We start with the high-resolution video, which is acquired by a camera, and, based on an acquisition model like the one we described in the previous slide, the low-resolution video is observed. This is input to a compression system, and the output of the compression system is the low-resolution compressed video, but also information that is generated by the video coder. Such information is a sparse motion field, as well as information about the quantization that was used: the quantization step sizes. So the problem in this case is to utilize the available data, which are the compressed low-resolution video, but also this, you might say, auxiliary information provided by the coding scheme, the motion vectors and the quantization information, and to obtain an estimate of the high-resolution video frames. Clearly, in addition to the acquisition system we described in the previous slide, we also have to model the compression system, and, utilizing these two models, we again want to reverse the path and somehow go from the low-resolution back to the high-resolution data. So this is indeed an inverse, recovery problem, like the ones we have been describing here.

We discuss here the acquisition model for the pansharpening problem, which is also a super-resolution problem. In remote sensing applications, a satellite such as Landsat images part of the Earth, as shown here. Ideally, the sensor on the satellite would have generated high-resolution multispectral images, which we denote by y_b, and let us assume that we have B channels. But instead of generating these high-resolution multispectral images, a spectral decimation takes place: the multispectral images are added up, in this way, and a so-called panchromatic image is generated according to this model. So the panchromatic image x is the weighted sum of the high-resolution multispectral images, and we assume we have B of those channels. The lambdas are obtained from the sensor's spectral characteristics. In addition, a spatial decimation takes place, so each of the multispectral images is downsampled; here is an example, color-coded, low-resolution multispectral image. The model of this decimation is shown here: the high-resolution multispectral image y_b is blurred and then downsampled, noise is added, and the low-resolution multispectral image is produced. Again, we have B of those. So the pansharpening problem is to work with the available data, which are the low-resolution multispectral images and the panchromatic image, and to obtain an estimate of the high-resolution multispectral images. If H is known, it is a non-blind problem. If, however, H is to be estimated, it becomes a blind pansharpening or, as we saw, a blind super-resolution problem. The two decimation steps are summarized below.
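In the lecture's notation, the two decimations can be sketched as follows. The superscript l marking the low-resolution bands is our labeling, and the placement of the noise term follows the verbal description rather than the slide itself.

```latex
% Spectral decimation: the panchromatic image x is the weighted sum of
% the B high-resolution multispectral bands y_b, with weights \lambda_b
% obtained from the sensor's spectral characteristics.
\[
  x = \sum_{b=1}^{B} \lambda_b \, y_b
\]
% Spatial decimation: each band is blurred (H) and downsampled (A),
% and noise n_b is added, producing the low-resolution band y_b^l.
\[
  y_b^{l} = A \, H \, y_b + n_b , \qquad b = 1, \dots, B .
\]
```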
Taking high-quality photographs under low-lighting conditions is a major challenge. A longer exposure is required to obtain an image with low noise, but any motion of the camera causes blur. So the proposed solution is to obtain both a long- and a short-exposure image, as shown here: this is the long-exposure and this is the short-exposure image. Clearly, due to the camera shake, blur is present in the long-exposure image. On the other hand, the short-exposure image is noisy, because there is not enough light, although it is sharp; in addition, there is color loss. So, the objective is to combine these two images, these two observations, and obtain an estimate of the original scene. One could work with the short-exposure image, for example, and remove the noise, but the color loss would still be there. Or one could combine the two to obtain an estimate of the blur, and then work with the long-exposure image and remove the blur with any of the methods we have covered in class so far. However, doing all of this simultaneously, while estimating all the unknown parameters, is, by and large, the direction to go.

The acquisition process can be modeled using two coupled linear, spatially invariant systems. The model for the long exposure is shown here: y_1 = H x + n_1, where y_1 is the observed image, H is the blurring matrix that models the blur introduced by the camera shake, and x is the original image. Due to the camera movement between acquisitions, the image pair has to be registered both photometrically and geometrically. So the short-exposure image y_2 involves the two parameters lambda_1 and lambda_2, which are responsible for the photometric registration, while the matrix, or operator, C accounts for the geometric registration. lambda_1, lambda_2, and C can be part of the problem, or they can be handled separately as a preprocessing step, in which case the observed short-exposure image is simply y_2 = x + n_2. So the problem at hand, in this dual-exposure problem, is to utilize the two observations, y_1 (long exposure) and y_2 (short exposure), and obtain an estimate of the blur and an estimate of the original image x. In principle, the techniques we have covered so far in class could be utilized to solve this particular problem.

So, we have reached the end of week seven. I have to admit that the last two weeks were rather intense. They are not representative of the course, however, in the sense that they are the most intense, or dense, two weeks of the twelve weeks of the course. There are almost four and a half hours of material in these two weeks, and this by itself is a mini course on image recovery. We saw that the material is rather mathematical, utilizing results from linear algebra, optimization, and stochastic processes. But, as I have mentioned multiple times already, do not be too concerned if, for some of you, not everything is crystal clear at this point. You have enough information to study the topic further, and useful algorithms that you can utilize right away. The first week we covered deterministic image restoration approaches, obtaining the solution in a direct or in an iterative fashion. This week we covered stochastic restoration approaches. We derived the celebrated Wiener filter, and also the commonly used maximum likelihood and MAP estimators. We also discussed the fully Bayesian hierarchical approach, which produces some of the state-of-the-art results. I hope you found the material interesting. I believe it will prove to be of use in a number of recovery problems, but also in other applications, since we did not just discuss specific algorithms but more general estimation frameworks. Next week we will start with another exciting topic, that of image and video compression. So, I will see you next time, and it promises to be a lighter week.