So now we know those two optimality conditions. Here is a very quick summary: when we minimize f over a convex feasible region F, if f is convex, we only need to search for a local minimum; if f is concave, we only need to search among the extreme points. And maybe you already see it: in a linear program, f is both convex and concave, so we get both properties at once. First, let me remind you why a linear function is both convex and concave. For a convex function, we need the line segment connecting two function values to lie above the function values; for a concave function, we need the function values to lie above the line segment; and in both cases equality is acceptable. So if you have a linear function, then for any two points you connect, the line segment lies on the graph of the function, which means it is above the function and below the function at the same time. That is why we say a linear function is both. And because both conditions hold, for a linear program a local minimum is a global minimum, and we only need to focus on extreme points. This is how we verify that we are done with the simplex method, and it explains why the simplex method works the way it does: it always moves along extreme points, it focuses only on extreme points, and once it reaches a point where there is no way to improve, it declares an optimal solution. Maybe we want to go through a more rigorous proof. First, we need to show that the feasible region is convex. Consider the feasible region of a linear program, which is the intersection of several half-spaces.
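As a small numerical sketch of that last claim (my own illustration, not part of the lecture): the feasible region of an LP has the form {x : Ax ≤ b}, an intersection of half-spaces, so the segment between any two feasible points should stay feasible. The polyhedron below is a hypothetical example chosen just for this check.

```python
def feasible(A, b, x, tol=1e-9):
    """Check Ax <= b componentwise for a point x."""
    return all(
        sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i + tol
        for row, b_i in zip(A, b)
    )

# Hypothetical polyhedron: x1 + x2 <= 4, x1 >= 0, x2 >= 0
A = [[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [4.0, 0.0, 0.0]

x1, x2 = [0.5, 1.0], [2.0, 1.5]   # two feasible points
assert feasible(A, b, x1) and feasible(A, b, x2)

# Every convex combination lam*x1 + (1-lam)*x2 is again feasible.
for k in range(11):
    lam = k / 10
    z = [lam * u + (1 - lam) * v for u, v in zip(x1, x2)]
    assert feasible(A, b, z)
```

Of course a finite sample is not a proof; the proof itself follows in the next step.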
It is easy to show that half-spaces and hyperplanes are always convex; I think we all agree with that. We also need the fact that the intersection of convex sets is convex, which is left as an exercise. What does that mean? It says that if you have two convex sets, then their intersection is also convex. I think that makes sense: if you take two points in the intersection, the line segment between them lies completely in the first set and also in the second set, so it must also lie in the intersection, right? Try to write down the proof by yourself; it should not be too hard. Then there is the other thing we mentioned: a linear function is both convex and concave. The idea in the picture I drew is that the line segment lies both above and below the original function, but we may also do the derivation algebraically. What we need to show is that when we take two points, evaluate the function at each, and combine the values, we get equality with the case where we first combine the points and then plug the combination into the function. This is not really difficult, because a linear (affine) function may be expressed as f(x) = c^T x + b, where the constant b does not really matter. Once this is the case, everything is separable: you may take lambda outside and 1 − lambda outside, and you get lambda (c^T x1 + b) + (1 − lambda)(c^T x2 + b), which is exactly lambda f(x1) + (1 − lambda) f(x2). Nothing really difficult. So that is how a linear function is both convex and concave; the key is that a linear function is separable.
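The separability derivation above can be checked numerically. This is a minimal sketch with a hypothetical choice of c and b: for an affine f, combining first and evaluating, versus evaluating first and combining, give exactly the same value.

```python
def f(x, c=(2.0, -1.0), b0=3.0):
    # Hypothetical affine function f(x) = c^T x + b0 (c, b0 chosen arbitrarily).
    return sum(ci * xi for ci, xi in zip(c, x)) + b0

x1, x2 = (1.0, 4.0), (-2.0, 0.5)

for k in range(11):
    lam = k / 10
    # Combine first, then evaluate:
    z = tuple(lam * u + (1 - lam) * v for u, v in zip(x1, x2))
    lhs = f(z)
    # Evaluate first, then combine:
    rhs = lam * f(x1) + (1 - lam) * f(x2)
    # Equality means both the convex (<=) and concave (>=) inequalities hold.
    assert abs(lhs - rhs) < 1e-9
```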
So, to solve an LP, as we just mentioned, a search that focuses on extreme points makes sense, works, and is guaranteed to give you a globally optimal solution. That is why the simplex method always gives you an optimal solution: as long as you follow it, it finds that very special local minimum, which is guaranteed to be a global minimum and to lie at an extreme point. So that is the simplex method. But we do not always work on linear programs; in most cases, when we talk about an NLP, it is not linear. So now we define one concept called convex programming. Suppose we have a nonlinear program, and the feasible region, which is the intersection of the regions satisfying each constraint, is convex. If the objective function f is also convex over F, then a local minimum is a global minimum. This is something we will heavily rely on in this lecture and this course; pretty much it says that what we need is a convex program. This is something we already know, so all we need to do is give it a definition: a nonlinear program is a convex program if the feasible region is convex and the objective function is convex over the feasible region. One more thing I would say: in many cases people also consider maximization problems. For a maximization problem, if the feasible region is convex and you are maximizing a concave function, we also say you are doing convex programming. There is no such term as "concave programming"; we only have convex programming. Minimizing a convex function, or maximizing a concave function, over a convex feasible region: in either case we say you are doing convex programming.
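A tiny sketch (my own toy example) of why the maximization case needs no separate theory: maximizing a concave f is exactly minimizing the convex function −f, so both lead to the same point. Here f(x) = −(x − 1)² over the convex interval [0, 3].

```python
def f(x):
    return -(x - 1.0) ** 2        # a concave function; -f is convex

xs = [i / 100 for i in range(301)]        # grid over the interval [0, 3]

x_max = max(xs, key=f)                    # maximize the concave f
x_min = min(xs, key=lambda x: -f(x))      # minimize the convex -f

assert abs(x_max - x_min) < 1e-12         # same optimizer either way
assert abs(x_max - 1.0) < 1e-9            # the unconstrained peak x = 1
```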
Okay, so I guess it is easy to convince all of us that if your program is convex, then a global minimum can be found by looking for a local minimum, so efficient algorithms should exist: gradient descent, Newton's method, and many other methods that we have no time to teach you. These methods exist for solving convex programs, and the field of formulating and solving convex programs is called convex programming. Sometimes we need to show that a set is convex, or that a program is convex. How may we do that? In many cases, we focus on the constraints. The key fact is this: if f is a convex function, and every g_i, where g_i is the left-hand side of a constraint g_i(x) ≤ b_i, is also a convex function, then your nonlinear program is a convex program. What does that mean? Let's take a look at the proof so that you get some ideas. Basically, if f is convex, we are done with part of the condition, so now we only need to prove that the feasible region is convex. That is, we need to show that each set F_i = {x : g_i(x) ≤ b_i}, defined by a convex function g_i, is convex; if each constraint gives you a convex feasible set, then their intersection is also convex. So we only need to focus on this fact: as long as g_i is convex, the corresponding set F_i is also convex. All we need to do is apply the definition. If x1 and x2 are in F_i, we take their convex combination and plug it into the g_i function, and we want the resulting inequality to hold.
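The step being set up here, that a sublevel set {x : g(x) ≤ b} of a convex g is convex, can be sketched numerically before we see the proof. This is my own one-dimensional toy example with g(x) = x², which is convex, and b = 4.

```python
def g(x):
    return x * x              # a convex function; F_i = {x : g(x) <= b_i}

b_i = 4.0
x1, x2 = -1.5, 2.0            # both points lie in F_i
assert g(x1) <= b_i and g(x2) <= b_i

for k in range(11):
    lam = k / 10
    z = lam * x1 + (1 - lam) * x2
    # Convexity: g(z) <= lam*g(x1) + (1-lam)*g(x2), and the right-hand
    # side is at most lam*b_i + (1-lam)*b_i = b_i, so z stays in F_i.
    assert g(z) <= lam * g(x1) + (1 - lam) * g(x2) + 1e-9
    assert g(z) <= b_i + 1e-9
```

The chain of inequalities in the comments is exactly the proof that follows.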
The interesting thing is that since g_i is convex, if we split the convex combination into two functional evaluations and then combine them, convexity gives us g_i(lambda x1 + (1 − lambda) x2) ≤ lambda g_i(x1) + (1 − lambda) g_i(x2). I hope that makes sense. We also know that g_i(x1) ≤ b_i and g_i(x2) ≤ b_i, because both points are in the set, so the combination is at most lambda b_i + (1 − lambda) b_i = b_i. What does that mean? It means the new point obtained by the convex combination also satisfies the constraint g_i(·) ≤ b_i, so that point also lies in the set, and F_i is convex. Each convex function formulated in this way gives you a set that is convex, the intersection of all these sets is also convex, and then we are done: we have a convex program. So let me give a quick conclusion of this section. We have learned several algorithms in the past, right? Gradient descent, Newton's method. At that time, we had no way to check whether the reported solution was really globally optimal; all we knew was that it should not be too bad. But now we should be convinced that we may efficiently solve convex programs, because convex programs have a nice property: a local minimum is a global minimum, so searching for a local minimum is not too difficult. People can do it, even if you have a lot of constraints, as long as it is a convex program. On the other hand, if you have a general nonlinear program that is not a convex program, then people really do not have a specific way to solve it. Later we will focus on how to analytically solve nonlinear programs, and we will see that those analytical solutions are the foundation for some managerial insights.
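To close the loop between the algorithms and the theory, here is a minimal sketch of projected gradient descent on a small convex program of my own choosing: minimize f(x) = (x − 3)² subject to 0 ≤ x ≤ 2. Both f and the feasible interval are convex, so the local minimum the iteration finds (the boundary point x = 2) is guaranteed to be the global one. The step size and iteration count are arbitrary choices for this sketch.

```python
def grad(x):
    # Gradient of the convex objective f(x) = (x - 3)^2
    return 2.0 * (x - 3.0)

def project(x, lo=0.0, hi=2.0):
    # Projection onto the convex feasible interval [lo, hi]
    return max(lo, min(hi, x))

x = 0.0                       # start from a feasible point
for _ in range(200):
    x = project(x - 0.1 * grad(x))   # gradient step, then project back

assert abs(x - 2.0) < 1e-6    # the global minimizer over [0, 2]
```

For a non-convex objective the same iteration could stall at a merely local minimum, which is exactly the gap the analytical tools in the coming lectures address.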