We're going to start our divide and conquer algorithms with what might be considered a degenerate form of divide and conquer: searching in an unsorted array using linear search. Here's an example of an array. To find a particular element of the array, we look at the first element; if it's not there, we look at the second element. We continue until we either find the element we're interested in, or until we reach the end of the array. This same type of search is also used to find elements that are stored in a linked list. Let me describe a real-life use of linear search. Twenty years ago I did consulting for a company developing software for one of the first hand-held computers, the Apple Newton. The application translated words between any two of the languages English, French, Italian, German, and Spanish. Its data was stored in five parallel arrays. So, for example, car was in the English array at the second position. In Spanish, car is auto, so the second position of the Spanish array contained auto. The program would take the user's input word along with the from and to languages. Then it would search through the corresponding from array, the English array, for example, when trying to translate car from English to Spanish. If it found a match, it returned the element at the same index location in the target-language array. With a small dictionary of three words, as in this example, this linear search is quick. However, I was brought in as a consultant to speed up the application. When users clicked on the translate button, it would take seven to ten seconds to retrieve the translated word, an eternity as far as the user was concerned. There were about 50,000 words in the dictionary, so on average it took 25,000 word checks to find a match. In the next video, I'll show you how we sped up this application using binary search. The problem statement for linear search is as follows: given an unsorted array with n elements in it and a key k, find an index i of an array element that's equal to k.
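The parallel-array lookup described above can be sketched as follows. This is a minimal illustration, not the actual Newton code; the tiny three-word dictionaries, the function name, and the NOT_FOUND sentinel are my own choices.

```python
# Hypothetical parallel dictionaries, one array per language.
# The same index in each array holds translations of the same word.
ENGLISH = ["house", "car", "dog"]
SPANISH = ["casa", "auto", "perro"]

NOT_FOUND = -1

def translate(word, from_array, to_array):
    """Linearly search the from-language array; on a match,
    return the element at the same index in the to-language array."""
    for i, candidate in enumerate(from_array):
        if candidate == word:
            return to_array[i]
    return NOT_FOUND

print(translate("car", ENGLISH, SPANISH))  # auto
```

With 50,000 words instead of three, this loop performs about 25,000 comparisons on average per lookup, which is exactly the slowness the consulting job was meant to fix.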
If no element of the array is equal to k, the output should be NOT_FOUND. Note that we say an index rather than the index, to account for the fact that there may be duplicates in the array. This might seem pedantic, but it's important to be as careful as possible in specifying our problem statement. The well-known solution to this problem is a linear search: iterate through the array until you find the chosen element. If you reach the end of the array and haven't yet found the element, return NOT_FOUND. We can construct a divide and conquer recursive algorithm to solve this problem. Our recursive function will take four parameters: A, the array of values; low, the lower bound of the range in which to search; high, the upper bound of the range in which to search; and k, the key for which to search. It will return either an index in the range low to high, if it finds a matching value, or NOT_FOUND, if it finds no such match. As with all recursive solutions, we'll need to handle the base cases accurately. In particular, the base cases for this problem are either being given an empty array or finding a match on the first element. The subproblem is to search through the subarray constructed by skipping the first element. We'll recursively search through that smaller subarray, and then just return the result of the recursive search. Although this is a recursive routine that breaks the problem into smaller problems, some would argue that this shouldn't be called divide and conquer. They claim that a divide and conquer algorithm should divide the problem into a smaller subproblem, where the smaller subproblem is some constant fraction of the original problem. In this case the subproblem isn't 50%, or 80%, or even 95% of the original problem size. Instead, it's just one smaller than the original problem size. I don't know, maybe we should call this algorithm subtract and conquer rather than divide and conquer.
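The recursive routine described above might look like this in Python. It's a sketch under the stated parameter conventions (A, low, high, and the key); the NOT_FOUND value of -1 is my own choice of sentinel.

```python
NOT_FOUND = -1

def linear_search(A, low, high, key):
    # Base case 1: empty range, so the key cannot be present.
    if high < low:
        return NOT_FOUND
    # Base case 2: match on the first element of the range.
    if A[low] == key:
        return low
    # Subproblem: the subarray that skips the first element.
    return linear_search(A, low + 1, high, key)
```

Note that each call does a constant amount of work and then recurses on a problem exactly one element smaller, which is why some would call this subtract and conquer.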
In order to examine the runtime of our recursive algorithm, it's often useful to express the time that the algorithm takes in the form of a recurrence relation. A recurrence relation defines a sequence of values in terms of a recursive formula. The example here shows the recursive definition of the values in the Fibonacci sequence. You can see that we define the value of the n'th Fibonacci number as the sum of the preceding two values. As with any recursive definition, we need one or more base cases. Here, we define base cases for evaluating F(0) and F(1). From this recursive definition, we've defined the value of F(n) for any non-negative integer n. The sequence starts with 0, 1, 1, 2, 3, 5, 8, and continues on. When we're doing runtime analysis for divide and conquer algorithms, we usually define a recurrence relation for T(n), where T stands for the worst-case time taken by the algorithm, and n is the size of the problem. For this algorithm, the worst-case time is when an element isn't found, because we must check every element of the array. In this case we have a recursion for a problem of size n which consists of a subproblem of size n minus one plus a constant amount of work. The constant amount of work includes checking high versus low, checking whether A at low equals the key, preparing the parameters for the recursive call, and then returning the result of that call. Thus the recurrence is T(n) equals T(n-1) plus c, where c is some constant. The base case of the recursion is an empty array, where there's a constant amount of work: checking high less than low and then returning NOT_FOUND. Thus T(0) equals c. Let's look at a recursion tree in order to determine how much total time the algorithm takes. As is normal, we're looking at worst-case runtime, which will occur when no matching element is found. In a recursion tree, we show each problem along with its size.
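Unrolling the recurrence makes the total explicit; with T(n) = T(n-1) + c and T(0) = c, each unrolling step strips off one more c:

```latex
\begin{align*}
T(n) &= T(n-1) + c \\
     &= T(n-2) + 2c \\
     &\;\;\vdots \\
     &= T(0) + nc \\
     &= c + nc = (n+1)\,c = \Theta(n)
\end{align*}
```

This is the same sum that the recursion tree computes level by level.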
We see that we have an original problem of size n, which then generates a subproblem of size n-1, and so on all the way down to a subproblem of size zero: an empty array. The work column shows the amount of work that is done at each level. We have a constant amount of work at each level, which we represent by c, a constant. Alternatively, we could have represented this constant amount of work with big theta of one. The total work is just the sum of the work done at each level: a summation from zero to n of a constant c, which is n plus one times c, or just big theta of n. This analysis seems overly complicated for such a simple result. We already know that searching through n elements of the array will take big theta of n time. However, this method of recurrence analysis will become more useful as we analyze more complicated divide and conquer algorithms. Many times a recursive algorithm is translated into an iterative one. Here we've done that for the linear search. We search through the elements of array A from index low to index high. If we find a match, we return the associated index. If not, we return NOT_FOUND. To summarize, what we've done is, one: created a recursive solution; two: defined a corresponding recurrence relation, T; three: solved T of n to determine the worst-case runtime; and four: created an iterative solution from the recursive one. What you've seen in this video, then, is an example of a trivial use of the divide and conquer technique in order to do a linear search. In our next video we'll look at a non-trivial use of the divide and conquer technique for searching in a sorted array: the well-known binary search.
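The iterative translation described above can be sketched like this; as before, the function name and the -1 sentinel for NOT_FOUND are my own choices, since the transcript doesn't show the actual code.

```python
NOT_FOUND = -1

def linear_search_iterative(A, low, high, key):
    """Search A[low..high] (inclusive) for key; return an index or NOT_FOUND."""
    for i in range(low, high + 1):
        if A[i] == key:
            return i
    return NOT_FOUND
```

Because the recursive version's call is in tail position, turning it into this loop is mechanical: the recursion's low + 1 becomes the loop's next iteration.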