heapsort(list L)
{
    make heap H from L
    make empty list X
    while H nonempty
        remove smallest from H and add it to X
    return X
}

Remember that a heap is just a balanced binary tree in which the value at any node is smaller than the values at its children. We went over most of this last time. The total number of comparisons is n log n plus however many are needed to make H.
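For concreteness, here is a minimal runnable sketch of this outline in Python, using the standard heapq module to supply the heap operations; the function name and the use of heapq are illustrative assumptions, not part of the notes.

    import heapq

    # Sketch of heapsort built on heapq.
    def heapsort(L):
        H = list(L)
        heapq.heapify(H)                   # make heap H from L
        X = []                             # make empty list X
        while H:                           # while H nonempty
            X.append(heapq.heappop(H))     # remove smallest from H, add to X
        return X

    # Example: heapsort([5, 1, 4, 2, 3]) returns [1, 2, 3, 4, 5].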
The only missing step: how to make a heap? To start with, we can set up a binary tree of the right size and shape, and put the objects into the tree in any old order. This is all easy and doesn't require any comparisons. Now we have to switch objects around to get them back in order.
The divide and conquer idea: find natural subproblems, solve them recursively, and combine them to get an overall solution. Here the obvious subproblems are the subtrees. If we solve them recursively, we get something that is close to being a heap, except that perhaps the root doesn't satisfy the heap property. To make the whole thing a heap, we merely have to percolate that value down to a lower level in the tree.
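As a concrete sketch of this idea, here is one way to do it in Python on an array-based complete binary tree, where the children of node i sit at positions 2i+1 and 2i+2; the array layout and function names are assumptions for illustration, not part of the notes.

    # Percolate the value at node i down until the heap property holds.
    def percolate_down(a, i, n):
        while True:
            smallest = i
            for child in (2*i + 1, 2*i + 2):
                if child < n and a[child] < a[smallest]:
                    smallest = child
            if smallest == i:
                return                                # heap property satisfied
            a[i], a[smallest] = a[smallest], a[i]     # switch with smallest child
            i = smallest

    # Recursively heapify both subtrees, then fix up the root.
    def heapify(a, i=0, n=None):
        if n is None:
            n = len(a)
        if i >= n:
            return
        heapify(a, 2*i + 1, n)        # heapify left subtree
        heapify(a, 2*i + 2, n)        # heapify right subtree
        percolate_down(a, i, n)       # percolate the root value down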
heapify(tree T)
{
    if (T is nonempty)
    {
        heapify(left subtree)
        heapify(right subtree)
        let x = value at tree root
        while node containing x doesn't satisfy heap property
            switch values of node and its smallest child
    }
}

The while loop performs two comparisons per iteration and takes at most log n iterations, so the time for this satisfies a recurrence
    T(n) <= 2 T(n/2) + 2 log n

How to solve it?
We analyze this in some generality: suppose we divide a problem into a pieces, each of size n/b, and that merging the solutions takes time f(n). (In the heapification example a=b=2 and f(n)=O(log n), but it will not always be true that a=b -- sometimes the pieces will overlap.)
The easiest way to understand what's going on here is to draw a tree with nodes corresponding to subproblems (labeled with the size of the subproblem):
                     n
                /    |    \
            n/b     n/b     n/b
            /|\     /|\     /|\
           . . .   . . .   . . .

For simplicity, let's assume n is a power of b, and that the recursion stops when n is 1. Notice that the size of a node depends only on its level:
    size(i) = n / b^i

What is the time taken by a node at level i?
    time(i) = f(n / b^i)

How many levels can we have before we get down to n=1? At the bottom level, n/b^i = 1, so n = b^i and i = (log n)/(log b). How many nodes are there at level i? a^i. Putting these together, we have
           (log n)/(log b)
    T(n) =       sum         a^i f(n/b^i)
                i = 0

This looks messy, but it's not too bad. There are only a few terms (logarithmically many) and often the sum is dominated by the terms at one end (f(n)) or the other (n^(log a/log b)). In fact, you will generally only be a logarithmic factor away from the truth if you approximate the solution by the sum of these two, O(f(n) + n^(log a/log b)).
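If you want to see how close that approximation is, a small sketch like the following (the function name and test values are assumptions, not part of the notes) evaluates the sum directly, for n a power of b:

    import math

    # T(n) = sum over levels i of a^i f(n/b^i).
    def recurrence_sum(n, a, b, f):
        levels = round(math.log(n, b))              # (log n)/(log b)
        return sum(a**i * f(n // b**i) for i in range(levels + 1))

    # With the heapification parameters a = b = 2 and f(m) = 2 log m:
    # recurrence_sum(1024, 2, 2, lambda m: 2 * math.log2(m)) gives 4072.0,
    # a bit under 4n = 4096.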
Let's use this to analyze heapification. Plugging in the parameters a=b=2 and f(n) = 2 log n (the cost of the percolate-down step at the root), we get
               log n
    T(n) = 2 *  sum   2^i log(n/2^i)
               i = 0

Rewriting the same terms in the opposite order (replacing i by log n - i), this turns out to equal
               log n
    T(n) = 2 *  sum   (n/2^i) log(2^i)
               i = 0

               log n
         = 2n * sum   i/2^i
               i = 0

              infinity
        <= 2n * sum    i/2^i
               i = 0

         = 4n

So heapification takes at most 4n comparisons, and heapsort takes at most n log n + 4n. (There's an n log n - 1.44n lower bound, so we're only within O(n) of the absolute best possible.)
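A quick empirical check of the 4n bound is easy to write; the sketch below (an assumption, using the array layout from the earlier sketch) counts the comparisons made while building a heap from a shuffled array.

    import random

    # Count comparisons made by recursive heapification of array a.
    def heapify_comparisons(a):
        n, count = len(a), 0
        def build(i):
            nonlocal count
            if i >= n:
                return
            build(2*i + 1)                     # heapify left subtree
            build(2*i + 2)                     # heapify right subtree
            j = i
            while True:                        # percolate the root value down
                smallest = j
                for child in (2*j + 1, 2*j + 2):
                    if child < n:
                        count += 1
                        if a[child] < a[smallest]:
                            smallest = child
                if smallest == j:
                    break
                a[j], a[smallest] = a[smallest], a[j]
                j = smallest
        build(0)
        return count

    a = [random.random() for _ in range(1000)]
    print(heapify_comparisons(a), "comparisons; 4n =", 4 * len(a))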
This was an example of a sorting algorithm where one part used divide and conquer. What about doing the whole algorithm that way?
Let's look at the combine step first. Suppose you have some data that's close to sorted -- it forms two sorted lists. You want to merge the two sorted lists quickly rather than having to resort to a general purpose sorting algorithm. This is easy enough:
merge(L1, L2)
{
    list X = empty
    while (neither L1 nor L2 empty)
    {
        compare first items of L1 & L2
        remove smaller of the two from its list
        add to end of X
    }
    catenate remaining list to end of X
    return X
}

Time analysis: in the worst case both lists empty at about the same time, so everything has to be compared. Each comparison adds one item to X, so the worst case is |X|-1 = |L1|+|L2|-1 comparisons. One can sometimes do a little better, e.g. if everything in L1 is smaller than most of L2, so that L1 empties early.
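Here is the same merge step as a runnable Python sketch; list indices stand in for the "remove from the front" operations, and the names are illustrative.

    # Merge two sorted lists L1 and L2 into one sorted list X.
    def merge(L1, L2):
        X, i, j = [], 0, 0
        while i < len(L1) and j < len(L2):     # neither list exhausted
            if L1[i] <= L2[j]:                 # compare first remaining items
                X.append(L1[i]); i += 1        # remove smaller, add to X
            else:
                X.append(L2[j]); j += 1
        X.extend(L1[i:])                       # catenate remaining list to X
        X.extend(L2[j:])
        return X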
Once we know how to combine two sorted lists, we can construct a divide and conquer sorting algorithm that simply divides the list in two, sorts the two recursively, and merges the results:
merge sort(L)
{
    if (length(L) < 2)
        return L
    else
    {
        split L into lists L1 and L2, each of n/2 elements
        L1 = merge sort(L1)
        L2 = merge sort(L2)
        return merge(L1, L2)
    }
}

This is simpler than heapsort (so easier to program) and works pretty well.
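In Python, using the merge function sketched above, merge sort is only a few lines (splitting by slicing is one of the obvious ways to divide the list):

    # Merge sort: split in half, sort each half recursively, then merge.
    def merge_sort(L):
        if len(L) < 2:
            return L
        mid = len(L) // 2
        return merge(merge_sort(L[:mid]), merge_sort(L[mid:]))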
How many comparisons does it use? We can use the analysis of the merge step to write down a recurrence:

    C(n) <= n-1 + 2 C(n/2)

As you saw in homework 1.31, for n a power of 2, the solution to this is n log n - n + 1. For other n it's similar but more complicated. To prove this (at least the power-of-2 version), you can use the formula above to produce
             log n
    C(n) <=   sum   2^i (n/2^i - 1)
             i = 0

             log n
          =   sum   (n - 2^i)
             i = 0

          =  n(log n + 1) - (2n - 1)

          =  n log n - n + 1

So the number of comparisons is even less than for heapsort.
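If you want to check this count empirically, the following sketch (an assumption, not from the notes) counts the comparisons merge sort makes on a shuffled list with n a power of 2 and compares them with n log n - n + 1:

    import math, random

    # Return the sorted list together with the number of comparisons used.
    def merge_sort_count(L):
        if len(L) < 2:
            return L, 0
        mid = len(L) // 2
        A, ca = merge_sort_count(L[:mid])
        B, cb = merge_sort_count(L[mid:])
        X, i, j, c = [], 0, 0, 0
        while i < len(A) and j < len(B):
            c += 1                             # one comparison per item added
            if A[i] <= B[j]:
                X.append(A[i]); i += 1
            else:
                X.append(B[j]); j += 1
        X.extend(A[i:]); X.extend(B[j:])
        return X, ca + cb + c

    n = 1024
    L = list(range(n)); random.shuffle(L)
    _, c = merge_sort_count(L)
    print(c, "<=", n * round(math.log2(n)) - n + 1)    # bound: n log n - n + 1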
Merge sort worked no matter how you split the lists (one obvious way is to take first n/2 and last n/2 elements, another is to take every other element). But if you could perform the splits so that everything in one list was smaller than everything in the other, this information could be used to make merging much easier: you could merge just by concatenating the lists.
How do we split so that everything in one list is smaller than everything in the other? For alphabetical order, for example, you could split into A-M and N-Z; that is, you could use some split that depends on what the data looks like. But we want a comparison sorting algorithm that works for any data.
Quicksort uses a simple idea: pick one object x from the list, and split the rest into those before x and those after x.
quicksort(L)
{
    if (length(L) < 2)
        return L
    else
    {
        pick some x in L
        L1 = { y in L : y < x }
        L2 = { y in L : y > x }
        L3 = { y in L : y = x }
        quicksort(L1)
        quicksort(L2)
        return concatenation of L1, L3, and L2
    }
}

(We don't need to sort L3 because everything in it is equal.)
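A direct Python transcription of this pseudocode might look as follows (a sketch; here x is simply the first element, a choice the next paragraphs show can be bad):

    # Quicksort: partition around x, sort the two sides, concatenate.
    def quicksort(L):
        if len(L) < 2:
            return L
        x = L[0]                          # pick some x in L
        L1 = [y for y in L if y < x]
        L2 = [y for y in L if y > x]
        L3 = [y for y in L if y == x]
        return quicksort(L1) + L3 + quicksort(L2)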
The partition step compares x against each of the other elements, so the number of comparisons satisfies

    C(n) = n-1 + C(a) + C(b)

where a and b are the sizes of L1 and L2, generally satisfying a+b = n-1. In the worst case, we might pick x to be the minimum element in L. Then a=0, b=n-1, and the recurrence simplifies to C(n) = n-1 + C(n-1), which solves to O(n^2). So this seems like a very bad algorithm.
Why do we call it quicksort? How can we make it less bad? Randomization!
Suppose we pick the pivot x by choosing a position k uniformly at random. Then each possible value of a (the size of L1), from 0 to n-1, is equally likely. To do average case analysis, we write out the sum, over the possible random choices, of the probability of that choice times the time for that choice. Here the choices are the values of k, the probabilities are all 1/n, and the times can be described by formulas involving the time for the recursive calls to the algorithm. So average case analysis of a randomized algorithm gives a randomized recurrence:
           n-1
    C(n) = sum  (1/n) [n - 1 + C(a) + C(n-a-1)]
           a=0

To simplify the recurrence, note that if C(a) occurs in one place in the sum, the same number will occur as C(n-a-1) in another term -- so we rearrange the sum to group the two together. We can also take the (n-1) part out of the sum, since the sum of n copies of (1/n)(n-1) is just n-1.
                   n-1
    C(n) = n - 1 + sum  (2/n) C(a)
                   a=0
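In the quicksort sketch from before, the only change needed for this randomized version is the pivot choice; random.randrange is used here as an assumed way to pick the position k uniformly.

    import random

    # Randomized quicksort: the pivot position k is chosen uniformly at random.
    def randomized_quicksort(L):
        if len(L) < 2:
            return L
        x = L[random.randrange(len(L))]       # x at a uniformly random position
        L1 = [y for y in L if y < x]
        L2 = [y for y in L if y > x]
        L3 = [y for y in L if y == x]
        return randomized_quicksort(L1) + L3 + randomized_quicksort(L2)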
The book gives two proofs that this is O(n log n); of these, induction is easier. One useful idea here: we want to prove that f(n) is O(g(n)), but the O() hides too much information to do induction directly. Instead we prove f(n) <= a g(n) for some constant a, without knowing in advance what value a should take; we carry out the argument with a left as a variable, then use the analysis to see which values of a make it work.
We have C(1) = 0 = a (1 log 1) for all a. Suppose C(i) <= a i log i for some a, all i<n. Then
    C(n)  =  n - 1 + sum(i=0 to n-1) (2/n) C(i)

         <=  n - 1 + sum(i=2 to n-1) (2/n) a i log i      (the i=0 and i=1 terms are zero)

          =  n - 1 + (2a/n) sum(i=2 to n-1) i log i

         <=  n - 1 + (2a/n) integral(2 to n) x log x dx

          =  n - 1 + (2a/n) (n^2 log n / 2 - n^2/4 - 2 ln 2 + 1)

          =  n - 1 + a n log n - an/2 - O(1)

and this will work if n-1 <= an/2, and in particular if a=2. So we can conclude that C(n) <= 2 n log n.
Note that this is worse than either merge sort or heapsort, and it requires a random number generator to avoid being really bad. But it's pretty commonly used, and it can be tuned in various ways to work better. (For instance, let x be the median of three randomly chosen values rather than just one value.)
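That median-of-three tuning could look like the following sketch (an assumption for illustration; picking the three samples with replacement keeps the code short):

    import random

    # Pick three random elements of L and use their median as the pivot x.
    def median_of_three_pivot(L):
        samples = [L[random.randrange(len(L))] for _ in range(3)]
        return sorted(samples)[1]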
ICS 161 -- Dept. Information & Computer Science -- UC Irvine