
15-111 Lecture 14 (Monday, February 16, 2009)

Merging Two Sorted Lists

Let's imagine that we have two sorted lists:

  1 3 5 8
  2 4 6 7

We can merge these two lists together into a single sorted list in O(n) time. The idea is that we start at the beginning of each list and compare values. We copy the lowest value into the new list, and advance our position in that list. We repeat this compare-and-advance process until we've emptied one of the lists. At that point, we dump the remaining items into the merged sorted list.

The algorithm is as follows:

  int[] mergeSorted(int[] numbers1, int count1, int[] numbers2, int count2) {

    int[] mergedNumbers = new int[count1+count2];

    int index1 = 0;
    int index2 = 0;
    int indexM = 0;

    // Compare the current item in each list; copy the smaller
    // and advance in that list
    while ((index1 < count1) && (index2 < count2)) {
      if (numbers1[index1] < numbers2[index2])
        mergedNumbers[indexM++] = numbers1[index1++];
      else
        mergedNumbers[indexM++] = numbers2[index2++];
    }

    // One list is empty; dump the remainder of the other
    while (index1 < count1)
      mergedNumbers[indexM++] = numbers1[index1++];

    while (index2 < count2)
      mergedNumbers[indexM++] = numbers2[index2++];

    return mergedNumbers;
  }

The reason that this works is that both lists are initially sorted. So, we'll never need to go backwards in a list. As we move forward through each list, we need only consider the current item. And, once one list is empty, everything in the other list is necessarily greater than everything copied so far. And, even better, those items are already in the right order for the new, merged list.

Remember, the lists are sorted. So, if we've already emptied one list, but have things remaining in the other list -- those things must be greater. And, since they, themselves, are ordered, they are in the correct order.
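As a quick check of the algorithm above, here is a small driver (the class name MergeDemo and the example lists are mine, not from the lecture) that runs the compare-and-advance merge on two sorted lists:

```java
public class MergeDemo {

    // Same compare-and-advance merge described above
    static int[] mergeSorted(int[] numbers1, int count1, int[] numbers2, int count2) {
        int[] mergedNumbers = new int[count1 + count2];
        int index1 = 0, index2 = 0, indexM = 0;

        // Copy the smaller of the two current items, then advance in that list
        while ((index1 < count1) && (index2 < count2)) {
            if (numbers1[index1] < numbers2[index2])
                mergedNumbers[indexM++] = numbers1[index1++];
            else
                mergedNumbers[indexM++] = numbers2[index2++];
        }

        // One list is empty; dump the remainder of the other
        while (index1 < count1) mergedNumbers[indexM++] = numbers1[index1++];
        while (index2 < count2) mergedNumbers[indexM++] = numbers2[index2++];

        return mergedNumbers;
    }

    public static void main(String[] args) {
        int[] merged = mergeSorted(new int[]{1, 3, 5, 8}, 4, new int[]{2, 4, 6, 7}, 4);
        System.out.println(java.util.Arrays.toString(merged));
        // prints [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```

Notice that each of the eight items is examined exactly once, which is why the merge is O(n).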

Merge Sort

It is worth noting that a list containing zero or one items is necessarily sorted. This leads to an idea for a sort. We can recursively divide a list in halves, until we end up with lists of size one. Then, as the recursion unwinds, we can merge each of the pairs of sorted lists that we created going down back together into a single -- now sorted -- list.

Please consider the following:

  5 2 7 8 9 1 6 3
  5 2 7 8     9 1 6 3
  5 2     7 8     9 1     6 3
  5     2     7     8     9     1     6     3
  -- End recursive division, Begin merging while unwinding -- 

  2 5     7 8     1 9     3 6
  2 5 7 8     1 3 6 9
  1 2 3 5 6 7 8 9

Temporary Space and The Copy-Over

When implementing a merge sort, it is necessary to use temporary space to store the numbers while merging. Because the last step is to merge together the two halves of the original list, the temporary space needs to be as big as the original list, itself. This is a significant cost.

In theory, this space could be allocated and freed within each of the recursive calls. But, this is a lot of redundant work and doesn't reduce the maximum space requirement. So, implementations generally allocate a buffer the same size as the original array, and then just use an increasingly large portion of it within each merge operation as the recursion unwinds.

The merge operation described above is a bit naive in that it might copy over extra values. Consider what happens when sorting an already sorted list. As the example below shows, there is no need to copy from the temporary array back to the original array. They contain the same things.

  1 2 3 5 6 7 8 9
  1 2 3 5     6 7 8 9
  1 2     3 5     6 7     8 9
  1     2     3     5     6     7     8     9
  -- End recursive division, Begin merging while unwinding -- 

  1 2     3 5     6 7     8 9
  1 2 3 5     6 7 8 9
  1 2 3 5 6 7 8 9

In general, if we consider our left and right partitions, we do not need to copy over the first portion of our left partition that matches up to the beginning of the list. And, we do not need to copy over the last portion of our right partition that matches up to the end of the list. The matching prefix and/or suffix can be of size zero, or larger. This optimization is extremely useful in the case of sorting an already sorted, or mostly sorted, list -- in which case no (or very little) copying is done.

For a quick example of how this works, please see below:

  1 2 7 8 6 9 10 11
  1 2 7 8     6 9 10 11
  1 2     7 8     6 9     10 11
  1     2     7     8     6     9     10     11
  -- End recursive division, Begin merging while unwinding -- 
  1 2     7 8     6 9     10 11
  * *     * *     * *     ** **   ..... no need to copy anything

  1 2 7 8     6 9 10 11
  * * * *     * * ** **   ..... still no need to copy anything

  1 2 6 7 8 9 10 11
  * *       * ** **   ..... only the "middle" needs to be copied.

The prefix portion of this optimization is implemented by recording the left index the first time that the right index is advanced. This represents the first time that a value from the right partition is interleaved into the merged list, denoting the end of the left prefix that can stay in place.

A right suffix exists only if there are items remaining in the right list at the conclusion of the merge loop. These items need not be copied over -- they were at the end and are remaining at the end.

So, putting the two together, we find that it is only necessary to copy over those items between our recorded index and the right index at the time we exited the merge loop.
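The two bookkeeping tricks just described can be sketched as follows. This is my own rendering of the optimization, not the lecture's code; the variable name copyStart is an assumption, chosen to mark the recorded index where copying must begin:

```java
public class OptimizedMergeSketch {

    // Merge numbers[leftStart..rightStart-1] with numbers[rightStart..rightEnd],
    // copying back only the region that could actually have changed.
    static void merge(int[] numbers, int[] temp,
                      int leftStart, int rightStart, int rightEnd) {
        int leftEnd = rightStart - 1;
        int leftPosition = leftStart;
        int rightPosition = rightStart;
        int tempPosition = leftStart;

        // copyStart records the left index the first time the right index
        // advances; everything before it is an in-place left prefix.
        int copyStart = -1;

        while ((leftPosition <= leftEnd) && (rightPosition <= rightEnd)) {
            if (numbers[leftPosition] < numbers[rightPosition]) {
                temp[tempPosition++] = numbers[leftPosition++];
            } else {
                if (copyStart < 0) copyStart = tempPosition; // right value interleaved
                temp[tempPosition++] = numbers[rightPosition++];
            }
        }

        // If no right value was ever interleaved, nothing moved at all.
        if (copyStart < 0) return;

        // Remaining left items must slide over. Remaining right items, if any,
        // are already at the end and in order -- the right suffix needs no copy.
        while (leftPosition <= leftEnd)
            temp[tempPosition++] = numbers[leftPosition++];

        // Copy back only [copyStart, tempPosition): the untouched left prefix
        // and the untouched right suffix stay where they are.
        for (int i = copyStart; i < tempPosition; i++)
            numbers[i] = temp[i];
    }

    public static void main(String[] args) {
        int[] numbers = {1, 2, 7, 8, 6, 9, 10, 11};  // the example above
        int[] temp = new int[numbers.length];
        merge(numbers, temp, 0, 4, 7);
        System.out.println(java.util.Arrays.toString(numbers));
        // prints [1, 2, 6, 7, 8, 9, 10, 11]; only indices 2..4 were written back
    }
}
```

On the already sorted half of the example, copyStart is never set and the copy-back is skipped entirely.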

Merge Sort Implementation (Straight-Forward)

Please find below an unoptimized version of merge sort. You are encouraged to use it as the basis for an optimized version of your own construction.

  private static void mergeSort(int[] numbers, int[] temp, int left, int right){

    if (left >= right) return;

    int middle = (left + right) / 2;

    mergeSort(numbers, temp, left, middle); 
    mergeSort(numbers, temp, middle+1, right); 

    merge (numbers, temp, left, middle+1, right);
  }

  private static void merge (int[] numbers, int[] temp, 
                             int leftStart, int rightStart, int rightEnd) {
    int leftEnd = rightStart-1;

    // Could really just re-label arguments, since we don't need to 
    // keep the start positions for anything.
    int leftPosition=leftStart;
    int rightPosition=rightStart; 

    int tempPosition = leftPosition;

    // Select the lowest item from each list and copy to next open
    // slot in temp. Stop when either list is empty.
    // Use the corresponding chunk of temp.
    while ((leftPosition <= leftEnd) && (rightPosition <= rightEnd)) {
      if (numbers[leftPosition] < numbers[rightPosition]) {
        temp[tempPosition++] = numbers[leftPosition++];
      } else {
        temp[tempPosition++] = numbers[rightPosition++];
      }
    }

    // Copy over balance of non-empty list. 
    // Must have at least one item as both lists can't hit end simultaneously.
    // So, only one of these two loops will fly.
    while (leftPosition <= leftEnd)
        temp[tempPosition++] = numbers[leftPosition++];
    while (rightPosition <= rightEnd)
        temp[tempPosition++] = numbers[rightPosition++];

    // Copy temp back over to real array -- unoptimized
    // Note: If we hadn't maintained leftStart, we could just
    // have copied backward from rightEnd
    for (tempPosition=leftStart; tempPosition<=rightEnd; tempPosition++)
      numbers[tempPosition] = temp[tempPosition];
  }

  public static void sort(int[] numbers, int size) {

    if (size <= 0) return;

    int[] temp = new int[size];

    mergeSort(numbers, temp, 0, size-1); 
  }
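To see these routines in action, they can be wrapped in a class (the name MergeSortDemo is mine) and run on the lecture's example list:

```java
public class MergeSortDemo {

    // Recursively sort numbers[left..right], using temp as scratch space
    private static void mergeSort(int[] numbers, int[] temp, int left, int right) {
        if (left >= right) return;
        int middle = (left + right) / 2;
        mergeSort(numbers, temp, left, middle);
        mergeSort(numbers, temp, middle + 1, right);
        merge(numbers, temp, left, middle + 1, right);
    }

    // Merge the two sorted halves through temp, then copy back (unoptimized)
    private static void merge(int[] numbers, int[] temp,
                              int leftStart, int rightStart, int rightEnd) {
        int leftEnd = rightStart - 1;
        int leftPosition = leftStart, rightPosition = rightStart;
        int tempPosition = leftStart;

        while ((leftPosition <= leftEnd) && (rightPosition <= rightEnd)) {
            if (numbers[leftPosition] < numbers[rightPosition])
                temp[tempPosition++] = numbers[leftPosition++];
            else
                temp[tempPosition++] = numbers[rightPosition++];
        }
        while (leftPosition <= leftEnd) temp[tempPosition++] = numbers[leftPosition++];
        while (rightPosition <= rightEnd) temp[tempPosition++] = numbers[rightPosition++];

        for (tempPosition = leftStart; tempPosition <= rightEnd; tempPosition++)
            numbers[tempPosition] = temp[tempPosition];
    }

    public static void sort(int[] numbers, int size) {
        if (size <= 0) return;
        int[] temp = new int[size];
        mergeSort(numbers, temp, 0, size - 1);
    }

    public static void main(String[] args) {
        int[] numbers = {5, 2, 7, 8, 9, 1, 6, 3};  // the earlier example
        sort(numbers, numbers.length);
        System.out.println(java.util.Arrays.toString(numbers));
        // prints [1, 2, 3, 5, 6, 7, 8, 9]
    }
}
```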

Merge Sort: Big-O and other Performance Considerations

If we consider the Big-O of Merge Sort, the analysis is the same as the analysis of "Magical" Quick Sort. This is because both sorts employ a common strategy sometimes known as divide-and-conquer.

We divide the partition in half, until we get to partitions of size one. This takes log-n levels of recursion. At each level, we are presented pairs of lists to merge. Regardless of the number of partitions at some level, they are all non-overlapping and carved from the same original list of numbers.

The result is that the total amount of work done across all recursive calls at the same level of the recursion tree is the same at every level: the merge operations together copy each element from the initial array to a temporary array, and back. This is O(n) work per level of the recursion tree.

So, there we have it: O(log n) levels and O(n) work per level gives us an asymptotic bound of O(n log n).
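The same bound can be read off the standard recurrence (this derivation is mine, assuming the merge at each call costs c times the size of its range):

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + cn
     = 4\,T\!\left(\tfrac{n}{4}\right) + 2cn
     = \cdots
     = n\,T(1) + cn\log_2 n
     = O(n \log n)
```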

So, given that Merge Sort is O(n log n) always, and that Quick Sort is O(n log n) only on average, why do people use Quick Sort? The answer is that all of the copying around is a lot of overhead. For large partitions, in the average case, Quick Sort simply runs faster.

Recall that Quick Sort avoids the need to make these copies by ensuring that the partitions are independent, throwing things to the correct side of the pivot. This is not the case for Merge Sort, which can, until the very end, maintain items in the wrong partition.

Regardless, at the end of the day, there is more to life than Big-O!