Recursive Programming


When we write a method for solving a particular problem, one of the basic design techniques is to break the task into smaller subtasks. For example, the problem of adding (or multiplying) n consecutive integers can be reduced to the problem of adding (or multiplying) n-1 consecutive integers:
1 + 2 + 3 +... + n = n + [1 + 2 + 3 + .. + (n-1)]

1 * 2 * 3 *... * n = n * [1 * 2 * 3 * .. * (n-1)]
Therefore, if we introduce a method sumR(n) (or timesR(n)) that adds (or multiplies) the integers from 1 to n, then the identities above can be rewritten as
sumR(n) = n + sumR(n-1)

timesR(n) = n * timesR(n-1)
Such a definition is called a recursive definition, since it contains a call to itself. On each recursive call the argument of sumR(n) (or timesR(n)) gets smaller by one. It takes n-1 calls to reach the base case - the part of the definition that does not make a call to itself. Every recursive definition requires one or more base cases in order to prevent infinite recursion.

In the following example we provide iterative and recursive implementations for the addition of the first n natural numbers.

public int sum(int n)                   public int sumR(int n)
{                                       {
   int res = 0;                            if(n == 1)
   for(int i = 1; i <= n; i++)                return 1;
      res = res + i;                       else
                                              return n + sumR(n-1);
   return res;                          }
}
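The multiplicative version timesR mentioned earlier follows the same pattern; here is a minimal self-contained sketch (the wrapper class name Product is our own, and the method is made static so it can run standalone):

```java
public class Product {
    // Recursive product of 1..n (that is, n!), mirroring timesR(n) = n * timesR(n-1).
    public static int timesR(int n) {
        if (n == 1)              // base case: the product of the first integer is 1
            return 1;
        return n * timesR(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(timesR(5)); // 120
    }
}
```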

To solve a problem recursively, you first redefine the problem in terms of a smaller subproblem of the same type as the original problem. In the above summation problem, to sum up n integers we have to know how to sum up n-1 integers. Next, you figure out how the solutions to the smaller subproblems give you a solution to the problem as a whole. This step is often called the recursive leap of faith: before using a recursive call, you must be convinced that the recursive call will do what it is supposed to do. You do not need to think about how the recursive calls work; just assume that each returns the correct result.

Towers of Hanoi

In the great temple of Brahma in Benares, a group of spiritually advanced monks must move 64 golden disks from one diamond needle to another. There is only one other location in the temple (besides the original and destination locations) sacred enough that a pile of disks can be placed there. The 64 disks have different sizes, and the monks must obey two rules:

  1. only one disk can be moved at a time
  2. a bigger disk can never be placed on top of a smaller disk.

The legend is that, before the monks make the final move to complete the new pile in the new location, the next Maha Pralaya will begin and the temple will turn to dust and the world will end. Is there any truth to this legend?


The Tower of Hanoi puzzle was invented by the French mathematician Edouard Lucas in 1883. The puzzle is well known to students of Computer Science since it appears in virtually any introductory text on data structures or algorithms.

Recursive solution: first we move the top n - 1 disks to the spare pole, then we move the largest disk to the destination pole, and then we complete the job by moving the n - 1 disks onto the largest disk. Let T(n) represent the number of steps needed to move n disks. Then T(n) can be counted as follows:

T(n) = T(n-1) + 1 + T(n-1) = 2 T(n-1) + 1

Unfolding this recurrence (with T(1) = 1) gives T(n) = 2^n - 1, so moving the monks' 64 disks takes 2^64 - 1, about 1.8 * 10^19, moves.
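The three-step recursive solution translates directly into code. The sketch below is our own: the class name, the char pole labels, and the choice to record moves as strings are assumptions, not from the text.

```java
import java.util.ArrayList;
import java.util.List;

public class Hanoi {
    // Returns the full sequence of moves needed to shift n disks from pole
    // 'from' to pole 'to', using 'aux' as the spare pole.
    public static List<String> solve(int n, char from, char to, char aux) {
        List<String> moves = new ArrayList<>();
        move(n, from, to, aux, moves);
        return moves;
    }

    private static void move(int n, char from, char to, char aux, List<String> moves) {
        if (n == 0) return;                 // base case: nothing to move
        move(n - 1, from, aux, to, moves);  // step 1: clear the n-1 smaller disks onto the spare pole
        moves.add(from + "->" + to);        // step 2: move the largest remaining disk
        move(n - 1, aux, to, from, moves);  // step 3: pile the smaller disks back on top
    }
}
```

For example, solve(3, 'A', 'C', 'B') produces a list of 7 moves, matching T(3) = 2^3 - 1.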


One might wonder how the runtime system handles recursive functions. There is a lot of bookkeeping information to keep track of: for each call one has to record who made the call and what arguments are to be handed over. Most importantly, one has to keep track of all the pending calls, which may be nested very deeply inside each other. As it turns out, all that is needed is a single stack. Whenever a function call is made (recursive or not), all the necessary bookkeeping information is pushed onto the stack. When the execution of the function terminates, its frame is popped from the stack and the return value is handed over to whoever made the call. Consider the call sumR(5); here is the bookkeeping information:

               return 1
            return 2 + 1
         return 3 + 2 + 1
      return 4 + 3 + 2 + 1
   return 5 + 4 + 3 + 2 + 1

Comparing the recursive implementation against the iterative one, we can say that the former is at least twice as slow: first, we unfold the recursive calls (pushing them onto the stack) until we reach the base case, and second, we traverse the stack and retrieve all the pending calls. Note that the actual computation happens when we pop the recursive calls off the system stack.

Tail and Head Recursion

If the recursive call occurs at the end of a method, it is called tail recursion. Tail recursion is similar to a loop: the method executes all of its statements before jumping into the next recursive call.

If the recursive call occurs at the beginning of a method, it is called head recursion. The method saves its state before jumping into the next recursive call. Compare these:

public void tail(int n)                 public void head(int n)
{                                       {
   if(n == 1)                              if(n == 0)
      return;                                 return;
   else                                    else
      System.out.println(n);                  head(n-1);

   tail(n-1);                              System.out.println(n);
}                                       }
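To see the difference in evaluation order, here is a self-contained sketch (the class name HeadTail is our own) that runs both methods side by side:

```java
public class HeadTail {
    // Tail recursion: print first, then recurse. Counts DOWN from n to 2
    // (the base case n == 1 returns before printing 1).
    public static void tail(int n) {
        if (n == 1) return;
        System.out.println(n);
        tail(n - 1);
    }

    // Head recursion: recurse first, then print. Counts UP from 1 to n,
    // because each print waits until the deeper calls have finished.
    public static void head(int n) {
        if (n == 0) return;
        head(n - 1);
        System.out.println(n);
    }

    public static void main(String[] args) {
        tail(3); // prints 3, 2
        head(3); // prints 1, 2, 3
    }
}
```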

Mathematical Induction

Recursive programming is directly related to mathematical induction:

The base case is to prove the statement true for some specific value or values of N.

The induction step -- assume that the statement is true for all positive integers less than N, then prove it true for N.
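As a worked example of this correspondence, induction shows that sumR(n) from the earlier section returns n(n+1)/2 (this closed form is a standard fact, not stated above):

```latex
\textbf{Base case } (n = 1): \quad \mathrm{sumR}(1) = 1 = \frac{1 \cdot 2}{2}.

\textbf{Induction step: } \text{assume } \mathrm{sumR}(n-1) = \frac{(n-1)n}{2}. \text{ Then}
\[
  \mathrm{sumR}(n) = n + \mathrm{sumR}(n-1)
                   = n + \frac{(n-1)n}{2}
                   = \frac{n(n+1)}{2}.
\]
```

The structure of the proof mirrors the structure of the code: the base case of the proof is the base case of the method, and the induction step is the recursive call.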

Binary Search

Locate the element x in a sorted array by first comparing x with the middle element and then (if they are not equal) dividing the array into two subarrays and repeating the whole procedure in one of them. If x is less than the middle element, search the left subarray; otherwise, search the right subarray.

Let T(n) denote the number of comparisons required to find a key in a sorted array of size n. Then we have the following recurrence for T(n):

T(n) = T(n/2) + 1

Since the array size halves on each call, this recurrence unfolds to T(n) = O(log n). It translates directly into the following recursive code:
public int searchR(int[] a, int key) {
  return helper(a, key, 0, a.length-1);
}

private int helper(int[] a, int key, int left, int right) {
   if (left > right) return -1;
   int mid = (left + right) / 2;
   if (key == a[mid]) return mid;
   if (key > a[mid])
      return helper(a, key, mid + 1, right);
   else
      return helper(a, key, left, mid - 1);
}
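A self-contained version of the method above can be exercised as follows. The wrapper class name is hypothetical, and the midpoint computation left + (right - left)/2 is our own refinement to avoid integer overflow on very large arrays:

```java
public class BinarySearchDemo {
    // Recursive binary search over a sorted array; returns the index of key, or -1.
    static int searchR(int[] a, int key) {
        return helper(a, key, 0, a.length - 1);
    }

    static int helper(int[] a, int key, int left, int right) {
        if (left > right) return -1;          // empty range: the key is absent
        int mid = left + (right - left) / 2;  // overflow-safe midpoint
        if (key == a[mid]) return mid;
        if (key > a[mid])
            return helper(a, key, mid + 1, right);  // search the right subarray
        return helper(a, key, left, mid - 1);       // search the left subarray
    }

    public static void main(String[] args) {
        int[] a = {2, 5, 8, 12, 16, 23, 38};
        System.out.println(searchR(a, 23)); // 5
        System.out.println(searchR(a, 7));  // -1
    }
}
```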

The Mandelbrot Set

    The Mandelbrot set is the set of all complex numbers c for which the sequence defined by the iteration

f(n+1) = f(n)^2 + c,  f(0) = c

remains bounded or converges to a fixed point as n tends to infinity. In the picture the Mandelbrot set is the blue shape in the middle. The Mandelbrot set is named after Benoit Mandelbrot, who constructed the first images of this set in 1978.

Applets to explore the Mandelbrot set, and other fractals, can be found at the Dynamical Systems and Technology Project website.

The Mandelbrot set is a famous example of a fractal - a fragmented geometric shape that can be split into parts, each of which is a copy of the whole.

Here are two examples of bounded and unbounded sequences:

  • Let c = 1. This sequence is NOT bounded:
    f(0) = 1
    f(1) = f(0)^2 + 1 = 2
    f(2) = f(1)^2 + 1 = 5
    f(3) = f(2)^2 + 1 = 26
    and so the sequence keeps growing.

  • Let c = 0.1. This sequence has a fixed point:
    f(0) = 0.1
    f(1) = f(0)^2 + 0.1 = 0.11
    f(2) = f(1)^2 + 0.1 = 0.1121
    f(3) = f(2)^2 + 0.1 = 0.112566
    f(8) = 0.112702
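The boundedness test behind such pictures can be sketched in code. The method name, the iteration cap, and the standard escape radius of 2 are our own choices, not from the text:

```java
public class Mandelbrot {
    // Iterates f(n+1) = f(n)^2 + c starting from f(0) = c and reports whether
    // |f(n)| stays below 2 for maxIter steps (the usual escape-radius test).
    static boolean staysBounded(double cRe, double cIm, int maxIter) {
        double re = cRe, im = cIm;        // f(0) = c
        for (int i = 0; i < maxIter; i++) {
            if (re * re + im * im > 4.0)  // |f(n)| > 2: the sequence escapes
                return false;
            double newRe = re * re - im * im + cRe;  // real part of f(n)^2 + c
            im = 2 * re * im + cIm;                  // imaginary part of f(n)^2 + c
            re = newRe;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(staysBounded(1.0, 0.0, 100)); // false: 1, 2, 5, 26, ...
        System.out.println(staysBounded(0.1, 0.0, 100)); // true: settles near 0.1127
    }
}
```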

    Fibonacci Numbers

    Fibonacci was born in 1170 in Pisa, Italy, and died in 1250. His real name was Leonardo Pisano. In 1202 he wrote a book, Liber Abbaci, meaning "Book of Calculating".

    Each Fibonacci number is defined as the sum of the two preceding numbers:

        0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...

    This recursive definition translates directly into code

       public int fibonacci(int n)
       {
          if (n <= 0) return 0;
          else if (n == 1) return 1;
          else return fibonacci(n-1) + fibonacci(n-2);
       }
    The picture shows a binary tree of recursive calls for fibonacci(5). The tree has 5 levels, and thus the total number of nodes is about 2^5. Based on this estimate we guess that the complexity of the recursive implementation is exponential, namely O(2^n). We can formally prove this statement by deriving a recurrence for the number of calls.
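The exponential cost comes from recomputing the same subproblems over and over (the call tree contains many identical subtrees). Caching each result, a technique known as memoization, reduces the work to O(n) distinct calls. A sketch, with the class and method names being our own:

```java
public class Fib {
    // Memoized Fibonacci: each fibonacci(k) is computed once and cached,
    // so the O(2^n) call tree collapses to O(n) distinct calls.
    static long fibonacciMemo(int n, long[] memo) {
        if (n <= 0) return 0;
        if (n == 1) return 1;
        if (memo[n] != 0) return memo[n];   // already computed
        memo[n] = fibonacciMemo(n - 1, memo) + fibonacciMemo(n - 2, memo);
        return memo[n];
    }

    static long fib(int n) {
        return fibonacciMemo(n, new long[n + 1]);
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
        System.out.println(fib(50)); // 12586269025, far beyond the naive version's reach
    }
}
```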

    Linked Lists Recursively

    A linked list is a recursive data structure: a linked list is either empty or consists of a node followed by a linked list. As an example, consider iterative and recursive implementations of the addLast() method.

               iterative implementation                                               recursive implementation

    public void addLast(Object item)         public void addLast(Object item)
    {                                        {
       if( head == null)                        if( head == null)
          addFirst(item);                          addFirst(item);
       else                                     else
       {                                           addLast(head, item);
          Node tmp = head;                   }
                                             private void addLast(Node node,
          while(tmp.next != null)                           Object item)
             tmp = tmp.next;                 {
                                                if(node.next != null)
          tmp.next = new Node(item, null);         addLast(node.next, item);
       }                                        else
    }                                              node.next = new Node(item, null);
                                             }

    As an exercise implement

    public String toString()
    public void insertAfter(Object key, Object toInsert)
    public LinkedList clone()
    Our next example is the insertBefore method - find the key and insert a new node before this node.

    public void insertBefore(Object key, Object toInsert)
    {
       head = insertBefore(key, head, toInsert);
    }

    private Node insertBefore(Object key, Node curNode, Object toInsert)
    {
       if(curNode == null)
          return null;
       else if(key.equals(curNode.data))
          return new Node(toInsert, curNode);
       else
          curNode.next = insertBefore(key, curNode.next, toInsert);
       return curNode;
    }
    Suppose the list is A, B, C and we want to insert before "C". Let us trace the above code through the system stack of calls:

    head = insertBefore("C", "A", toInsert);
    "A".next = insertBefore("C", "B", toInsert);
    "B".next = insertBefore("C", "C", toInsert);
    insertBefore("C", "C", toInsert) returns new Node(toInsert, "C")

    As soon as we reach the base case, we pop calls from the system stack. The first pop inserts the new node between "B" and "C":

    "B".next = new Node(toInsert, "C");

    The remaining assignments

    "A".next = insertBefore("C", "B", toInsert);
    head = insertBefore("C", "A", toInsert);

    are redundant: each re-links a node to the successor it already has and does not change the list. Another important sub-case of the above implementation is inserting a new node before the head. The assignment head = insertBefore(key, head, toInsert); takes care of this case.

    As an exercise implement

    public void delete(Object key)
    public void insertInOrder(Comparable key)
