
In my Algorithms class we discussed Amortized Complexity. Unfortunately I was not able to attend due to being away on an athletic competition. After attempts to contact the professor to explain this failed, I am stuck asking it here. What is Amortized Complexity and how do I find it? I was assigned work to do and have no idea how to do it. If you guys can help me with one question that would be extremely helpful or provide references to other explanations.

Here is the problem:

Consider the following algorithm for adding 1 to a binary number, represented as an array of n bits, assuming that there is no overflow:

increment is
    local
        i: INTEGER;
    do
        from i:=n until a.item(i) = 0 loop
            a.put(0,i);
            i:=i - 1
        end;
        a.put(1,i)
    end

This algorithm is clearly O(n) in the worst case. Show that its amortized complexity is O(1).

I can see why the worst case is O(n), but I have no idea why its amortized complexity is O(1). Or even what amortized complexity is for that matter.

  • http://stackoverflow.com/questions/15079327/amortized-complexity-in-laymans-terms – C.B. Apr 08 '15 at 01:43
  • If you have not looked up, read and understood what amortised complexity is, you have not done as much research as you should before asking such a question. – PJTraill May 23 '15 at 19:16

1 Answer


Consider how the actual bits in the number influence how much time the algorithm takes.

The time depends heavily on the position of the rightmost zero in the number. So `01010111` will take more time to process than `01010110`, even though both have the same number of bits. This happens because the loop's stop condition scans for the rightmost zero, in linear time.
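To see this concretely, here is a rough Python translation of the routine above (the function name and list-of-bits representation are mine, not from the assignment) that also reports how many times the loop body runs:

```python
def increment(bits):
    """Add 1 to a binary number stored as a list of bits, most significant
    bit first, assuming no overflow. Returns the number of loop iterations,
    i.e. how many trailing 1-bits had to be cleared."""
    i = len(bits) - 1
    iterations = 0
    while bits[i] == 1:   # loop runs once per trailing 1-bit
        bits[i] = 0
        i -= 1
        iterations += 1
    bits[i] = 1
    return iterations

# `01010111` has three trailing 1s, `01010110` has none:
print(increment([0, 1, 0, 1, 0, 1, 1, 1]))  # 3 iterations
print(increment([0, 1, 0, 1, 0, 1, 1, 0]))  # 0 iterations
```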

The intuition

Now, think about a series of operations: whenever a call enters the loop, it turns every trailing 1-bit of the number into a 0, so the number it leaves behind ends in 0 and the next call will not enter the loop at all. Expensive calls are always surrounded by cheap ones.

Amortized complexity is the average complexity per operation over a sequence of operations. In this case, let's prove that, starting from an arbitrary number, calling increment repeatedly has an average complexity of O(1) per call.

The proof

Let loop(n) be the number of times the loop inside increment executes when incrementing n. Clearly, loop(n) is the dominant factor in the complexity of increment: everything outside the loop is constant work.

From that, we can argue that loop(n) = 0 if and only if n is even. This is because n % 2 = 0 exactly when the rightmost bit of n is 0. This happens once in every 2 consecutive calls to increment.

Following the same argument, loop(n) = 1 if and only if n % 4 = 1, because then the last 2 bits of n are 01. This happens once in every 4 consecutive calls to increment.

Likewise, loop(n) = 2 if and only if n % 8 = 3, because then the last 3 bits of n are 011. This happens once in every 8 consecutive calls to increment.

Generalizing, loop(n) = x if and only if n % 2^(x+1) = 2^x - 1, i.e. when the last x+1 bits of n are 011...11. This happens once in every 2^(x+1) consecutive calls to increment.
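This characterization can be checked empirically. The sketch below (my own helper, not from the answer) uses the fact that loop(n) is exactly the number of trailing 1-bits in n's binary representation:

```python
def loop(n):
    """Iterations of increment's loop when applied to n: the number of
    trailing 1-bits in n's binary representation."""
    x = 0
    while n & 1:
        n >>= 1
        x += 1
    return x

# Check the claim: loop(n) = x exactly when n % 2^(x+1) = 2^x - 1.
for n in range(64):
    x = loop(n)
    assert n % 2 ** (x + 1) == 2 ** x - 1
```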

To find the average value of loop(n) over a run of increments, note that the loop performs its x-th iteration only when the last x bits of n are all 1s, which is the case for 1 out of every 2^x consecutive values of n. Summing the contributions of all iteration depths:

average(loop(n)) = 1/2 + 1/4 + 1/8 + ... = 1

So, averaged over the run, each call performs one loop iteration plus a constant amount of work outside the loop, which makes the amortized complexity O(1).
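As a small sanity check (my sketch, not part of the original answer): summing the loop iterations over a full run of 2^k increments starting at 0 gives exactly 2^k - 1 iterations in total, so the average is just under one iteration per call:

```python
# Sum the loop iterations over 2**k consecutive increments starting at 0.
k = 16
total = 0
for n in range(2 ** k):
    m = n
    while m & 1:        # count trailing 1-bits: the loop cost for this call
        m >>= 1
        total += 1

# Total cost over the whole run is 2**k - 1 loop iterations.
assert total == 2 ** k - 1
print(total / 2 ** k)   # just under 1 per call on average
```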

Juan Lopes