
In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recursive function: log* n = 0 if n ≤ 1, and log* n = 1 + log*(log n) if n > 1.

Is there any algorithm with time complexity O(log* n)?

templatetypedef
Naveen N
  • Checking whether a sorted array of values contains a given value can be done in O(log n). It can be done with a recursive or iterative algorithm. – Luca Angeletti Oct 20 '15 at 10:48
  • Of course there is. Just write an iterative algorithm that does precisely what you describe and you'll have an algorithm of that complexity. – aioobe Oct 20 '15 at 10:48
  • 1
    @appzYourLife, that's O(log *n*) not O(log* *n*). – aioobe Oct 20 '15 at 10:51
  • https://en.wikipedia.org/wiki/Iterated_logarithm#Analysis_of_algorithms – cohoz Oct 20 '15 at 11:05
  • 2
    Consider that everything that grows even slower is also in that class, for example O(1) is contained within O(log* n) – harold Oct 20 '15 at 11:49
  • @harold is right. You should ask for an algorithm that runs in `theta(log* n)` instead. – dsharew Oct 20 '15 at 12:50
  • You have examples here: http://stackoverflow.com/questions/3797617/what-does-log-mean – dsharew Oct 20 '15 at 12:56

2 Answers


If you implement the union-find data structure with both path compression and union by rank, both union and find run in O(log* n) amortized time.
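
For reference, here is a minimal Python sketch of that combination (class and method names are my own, not from any particular library). With both optimizations, each operation takes O(log* n), and in fact O(α(n)), amortized time:

```python
# Minimal union-find sketch with path compression and union by rank.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point every node on the path at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

ds = DisjointSet(10)
ds.union(1, 2)
ds.union(2, 3)
print(ds.find(1) == ds.find(3))  # True
```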

Ivaylo Strandjev

It's rare but not unheard of to see log* n appear in the runtime analysis of algorithms. Here are a couple of cases that tend to cause log* n to appear.

Approach 1: Shrinking By a Log Factor

Many divide-and-conquer algorithms work by converting an input of size n into an input of size n / k. The number of phases of these algorithms is then O(log n), since you can only divide by a constant O(log n) times before you shrink your input down to a constant size. In that sense, when you see "the input is divided by a constant," you should think "so it can only be divided O(log n) times before we run out of things to divide."
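
As a quick illustration (the function below is just a toy of my own, not any specific algorithm), counting how many times n can be divided by a constant before it reaches a constant size gives a number that grows like log n:

```python
# Toy example: dividing by a constant k can only happen O(log n) times.
def phases_dividing_by(n, k=2):
    count = 0
    while n > 1:
        n //= k
        count += 1
    return count

print(phases_dividing_by(10**6))  # 19, roughly log2(10^6) ≈ 19.9
```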

In rarer cases, some algorithms work by shrinking the size of the input down by a logarithmic factor. For example, one data structure for the range semigroup query problem works by breaking a larger problem down into blocks of size log n, then recursively subdividing each block of size log n into blocks of size log log n, etc. This process eventually stops once the blocks hit some small constant size, which means that it stops after O(log* n) iterations. (This particular approach can then be improved to give a data structure in which the blocks have size log* n, for an overall number of rounds of O(log** n), eventually converging to an optimal structure with runtime O(α(n)), where α(n) is the inverse Ackermann function.)
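
To see why shrinking by a logarithmic factor gives only O(log* n) rounds, here is a toy Python sketch (my own, not the data structure described above) that counts the rounds:

```python
import math

# Toy example: if each round shrinks a problem of size n down to about
# log n, the number of rounds is log* n, which is tiny in practice.
def rounds_shrinking_by_log(n):
    count = 0
    while n > 2:
        n = math.log2(n)
        count += 1
    return count

print(rounds_shrinking_by_log(2 ** 65536))  # 4: 2^65536 -> 65536 -> 16 -> 4 -> 2
```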

Approach 2: Compressing Digits of Numbers

The above section talks about approaches that explicitly break a larger problem down into smaller pieces whose sizes are logarithmic in the size of the original problem. However, there's another way to take an input of size n and reduce it to an input of size O(log n): replace the input with something roughly comparable in size to its number of digits. Since writing out the number n requires only O(log n) digits, this replacement shrinks the input by a logarithmic factor each time, which is exactly what makes an O(log* n) term arise.

As a simple example of this, consider an algorithm to compute the digital root of a number. This is the number you get by repeatedly adding the digits of a number up until you're down to a single digit. For example, the digital root of 78979871 can be found by computing

7 + 8 + 9 + 7 + 9 + 8 + 7 + 1 = 56

5 + 6 = 11

1 + 1 = 2

2

and getting a digital root of two. Each time we sum the digits of the number, we replace the number n with a number that's at most 9 ⌈log10 n⌉, so the number of rounds is O(log* n). (That being said, the total runtime is O(log n), since we have to factor in the work associated with adding up the digits of the number, and adding the digits of the original number dominates the runtime.)
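
Here is a small Python sketch of that process (the function names are mine); it returns both the digital root and the number of digit-summing rounds:

```python
# Digital root by repeated digit summation.
def digit_sum(n):
    return sum(int(d) for d in str(n))

def digital_root(n):
    rounds = 0
    while n >= 10:
        n = digit_sum(n)  # replaces n with a number at most 9 * ceil(log10 n)
        rounds += 1
    return n, rounds

print(digital_root(78979871))  # (2, 3): 78979871 -> 56 -> 11 -> 2
```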

For a more elaborate example, there is a parallel algorithm for 3-coloring the nodes of a tree described in the paper "Parallel Symmetry-Breaking in Sparse Graphs" by Goldberg et al. The algorithm works by repeatedly replacing numbers with simpler numbers formed by summing up certain bits of the numbers, and the number of rounds needed, like the approach mentioned above, is O(log* n).

Hope this helps!

templatetypedef