347

Are there any O(1/n) algorithms?

Or anything else which is less than O(1)?

Mau
  • 13,256
  • 2
  • 28
  • 49
Shalmanese
  • 5,096
  • 9
  • 27
  • 41
  • Most of the answers assume you mean "Are there any algorithms with a time complexity of O(1/n)?" Shall we assume this is the case? Big-O (and Big-Theta, etc.) describe functions, not algorithms. (I know of no equivalence between functions and algorithms.) – jyoungdev Jul 08 '10 at 22:43
  • 4
    That is the commonly understood definition of "O(X) algorithm" in computer science: an algorithm whose time complexity is O(X) (for some expression X). – David Z Jul 09 '10 at 01:17
  • 2
    I have heard of such a bound in the case of an I/O-efficient priority queue algorithm using a Buffer Tree. In a Buffer Tree, each operation takes O(1/B) I/Os, where B is the block size; the total I/Os for *n* operations is O((n/B) log_{M/B}(n/B)), where the log factor is the height of the buffer tree. – CODError Dec 05 '15 at 01:01
  • There are lots of algorithms with O(1/n) error probability. For example a bloom filter with O(n log n) buckets. – Thomas Ahle May 16 '16 at 10:21
  • You can't lay an egg faster by adding chickens. – Wyck Oct 05 '19 at 01:19

32 Answers

321

This question isn't as silly as it might seem to some. At least theoretically, something such as O(1/n) is completely sensible when we take the mathematical definition of the Big O notation:

f(x) = O(g(x)) as x → ∞
<=>
there exist constants M > 0 and x0 such that
|f(x)| <= M * |g(x)| for all x > x0

Now you can easily substitute g(x) for 1/x … it's obvious that the above definition still holds for some f.

For the purpose of estimating asymptotic run-time growth, this is less viable … a meaningful algorithm cannot get faster as the input grows. Sure, you can construct an arbitrary algorithm to fulfill this, e.g. the following one:

from time import sleep

def get_faster(list):
    # sleep for a duration inversely proportional to the input size
    how_long = (1 / len(list)) * 100000
    sleep(how_long)

Clearly, this function spends less time as the input size grows … at least until some limit, enforced by the hardware (precision of the numbers, minimum of time that sleep can wait, time to process arguments etc.): this limit would then be a constant lower bound so in fact the above function still has runtime O(1).

But there are in fact real-world algorithms where the runtime can decrease (at least partially) when the input size increases. Note that these algorithms will not exhibit runtime behaviour below O(1), though. Still, they are interesting. For example, take the very simple text search algorithm by Horspool. Here, the expected runtime will decrease as the length of the search pattern increases (but increasing length of the haystack will once again increase runtime).
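
To make the Horspool point concrete, here is a minimal Python sketch (the function name and details are my own illustration, not code from the answer): the per-window shift grows with the pattern length, so the expected number of character comparisons drops as the needle gets longer.

def horspool_search(haystack, needle):
    # Bad-character shift table: when the last character of the current
    # window is c, the window may slide forward by shift[c]. Characters
    # absent from the pattern allow a full shift of len(needle) positions,
    # which is why longer patterns tend to mean fewer comparisons.
    m, n = len(needle), len(haystack)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    shift = {c: m - 1 - i for i, c in enumerate(needle[:-1])}
    pos = 0
    while pos <= n - m:
        if haystack[pos:pos + m] == needle:
            return pos
        pos += shift.get(haystack[pos + m - 1], m)
    return -1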

Konrad Rudolph
  • 482,603
  • 120
  • 884
  • 1,141
  • +1 this is the correct answer. Basically, the OP really asks for a function that's o(1) (small-oh) which does indeed exist (as you demonstrated). – mmx May 25 '09 at 13:10
  • 24
    'Enforced by the hardware' also applies to a Turing Machine. In case of O(1/n) there will always be an input size for which the algorithm is not supposed to execute any operation. And therefore I would think that O(1/n) time complexity is indeed impossible to achieve. – Roland Ewald May 25 '09 at 14:10
  • 2
    @__roland__: Not true. O notation is not really related to steps an algorithm takes but the *growth* of time function as a function of input. The step-wise metaphor of O-notation is an unfortunate consequence of machines we're working on being finite (as I described in my answer). It's not what O is really about. – mmx May 25 '09 at 14:23
  • 3
    @Mehrdad: I am not talking about the O-notation as such (this just describes a set of functions, no connection to time whatsoever), but the notion of an *algorithm* is usually defined on a Turing machine. A TM proceeds in discrete steps. When talking about the time complexity of an algorithm, you're really discussing the growth function of the number of *steps* an equivalent TM has to take for a solution. That's why I doubt that there is an O(1/n) algorithm (=TM). – Roland Ewald May 25 '09 at 14:42
  • 1
    @__roland__: Yes, but it's not the *absolute* number of steps it takes to run an algorithm. It's about the rate of growth of steps as N becomes larger. If an algorithm takes half the steps for input=N+1 relative to input=N, it'll be O(1/2^N). – mmx May 25 '09 at 14:53
  • 2
    @Mehrdad: You are right, absolute numbers are abstracted away by the constants in the definition. But with O(1/n) the absolute number of steps doesn't matter at all: since 1/n is approaching zero I can always choose a large enough n to let any (discrete!) number of steps be zero, no matter what constants are used. – Roland Ewald May 25 '09 at 15:10
  • @__roland__: And that's another reason this is not possible in finite machines, but in a Turing machine, it can take infinite steps to process Algo(N=0) :) For any N you choose, I can choose initial time large enough so that it doesn't approach zero! – mmx May 25 '09 at 15:16
  • @Mehrdad: The game is just the other way round: you have to define the algorithm first, then this algorithm's complexity can be expressed with O(*) [which includes choosing N]. BTW a TM is finite in that is has a finite number of states, and (theoretically) infinite runtime like you describe is easily possible in finite machines (if(input.size()==1)while(true){}). But this of course won't help. – Roland Ewald May 25 '09 at 15:24
  • @__roland__: That's just how we're used to look at it, as we're using finite machines. A TM is infinite as it has infinite tape (and thus, can be in infinitely different situations, this is what I call a state not the Q set that's finite). Note that a program that doesn't halt is *not* equal to a program that takes infinitely long to halt. You just got to think out of the box... – mmx May 25 '09 at 15:30
  • 29
    Mehrdad, you don't understand. The O notation is something about the *limit* (technically lim sup) as n -> ∞. The running time of an algorithm/program is the number of steps on some machine, and is therefore discrete -- there is a non-zero lower bound on the time that an algorithm can take ("one step"). It *is* possible that up to *some finite N* a program takes a number of steps decreasing with n, but the only way an algorithm can be O(1/n), or indeed o(1), is if it takes time 0 for all sufficiently large n -- which is not possible. – ShreevatsaR May 25 '09 at 20:04
  • 1
    ShreevatsaR: You are misinterpreting the limit: it's not required to take time 0 for all sufficiently large N but it approaches 0 as N approaches infinity. If you assume T(0) = infinity, T(N+1) = T(N)/2, this would satisfy the condition of O(1/2^N). It doesn't need to be *equal* to zero for a given large value of N. It just approaches zero in *infinity*, which is not an issue. O notation requires the N0 in `n > N0` to be a real number. – mmx May 25 '09 at 20:37
  • 6
    ShreevatsaR: Your discretization is all well and nice but since we're talking theory here anyway I'll just go ahead and define that the `sleep` function in my example has no discrete step size, so there! Please don't make this discussion even more confusing by mixing theoretical concepts and reality. All are in agreement: in reality, there is no O(1/n) runtime. Satisfied? (But then, in reality, all algorithms are O(1) due to Von Neumann architecture limitations. This gets us absolutely nowhere.) – Konrad Rudolph May 25 '09 at 20:47
  • 30
    We are not disagreeing that O(1/n) *functions* (in the mathematical sense) exist. Obviously they do. But computation is inherently discrete. Something that has a lower bound, such as the running time of a program -- on either the von Neumann architecture or a purely abstract Turing machine -- *cannot* be O(1/n). Equivalently, something that is O(1/n) cannot have a lower bound. (Your "sleep" function has to be invoked, or the variable "list" has to be examined -- or the input tape has to be examined on a Turing machine. So the time taken would change with n as some ε + 1/n, which is not O(1/n)) – ShreevatsaR May 26 '09 at 00:14
  • 1
    @ShreevatsaR: Thanks, that was exactly my point. There simply is no TM (=algorithm) for 'sleeping' that doesn't require a discrete step size - not even in theory ;-) – Roland Ewald May 26 '09 at 05:31
  • 2
    @Mehrdad: "f(n) = O(g(n))" means that, whenever n is above some "threshold" value N0, f(n) must be <= M*g(n). The main thing to understand is that although you can choose the constants N0 and M freely, you must choose them **beforehand** -- they are not allowed to be functions of n. And, for *any* choice of N0 and M that you make, any algorithm that takes 1 or more steps will eventually exceed M*(1/n) (i.e. there is some input size k such that k > N0 and f(k) > M*(1/k)). – j_random_hacker May 26 '09 at 11:16
  • @j_random_hacker: I know. What I said was a response to the discretization idea, not the limit. What I was basically saying is that if "T(0) = infinity" rather than a real number, it's possible for T(n) not to reach 0 except in infinity. It can be arbitrarily close, but since you already have an infinite number of steps in the range 0..1, it's no longer supposed to be 0. It's mapping an infinite number of discrete points to a finite range. Anyway, I think both of them make sense. I prefer not to discuss any further as I'm not a math expert. – mmx May 26 '09 at 11:46
  • 1
    @Mehrdad: OK. I have to confess that the difference between programs that don't halt and those that take infinitely long to halt is too much for my maths as well... :) – j_random_hacker May 26 '09 at 13:05
  • 16
    If T(0)=∞, it doesn't halt. There is no such thing as "T(0)=∞, but it still halts". Further, even if you work in R∪{∞} and define T(0)=∞, and T(n+1)=T(n)/2, then T(n)=∞ for all n. Let me repeat: if a discrete-valued function is O(1/n), then for all sufficiently large n it is 0. [Proof: T(n)=O(1/n) means there exists a constant c such that for n>N0, T(n) < c/n. So for n > max(N0, 1/c), T(n) < 1, which means T(n) = 0.] No machine, real or abstract, can take 0 time: it *has* to look at the input. Well, besides the machine that never does anything, and for which T(n)=0 for all n. – ShreevatsaR May 27 '09 at 03:50
  • ShreevatsaR: I'm not an expert here. What you are saying seems right, but I'm still not sure that computation is inherently carried out in discrete steps. While it might be for the TM model, there are other models out there, such as quantum computers. Anyway, the whole question was a great discussion. Unfortunately, I don't think I have enough knowledge to carry it on. – mmx May 27 '09 at 20:22
  • 4
    Even in quantum computers, (measurable) changes only happen discretely. In any case, discreteness is actually not crucial: the initial step (of examining the input) takes some time (say ε), so even if we ignore the "sleep" time, we have T(n) ≥ ε for all n, while on the other hand if T(n) = O(1/n), we would have had T(n) -> 0 as n -> ∞. – ShreevatsaR May 28 '09 at 00:25
  • 43
    You have to like any answer that begins "This question isn't as stupid as it might seem." – Telemachus Jun 27 '09 at 12:53
  • Really interesting discussion. And to prove ShreevatsaR's point further, even a theoretical oracle machine needs to do **one** operation, so not even an oracle can do stuff in O(1/n) → O(0) because it would make no sense. – David Titarenco Jul 09 '10 at 01:43
  • 1
    Really, the fact that this answer got nearly 100 votes while it was still incorrect shows the worst thing about Stack Overflow — answers that *look* correct get voted higher than actually correct answers. (But thanks to Konrad Rudolph for finally editing his answer and making it correct.) – ShreevatsaR Jul 09 '10 at 03:14
  • 1
    @ShreevatsaR: This post was nowhere near to 100 votes until *long after* my edit on June 27 '09. But I’d like to point out that this edit served as clarification only. That was what I’d meant all along. But I admit that this point was unclear and easy to misunderstand, and I made the error worse in the comments by confusing the mathematical concept with its application in computer science. (And I very much enjoyed the discussion. Your explanations were well received.) – Konrad Rudolph Jul 09 '10 at 08:48
  • Sorry, I just looked at the edits, and you're right: I must admit there was nothing strictly wrong in the first answer, although it was missing an explicit "no" :-) (and the limit is not a practical limitation of hardware but holds even in theory). I must have got confused with the *discussion* in the comments here and with other answers, sorry. But (especially after the clarification :p) this is really a great answer; both the sleep(1/n) example and the actual Boyer-Horspool algorithm are very good examples of how runtime can decrease within certain ranges. Apologies for the previous comment! – ShreevatsaR Jul 09 '10 at 09:10
  • 1
    To put it more clearly: the direct answer to the given question: "Are there any O(1/n) algorithms? Or anything else which is less than O(1)?" is "No", for trivial reasons. (The time would have to go arbitrarily small, but even the tiniest operation like reading the input takes some fixed time.) But *beyond* this trivial answer, there is still an interesting question, which is whether run time can ever decrease, even for a while, and your answer gives excellent examples. I think because this answer had only the second part initially, I got the impression it wasn't correct (until I just looked). – ShreevatsaR Jul 09 '10 at 09:21
  • @Brian: it isn’t. `list` in the sample code obviously means some generic list, not necessarily a linked list. – Konrad Rudolph Jul 09 '10 at 20:38
  • @KonradRudolph Oh, yeah, I missed that line. Sorry for the oversight and I'll delete that comment. – Shou Ya Feb 28 '15 at 19:08
  • While computation is discrete, this is irrelevant. Simply pass floats into the above function and discover that O(1/n) is actually much worse than O(1) as n -> 0. Consider any algorithm that takes ceil(1/n) discrete steps to finish. – John K Jan 24 '21 at 20:51
148

Yes.

There is precisely one algorithm with runtime O(1/n), the "empty" algorithm.

For an algorithm to be O(1/n) means that it executes asymptotically in fewer steps than the algorithm consisting of a single instruction. If it executes in fewer steps than one step for all n > n0, it must consist of precisely no instruction at all for those n. Since checking 'if n > n0' costs at least 1 instruction, it must consist of no instruction for all n.

Summing up: The only algorithm which is O(1/n) is the empty algorithm, consisting of no instruction.

Tobias
  • 5,950
  • 3
  • 33
  • 60
  • 2
    So if someone asked what the time complexity of an empty algorithm is, you'd answer with O(1/n) ??? Somehow I doubt that. – phkahler Feb 16 '10 at 20:42
  • Well, it is Θ(0), and Θ(0) happens to be equal to O(1/n). – Tobias Feb 17 '10 at 15:47
  • 25
    This is the only correct answer in this thread, and (despite my upvote) it is at zero votes. Such is StackOverflow, where "correct-looking" answers are voted higher than actually correct ones. – ShreevatsaR Feb 24 '10 at 17:15
  • 5
    No, it's rated 0 because it is incorrect. Expressing a big-Oh value in relation to N when it is independent of N is incorrect. Second, running any program, even one that just exists, takes at least a constant amount of time, O(1). Even if that wasn't the case, it'd be O(0), not O(1/n). – kenj0418 Feb 28 '10 at 17:16
  • 35
    Any function that is O(0) is also O(1/n), and also O(n), also O(n^2), also O(2^n). Sigh, does no one understand simple definitions? O() is an upper bound. – ShreevatsaR Apr 15 '10 at 03:52
  • 17
    @kenj0418 You managed to be wrong in every single sentence. "Expressing a big-Oh value in relation to N when it is independent of N is incorrect." A constant function is a perfectly good function. "Second, running any program, even one that just exists, takes at least a constant amount of time, O(1)." The definition of complexity doesn't say anything about actually running any programs. "it'd be O(0), not O(1/n)". See @ShreevatsaR's comment. – Alexey Romanov Aug 24 '10 at 20:12
  • Interesting. The same argumentation holds for `O(1/n^2)` as well. Given that we get `O(1/n)=O(1/n^2)`, yet this is obviously wrong since `1/n^2` decreases faster than `1/n` ==> there is no constant `c` such that `1/n <= c/n^2` for any `n>n0` – SomeWittyUsername Oct 02 '16 at 22:52
  • 1
    @SomeWittyUsername: Big Oh notation wasn't invented only for algorithm complexity, it's also useful for convergence of series, and estimating errors; e.g. we know a series converges if its terms are O(1/n^2), but may not if its terms are O(1/n); allowing infinite terms, anything in O(1/n^2) is also in O(1/n) but not vice versa. This isn't just abstract math, either: if you're building a calculator to get arctan(x) in floating point, you might use a Taylor series, calculate finitely many terms, and bound the error to check you have desired accuracy. Out of space to go into that further. – Elizabeth S. Q. Goodman Mar 22 '17 at 06:20
  • @phkahler: You're right, but that doesn't change the answer. If an algorithm's time complexity is O(1), it really is also O(n) by definition, and also O(n^2), even though you probably wouldn't answer O(n) when asked what its time complexity was. The question was whether there are any O(1/n) algorithms, and the empty algorithm is (the only) one. – LarsH Sep 02 '17 at 14:31
  • 1
    Wouldn't that be O(0)? – flarn2006 Jul 29 '19 at 00:33
25

sharptooth is correct, O(1) is the best possible performance. However, it does not imply a fast solution, just a fixed time solution.

An interesting variant, and perhaps what is really being suggested, is which problems get easier as the population grows. I can think of one, albeit contrived and tongue-in-cheek, answer:

Do any two people in a set have the same birthday? When n exceeds 365, return true. Although for less than 365, this is O(n ln n). Perhaps not a great answer since the problem doesn't slowly get easier but just becomes O(1) for n > 365.
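
A minimal sketch of that check (a hypothetical helper of my own, using a hash set for the small-n case instead of the O(n ln n) approach mentioned above, and 366 days per the leap-year comment below):

def has_shared_birthday(birthdays):
    # Pigeonhole principle: with only 366 possible birthdays (counting
    # Feb 29), any larger group must contain a duplicate -- constant time.
    if len(birthdays) > 366:
        return True
    return len(set(birthdays)) < len(birthdays)  # expected O(n) via hashing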

Coltin
  • 3,616
  • 7
  • 29
  • 36
Adrian
  • 1,148
  • 1
  • 12
  • 18
  • 7
    366. Don't forget about leap years! – Nick Johnson May 25 '09 at 12:17
  • 1
    You are correct. Like computers, I am occasionally subject to rounding errors :-) – Adrian May 26 '09 at 00:14
  • 10
    +1. There are a number of NP-complete problems that undergo a "phase transition" as n increases, i.e. they quickly become much easier or much harder as you exceed a certain threshold value of n. One example is the Number Partitioning Problem: given a set of n nonnegative integers, partition them into two parts so that the sum of each part is equal. This gets dramatically easier at a certain threshold value of n. – j_random_hacker May 26 '09 at 11:34
23

That's not possible. The definition of Big-O is a "not greater than" inequality:

A(n) = O(B(n))
<=>
exists constants C and n0, C > 0, n0 > 0 such that
for all n > n0, A(n) <= C * B(n)

So B(n) is in fact the maximum value; therefore, if it decreases as n increases, the estimation will not change.

YSC
  • 34,418
  • 7
  • 80
  • 129
sharptooth
  • 159,303
  • 82
  • 478
  • 911
  • 42
    I suspect this answer is the "right one", but unfortunately I lack the intellect to understand it. – freespace May 25 '09 at 06:24
  • 12
    AFAIK this condition does not have to be true for all n, but for all n > n_0 (i.e., only when the size of the input reaches a specific threshold). – Roland Ewald May 25 '09 at 07:37
  • 30
    I don't see how the definition (even corrected) contradicts the question of the OP. The definition holds for completely arbitrary functions! 1/n is a completely sensible function for B, and in fact your equation doesn't contradict that (just do the math). So no, despite much consensus, this answer is in fact *wrong*. Sorry. – Konrad Rudolph May 25 '09 at 08:00
  • 1
    I wouldn't say the answer is wrong. It's just incomplete (and the part that's missing has nothing to do with the math). While O(1/n) is perfectly fine mathematically speaking (I've already seen this for discussing overhead), we're still talking about an algorithm's *time complexity*, where O(1/n) would imply as a limiting case that solving an infinitely large problem costs an infinitely small amount of time. Which sort of answers the question, at least for me :) – Roland Ewald May 25 '09 at 08:32
  • 10
    Wrong! I don't like downvoting but you state that this is impossible when there is no clear consensus. In practice you are correct: if you do construct a function with 1/n runtime (easy), it will eventually hit some minimum time, effectively making it an O(1) algorithm when implemented. There is nothing to stop the algorithm from being O(1/n) on paper though. – jheriko May 25 '09 at 09:56
  • 1
    @jheriko correct, although as n tends to infinity an O(1/n) algorithm would theoretically take no time at all to execute (tending to O(0)), which is obviously idiotic, thus the minimum possible big-oh class is O(1). Oh, and yes, this answer is not the correct one for this question so -1 =] – Ed James May 25 '09 at 12:36
  • 1
    @Roland: If the condition holds for sufficiently large n for some constant, you can find another constant so that the condition holds for all n. The converse holds and thus the two definitions are equivalent. – jason May 25 '09 at 13:21
  • 3
    @Jason: Yep, now that you say it... :) @jheriko: A time complexity of O(1/n) does not work on paper IMHO. We're characterizing the growth function f(input size) = #ops for a Turing machine. If it does halt for an input of length n=1 after x steps, then I will choose an input size n >> x, i.e. large enough that, if the algorithm is indeed in O(1/n), no operation should be done. How should a Turing machine even notice this (it's not allowed to read once from the tape)? – Roland Ewald May 25 '09 at 13:54
  • 2
    @Konrad, jherico: Here's a proof (or just a restatement): If an algorithm is O(1/n), it means there is a constant C such that for all (sufficiently large) n, the time taken is T(n) < C*(1/n). Now take any n > 1/C. You would need T(n) < 1, i.e. T(n)=0, which is not possible. – ShreevatsaR May 25 '09 at 20:07
  • -1, not because the conclusion is incorrect (I believe it is correct), but because I can't see any sensible argument that leads to the stated conclusion. At the very least it's missing some important steps. (Also, as __roland__ observes, its definition is wrong -- the condition only has to be true for all n above some threshold value n_0.) – j_random_hacker May 26 '09 at 11:22
  • 1
    The shortest program you can have would have to execute at least a single statement. So, regardless of anything else that happens as the input size grows, you have at least O(1). Whether it performs even worse for smaller inputs than it does for larger inputs is irrelevant to the fact that it will always have AT LEAST one statement to execute, so it will be at least O(1). – kenj0418 Feb 28 '10 at 17:01
  • @Jason, can you further explain why the two definitions are equivalent? (with and without the "n > n0") Are you saying those two definitions are always equivalent, or only for O(1/n)? If always equivalent, why is "n > n0" part of the definition of O( ) notation at all? – LarsH Sep 29 '10 at 05:30
  • @LarsH: They are always equivalent. It is clear that if you have some constant `C` such that for all `n` we have `A(n) <= C * B(n)` then clearly we have a constant `n_0` such that for all `n > n_0` we have `A(n) <= C * B(n)` (just use `n_0 = 0`). The converse is almost as easy. Suppose that you have a constant `C` and an integer `n_0` such that `A(n) <= C * B(n)` for all `n > n_0`. Let `C_m = max(C, max(A(n) / B(n) | 1 <= n <= n_0))`. Then `C_m` satisfies `C_m >= C` so that `A(n) <= C_m * B(n)` when `n > n_0`. Further, it is chosen so that `A(n) <= C_m * B(n)` when `n <= n_0`. – jason Sep 29 '10 at 21:12
  • @LarsH: To note, for simplicity I am assuming that our functions `A` and `B` are defined on the positive natural numbers. It is clear how to extend this to a more general scenario. – jason Sep 29 '10 at 21:23
  • @Jason: OK I'm back, and I believe I got it: given that there are a finite number of values of n below n_0, you can always find a constant that satisfies `A(n) <= C_m * B(n)` for all of them. But then why is Big O notation defined using n_0? Isn't it redundant? – LarsH Sep 29 '10 at 22:01
  • @LarsH: You are correct. In response to your question, mathematicians try to get away with the weakest definition that they can. Further, keep in mind that `O` is about what happens asymptotically and this is clearly reflected in the definition that has the `n > n_0` clause. – jason Sep 30 '10 at 19:16
17

From my previous learning of big O notation, even if you need 1 step (such as checking a variable, doing an assignment), that is O(1).

Note that O(1) is the same as O(6), because the "constant" doesn't matter. That's why we say O(n) is the same as O(3n).

So if you need even 1 step, that's O(1)... and since your program at least needs 1 step, the minimum an algorithm can go is O(1). Unless we don't do it, then it is O(0), I think? If we do anything at all, then it is O(1), and that's the minimum it can go.

(If we choose not to do it, then it may become a Zen or Tao question... in the realm of programming, O(1) is still the minimum).

Or how about this:

programmer: boss, I found a way to do it in O(1) time!
boss: no need to do it, we are bankrupt this morning.
programmer: oh then, it becomes O(0).

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
nonopolarity
  • 130,775
  • 117
  • 415
  • 675
  • Your joke reminded me of something from the Tao of Programming: http://www.canonical.org/~kragen/tao-of-programming.html#book8 (8.3) – kenj0418 Feb 28 '10 at 17:06
  • An algorithm consisting of zero steps is O(0). That's a very lazy algorithm. – nalply Oct 10 '11 at 20:41
8

No, this is not possible:

As n tends to infinity in 1/n, we eventually achieve 1/(inf), which is effectively 0.

Thus, the big-oh class of the problem would be O(0) with a massive n, but closer to constant time with a low n. This is not sensible, as the only thing that can be done faster than constant time is:

void nothing() {};

And even this is arguable!

As soon as you execute a command, you're in at least O(1), so no, we cannot have a big-oh class of O(1/n)!

Ed James
  • 9,985
  • 15
  • 68
  • 102
7

What about not running the function at all (NOOP)? Or using a fixed value? Does that count?

SpliFF
  • 35,724
  • 15
  • 80
  • 113
  • 16
    That's still O(1) runtime. – Konrad Rudolph May 25 '09 at 08:10
  • 2
    Right, that's still O(1). I don't see how someone can understand this, and yet claim in another answer that something less than NO-OP is possible. – ShreevatsaR May 25 '09 at 20:21
  • 4
    ShreevatsaR: there is absolutely no contradiction. You seem to fail to grasp that big O notation has got *nothing to do* with the time spent in the function – rather, it describes how that time *changes* with changing input (above a certain value). See other comment thread for more. – Konrad Rudolph May 25 '09 at 20:42
  • I grasp it perfectly well, thank you. The point — as I made several times in the other thread — is that if the time decreases with input, at rate O(1/n), then it must eventually decrease below the time taken by NOOP. This shows that no algorithm can be O(1/n) asymptotically, although certainly its runtime can decrease up to a limit. – ShreevatsaR Jul 09 '10 at 07:25
  • @ShreevatsaR: It depends on whether you define the null algorithm to take a small amount of time or no time at all – Casebash Jul 09 '10 at 10:56
  • 1
    Yes... as I said elsewhere, any algorithm that is O(1/n) should also take zero time for all inputs, so depending on whether you consider the null algorithm to take 0 time or not, there is an O(1/n) algorithm. So *if* you consider NOOP to be O(1), *then* there are no O(1/n) algorithms. – ShreevatsaR Jul 09 '10 at 16:51
  • Python has a `pass` statement, which runs in `O(1)` :) – Hamish Grubijan Aug 05 '10 at 01:33
7

I often use O(1/n) to describe probabilities that get smaller as the inputs get larger -- for example, the probability that a fair coin comes up tails on all of log2(n) flips is O(1/n).
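
For instance, here is a quick illustrative check (my own sketch, assuming n is a power of 2): the chance that log2(n) fair flips all come up tails is (1/2)^log2(n) = 1/n.

import math
import random

def all_tails(n):
    # log2(n) fair flips; P(all tails) = (1/2)**log2(n) = 1/n
    return all(random.random() < 0.5 for _ in range(round(math.log2(n))))

n, trials = 64, 100_000
freq = sum(all_tails(n) for _ in range(trials)) / trials
print(freq, "should be close to", 1 / n)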

Dave
  • 9,441
  • 1
  • 35
  • 32
  • 6
    That's not what big O is though. You can't just redefine it in order to answer the question. – Zifre May 25 '09 at 20:11
  • 11
    It's not a redefinition, it's exactly the definition of big O. – ShreevatsaR May 25 '09 at 20:18
  • 2
    Big O is about time complexity, not probability. – Zifre May 25 '09 at 22:57
  • 10
    I am a theoretical computer scientist by trade. It's about the asymptotic order of a function. – Dave May 25 '09 at 23:03
  • 4
    Big O is a property of an arbitrary real function. Time complexity is just one of its possible applications. Space complexity (the amount of working memory an algorithm uses) is another. That the question is about O(1/n) _algorithms_ implies that it's one of these (unless there's another that applies to algorithms that I don't know about). Other applications include orders of population growth, e.g. in Conway's Life. See also http://en.wikipedia.org/wiki/Big_O_notation – Stewart Aug 17 '09 at 15:51
  • 5
    @Dave: The question wasn't whether there exist O(1/n) functions, which obviously do exist. Rather, it was whether there exist O(1/n) algorithms, which (with the possible exception of the null function) can't exist – Casebash Jul 09 '10 at 10:58
6

For anyone who's reading this question and wants to understand what the conversation is about, this might help:

|    |constant |logarithmic |linear|  N-log-N |quadratic|  cubic  |  exponential  |
|  n |  O(1)   | O(log n)   | O(n) |O(n log n)|  O(n^2) |  O(n^3) |     O(2^n)    |
|  1 |       1 |          1 |     1|         1|        1|       1 |             2 |
|  2 |       1 |          1 |     2|         2|        4|       8 |             4 |
|  4 |       1 |          2 |     4|         8|       16|      64 |            16 |
|  8 |       1 |          3 |     8|        24|       64|     512 |           256 |
| 16 |       1 |          4 |    16|        64|      256|   4,096 |        65,536 |
| 32 |       1 |          5 |    32|       160|    1,024|  32,768 | 4,294,967,296 |
| 64 |       1 |          6 |    64|       384|    4,096| 262,144 |   1.8 x 10^19 |
Learner
  • 311
  • 3
  • 8
Craig O'Connor
  • 150
  • 2
  • 10
6

O(1) simply means "constant time".

When you add an early exit to a loop[1] you're (in big-O notation) turning an O(1) algorithm into O(n), but making it faster.

The trick is that in general the constant time algorithm is the best, and linear is better than exponential, but for small amounts of n, the exponential algorithm might actually be faster.

1: Assuming a static list length for this example
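
A sketch of the idea (my own illustration, not the answer's code): the early-exit version's step count now depends on the input, but it never does more work than the full scan.

def contains_full_scan(items, target):
    # Always inspects every element: a fixed amount of work for a
    # static list length.
    found = False
    for x in items:
        if x == target:
            found = True
    return found

def contains_early_exit(items, target):
    # The step count now varies with where (and whether) target occurs,
    # but it is never slower than the full scan above.
    for x in items:
        if x == target:
            return True
    return False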

LapTop006
  • 512
  • 4
  • 9
4

Many people have had the correct answer (No). Here's another way to prove it: in order to have a function, you have to call the function, and it has to return an answer. This takes a certain constant amount of time. EVEN IF the rest of the processing took less time for larger inputs, printing out the answer (which we can assume to be a single bit) takes at least constant time.

Brian Postow
  • 10,227
  • 14
  • 69
  • 113
4

I believe quantum algorithms can do multiple computations "at once" via superposition...

I doubt this is a useful answer.

Jeff Meatball Yang
  • 34,069
  • 26
  • 85
  • 118
  • 1
    That would still be constant time, i.e. O(1), meaning it takes the same amount of time to run for data of size n as it does for data of size 1. – freespace May 25 '09 at 06:22
  • 1
    Good point. The main use of quantum algorithms is to tackle exponential classical algorithms to bring them down to polynomial time. No algorithm would get faster as n grows larger, as O(1/n) would imply. – Jeff Meatball Yang May 25 '09 at 06:30
  • 2
    But what if the problem was a pale ale? (ah. hah. ha.) – Jeff Meatball Yang May 25 '09 at 06:31
  • 7
    That would be a super position to be in. – Daniel Earwicker May 25 '09 at 07:27
  • 2
    Quantum algorithms can do multiple computations, but you can only retrieve the result of one computation, and you can't choose which result to get. Thankfully, you can also do operations on a quantum register as a whole (for example, QFT) so you're much likelier to find something :) – Gracenotes May 25 '09 at 08:02
  • 2
    it's perhaps not useful, but it has the advantage of being true, which puts it above some of the more highly voted answers B-) – Brian Postow Jul 09 '10 at 19:20
2

If a solution exists, it can be prepared in advance and accessed in constant time, i.e. immediately. For instance, use a LIFO data structure if you know the sorting query asks for reverse order. The data is then already sorted, given that the appropriate model (LIFO) was chosen.

Larsson
  • 21
  • 1
2

You can't go below O(1); however, O(k) where k is less than N is possible. These are called sublinear time algorithms. In some problems, a sublinear time algorithm can only give approximate solutions to a particular problem. However, sometimes an approximate solution is just fine, probably because the dataset is too large, or because it's way too computationally expensive to compute over all of it.
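
For instance, binary search (also brought up in the comments below) gives exact answers in sublinear O(log n) time; a standard textbook sketch:

def binary_search(sorted_items, target):
    # Halves the search interval each step: O(log n) comparisons.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1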

Hao Wooi Lim
  • 3,748
  • 4
  • 27
  • 34
  • 1
    Not sure I understand. Log(N) is less than N. Does that mean that Log(N) is a sublinear algorithm? And many Log(N) algorithms do exist. One such example is finding a value in a binary tree. However, these are still different than 1/N, Since Log(N) is always increasing, while 1/n is a decreasing function. – Kibbee May 25 '09 at 14:41
  • Looking at definition, sublinear time algorithm is any algorithm whose time grows slower than size N. So that includes logarithmic time algorithm, which is Log(N). – Hao Wooi Lim May 26 '09 at 02:08
  • 2
    Uh, sublinear time algorithms can give exact answers, e.g. binary search in an ordered array on a RAM machine. – A. Rex May 28 '09 at 00:39
  • @A. Rex: Hao Wooi Lim said "In some problems". – LarsH Sep 29 '10 at 05:41
2

Which problems get easier as the population grows? One answer is something like BitTorrent, where download speed is an inverse function of the number of nodes. Contrary to a car, which slows down the more you load it, a file-sharing network like BitTorrent speeds up the more nodes are connected.
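
In an idealized model (my own sketch, ignoring protocol overhead and per-peer upload caps), the transfer time for a fixed file falls off as O(1/peers):

def download_time(file_size, peers, rate_per_peer=1.0):
    # Aggregate bandwidth grows with the number of peers, so for a
    # fixed file size the time is inversely proportional to peer count.
    return file_size / (peers * rate_per_peer)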

Niklas R.
  • 22,209
  • 67
  • 202
  • 380
  • Yes, but the number of bittorrent nodes is more like the number of processors in a parallel computer. The "N" in this case would be the size of the file trying to be downloaded. Just as you could find an element in an unsorted array of length N in constant time if you had N computers, you could download a file of Size N in constant time if you had N computers trying to send you the data. – Kibbee May 25 '09 at 14:54
1

As has been pointed out, apart from the possible exception of the null function, there can be no O(1/n) functions, as the time taken will have to approach 0.

Of course, there are some algorithms, like that defined by Konrad, which seem like they should be less than O(1) in at least some sense.

from time import sleep

def get_faster(list):
    how_long = 1 / len(list)
    sleep(how_long)

If you want to investigate these algorithms, you should either define your own asymptotic measurement, or your own notion of time. For example, in the above algorithm, I could allow a set number of "free" operations. In the above algorithm, if I define t' by excluding the time for everything but the sleep, then t' = 1/n, which is O(1/n). There are probably better examples, as the asymptotic behavior is trivial. In fact, I am sure that someone out there can come up with senses that give non-trivial results.

Casebash
  • 100,511
  • 79
  • 236
  • 337
  • "As has been pointed out, apart from the possible exception of the null function, there can be no O(1/n) functions, as the time taken will have to approach 0." Uh... what about 1/n, or 1/n², or 1/n³, or... – A.P. Oct 16 '20 at 19:58
1

Most of the rest of the answers interpret big-O to be exclusively about the running time of an algorithm. But since the question didn't mention it, I thought it's worth mentioning the other application of big-O in numerical analysis, which is about error.

Many algorithms can be O(h^p) or O(n^{-p}) depending on whether you're talking about step-size (h) or number of divisions (n). For example, in Euler's method, you look for an estimate of y(h) given that you know y(0) and dy/dx (the derivative of y). Your estimate of y(h) is more accurate the closer h is to 0. So in order to find y(x) for some arbitrary x, one takes the interval 0 to x, splits it up into n pieces, and runs Euler's method at each point, to get from y(0) to y(x/n) to y(2x/n), and so on.

So Euler's method is then an O(h) or O(1/n) algorithm, where h is typically interpreted as a step size and n is interpreted as the number of times you divide an interval.

You can also have O(1/h) in real numerical analysis applications, because of floating point rounding errors. The smaller you make your interval, the more cancellation occurs for the implementation of certain algorithms, more loss of significant digits, and therefore more error, which gets propagated through the algorithm.

For Euler's method, if you are using floating point, a small enough step causes cancellation: you're adding a small number to a big number, leaving the big number unchanged. For algorithms that calculate the derivative by subtracting two values of a function evaluated at two very close positions, approximating y'(x) with (y(x+h) - y(x)) / h, in smooth functions y(x+h) gets close to y(x), resulting in large cancellation and an estimate for the derivative with fewer significant figures. This will in turn propagate to whatever algorithm you require the derivative for (e.g., a boundary value problem).
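
A minimal sketch of the O(1/n) error behaviour (illustrative code of my own, not from the answer): for y' = y with y(0) = 1, Euler's method approximates y(1) = e, and the global error shrinks roughly like 1/n.

import math

def euler(f, y0, x_end, n):
    # n steps of size h = x_end / n; the global error of Euler's
    # method is O(h) = O(1/n).
    h = x_end / n
    x, y = 0.0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

for n in (10, 100, 1000):
    err = abs(euler(lambda x, y: y, 1.0, 1.0, n) - math.e)
    print(n, err)  # the error falls by roughly 10x as n grows 10x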

Andrew Lei
  • 335
  • 2
  • 9
1

I guess less than O(1) is not possible. Any time taken by an algorithm is termed O(1). But how about the function below for O(1/n)? (I know there are many variants already presented in these answers, but I guess they all have some flaws -- not major ones; they explain the concept well.) So here is one, just for the sake of argument:

from time import sleep

def one_by_n(n, C = 10):   # n could be float. C could be any positive number.
  if n <= 0.0:             # If input is actually 0, infinite loop.
    while True:
      sleep(1)             # or pass
    return                 # This line is not needed and is unreachable.
  delta = 0.0001
  itr = delta
  while itr < C/n:         # loop on itr (not delta) so it terminates;
    itr += delta           # roughly C/(n*delta) iterations, shrinking as n grows

Thus as n increases the function will take less and less time. Also it is ensured that if input actually is 0, then the function will take forever to return.

One might argue that it will be bounded by the precision of the machine, and thus, since it has an upper bound, it is O(1). But we can bypass that as well, by taking the inputs n and C as strings, with addition and comparison done on the strings. The idea is that with this we can make n arbitrarily small. Thus the upper limit of the function is not bounded, even when we ignore n = 0.

I also believe that we can't just say that the run time is O(1/n); rather, we should say something like O(1 + 1/n).

user1953366
  • 889
  • 2
  • 9
  • 18
1

O(1/n) is not less than O(1); it basically means that the more data you have, the faster the algorithm goes. Say you get an array and always fill it up to 10^100 elements if it has fewer than that, and do nothing if there's more. This one is not O(1/n) of course but something like O(-n) :) Too bad big-O notation does not allow negative values.

vava
  • 22,949
  • 11
  • 60
  • 78
  • 1
    "O(1/n) is not less then O(1)" -- if a function f is O(1/n), it's also O(1). And big-oh feels a lot like a "lesser than" relation: it's reflexive, it's transitive, and if we have symmetry between f and g the two are equivalent, where big-theta is our equivalence relation. ISTR "real" ordering relations requiring a <= b and b <= a to imply a = b, though, and netcraft^W wikipedia confirms it. So in a sense, it's fair to say that indeed O(1/n) is "less than" O(1). – Jonas Kölker May 26 '09 at 03:15
1

What about this:

void FindRandomInList(list l)
{
    while(1)
    {
        int rand = Random.next();
        if (l.contains(rand))
            return;
    }
}

As the size of the list grows, the expected runtime of the program decreases.

Shalmanese
  • 5,096
  • 9
  • 27
  • 41
  • i think you dont understand the meaning of O(n) – Markus Lausberg May 25 '09 at 06:40
  • Not with a list though; with an array or hash where `contains` is O(1) – vava May 25 '09 at 06:43
  • ok, the random function can be thought of as a lazy array, so you're basically searching each element in the "lazy random list" and checking whether it's contained in the input list. I think this is *worse* than linear, not better. – hasen May 25 '09 at 06:54
  • He's got some point if you notice that int has a limited set of values. So when l would contain 2^64 values it's going to be instantaneous all the way. Which makes it worse than O(1) anyway :) – vava May 25 '09 at 07:01
0

In numerical analysis, approximation algorithms should have sub-constant asymptotic complexity in the approximation tolerance.

class Function
{
    public double[] ApproximateSolution(double tolerance)
    {
        // if this isn't sub-constant on the parameter, it's rather useless
    }
}
Sam Harwell
  • 92,171
  • 18
  • 189
  • 263
  • do you really mean sub-constant, or sublinear? Why should approximation algorithms be sub-constant? And what does that even mean?? – LarsH Sep 29 '10 at 05:45
  • @LarsH, the error of approximation algorithms is proportional to the step size (or to a positive power of it), so the smaller your step size, the smaller the error. But another common way to examine an approximation problem is the error as compared to how many times an interval is divided. The number of partitions of an interval is inversely proportional to the step size, so the error is inversely proportional to some positive power of the number of partitions - as you increase the number of partitions, your error decreases. – Andrew Lei Sep 02 '17 at 02:22
  • @AndrewLei: Wow, an answer almost 7 years later! I understand Sam's answer now better than I did then. Thanks for responding. – LarsH Sep 02 '17 at 02:32
0

OK, I did a bit of thinking about it, and perhaps there exists an algorithm that could follow this general form:

You need to compute the traveling salesman problem for a 1000 node graph, however, you are also given a list of nodes which you cannot visit. As the list of unvisitable nodes grows larger, the problem becomes easier to solve.

Shalmanese
  • 5,096
  • 9
  • 27
  • 41
  • 4
    It's a different kind of n in the O(n) then. With this trick you could say every algorithm has O(q), where q is the number of people living in China, for example. – vava May 25 '09 at 06:35
  • 2
    Boyer-Moore is of a similar kind (O(n/m)), but that's not really "better than O(1)", because n >= m. I think the same is true for your "unvisitable TSP". – Niki May 25 '09 at 06:39
  • Even in this case the runtime of the TSP is NP-Complete, you're simply removing nodes from the graph, and therefore effectively decreasing n. – Ed James May 25 '09 at 12:41
0

I see an algorithm that is O(1/n), admittedly only up to an upper bound:

You have a large series of inputs which are changing due to something external to the routine (maybe they reflect hardware, or it could even be some other core in the processor doing it), and you must select a random but valid one.

Now, if it wasn't changing you would simply make a list of items, pick one randomly and get O(1) time. However, the dynamic nature of the data precludes making a list, you simply have to probe randomly and test the validity of the probe. (And note that inherently there is no guarantee the answer is still valid when it's returned. This still could have uses--say, the AI for a unit in a game. It could shoot at a target that dropped out of sight while it was pulling the trigger.)

This has a worst-case performance of infinity but an average case performance that goes down as the data space fills up.
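
A sketch of that probing loop (hypothetical helper names of my own): if k of the domain_size candidates are currently valid, the expected number of probes is domain_size / k, which falls as the data space fills up.

import random

def pick_random_valid(is_valid, domain_size):
    # Expected probes = domain_size / (number of valid candidates);
    # the worst case is unbounded, but the average improves as more
    # of the space becomes valid.
    while True:
        candidate = random.randrange(domain_size)
        if is_valid(candidate):
            return candidate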

Loren Pechtel
  • 8,549
  • 3
  • 27
  • 45
-1

It may be possible to construct an algorithm that is O(1/n). One example would be a loop that iterates some multiple of f(n)-n times, where f(n) is some function whose value is guaranteed to be greater than n and the limit of f(n)-n as n approaches infinity is zero. The calculation of f(n) would also need to be constant for all n. I do not know offhand what f(n) would look like, or what application such an algorithm would have; in my opinion, however, such a function could exist, but the resulting algorithm would have no purpose other than to prove the possibility of an algorithm with O(1/n).
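
One concrete (purely illustrative) choice is f(n) = n + floor(C/n) for some constant C, so the body runs floor(C/n) times, a count that tends to zero as n grows (though with integer iteration counts it must eventually bottom out, and the loop check itself still costs constant time, as the comment below points out):

def shrinking_loop(n, C=1000):
    # Runs f(n) - n = C // n iterations; this tends to 0 as n grows,
    # but evaluating the loop bound itself still takes O(1) time.
    for _ in range(C // n):
        pass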

Greg
  • 21
  • 1
  • Your loop requires a check which takes at least constant time, so the resulting algorithm has at least complexity O(1). – Stefan Reich Jun 10 '19 at 13:01
-2
inline void O0Algorithm() {}
sth
  • 200,334
  • 49
  • 262
  • 354
Stewart
  • 3,675
  • 3
  • 25
  • 34
-2

Here's a simple O(1/n) algorithm. And it even does something interesting!

function foo(list input) {
  int m;
  double output;

  m = (1 / input.size) * max_value;
  output = 0;
  for (int i = 0; i < m; i++)
    output += random(0,1);

  return output;
}

O(1/n) is possible as it describes how the output of a function changes given increasing size of input. If we are using the function 1/n to describe the number of instructions a function executes, then there is no requirement that the function take zero instructions for any input size. Rather, it is that for every input size n above some threshold, the number of instructions required is bounded above by a positive constant multiplied by 1/n. As there is no actual number for which 1/n is 0, and the constant is positive, there is no reason why the function would be constrained to take 0 or fewer instructions.

sth
  • 200,334
  • 49
  • 262
  • 354
ejspencer
  • 102
  • 7
  • 1
    Since O(1/n) will fall below the horizontal line =1, and when n reaches infinite, your code will still execute a given number of steps, this algorithm is an O(1) algorithm. Big-O notation is a function of all the different parts of the algorithm, and it picks the biggest one. Since the method will always run some of the instructions, when n reaches infinite, you're left with those same instructions executing every time, and thus the method will then run in constant time. Granted, it won't be much time, but that's not relevant to Big-O notation. – Lasse V. Karlsen Jan 23 '10 at 23:55
-2

I don't know about algorithms but complexities less than O(1) appear in randomized algorithms. Actually, o(1) (little o) is less than O(1). This kind of complexity usually appears in randomized algorithms. For example, as you said, when the probability of some event is of order 1/n they denote it with o(1). Or when they want to say that something happens with high probability (e.g. 1 - 1/n) they denote it with 1 - o(1).

A. Mashreghi
  • 1,465
  • 2
  • 15
  • 28
-2

If the answer is the same regardless of the input data then you have an O(0) algorithm.

Or, in other words: the answer is known before the input data is submitted - the function could be optimised out - so O(0).

pro
  • 957
  • 2
  • 9
  • 24
-2

Big-O notation represents the worst case scenario for an algorithm, which is not the same thing as its typical run time. It is simple to prove that an O(1/n) algorithm is an O(1) algorithm. By definition,

O(1/n) --> T(n) <= 1/n, for all n >= C > 0
O(1/n) --> T(n) <= 1/C, since 1/n <= 1/C for all n >= C
O(1/n) --> O(1), since Big-O notation ignores constants (i.e. the value of C doesn't matter)

Lawrence Barsanti
  • 27,683
  • 10
  • 43
  • 64
  • No: Big O notation is also used to talk about average-case and expected time (and even best-case) scenarios. The rest follows. – Konrad Rudolph May 25 '09 at 13:23
  • The 'O' notation certainly defines an *upper bound* (in terms of algorithmic complexity, this would be the worst case). Omega and Theta are used to denote best and average case, respectively. – Roland Ewald May 25 '09 at 13:59
  • 2
    Roland: That's a misconception; upper bound is not the same thing as worst-case, the two are independent concepts. Consider the expected (and average) runtime of the `hashtable-contains` algorithm which can be denoted as O(1) -- and the worst case can be given very precisely as Theta(n)! Omega and Theta may simply be used to denote other bounds but *to say it again*: they have got nothing to do with average or best case. – Konrad Rudolph May 25 '09 at 14:17
  • Konrad: True. Still, Omega, Theta and O are usually used to *express* bounds, and if *all* possible inputs are considered, O represents the upper bound, etc. – Roland Ewald May 25 '09 at 15:03
  • 1
    The fact that O(1/n) is a subset of O(1) is trivial and follows directly from the definition. In fact, if a function g is O(h), then any function f which is O(g) is also O(h). – Tobias Jun 11 '09 at 17:43
-2

Nothing is smaller than O(1). Big-O notation implies the largest order of complexity for an algorithm.

If an algorithm has a runtime of n^3 + n^2 + n + 5, then it is O(n^3). The lower powers don't matter here at all, because as n -> Inf, n^2 will be irrelevant compared to n^3.

Likewise, as n -> Inf, O(1/n) will be irrelevant compared to O(1), hence 3 + O(1/n) will be the same as O(1), thus making O(1) the smallest possible computational complexity.

-3

There are sub-linear algorithms. In fact, the Boyer-Moore search algorithm is a very good example of one.

Esteban Araya
  • 27,658
  • 22
  • 99
  • 139
  • Nice, but the size of the input should really be the sum of the string lengths (searched + searched_for). – phkahler Feb 15 '10 at 14:34
  • 1
    OK, there are sub-linear algorithms, but what does that have to do with the question? Linear is O(n). Constant is O(1). – LarsH Sep 29 '10 at 05:54
-3

I don't understand the mathematics, but the concept appears to be looking for a function that takes less time as you add more inputs? In that case, what about:

def f(*args):
  args = list(args)    # tuples are immutable, so copy to a mutable list
  if len(args) < 2:    # no optional second argument supplied...
    args.append(10)    # ...so spend time assigning a default

This function is quicker when the optional second argument is added because otherwise it has to be assigned. I realise this isn't an equation, but then the Wikipedia page says big-O is often applied to computing systems as well.

SpliFF
  • 35,724
  • 15
  • 80
  • 113