You’ve asked several questions here. Let’s address each in turn:
The book seems to state that as long as I can find a constant C and some number n0 such that f(x) <= C*g(x) for all n > n0, then I can conclude f(x) = O(g(x)). It would seem inductive reasoning would be required to prove it holds true for all n, but the book doesn't say anything about that!
That is indeed the formal definition of big-O notation. However, nothing about this definition necessitates the use of induction. For example, suppose I want to prove that 2n + 137 = O(n). I’ll pick c = 3 and n0 = 137 and show that these constants have the properties we want. Specifically, pick any n ≥ 137. Then
2n + 137
≤ 2n + n (since 137 ≤ n)
= 3n,
and we’ve arrived here without using any induction.
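If it helps to see those constants in action, here’s a quick numeric sanity check (not a substitute for the algebra above, since testing finitely many values of n proves nothing asymptotically):

```python
# Sanity check (not a proof): verify 2n + 137 <= 3n for a range of n >= 137.
# c = 3 and n0 = 137 are the constants chosen in the argument above.
c, n0 = 3, 137

def f(n):
    return 2 * n + 137

def g(n):
    return n

for n in range(n0, n0 + 10_000):
    assert f(n) <= c * g(n), f"bound fails at n = {n}"

print("2n + 137 <= 3n holds for all tested n >= 137")
```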
I am also seeing some people answering questions using a definition of big O that I am not seeing in the book (the book does give one like it for little o, though). Namely, O is:
f(n) = O(g(n)) if lim_{n → ∞} f(n) / g(n) exists
Is this considered a more rigorous way of determining asymptotic bounds?
This one is interesting. Imagine that f and g are functions where lim_{n → ∞} f(n) / g(n) exists. Then indeed you will have f(n) = O(g(n)). One way to see this is to use the formal definition of limits at infinity. Specifically, if L is finite, then by definition

lim_{n → ∞} h(n) = L if and only if for any ε > 0, there is an n_ε such that |h(n) − L| ≤ ε for all n ≥ n_ε.
So suppose that lim_{n → ∞} f(n) / g(n) = L for some constant L. Now, pick ε = 1, so there’s some n1 where for any n ≥ n1 we have
|f(n) / g(n) - L| ≤ 1.
In particular, that means that
f(n) / g(n) - L ≤ 1,
so, multiplying through by g(n) (which we take to be positive, as running-time functions are),

f(n) ≤ (1 + L)g(n).
Picking c = 1 + L and n0 = n1 then gives us the constants we need to prove f(n) = O(g(n)).
Stated differently, if that limit exists, then f(n) = O(g(n)).
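To make that argument concrete, the sketch below uses the hypothetical functions f(n) = 3n + 5 and g(n) = n, whose ratio tends to L = 3, and numerically hunts for the n1 that the choice ε = 1 guarantees:

```python
# Illustration (not a proof) of extracting c and n0 from a limit.
# Hypothetical example: f(n) = 3n + 5, g(n) = n, so f(n)/g(n) -> L = 3.
L = 3
eps = 1

def f(n):
    return 3 * n + 5

def g(n):
    return n

# Find the smallest n1 with |f(n)/g(n) - L| <= eps.
# Here the gap is 5/n, which shrinks monotonically, so once the condition
# holds it keeps holding: 5/n <= 1 exactly when n >= 5.
n1 = next(n for n in range(1, 10_000) if abs(f(n) / g(n) - L) <= eps)

# The derivation above then gives c = 1 + L and n0 = n1.
c, n0 = 1 + L, n1
for n in range(n0, n0 + 10_000):
    assert f(n) <= c * g(n)

print(f"n1 = {n1}, so f(n) <= {c} * g(n) for all tested n >= {n1}")
```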
However, it’s possible that f(n) = O(g(n)) even if that limit doesn’t exist. For example, pick f(n) = 2 + sin n and g(n) = 1. Then f(n) = O(g(n)), but the limit of f(n) / g(n) doesn’t exist because the values keep oscillating between 1 and 3 as n grows. So the converse isn’t true.
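A quick numeric look at this example (illustrative only, and using `math.sin` on integer n):

```python
import math

# f(n) = 2 + sin n is O(1): it never exceeds 3, so c = 3 and n0 = 1 work,
# even though f(n)/g(n) = 2 + sin n keeps oscillating and has no limit.
def f(n):
    return 2 + math.sin(n)

def g(n):
    return 1

c, n0 = 3, 1
values = [f(n) / g(n) for n in range(n0, 1000)]
assert all(v <= c for v in values)

# The ratio keeps swinging between roughly 1 and 3 rather than settling:
print("min ratio:", min(values), "max ratio:", max(values))
```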
I think - but am not 100% sure - that if you replace the limit with a limit superior that you get an equivalence, but that’s something I’ll need to think more about.
Is this a more “rigorous” way to do big-O proofs? I wouldn’t say so. It’s just as rigorous as doing it the other way, though this may be more beneficial if you’re trying to build an intuition.
If that is the case I take it Ω is:
f(n) = Ω(g(n)) if lim_{n → ∞} f(n) / g(n) > 0
and Θ is: f(n) = O(g(n)) Λ f(n) = Ω(g(n))
Very close! As mentioned above, the limit trick isn’t a “definition” of big-O notation because it doesn’t handle all cases, so you can’t define Ω notation this way. However, your definition of Θ notation is spot-on.
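To see the two-sided Θ characterization in action, here’s one more sanity check (again not a proof; the example f(n) = 5n + 3 is hypothetical) that a function can be sandwiched between constant multiples of g from both sides:

```python
# Sanity check (not a proof): f(n) = 5n + 3 is Θ(n), i.e. both
# O(n) (upper bound) and Ω(n) (lower bound).
def f(n):
    return 5 * n + 3

def g(n):
    return n

# 5n <= 5n + 3 always, and 5n + 3 <= 6n once n >= 3.
c_lower, c_upper, n0 = 5, 6, 3

for n in range(n0, n0 + 10_000):
    assert c_lower * g(n) <= f(n) <= c_upper * g(n)

print("5n <= f(n) <= 6n for all tested n >= 3, so f(n) = Θ(n)")
```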
Hope this helps, and good luck!