I'm practicing writing algorithms optimized for space or time complexity. With prime sieves, at a minimum you must store the list of all primes found, so it seems that space proportional to the number of primes found is the least amount of space such an algorithm can use.
- Is this rationale valid?
- How would the space complexity for this algorithm be evaluated?
From Wikipedia's article on the sieve of Atkin - what I'm unsure about is how a sieve can use O(n^(1/2)) space when the number of primes up to n exceeds this. This is why it seems the space must be at least proportional to the number of primes. Am I confusing the count of primes with space complexity?
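To make the mismatch concrete, here is a quick check (my own illustration, not from the Wikipedia article) that the prime-counting function π(n) outgrows √n, using a throwaway sieve of Eratosthenes just for counting:

```python
import math

def count_primes(n):
    # Basic sieve of Eratosthenes, used here only to count primes up to n.
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, math.isqrt(n) + 1):
        if is_prime[p]:
            # Clear every multiple of p starting from p*p.
            is_prime[p * p :: p] = bytes(len(range(p * p, n + 1, p)))
    return sum(is_prime)

for n in (10**2, 10**4, 10**6):
    print(n, math.isqrt(n), count_primes(n))
# n = 10**6 gives sqrt(n) = 1000 but pi(n) = 78498
```

So for any sizable n, π(n) is far larger than √n, which is the heart of my confusion: a list of all the primes plainly does not fit in O(n^(1/2)) cells.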
In this paper on the sieve of Atkin, the algorithm "prints the prime numbers up to N", and the authors note: "Here “memory” does not include the paper used by the printer." This seems like an unfair way to account for space.
- I would appreciate clarification on how space is actually measured here, and how it should be measured objectively.
def prime_sieve(limit):
    # Incremental sieve: `factors` maps each upcoming composite to one of
    # its prime factors, so it holds one entry per prime found so far.
    factors = dict()
    primes = []
    factors[4] = 2          # the first composite scheduled for the prime 2
    primes.append(2)
    for n in range(3, limit + 1):
        if n not in factors:
            # n was never marked as a composite, so it is prime.
            factors[n * 2] = n      # schedule its first multiple
            primes.append(n)
        else:
            # n is composite: slide its prime's marker forward to the
            # next multiple not already claimed by another prime.
            prime = factors[n]
            m = n + prime
            while m in factors:
                m += prime
            factors[m] = prime
            del factors[n]
    return primes
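For reference, here is what the dict actually costs in my sieve. This restates the function above with an added counter (my own instrumentation, names are mine) that tracks the peak size of `factors`, i.e. the working space on top of the output list:

```python
def prime_sieve_instrumented(limit):
    # Same incremental sieve as above, but also records the peak number
    # of entries held in `factors` at any one time.
    factors = {4: 2}
    primes = [2]
    peak = 1
    for n in range(3, limit + 1):
        if n not in factors:
            factors[n * 2] = n
            primes.append(n)
        else:
            prime = factors.pop(n)
            m = n + prime
            while m in factors:
                m += prime
            factors[m] = prime
        peak = max(peak, len(factors))
    return primes, peak

primes, peak = prime_sieve_instrumented(1000)
print(len(primes), peak)  # both 168: one pending composite per prime found
```

Every prime contributes exactly one pending entry, and processing a composite removes one entry and adds one back, so the dict's size always equals the number of primes found so far. That is why this version seems to need space proportional to the prime count, which is what prompted my question.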