Questions tagged [amortized-analysis]

An amortized analysis is an analysis of the total runtime of a sequence of operations, rather than of the individual runtime of any one operation in that sequence.

In theoretical computer science, the asymptotic worst-case performance of a single operation is typically treated as the most important metric. However, this ignores the fact that in practice what we usually want is for the overall computation time to be low.

Keeping the overall computation time low sometimes requires algorithms whose worst-case complexity for a single operation is high, but whose time complexity averaged over a long sequence of operations is good.

A well-known example is how an std::vector<> grows its storage when a push_back exceeds the currently allocated capacity: it reallocates an array that is M times bigger, where M is typically around 1.5 to 2. As long as M > 1, the amortized run-time per push_back is constant, O(1). Naively growing the array by a fixed amount on each such push_back instead gives an amortized cost of O(N) per push_back, where N is the number of push_backs.
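As an illustration, here is a minimal sketch of geometric growth with M = 2, using a hypothetical IntVec class (this is not how any particular standard library implements std::vector). The total copying done over N push_backs is at most N + N/2 + N/4 + … < 2N element copies, which is why the amortized cost per push_back is O(1).

```cpp
#include <cstddef>
#include <iostream>

// Minimal sketch of geometric growth with M = 2 (hypothetical IntVec,
// not any real standard library's std::vector implementation).
class IntVec {
    int*        data_     = nullptr;
    std::size_t size_     = 0;
    std::size_t capacity_ = 0;

public:
    IntVec() = default;
    ~IntVec() { delete[] data_; }
    IntVec(const IntVec&) = delete;             // keep the sketch simple
    IntVec& operator=(const IntVec&) = delete;

    void push_back(int value) {
        if (size_ == capacity_) {
            // Occasional O(size) reallocation; its cost is spread over the
            // many cheap pushes that preceded it, giving O(1) amortized.
            std::size_t new_cap = (capacity_ == 0) ? 1 : capacity_ * 2;
            int* new_data = new int[new_cap];
            for (std::size_t i = 0; i < size_; ++i) new_data[i] = data_[i];
            delete[] data_;
            data_ = new_data;
            capacity_ = new_cap;
        }
        data_[size_++] = value;
    }

    std::size_t size() const { return size_; }
};

int main() {
    IntVec v;
    for (int i = 0; i < 1000; ++i) v.push_back(i);  // ~2000 element copies in total
    std::cout << v.size() << '\n';                  // prints 1000
}
```

Growing by a fixed constant instead (new_cap = capacity_ + c) would trigger a copy every c pushes, for a total of roughly N²/(2c) copies over N push_backs, i.e. O(N) amortized per push_back.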

Here is the MIT OpenCourseWare lecture that touches upon amortized analysis.

122 questions
85 votes · 7 answers

What is amortized analysis of algorithms?

How is it different from asymptotic analysis? When do you use it, and why? I've read some articles that seem to have been written well, like…
GrowinMan
78 votes · 6 answers

Amortized complexity in layman's terms?

Can someone explain amortized complexity in layman's terms? I've been having a hard time finding a precise definition online and I don't know how it entirely relates to the analysis of algorithms. Anything useful, even if externally referenced,…
Bob John
44 votes · 3 answers

Why is the time complexity of python's list.append() method O(1)?

As seen in the documentation for TimeComplexity, Python's list type is implemented using an array. So if an array is being used and we do a few appends, eventually you will have to reallocate space and copy all the information to the new…
ohad edelstain
19 votes · 3 answers

Amortized analysis of std::vector insertion

How do we do the analysis of insertion at the back (push_back) in a std::vector? Its amortized time is O(1) per insertion. In particular, in a video on Channel 9 by Stephan T Lavavej and in this (17:42 onwards) he says that for optimal performance…
jemmanuel
19 votes · 2 answers

Union/find algorithm without union by rank for disjoint-set forests data structure

Here's a breakdown of the union/find algorithm for disjoint-set forests on Wikipedia: barebone disjoint-set forests... (O(n)) ... with union by rank ... (now improved to O(log(n))) ... with path compression (now improved to O(α(n)), effectively…
15 votes · 1 answer

Haskell vector C++ push_back analogue

I've discovered that Haskell's Data.Vector.* modules lack C++ std::vector::push_back's functionality. There is grow/unsafeGrow, but they seem to have O(n) complexity. Is there a way to grow vectors in O(1) amortized time per element?
12 votes · 1 answer

Amortization of functional array-doubling stack

I'm playing around with the idea of a compact stack—one whose space requirements approach that of an array as its size increases. A candidate structure: data Stack a = Empty | Zero (Stack a) | One !(SmallArray a) (Stack a) | Two !(SmallArray…
dfeuer
11 votes · 1 answer

Haskell collections with guaranteed worst-case bounds for every single operation?

Such structures are necessary for real-time applications - for example user interfaces. (Users don't care if clicking a button takes 0.1s or 0.2s, but they do care if the 100th click forces an outstanding lazy computation and takes 10s to…
Petr
7 votes · 2 answers

Amortized time of dynamic array

As a simple example, in a specific implementation of the dynamic array, we double the size of the array each time it fills up. Because of this, array reallocation may be required, and in the worst case an insertion may require O(n). However, a…
laynece
6 votes · 1 answer

amortized cost of splay tree : cost + P(tf) - P(ti) ≤ 3(rankf(x) - ranki(x)) explanation

While reading about splay trees I found some expression about the rank of the splay node 'X' and the amortized cost in wikipedia. It is given as, { We can bound the amortized cost of any zig-zig or zig-zag operation by: amortized cost = cost + P(tf)…
poddroid
6 votes · 3 answers

What data structure should I use for these operations?

I need a data structure that stores a subset—call it S—of {1, …, n} (n given initially) and supports just these operations: • Initially: n is given and S = {1, …, n} at the beginning. • delete(i): Delete i from S. If i isn't in S…
user8170229
6 votes · 1 answer

std::map Known-Position Erase Amortized Complexity And Number of Red-Black Tree Recolorings

The complexity of std::map::erase(iterator) is amortized O(1) (see here, for example). While the standard library does not dictate implementations, this de facto means that the number of rebalancing operations needed for a red-black tree is…
Ami Tavory
6 votes · 1 answer

Efficiency of growing a dynamic array by a fixed constant each time?

So when a dynamic array is doubled in size each time an element is added, I understand how the time complexity for expanding is O(n), n being the number of elements. What about if the array is copied and moved to a new array that is only 1 size bigger when…
4 votes · 1 answer

Equivalent data structures with same bounds in worst case (vs. amortized)

I could not make my title very descriptive, apologies! Is it the case that for every data structure, supporting some operations with certain amortized running times, another data structure supporting the same operations in the same running times in…
4 votes · 1 answer

need to find the amortized cost of a sequence using the potential function method

There is a sequence of n operations. The ith operation costs 2i if i is an exact power of 2, costs 3i if i is an exact power of 3, and 1 for all other operations. Hi, first up I want to say that it is a homework problem and I don't want you to solve…
ocwirk