An amortized analysis is an analysis of the total runtime of a sequence of operations rather than the individual runtime of any one operation.
In theoretical computer science, asymptotic worst-case performance is typically treated as the most important metric. However, this ignores that in practice we usually care about the overall computation time being low. Sometimes that requires algorithms whose worst-case cost for a single operation is high, but whose cost averaged over the whole sequence of operations is good.
A well-known example of this is how std::vector<> grows its storage when a push_back exceeds the currently allocated capacity: it reallocates an array that is M times bigger, where M is often around 1.5 or 2. As long as M > 1, the amortized run-time per push_back is constant, i.e. O(1). Naively growing the array by a fixed amount on each such push_back instead gives an average of O(N) time per operation, where N is the number of push_backs.
Here is the MIT OpenCourseWare lecture where they touch upon amortized analysis.