What is the official theoretical definition of amortized vs. non-amortized complexity of a modification function? The question is especially relevant to the C++ standard library (the STL), but it's also a more general question.

Can "constant" (non-mutating) functions (observers, or "getters") have amortized complexity?

EDIT: clarified the apparently confusing term "observers" (which can mean either an observing member function or an external observer, depending on context).

curiousguy
    Removed the C++ tag since you are asking in general. The C++ tag is specifically for C++ code. – NathanOliver Nov 01 '19 at 21:32
  • @NathanOliver-ReinstateMonica In general but esp. in C++. – curiousguy Nov 01 '19 at 21:34
  • *Can observers have amortized complexity?* It's completely unrelated to what the standard library does. Why would you bring up what the standard library does in the context of your question? – R Sahu Nov 01 '19 at 21:36
  • @RSahu How is that unrelated? Stdlib doesn't have observers? (getters) – curiousguy Nov 01 '19 at 21:38
  • @curiousguy, I see. I was thinking of Observers from design patterns. – R Sahu Nov 01 '19 at 21:39
  • 1
    related: https://stackoverflow.com/questions/15079327/amortized-complexity-in-laymans-terms – 463035818_is_not_a_number Nov 01 '19 at 21:42
  • @RSahu Edited for clarity. Thanks for pointing out the potential ambiguity. – curiousguy Nov 01 '19 at 21:42
  • @formerlyknownas_463035818 Somewhat related but that raises more questions. What does it mean for a particular function (not the whole program or all algorithms) to have a non amortized complexity? – curiousguy Nov 01 '19 at 22:06
  • 1
    "non armotized" makes as much sense as "non average" ;) – 463035818_is_not_a_number Nov 01 '19 at 22:44
  • "Amortized" is one possible method of measuring complexity; other methods include "worst-case" and "average". When no specific method is mentioned, worst-case is usually implied; thus "this algorithm is linear" usually means "linear in the worst case". In light of this, it doesn't make sense to talk about "non-amortized complexity" - if you don't want to use the amortized method, you need to specify what other method you want to use. – Igor Tandetnik Nov 02 '19 at 00:07
  • @IgorTandetnik E.g. the C++ standard uses both "amortized complexity" and just "complexity"... Many STL-like library functions also do that. How is the average complexity computed? Averaged over what? – curiousguy Nov 02 '19 at 01:06
  • 1
    Like I said, "just complexity" is a common shorthand for "worst-case complexity". Average over the universe of all possible inputs (sometimes weighted by the probability of their occurrence). Average complexity is not often used; when discussed, the details of the domain are usually explicitly spelled out, or else clear from context. For example, [quicksort](https://en.wikipedia.org/wiki/Quicksort) is `O(n log n)` on average (across all possible finite sequences of natural numbers, say), O(n^2) in the worst case. – Igor Tandetnik Nov 02 '19 at 01:17

1 Answer


There are at least three independent and orthogonal axes along which the asymptotic analysis of an algorithm can proceed: the case (best, average, worst); the bound (big-O, little-omega, Theta); and whether amortization is considered.

The case describes the class of input under consideration; it is literally a subset of the algorithm's inputs. When performing asymptotic complexity analysis, it is natural to partition the input space into subsets based on the algorithm's performance on their elements. So you might have a best case, on which the algorithm does asymptotically as well as possible; a worst case, on which it does asymptotically as poorly as possible; or, for average complexity, a set of inputs together with their relative weights or probabilities of occurring. When no case is specified, it's natural to assume that all inputs are included with equal weight.

The bound describes how the algorithm's complexity behaves asymptotically for inputs of a given class. The complexity can be bounded from above, from below, or both; the bounds can be tight or not; and while the bound you choose might in practice be informed by the case under consideration (a lower bound on the best case, an upper bound on the worst case, etc.), in theory the choice is completely independent.
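The standard definitions of the three common bounds, for cost functions f and g, can be sketched as:

```latex
% Upper bound: f grows no faster than g
f(n) \in O(g(n)) \iff \exists c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \le c \cdot g(n)

% Lower bound: f grows at least as fast as g
f(n) \in \Omega(g(n)) \iff \exists c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \ge c \cdot g(n)

% Tight bound: both hold simultaneously
f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \ \wedge\ f(n) \in \Omega(g(n))
```

Any of these bounds can be applied to any case: "the worst case is Omega(n log n)" is just as meaningful a statement as "the worst case is O(n log n)".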

Amortized analysis is performed on top of the underlying complexity analysis and contemplates not single inputs but sequences of inputs. It seeks to explain how the aggregate time complexity of a sequence of operations behaves. For instance, consider inserting a new element into an array-backed vector structure. If we have enough capacity, the operation is O(1); if we lack capacity, the operation is O(n). If we grow the vector's capacity arithmetically (adding, say, k new slots each time it fills), then insertions are O(1) about (k-1)/k of the time and O(n) about 1/k of the time, for O(n) amortized complexity per insertion. However, if we grow the capacity geometrically (e.g., doubling it) each time more capacity is needed, then a sequence of insertions has O(1) amortized complexity.

Truly constant-time functions can have amortized analysis performed on them, but there is really no reason to do so. Amortized analysis only makes sense when a potentially small number of repeated requests have poor individual performance, while the majority of requests (asymptotically speaking) have asymptotically better performance.

Patrick87