I am studying algorithms on my own and turned to the Open Data Structures (C++ Edition) free ebook as a reference. In my attempt to master the topic, I am determined to finish all the challenges in the book. However, I am having a lot of trouble understanding how one particular data structure could support add() and remove() in O(1) time.
One of the challenges is to create a random queue with the following properties (quoted from the book's exercises):
Exercise 2.2. Design and implement a RandomQueue. This is an implementation of the Queue interface in which the remove() operation removes an element that is chosen uniformly at random among all the elements currently in the queue. (Think of a RandomQueue as a bag in which we can add elements or reach in and blindly remove some random element.) The add(x) and remove() operations in a RandomQueue should run in constant time per operation.
The chapter deals with array-backed lists, so adding and removing elements is rather trivial in that sense. However, the backing array sometimes has to be recreated to change its size, and you are supposed to copy the old array into the new one, which is essentially O(n). I also believe that I need to use a circular array, so I would have to shift indices within the array, which is O(n) as well (shifting up to n-1 elements).
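For context, here is roughly how I picture that resize step. This is my own simplified, non-circular sketch (the names `a`, `n`, `length`, and `resize()` just follow the book's conventions; it is not code from the book):

```cpp
#include <cassert>

// Simplified sketch of an array-backed list with a doubling resize.
// The circular wrap-around is omitted here so the copy cost is easy to see.
struct SimpleArrayStack {
    int *a;
    int length;  // capacity of the backing array
    int n;       // number of elements actually stored

    SimpleArrayStack() : a(new int[1]), length(1), n(0) {}
    ~SimpleArrayStack() { delete[] a; }

    void resize() {
        // This loop touches all n elements -- the O(n) cost I am worried about.
        int newLength = (2 * n > 1) ? 2 * n : 1;
        int *b = new int[newLength];
        for (int i = 0; i < n; i++)
            b[i] = a[i];
        delete[] a;
        a = b;
        length = newLength;
    }

    void add(int x) {
        if (n + 1 > length) resize();  // grow before we run out of room
        a[n++] = x;
    }
};
```

So a single add() that happens to trigger resize() clearly costs O(n) on its own, which is what I can't square with the O(1) claim.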
I am very confused about how to analyze and measure these algorithms. The book does talk about an O(m) bound over a sequence of m operations, but it is rather vague at times.
Theorem 2.2. An ArrayQueue implements the (FIFO) Queue interface. Ignoring the cost of calls to resize(), an ArrayQueue supports the operations add(x) and remove() in O(1) time per operation. Furthermore, beginning with an empty ArrayQueue, any sequence of m add(i, x) and remove(i) operations results in a total of O(m) time spent during all calls to resize().
How can you just ignore that? I am not making the connection on how you can just lop off that portion of the time complexity.
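To try to convince myself, I instrumented the copy loop with a counter and ran a sequence of adds. If I understand the theorem, it is claiming that the *total* work done by all calls to resize() over m operations is O(m), even though any single call is O(n). Again, this is my own sketch, not code from the book:

```cpp
#include <cassert>

// Counting experiment: how much total copying does resize() really do
// over a whole sequence of m add operations with a doubling strategy?
struct CountingStack {
    int *a = new int[1];
    int length = 1;       // capacity of the backing array
    int n = 0;            // number of stored elements
    long copies = 0;      // total elements copied across ALL resizes

    ~CountingStack() { delete[] a; }

    void resize() {
        int newLength = 2 * n;           // doubling strategy from the book
        int *b = new int[newLength];
        for (int i = 0; i < n; i++) {    // the O(n) copy...
            b[i] = a[i];
            copies++;                    // ...but count every element moved
        }
        delete[] a;
        a = b;
        length = newLength;
    }

    void add(int x) {
        if (n + 1 > length) resize();
        a[n++] = x;
    }
};
```

When I run m = 1000 adds through this, the resizes copy 1 + 2 + 4 + ... + 512 = 1023 elements in total, which is less than 2m. Is this geometric sum (each resize doubles, so the total is bounded by twice the final size) the reason the resize cost can be charged across the whole sequence and each operation still counted as O(1)?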