
I am wondering why, in most (if not all) applications, the initial estimate of the remaining time of each download is based only on that download's current speed and does not take the other concurrent downloads into account.

For example, suppose two downloads start at the same time (t=0): download A is 10 MB, download B is 5 MB, and the total available bandwidth is 1 MB/s, shared equally between them (i.e. 0.5 MB/s per download while both are running). According to the commonly used approach, the estimated remaining time for each download at t=0 would be:

  • Download A: will be finished in 20 seconds

  • Download B: will be finished in 10 seconds

However, if the initial estimate for download A took into account that download B will finish after 10 s, at which point A's allocated bandwidth increases from 0.5 MB/s to 1 MB/s, then the following, more accurate estimate could be made at t=0:

  • Download A: will be finished in 15 seconds (by t=10 s, 5 MB of download A will have been downloaded, and the remaining 5 MB will then be downloaded at 1 MB/s, taking another 5 seconds)

  • Download B: will be finished in 10 seconds

Thus, the second approach can give us a more accurate initial estimate at time t=0.
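Here is a minimal sketch (in Python, my own illustration, not code from any actual download manager) of how such a bandwidth-aware estimate could be computed. It assumes the total bandwidth stays constant and is split equally among the downloads that are still active, and that each time a download finishes, its share is redistributed to the remaining ones; the function name is hypothetical.

    # Illustration only: assumes constant total bandwidth, split equally
    # among the active downloads, redistributed whenever one finishes.
    def estimate_finish_times(remaining_mb, total_bandwidth_mbps):
        """Return the estimated finish time (in seconds) for each download.

        remaining_mb         -- list of remaining sizes in MB
        total_bandwidth_mbps -- total available bandwidth in MB/s
        """
        # Pair each download with its index and process them smallest-first,
        # since the smallest remaining download finishes its phase first.
        active = sorted(enumerate(remaining_mb), key=lambda x: x[1])
        finish = [0.0] * len(remaining_mb)
        elapsed = 0.0

        while active:
            share = total_bandwidth_mbps / len(active)  # equal split
            idx, smallest = active[0]
            # Time until the smallest remaining download completes at this share.
            dt = smallest / share
            elapsed += dt
            finish[idx] = elapsed
            # The other downloads progress by share * dt during this phase.
            active = [(i, r - share * dt) for i, r in active[1:]]

        return finish

    print(estimate_finish_times([10, 5], 1.0))  # -> [15.0, 10.0]

With the numbers from the example above, this yields 15 seconds for download A and 10 seconds for download B, matching the second set of estimates.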

Does anybody know why this approach is not utilized?

George
  • Considering that bandwidth is not a constant, how would your second approach be any more precise than the first? – Chase Jul 16 '17 at 07:38
  • The bandwidth is not constant, but the estimate produced by the second approach at t=0 is still more precise at that particular instant and under those specific conditions. Don't you agree? – George Jul 17 '17 at 18:37

0 Answers