20

I noticed that incrementing a counter is significantly slower when the counter's value is a large number. I tried it in Chrome, Firefox, and IE11, and all of them show worse performance with large numbers.

See the jsperf test here (code below):

var count1 = 0;
var count2 = new Date().getTime();
var count3 = 1e5;
var count4 = 1e9;
var count5 = 1e12;
var count6 = 1e15;

function getNum1() {
  return ++count1;
}

function getNum2() {
  return ++count2;
}

function getNum3() {
  return ++count3;
}

function getNum4() {
  return ++count4;
}

function getNum5() {
  return ++count5;
}

function getNum6() {
  return ++count6;
}
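
For reference, a minimal standalone harness along the same lines (a sketch, not the actual jsperf setup; it assumes performance.now() is available, as in any modern browser console or recent Node.js, and should be run after the definitions above):

// Sketch of a standalone timing harness for the functions above.
// Assumes performance.now() (any modern browser, recent Node.js).
function bench(label, fn, iterations) {
  var t0 = performance.now();
  for (var i = 0; i < iterations; i++) {
    fn();
  }
  console.log(label, (performance.now() - t0).toFixed(1), 'ms');
}

var ITERATIONS = 1e8;
bench('count1 (0):        ', getNum1, ITERATIONS);
bench('count2 (timestamp):', getNum2, ITERATIONS);
bench('count3 (1e5):      ', getNum3, ITERATIONS);
bench('count4 (1e9):      ', getNum4, ITERATIONS);
bench('count5 (1e12):     ', getNum5, ITERATIONS);
bench('count6 (1e15):     ', getNum6, ITERATIONS);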

Why does this happen?

hichris123
Malki
  • I cannot reproduce your findings in Firefox 41. It claims the small dataset is 55% slower. – k-nut Oct 25 '15 at 12:34
  • @k-nut That's very strange, I tested with Firefox 41 and see that the large dataset is 45% slower. Consistently so. – Malki Oct 25 '15 at 12:40
  • Can confirm for Safari and Chrome independently, up to 2x faster on small numbers. – Oleg Sklyar Oct 25 '15 at 12:41
  • Firefox 43 64-bit: just the opposite, bigger numbers are twice as fast; here the JS engine is doing a different kind of optimization – edc65 Oct 25 '15 at 15:55
  • @edc65 probably dead code elimination, if you share your benchmark I might be able to help. – Benjamin Gruenbaum Oct 25 '15 at 17:27
  • The benchmark is linked in the question. I just tried it with my version of FireFox (that is an alpha 64bit for windows: 43.0a2 (2015-10-22)). @BenjaminGruenbaum (no need for help, thanks, just sharing a fact) – edc65 Oct 25 '15 at 18:49

2 Answers

29

Modern JavaScript runtimes and compilers perform an optimization for SMIs (small integers).

All numbers in JavaScript are double-precision floating-point values, which are relatively slow to perform calculations on. In practice, however, we're often working with integers (for example, in the majority of for loops).

So it is very useful to optimize numbers for efficient calculations when possible. When the engine can prove that a number is a small integer, it will gladly treat it as one and perform all calculations on it as integer arithmetic.

Incrementing a 32-bit integer is a single processor operation and is very cheap, so you get better performance as long as the value stays in that range.
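
As a rough illustration (a sketch only; the exact small-integer limit is engine-specific, for example 31 bits in many engines as noted in the comments, and the | 0 check below is just a heuristic for "fits in 32 bits", not what the engine actually does), you can check which of the question's starting values even fit in a signed 32-bit integer:

// Heuristic sketch: x | 0 truncates to a signed 32-bit integer, so the
// comparison only succeeds for values that already fit in 32 bits.
// The real small-integer limit is engine-specific (often 2^30 - 1 or 2^31 - 1).
function fitsInInt32(x) {
  return (x | 0) === x;
}

console.log(fitsInInt32(1e5));   // true  - can stay an unboxed integer
console.log(fitsInInt32(1e9));   // true  - still within 32 bits
console.log(fitsInInt32(1e12));  // false - stored as a (boxed) double
console.log(fitsInInt32(1e15));  // false - stored as a (boxed) double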

Peter Mortensen
Benjamin Gruenbaum
  • See [this slideshow](http://v8-io12.appspot.com/index.html#34) for more information, it's slightly outdated but still pretty good. – Benjamin Gruenbaum Oct 25 '15 at 13:04
  • 4
    Note that in many JavaScript engines you only get that speedup for *31* bit integers, because the MSB will be a tagging bit specifying whether to treat the number as an integer or a reference to a double somewhere. – Joey Oct 25 '15 at 14:10
  • “double-precision floating-point values, which are relatively slow to perform calculations on” erm... really? If that were relevant, then scientific applications would long ago have stopped being written in Fortran. No, in fact [x86-64 doesn't make much speed difference between int and float at all](http://stackoverflow.com/a/2550851/745903). I suppose on ARM it does make a significant difference, but I'd bet that is still rather negligible vs the overhead of a dynamic type system. And that's likely the real reason: JavaScript can stuff extra type information into a 64-bit field if it only holds a 31-bit int. – leftaroundabout Oct 25 '15 at 18:35
  • @leftaroundabout I'm not sure how you interpreted what I said like that - 64 bit floats are _boxed_ in JavaScript engines and 32 bit ints are _not_. That's why there is such a speed difference. Obviously, there _is_ a performance difference between ints and doubles in some scenarios - doing ++ when running a JS script is not likely one of them - which can be easily validated by creating a typed array object in JS and measuring it. – Benjamin Gruenbaum Oct 25 '15 at 18:41
  • @leftaroundabout moreover, that's literally explained with a big picture in the link in the first line of my answer. If you think it's unclear I'll gladly accept an edit. – Benjamin Gruenbaum Oct 25 '15 at 18:45
  • 1
    Well, it's just that you made it sound like big numbers are slow _because they're floats_. Really it would be no faster even if they were implemented by boxed Word8s... I rather wouldn't edit your answer because I know little about JavaScript, but I think a short explanation of boxed values would be necessary here. – leftaroundabout Oct 25 '15 at 18:56
  • @leftaroundabout: That's a naive perspective on the issue. In practice you have numbers that are being used as array indices or in other contexts where they're semantically integers, and performing a computation in floating point then converting the result to an integer is going to be significantly slower than performing the computation as integer math, even if the floating point math itself is not slow. In practice however floating point math is still moderately slower than integer. You use it in scientific apps because you need the semantics, not as an optimization. – R.. GitHub STOP HELPING ICE Oct 25 '15 at 20:04
  • @R..: right. — (I should resist the urge to point out the conclusion that it's a good idea to have a static type system which never performs implicit conversions... but I don't.) – leftaroundabout Oct 25 '15 at 20:24
0

The 'large' numbers you are using, though, are really large; I bet it's the difference between processing a 32-bit quantity and one that needs more than 32 bits. Try a base of 1,500,000,000 (within signed 32-bit), 3,000,000,000 (within unsigned 32-bit), and 5,000,000,000 (over 32 bits), as in the sketch below.
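
A rough sketch of that experiment (timings will vary by engine; Date.now() is used here just to keep it dependency-free):

// Sketch: time the same increment loop with starting values on either
// side of the 32-bit boundaries suggested above. Results vary by engine.
function timeIncrements(start, iterations) {
  var counter = start;
  var t0 = Date.now();
  for (var i = 0; i < iterations; i++) {
    counter++;
  }
  var elapsed = Date.now() - t0;
  // Log the final value so the loop cannot be trivially optimized away.
  console.log('  final value:', counter);
  return elapsed;
}

var N = 1e8;
console.log('1,500,000,000 (within signed 32-bit):  ', timeIncrements(1.5e9, N), 'ms');
console.log('3,000,000,000 (within unsigned 32-bit):', timeIncrements(3e9, N), 'ms');
console.log('5,000,000,000 (over 32 bits):          ', timeIncrements(5e9, N), 'ms');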

Peter Mortensen
Michael