
I was trying to explore how a JavaScript `Object` performs compared to a `Map` or `Set` for plain key accesses.

So I ran the three snippets below on JSBEN.CH (here is the link to the test: https://jsben.ch/q4RPK)

Objects

  const object = {};
  
  for (let i = 0; i < 10000; ++i) {
    object[`key_${i}`] = 1;
  }
  
  let result = 0;
  
  for (let i = 0; i < 10000; ++i) {
    result += object[`key_${i}`];
  }

Maps

  const map = new Map();
  
  for (let i = 0; i < 10000; ++i) {
    map.set(`key_${i}`, 1);
  }
  
  let result = 0;
  
  for (let i = 0; i < 10000; ++i) {
    result += map.get(`key_${i}`);
  }

Sets

  const set = new Set();
  
  for (let i = 0; i < 10000; ++i) {
    set.add(`key_${i}`);
  }
  
  let result = 0;
  
  for (let i = 0; i < 10000; ++i) {
    result += set.has(`key_${i}`);
  }

As you can see at the test link, Map and Set perform almost identically, but the plain object is consistently much slower. Can someone explain why objects perform worse than Map or Set for basic key access operations?

Edit 1: As it turns out, just setting keys on an object is also slower than on a Map/Set: https://jsben.ch/al9ef

D_S_X
    `set.add` only takes one argument. – Jonas Wilms Apr 03 '21 at 13:18
  • Most engines have various representations for objects. It might take some time for the engine to find the right one ("to get hot"). – Jonas Wilms Apr 03 '21 at 13:20
  • The real question is whether the JSBENC is reliable. Have you tried any other performance testers besides JSBENC? – Edison Pebojot Apr 03 '21 at 13:22
  • @JonasWilms my bad ... updated the code and link – D_S_X Apr 03 '21 at 13:24
  • @EdisonPebojot Can you name any other testers that you think are reliable? Jsperf is not working anymore and I'm really new to performance testing, so I don't know how to do this manually – D_S_X Apr 03 '21 at 13:26
  • you are not adding the `set` values correctly. `result += set.has(\`key_${i}\`) ? i : 0;` – Get Off My Lawn Apr 03 '21 at 13:29
  • @GetOffMyLawn The second answer talks about performance but there is no clear answer there. I want to know whether we have a definite answer to this? Also more important do we know WHY maps are performing better? – D_S_X Apr 03 '21 at 13:40
  • The second answer posts a link to an article by 1 of the authors of the v8 javascript engine. In short: "This is because Javascript engines compile objects down to C++ classes in the background" – dehart Apr 03 '21 at 13:48
  • @dehart But that would assert that `objects` perform better which is not so in above tests – D_S_X Apr 03 '21 at 13:52
  • 1
    They answer the question by saying that a `Map` loops over the values in order that they were added whereas an object does not. See [Map Description](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map#description) and [Map vs Object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map#objects_vs._maps) – Get Off My Lawn Apr 03 '21 at 13:56
  • Guessing here: The keys are not looked up in the order of insertion. And object lookup has to convert the key to a string before comparing. And Map and Set just search in the key-value collection, while Object checks the prototype if a key is not found? – adiga Apr 03 '21 at 14:07
  • So if someone wants to stop guessing, one can run `node --allow-natives-syntax` and then add a `%DebugPrint(object)` to the loop. You'll see that the internal representation changes multiple times – Jonas Wilms Apr 03 '21 at 14:33
  • 1
    @D_S_X They are performing better simply because they were optimised for that task, whereas objects are optimised for static lookups on non-changing shapes (while they also support dynamic lookup, it's not their main use case). – Bergi Apr 03 '21 at 15:47

1 Answer


Looking at relative numbers only is always dangerous, so here are some absolute numbers, run on Node.js v14.14.0 on an Intel 8350U:

| Iterations | Object write | Object read | Map write | Map read |
|-----------:|-------------:|------------:|----------:|---------:|
| 100        | 0ms          | 0ms         | 0ms       | 0ms      |
| 1,000      | 3ms          | 1ms         | 0ms       | 0ms      |
| 10,000     | 7ms          | 4ms         | 8ms       | 1ms      |
| 1,000,000  | 1222ms       | 527ms       | 632ms     | 542ms    |

So as one can see, for 10,000 iterations the difference between objects and maps is 1 millisecond in the run above, and since that is also the resolution of the time measurement, we can't derive any conclusion from that test. The results are essentially noise.
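To get below the 1 ms granularity of `Date.now()`, one can use `process.hrtime.bigint()`, which has nanosecond resolution (a Node.js-only sketch; the `time` helper is my own, not part of the original benchmark):

```javascript
// Nanosecond-resolution timing via process.hrtime.bigint() (Node.js),
// avoiding the 1 ms granularity of Date.now().
function time(label, fn) {
  const start = process.hrtime.bigint();
  fn();
  const elapsed = process.hrtime.bigint() - start; // BigInt nanoseconds
  console.log(label, Number(elapsed) / 1e6, "ms");
  return elapsed;
}

const map = new Map();
time("Map write", () => {
  for (let i = 0; i < 10000; ++i) map.set(`key_${i}`, 1);
});
```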

For 1,000,000 iterations one can see a clear advantage of Map writes over Object writes, while the read performance is very similar. Looking at absolute numbers, though, this is still on the order of one million writes per second. So although object writes are a lot slower, they are unlikely to be the bottleneck of your application.

For an accurate explanation, one would have to analyze all the steps the engine performs. For that you can run `node --print-code` and analyze the bytecode that gets run. I don't have the time for that, but here are some observations:

  1. If the object is constructed with `Object.create(null)` (so it has no prototype), the performance is about the same, so prototype lookup does not influence the result at all.

  2. After the 20th iteration, V8 switches the internal representation of `object` to `dictionary_map`, so this is basically one hash map competing with another hash map (one can run `node --allow-natives-syntax` and then use `%DebugPrint(object)` to see the internal representation).
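
Observation 1 is easy to reproduce: an object created with `Object.create(null)` has no prototype at all, so a failed lookup can never fall back to `Object.prototype`. A minimal check:

```javascript
// Object.create(null) yields an object with no prototype chain,
// so lookups cannot fall back to Object.prototype.
const bare = Object.create(null);
bare.key_0 = 1;

console.log(Object.getPrototypeOf(bare)); // null
console.log(bare.key_0);                  // 1
console.log(bare.hasOwnProperty);         // undefined, nothing inherited
```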

For reference, here is the code used to run the benchmark:

function benchmark(TIMES) {
  console.log("BENCHMARK ", TIMES);

  const object = Object.create(null);

  let start = Date.now();
  for (let i = 0; i < TIMES; ++i) {
    object[`key_${i}`] = 1;
  }

  console.log("Object write took", Date.now() - start);
  start = Date.now();

  let result = 0;
  for (let i = 0; i < TIMES; ++i) {
    result += object[`key_${i}`];
  }

  console.log("Object read took", Date.now() - start);
  start = Date.now();

  const map = new Map();
  for (let i = 0; i < TIMES; ++i) {
    map.set(`key_${i}`, 1);
  }

  console.log("Map write took", Date.now() - start);
  start = Date.now();

  result = 0;
  for (let i = 0; i < TIMES; ++i) {
    result += map.get(`key_${i}`);
  }

  console.log("Map read took", Date.now() - start);
}

benchmark(100);
benchmark(1_000);
benchmark(10_000);
benchmark(1_000_000);
Jonas Wilms
  • Thanks for putting all these test cases together. I would accept this answer but this still doesn't explain the why (although it does rule out that prototype chaining may be causing this, which is good). Key takeaway up until now is that for cases involving frequent addition and removal, `Map`s will perform better as they are somehow internally optimised for it. – D_S_X Apr 04 '21 at 11:24
  • 1
    @D_S_X well, the answer is somewhere [in here](https://github.com/v8/v8) (and the same applies to other engines). You could probably spend some time digging through the sourcecode and somewhen find the "why", though that's a very time intensive process (and I don't have time for that). You could ask a [tag:v8] specific question and hope that a maintainer sees your question, though they are also usually not giving concrete answers as the implementations are frequently changing and they won't answer that question again and again ... – Jonas Wilms Apr 04 '21 at 12:20