Consider the following two snippets (from this jsperf entry):
let val = 0;
for(let i of indices) {
val += map.get(i);
}
// ---
let val = 0;
for(let i of indices) {
val += obj[i];
}
Here, map is a Map, obj is a plain old JavaScript object (let obj = {}), and indices is an array of random indices. Both obj and map have been pre-populated with data so the lookups actually return values. Check out the jsperf for the full code.
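For reference, a minimal sketch of the kind of setup described above (this is an assumption of what the jsperf setup does, not the exact benchmark code; the size constant and value scheme are made up for illustration):

```javascript
// Hypothetical setup: populate a Map and a plain object with the same
// string keys, and collect those keys as the random lookup indices.
const SIZE = 10000;
const map = new Map();
const obj = {};
const indices = [];

for (let i = 0; i < SIZE; i++) {
  // Random string keys; both containers receive identical key/value pairs.
  const key = String(Math.floor(Math.random() * SIZE));
  indices.push(key);
  map.set(key, i);
  obj[key] = i;
}
```

With a setup like this, both loops in the snippets above perform the same number of lookups over the same keys, so any timing difference comes from the lookup mechanism itself.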
Question:
Why does the plain old JavaScript object outperform the Map by a factor of 5+? Is this simply because, as of writing, Maps are still very new and unoptimised? Or is there some overhead in Map lookups that will always keep them from being as fast as a POJO?
If it's just a lack of optimisation, can we expect Map to eventually be faster than a POJO for random lookups? Why, or why not?