I ran a test with the following loop:
for (i32 i = 0; i < 0x800000; ++i)
{
    // Hopefully the pseudo-random stride defeats the hardware prefetcher
    i32 k = (i * 997 & 0x7FFFFF) * 0x40;
    _mm_prefetch(data + ((i + 1) * 997 & 0x7FFFFF) * 0x40, _MM_HINT_NTA);
    for (i32 j = 0; j < 0x40; j += 0x10)
    {
        //__m128 v = _mm_castsi128_ps(_mm_stream_load_si128((__m128i *)(data + k + j)));
        __m128 v = _mm_load_ps((float *)(data + k + j));
        a_single_chain_computation
        //_mm_stream_ps((float *)(data2 + k + j), v);
        _mm_store_ps((float *)(data2 + k + j), v);
    }
}
The results are weird:

- No matter how long the a_single_chain_computation takes, the load latency is not hidden.
- What's more, the time saved by prefetching grows as I add more computation. (With a single v = _mm_mul_ps(v, v), prefetching saves about 0.60 - 0.57 = 0.03 s. With 16 chained v = _mm_mul_ps(v, v) steps, it saves about 1.1 - 0.75 = 0.35 s. Why?)
- Non-temporal loads/stores degrade performance, with or without prefetching. (I can understand the load part, but why the stores, too?)