
I am aware of 3 methods, but as far as I know, only the first 2 are generally used (intrinsics sketches of all three are shown after the list):

  1. Mask off the sign bit using andps or andnotps.

    • Pros: One fast instruction if the mask is already in a register, which makes it perfect for doing this many times in a loop.
    • Cons: The mask may not be in a register or worse, not even in a cache, causing a very long memory fetch.
  2. Subtract the value from zero to negate, and then get the max of the original and negated.

    • Pros: Fixed cost because nothing is needed to fetch, like a mask.
    • Cons: Will always be slower than the mask method when conditions are ideal for it (mask already in a register), and the maxps has to wait for the subps to complete.
  3. Similar to option 2, subtract the original value from zero to negate, but then "bitwise and" the result with the original using andps. I ran a test comparing this to method 2, and it seems to behave identically to method 2, aside from when dealing with NaNs, in which case the result will be a different NaN than method 2's result.

    • Pros: Should be slightly faster than method 2 because andps is usually faster than maxps.
    • Cons: Can this result in any unintended behavior when NaNs are involved? Maybe not, because a NaN is still a NaN, even if it's a different value of NaN, right?
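
For reference, here are minimal intrinsics sketches of the three methods (the vecabs_* wrapper names are only for illustration):

#include <immintrin.h>

// Method 1: clear the sign bit with a constant mask.
__m128 vecabs_mask(__m128 v) {
    const __m128 mask = _mm_castsi128_ps(_mm_set1_epi32(0x7FFFFFFF));
    return _mm_and_ps(v, mask);
}

// Method 2: max(v, 0 - v).
__m128 vecabs_max(__m128 v) {
    return _mm_max_ps(v, _mm_sub_ps(_mm_setzero_ps(), v));
}

// Method 3: v AND (0 - v); the magnitude bits match and the sign bits differ,
// so the AND keeps the magnitude and clears the sign.
__m128 vecabs_suband(__m128 v) {
    return _mm_and_ps(v, _mm_sub_ps(_mm_setzero_ps(), v));
}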

Thoughts and opinions are welcome.

Kumputer
  • The mask does not need to be loaded from memory. It can easily be calculated in two all-register instructions. – Raymond Chen Sep 05 '15 at 01:40
  • @RaymondChen, true, but that's still 2 extra instructions and a bypass delay, likely making it always slower than method 2 or 3. – Kumputer Sep 05 '15 at 01:47
  • How about doing a shift left by one bit, followed by an unsigned right shift by one bit? – Raymond Chen Sep 05 '15 at 01:49
  • @RaymondChen, Interesting. I actually hadn't thought about that. So, that's 2 shift instructions total. But also can't happen out of order, and also would likely incur a bypass delay depending on the CPU. Would be worth testing, though. – Kumputer Sep 05 '15 at 01:51
  • Instead of hypothesising, why not benchmark several methods and see if there is any significant difference? – Paul R Sep 05 '15 at 12:45
  • @PaulR: Why benchmark when the information needed to make accurate predictions is available? :D It wouldn't be a bad idea to test the bypass delays for the shift idea, but it's pretty clear to me that generating a mask in a register, and then using it with `andps`, is the best solution for most cases. – Peter Cordes Sep 06 '15 at 10:25
  • @RaymondChen: I threw your idea into my answer. About all it has going for it is smallest size with VEX-encoding. It's also only 2 uops in cases where you can't re-use a pre-generated AND mask, while the others are all 3 uops or a potential cache miss. – Peter Cordes Sep 06 '15 at 10:27

1 Answer


TL;DR: In almost all cases, use pcmpeq/shift to generate a mask, and andps to use it. It has the shortest critical path by far (tied with constant-from-memory), and can't cache-miss.

How to do that with intrinsics

Getting the compiler to emit pcmpeqd on an uninitialized register can be tricky. (godbolt). The best way for gcc / icc looks to be

#include <immintrin.h>   // SSE2 intrinsics

__m128 abs_mask(void){
  // with clang, this turns into a 16B load,
  // with every calling function getting its own copy of the mask
  __m128i minus1 = _mm_set1_epi32(-1);
  return _mm_castsi128_ps(_mm_srli_epi32(minus1, 1));
}
// MSVC is BAD when inlining this into loops
__m128 vecabs_and(__m128 v) {
  return _mm_and_ps(abs_mask(), v);
}


__m128 sumabs(const __m128 *a) { // quick and dirty no alignment checks
  __m128 sum = vecabs_and(*a);
  for (int i=1 ; i < 10000 ; i++) {
      // gcc, clang, and icc hoist the mask setup out of the loop after inlining
      // MSVC doesn't!
      sum = _mm_add_ps(sum, vecabs_and(a[i])); // one accumulator makes addps latency the bottleneck, not throughput
  }
  return sum;
}

clang 3.5 and later "optimizes" the set1 / shift into loading a constant from memory. It will use pcmpeqd to implement set1_epi32(-1), though. TODO: find a sequence of intrinsics that produces the desired machine code with clang. Loading a constant from memory isn't a performance disaster, but having every function use a different copy of the mask is pretty terrible.

MSVC: VS2013:

  • _mm_undefined_si128() is not defined.

  • _mm_cmpeq_epi32(self,self) on an uninitialized variable will emit a movdqa xmm, [ebp-10h] in this test case (i.e. load some uninitialized data from the stack). This has less risk of a cache miss than just loading the final constant from memory. However, Kumputer says MSVC didn't manage to hoist the pcmpeqd / psrld out of the loop (I assume when inlining vecabs), so this is unusable unless you manually inline and hoist the constant out of a loop yourself.

  • Using _mm_srli_epi32(_mm_set1_epi32(-1), 1) results in a movdqa to load a vector of all -1 (hoisted outside the loop), and a psrld inside the loop. So that's completely horrible. If you're going to load a 16B constant, it should be the final vector. Having integer instructions generating the mask every loop iteration is also horrible.

Suggestions for MSVC: Give up on generating the mask on the fly, and just write

const __m128 absmask = _mm_castsi128_ps(_mm_set1_epi32(~(1<<31)));

Probably you'll just get the mask stored in memory as a 16B constant. Hopefully not duplicated for every function that uses it. Having the mask in a memory constant is more likely to be helpful in 32bit code, where you only have 8 XMM registers, so vecabs can just ANDPS with a memory source operand if it doesn't have a register free to keep a constant lying around.

TODO: find out how to avoid duplicating the constant everywhere it's inlined. Using a global constant, rather than an anonymous set1, would probably be good. But then you need to initialize it, and I'm not sure intrinsics work as initializers for global __m128 variables. You want it to go in the read-only data section, not to have a constructor that runs at program startup.
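
One possible workaround (a sketch, assuming union type-punning is acceptable, which the major x86 compilers support) is to initialize the raw bits through a union, so no intrinsic has to run at startup and the 16 bytes can go straight into read-only data. The absmask_bits / ABS_MASK names are just illustrative:

#include <stdint.h>
#include <immintrin.h>

// Initialized from plain integers, so the constant is a .rodata blob,
// not the result of a runtime constructor.
static const union { uint32_t u[4]; __m128 v; } absmask_bits =
    { { 0x7FFFFFFFu, 0x7FFFFFFFu, 0x7FFFFFFFu, 0x7FFFFFFFu } };
#define ABS_MASK (absmask_bits.v)

// usage:  x = _mm_and_ps(x, ABS_MASK);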


Alternatively, use

__m128i minus1;  // undefined
#if _MSC_VER && !__INTEL_COMPILER
minus1 = _mm_setzero_si128();  // PXOR is cheaper than MSVC's silly load from the stack
#endif
minus1 = _mm_cmpeq_epi32(minus1, minus1);  // or use some other variable here, which will probably cost a mov insn without AVX, unless the variable is dead.
const __m128 absmask = _mm_castsi128_ps(_mm_srli_epi32(minus1, 1));

The extra PXOR is quite cheap, but it's still a uop and still 4 bytes on code size. If anyone has any better solution to overcome MSVC's reluctance to emit the code we want, leave a comment or edit. This is no good if inlined into a loop, though, because the pxor/pcmp/psrl will all be inside the loop.

Loading a 32bit constant with movd and broadcasting with shufps might be ok (again, you probably have to manually hoist this out of a loop, though). That's 3 instructions (mov-immediate to a GP reg, movd, shufps), and movd is slow on AMD where the vector unit is shared between two integer cores. (Their version of hyperthreading.)
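
In intrinsics, that broadcast might look like the sketch below (whether the compiler really emits mov-imm / movd / shufps, or just folds it all into a 16B constant load, is up to the compiler):

// Sketch: build the mask from a 32-bit immediate and broadcast lane 0.
// _mm_cvtsi32_si128 is the movd; _mm_shuffle_ps is the shufps broadcast.
static inline __m128 abs_mask_broadcast(void) {
    __m128 m = _mm_castsi128_ps(_mm_cvtsi32_si128(0x7FFFFFFF));
    return _mm_shuffle_ps(m, m, _MM_SHUFFLE(0, 0, 0, 0));
}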


Choosing the best asm sequence

OK, let's look at this for, say, Intel Sandybridge through Skylake, with a bit of mention of Nehalem. See Agner Fog's microarch guides and instruction timings for how I worked this out. I also used Skylake numbers someone linked in a post on the http://realwordtech.com/ forums.


Let's say the vector we want to abs() is in xmm0, and is part of a long dependency chain, as is typical for FP code.

So let's assume any operations that don't depend on xmm0 can begin several cycles before xmm0 is ready. I've tested, and instructions with memory operands don't add extra latency to a dependency chain, assuming the address of the memory operand isn't part of the dep chain (i.e. isn't part of the critical path).


I'm not totally clear on how early a memory operation can start when it's part of a micro-fused uop. As I understand it, the Re-Order Buffer (ROB) works with fused uops, and tracks uops from issue to retirement (168(SnB) to 224(SKL) entries). There's also a scheduler that works in the unfused domain, holding only uops that have their input operands ready but haven't yet executed. uops can issue into the ROB (fused) and scheduler (unfused) at the same time when they're decoded (or loaded from the uop cache). If I'm understanding this correctly, it's 54 to 64 entries in Sandybridge to Broadwell, and 97 in Skylake. There's some unfounded speculation about it not being a unified (ALU/load-store) scheduler anymore.

There's also talk of Skylake handling 6 uops per clock. As I understand it, Skylake will read whole uop-cache lines (up to 6 uops) per clock into a buffer between the uop cache and the ROB. Issue into the ROB/scheduler is still 4-wide. (Even nop is still 4 per clock). This buffer helps where code alignment / uop cache line boundaries cause bottlenecks for previous Sandybridge-microarch designs. I previously thought this "issue queue" was this buffer, but apparently it isn't.

However it works, the scheduler is large enough to get the data from cache ready in time, if the address isn't on the critical path.


1a: mask with a memory operand

ANDPS  xmm0, [mask]  # in the loop
  • bytes: 7 insn, 16 data. (AVX: 8 insn)
  • fused-domain uops: 1 * n
  • latency added to critical path: 1c (assuming L1 cache hit)
  • throughput: 1/c. (Skylake: 2/c) (limited by 2 loads / c)
  • "latency" if xmm0 was ready when this insn issued: ~4c on an L1 cache hit.

1b: mask from a register

movaps   xmm5, [mask]   # outside the loop

ANDPS    xmm0, xmm5     # in a loop
# or PAND   xmm0, xmm5    # higher latency, but more throughput on Nehalem to Broadwell

# or with an inverted mask, if set1_epi32(0x80000000) is useful for something else in your loop:
VANDNPS   xmm0, xmm5, xmm0   # It's the dest that's NOTted, so non-AVX would need an extra movaps
  • bytes: 10 insn + 16 data. (AVX: 12 insn bytes)
  • fused-domain uops: 1 + 1*n
  • latency added to a dep chain: 1c (with the same cache-miss caveat for early in the loop)
  • throughput: 1/c. (Skylake: 3/c)

PAND is throughput 3/c on Nehalem to Broadwell, but latency=3c (if used between two FP-domain operations, and even worse on Nehalem). I guess only port5 has the wiring to forward bitwise ops directly to the other FP execution units (pre Skylake). Pre-Nehalem, and on AMD, bitwise FP ops are treated identically to integer vector ops, so they can run on all ports, but have a forwarding delay.
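
As a sketch of that inverted-mask variant in intrinsics (using -0.0f, whose only set bit is the sign bit; _mm_andnot_ps computes (~a) & b):

// Sketch: ANDN with a sign-only mask.  The same set1(-0.0f) constant can be
// reused elsewhere in the loop, e.g. for negation with xorps.
static inline __m128 vecabs_andnot(__m128 v) {
    const __m128 signmask = _mm_set1_ps(-0.0f);   // 0x80000000 in every lane
    return _mm_andnot_ps(signmask, v);            // (~signmask) & v  ==  abs(v)
}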


1c: generate the mask on the fly:

# outside a loop
PCMPEQD  xmm5, xmm5  # set to 0xff...  Recognized as independent of the old value of xmm5, but still takes an execution port (p1/p5).
PSRLD    xmm5, 1     # 0x7fff...  # port0
# or PSLLD xmm5, 31  # 0x8000...  to set up for ANDNPS

ANDPS    xmm0, xmm5  # in the loop.  # port5
  • bytes: 12 (AVX: 13)
  • fused-domain uops: 2 + 1*n (no memory ops)
  • latency added to a dep chain: 1c
  • throughput: 1/c. (Skylake: 3/c)
  • throughput for all 3 uops: 1/c saturating all 3 vector ALU ports
  • "latency" if xmm0 was ready when this sequence issued (no loop): 3c (+1c possible bypass delay on SnB/IvB if ANDPS has to wait for integer data to be ready. Agner Fog says in some cases there's no extra delay for integer->FP-boolean on SnB/IvB.)

This version still takes less memory than versions with a 16B constant in memory. It's also ideal for an infrequently-called function, because there's no load to suffer a cache miss.

The "bypass delay" shouldn't be an issue. If xmm0 is part of a long dependency chain, the mask-generating instructions will execute well ahead of time, so the integer result in xmm5 will have time to reach ANDPS before xmm0 is ready, even if it takes the slow lane.

Haswell has no bypass delay for integer results -> FP boolean, according to Agner Fog's testing. His description for SnB/IvB says this is the case with the outputs of some integer instructions. So even in the "standing start" beginning-of-a-dep-chain case where xmm0 is ready when this instruction sequence issues, it's only 3c on *well, 4c on *Bridge. Latency probably doesn't matter if the execution units are clearing the backlog of uops as fast as they're being issued.

Either way, ANDPS's output will be in the FP domain, and have no bypass delay if used in MULPS or something.

On Nehalem, bypass delays are 2c. So at the start of a dep chain (e.g. after a branch mispredict or I$ miss) on Nehalem, "latency" if xmm0 was ready when this sequence issued is 5c. If you care a lot about Nehalem, and expect this code to be the first thing that runs after frequent branch mispredicts or similar pipeline stalls that leave the OoOE machinery unable to get started on calculating the mask before xmm0 is ready, then this might not be the best choice for non-loop situations.


2a: AVX max(x, 0-x)

VXORPS  xmm5, xmm5, xmm5   # outside the loop

VSUBPS  xmm1, xmm5, xmm0   # inside the loop
VMAXPS  xmm0, xmm0, xmm1
  • bytes: AVX: 12
  • fused-domain uops: 1 + 2*n (no memory ops)
  • latency added to a dep chain: 6c (Skylake: 8c)
  • throughput: 1 per 2c (two port1 uops). (Skylake: 1/c, assuming MAXPS uses the same two ports as SUBPS.)

Skylake drops the separate vector-FP add unit, and does vector adds in the FMA units on ports 0 and 1. This doubles FP add throughput, at the cost of 1c more latency. The FMA latency is down to 4 (from 5 in *well). x87 FADD is still 3 cycle latency, so there's still a 3-cycle scalar 80bit-FP adder, but only on one port.
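
As intrinsics, the 2a pattern with the zero hoisted out of the loop might look like this sketch (sumabs_max is an illustrative name paralleling the sumabs example above; compilers will usually hoist the _mm_setzero_ps themselves):

// Sketch: abs via max(v, 0 - v) inside a reduction loop (no alignment checks).
__m128 sumabs_max(const __m128 *a, int n) {
    const __m128 zero = _mm_setzero_ps();   // vxorps, outside the loop
    __m128 sum = _mm_setzero_ps();
    for (int i = 0; i < n; i++) {
        __m128 neg = _mm_sub_ps(zero, a[i]);             // vsubps
        sum = _mm_add_ps(sum, _mm_max_ps(a[i], neg));    // vmaxps, then vaddps
    }
    return sum;
}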

2b: same but without AVX:

# inside the loop
XORPS  xmm1, xmm1   # not on the critical path, and doesn't even take an execution unit on SnB and later
SUBPS  xmm1, xmm0
MAXPS  xmm0, xmm1
  • bytes: 9
  • fused-domain uops: 3*n (no memory ops)
  • latency added to a dep chain: 6c (Skylake: 8c)
  • throughput: 1 per 2c (two port1 uops). (Skylake: 1/c)
  • "latency" if xmm0 was ready when this sequence issued (no loop): same

Zeroing a register with a zeroing-idiom that the processor recognizes (like xorps same,same) is handled during register rename on Sandybridge-family microarchitectures, and has zero latency, and throughput of 4/c. (Same as reg->reg moves that IvyBridge and later can eliminate.)

It's not free, though: It still takes a uop in the fused domain, so if your code is only bottlenecked by the 4uop/cycle issue rate, this will slow you down. This is more likely with hyperthreading.


3: ANDPS(x, 0-x)

VXORPS  xmm5, xmm5, xmm5   # outside the loop.  Without AVX: zero xmm1 inside the loop

VSUBPS  xmm1, xmm5, xmm0   # inside the loop
VANDPS  xmm0, xmm0, xmm1
  • bytes: AVX: 12 non-AVX: 9
  • fused-domain uops: 1 + 2*n (no memory ops). (Without AVX: 3*n)
  • latency added to a dep chain: 4c (Skylake: 5c)
  • throughput: 1/c (saturate p1 and p5). Skylake: 3/2c: (3 vector uops/cycle) / (uop_p01 + uop_p015).
  • "latency" if xmm0 was ready when this sequence issued (no loop): same

This should work, but IDK what happens with NaN either. Nice observation that ANDPS is lower latency and doesn't require the FPU add port.

This is the smallest size with non-AVX.


4: shift left/right:

PSLLD  xmm0, 1
PSRLD  xmm0, 1
  • bytes: 10 (AVX: 10)
  • fused-domain uops: 2*n
  • latency added to a dep chain: 4c (2c + bypass delays)
  • throughput: 1/2c (saturate p0, also used by FP mul). (Skylake 1/c: doubled vector shift throughput)
  • "latency" if xmm0 was ready when this sequence issued (no loop): same

This is the smallest (in bytes) with AVX.

This has possibilities where you can't spare a register, and it isn't used in a loop. (In a loop with no registers to spare, you'd probably use andps xmm0, [mask].)

I assume there's a 1c bypass delay from FP to integer-shift, and then another 1c on the way back, so this is as slow as SUBPS/ANDPS. It does save a no-execution-port uop, so it has advantages if fused-domain uop throughput is an issue, and you can't pull mask-generation out of a loop. (e.g. because this is in a function that's called in a loop, not inlined).
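
In intrinsics, the shift version might look like this sketch (the casts are free reinterprets; the shifts are what cross into the integer domain):

// Sketch: shift the sign bit out the top, then shift a zero back in.
static inline __m128 vecabs_shift(__m128 v) {
    __m128i i = _mm_castps_si128(v);   // no instruction, just a reinterpret
    i = _mm_slli_epi32(i, 1);          // pslld: discard the sign bit
    i = _mm_srli_epi32(i, 1);          // psrld: logical shift back in a 0 sign bit
    return _mm_castsi128_ps(i);
}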


When to use what: Loading the mask from memory makes the code simple, but has the risk of a cache miss. And takes up 16B of ro-data instead of 9 instruction bytes.

  • Needed in a loop: 1c: Generate the mask outside the loop (with pcmp/shift); use a single andps inside. If you can't spare the register, spill it to the stack and 1a: andps xmm0, [rsp + mask_local]. (Generating and storing is less likely to lead to a cache miss than a constant). Only adds 1 cycle to the critical path either way, with 1 single-uop instruction inside the loop. It's a port5 uop, so if your loop saturates the shuffle port and isn't latency-bound, PAND might be better. (SnB/IvB have shuffle units on p1/p5, but Haswell/Broadwell/Skylake can only shuffle on p5. Skylake did increase the throughput for (V)(P)BLENDV, but not other shuffle-port ops. If the AIDA numbers are right, non-AVX BLENDV is 1c lat ~3/c tput, but AVX BLENDV is 2c lat, 1/c tput (still a tput improvement over Haswell))

  • Needed once in a frequently-called non-looping function (so you can't amortize mask generation over multiple uses):

    1. If uop throughput is an issue: 1a: andps xmm0, [mask]. The occasional cache-miss should be amortized over the savings in uops, if that really was the bottleneck.
    2. If latency isn't an issue (the function is only used as part of short non-loop-carried dep chains, e.g. arr[i] = abs(2.0 + arr[i]);), and you want to avoid the constant in memory: 4, because it's only 2 uops. If abs comes at the start or end of a dep chain, there won't be a bypass delay from a load or to a store.
    3. If uop throughput isn't an issue: 1c: generate on the fly with integer pcmpeq / shift. No cache miss possible, and only adds 1c to the critical path.
  • Needed (outside any loops) in an infrequently-called function: Just optimize for size (neither small version uses a constant from memory). non-AVX: 3. AVX: 4. They're not bad, and can't cache-miss. 4 cycle latency is worse for the critical path than you'd get with version 1c, so if you don't think 3 instruction bytes is a big deal, pick 1c. Version 4 is interesting for register pressure situations when performance isn't important, and you'd like to avoid spilling anything.


  • AMD CPUs: There's a bypass delay to/from ANDPS (which by itself has 2c latency), but I think it's still the best choice. It still beats the 5-6 cycle latency of SUBPS. MAXPS is 2c latency. With the high latencies of FP ops on Bulldozer-family CPUs, it's even more likely that out-of-order execution will be able to generate your mask on the fly in time for it to be ready when the other operand to ANDPS is. I'm guessing Bulldozer through Steamroller don't have a separate FP add unit, and instead do vector adds and multiplies in the FMA unit. 3 will always be a bad choice on AMD Bulldozer-family CPUs. 2 looks better in that case, because of a shorter bypass delay from the fma domain to the fp domain and back. See Agner Fog's microarch guide, pg 182 (15.11 Data delay between different execution domains).

  • Silvermont: Similar latencies to SnB. Still go with 1c for loops, and prob. also for one-time use. Silvermont is out-of-order, so it can get the mask ready ahead of time to still only add 1 cycle to the critical path.

Peter Cordes
  • Peter, I've been meaning to thank you for this thorough answer. There's just one small problem with your ideal solution. Microsoft won't let you compare an uninitialized register with itself without first loading from uninitialized stack space if you use intrinsics. It seems you have to compromise by zeroing out a register first using _mm_setzero_ps. Other compilers at least allow you to use inline assembly to get around this. Any ideas on this issue? – Kumputer Sep 18 '15 at 03:46
  • @Kumputer: So if you use a `__m128i var` without initializing it, MSVC emits a load instruction? That's bizarre. gcc older than 5.0 seems to throw in an extra `PXOR` instead of letting you use uninitialized variables for `cmpeq`. Intel's intrinsics guide specifies `_mm_undefined_si128()` for this purpose. icc13 gets it right, and doesn't emit any instructions. gcc 5 is required for that intrinsic to not be just pxor. clang doesn't have it at all. https://goo.gl/UhZmeQ shows some examples. On platforms that don't have `_mm_undefined_si128()`, #define it to `_mm_setzero_si128()`. – Peter Cordes Sep 18 '15 at 04:43
  • Let me know (or edit my answer) what you have to do to get MSVC to generate the desired instructions, in cases where you don't have any initialized `__m128i` variables whose value is no longer needed. (With non-AVX, an extra movdqa would be needed to copy before replacing a register with all-ones.) A few old CPUs don't even recognize pcmpeq as independent of the previous register contents, so it's best to use a variable that's not freshly set. (K10, P4, Intel P6 pre core-2, and Via Nano 2000 apparently don't know this.) Make sure you don't use the `epi64` version; it's slower on a lot of CPUs. – Peter Cordes Sep 18 '15 at 04:52
  • @Kumputer: See if MSVC does a decent job with `_mm_set1_epi32(-1)`. https://goo.gl/SJclXe . gcc 4.5 and later implement that as `pcmpeqd` on an uninitialized register. (gcc 4.4 loads the constant from memory). At `-O0`, icc and clang still use `pcmpeqd`, but `gcc -O0` uses a really bad sequence of storing `-1` to memory and `movd / pinsrd`... With optimization, it's the best choice for gcc, until gcc 5.1 stops inserting a dependency-breaking `pxor` before `_mm_cmpeq` on uninitialized variables. Anyway, there's no godbolt compiler that does a bad job with set1 with optimizations on. – Peter Cordes Sep 18 '15 at 15:01
  • @Kumputer: updated my answer again with a hopefully-good intrinsics version. – Peter Cordes Sep 18 '15 at 16:41
  • I just verified with VS2013, and using `_mm_cmpeq_epi32` on an uninitialized variable will emit a `movdqa` instruction from [ebp-10h] in this test case. That said, I'm doing an abs test on a large float array in a loop, and the compiler is at least smart enough to slide the move from memory above the loop, but not smart enough to move the `pcmpeqd` or `psrld` above the loop. Using `_mm_set1_epi32(-1)` results in a `movdqa` from a constant memory location, but also outside the loop. `_mm_undefined_si128()` is undefined. – Kumputer Sep 21 '15 at 23:40
  • @Kumputer: Then your best bet is to manually inline the `vec_abs` function, and generate the constant outside your loop. I'd suggest `#ifdef MSVC` / `myvec = _mm_setzero_si128();` / `#endif` to get a PXOR instead of a load from the stack. Or use PINSRD from an integer constant, and broadcast it. Or just let the compiler load the whole 16B constant from memory, instead of generating it on the fly. It's not horrible, and probably won't cache-miss too often. Not that it affects this directly, but `Ebp`? You're making 32bit code, with only 8 xmm registers, and FP return vals in x87? ewww. – Peter Cordes Sep 22 '15 at 00:55
  • @Kumputer: with `_mm_set1_epi32(-1)`, do you mean the final shifted constant got loaded from memory, like clang > 3.5? Or was there still a psrld inside the loop? – Peter Cordes Sep 22 '15 at 01:05
  • There was still a psrld inside the loop. WRT 32 bit code, yeah, I know, but can't change that at the moment. The goal, of course, is to get the fastest generic abs inline for all code, especially code outside loops, or when other programmers are unaware of SSE. If I'm looping and I know I'm using SSE, it's a no-brainer for me, but I'm usually not looping. I'll get around to timing all your suggestions at some point. – Kumputer Sep 22 '15 at 03:15
  • @Kumputer: Thanks for the update. If you're not looping, I'd suggest aiming for something that compiles to a `PAND xmmX, [constant]`, esp. for a 32bit target. Generating the mask on the fly isn't that great if it gets inlined several times into a series of calculations. Loading constants from memory is done pretty often for things like shuffle masks, so it's only worth avoiding if you can do so easily. Just make sure it's not duplicated for every call site it's inlined into. (Maybe declare it as a global constant). – Peter Cordes Sep 22 '15 at 03:23
  • Peter, your trick seems pretty ideal after a good bit of testing. In cases where we can avoid explicitly zeroing the mask register before the pcmp, it seems to run as fast as negate-and on my Haswell core. That said, the compiler likes to hoist the mask generation outside the loop, making it great for loop cases too, so I had to write explicit assembly to get the critical ops to happen inside a loop to time them. @RaymondChen, since I'm running x86, though, I think I'll use the shift right, left trick because it requires no additional registers, and performs only slightly worse otherwise. – Kumputer Sep 23 '15 at 23:06
  • @Kumputer: shift left/right is going to have a big penalty on Nehalem, where bypass delays between int and FP are two cycles each way. If latency of the dependency chain including the `abs` is a limiting factor, don't use that on code that needs to be fast on Nehalem. If latency doesn't matter, it's not a bad choice. 2 instructions inside a loop instead of just one. I'd recommend `ANDPS` with a memory source, unless profiling shows it's cache-missing a lot. If possible, try to put the mask global variable in the same cache line as other constants needed in your FP loops. – Peter Cordes Sep 23 '15 at 23:10
  • One little note: for masking all bits except the signbit, instead of `~(1<<31)` you can simply use `INT32_MAX` in C99/C++11, which is defined in `<stdint.h>`/`<cstdint>`. Similarly, to mask only the signbit you can use `INT32_MIN` (beside `-0.0f` for floats). It felt cumbersome when I tried to set these constants manually, because compilers tend to display warnings about it. – plasmacel Jan 03 '17 at 17:47
  • @plasmacel: hmm, that does happen to work, but only because x86 is a 2's complement machine with 32-bit `int`. Bringing that into the picture seems like extra complexity for human readers of the code. e.g. why would you use a 2's complement signed integer constant for messing around with floats? (Though to be fair, `1<<31` is doing exactly the same thing, since I neglected to write `1UL`) – Peter Cordes Jan 03 '17 at 21:03
  • @PeterCordes It works for the same reason like any x86 intrinsics :) I mentioned them because these standard constants provide safe, warning-free cross-compiler solution for x86 code (unless you want to provide constants as decimal values). Btw there are also `INT8_MAX`, `INT16_MAX`, `INT64_MAX` and `*_MIN` variants. – plasmacel Jan 03 '17 at 21:42
  • At least the current version of clang, icc and gcc all combine the identical mask constants if you just use a local mask variable. When you compile for AVX-512 they are smart enough to use broadcast loads too, so the constant is only 32 bits (64 bits) for float (double) vectors. AVX-512 introduces an abs intrinsic, but there is no underlying instruction, it's just sugar: clang and icc handle it with a constant load, while gcc does some vbroadcast nonsense. – BeeOnRope Jul 10 '20 at 17:30