
I have a GLSL shader that's supposed to output NaNs when a condition is met. I'm having trouble actually making that happen.

Basically I want to do this:

float result = condition ? NaN : whatever;

But GLSL doesn't seem to have a constant for NaN, so that doesn't compile. How do I make a NaN?


I tried making the constant myself:

float NaN = 0.0/0.0; // doesn't work

That works on one of the machines I tested, but not on another. Also it causes warnings when compiling the shader.

Given that the obvious computation didn't work on one of the machines I tried, I get the feeling that doing this correctly is quite tricky and involves knowing a lot of real-world facts about the inconsistencies between various types of GPUs.

Craig Gidney
  • Why do you want to generate a NaN value? Every stage of the shader pipeline is expected to generate a valid result for its stage – ibesora May 30 '16 at 19:48
  • @ibesora I'm using WebGL to simulate a quantum circuit, and NaN is how I signal an error condition during an aggregation step. [Here's the specific shader that's causing trouble](https://github.com/Strilanc/Quirk/blob/efb797b9bd48f2dc46deadade17e883b00fffc26/src/gates/AmplitudeDisplayFamily.js#L311). – Craig Gidney May 30 '16 at 19:51
  • Maybe a uniform with NaN supplied? – Tamas Hegedus May 30 '16 at 20:04
  • @TamasHegedus I was actually just about to add that as a possible answer. If you know it works across various GPUs, then that's what I'll go with. – Craig Gidney May 30 '16 at 20:05
  • Unfortunately I have a single nvidia 840m at hand, can't test it on multiple devices. You can post it as an answer. – Tamas Hegedus May 30 '16 at 20:06

3 Answers

8

Don't use NaNs here.

Section 2.3.4.1 from the OpenGL ES 3.2 Spec states that

The special values Inf and −Inf encode values with magnitudes too large to be represented; the special value NaN encodes “Not A Number” values resulting from undefined arithmetic operations such as 0/0. Implementations are permitted, but not required, to support Inf's and NaN's in their floating-point computations.

So it really depends on the implementation. You should output another value instead of NaN.

ibesora
  • Hmm, well that's extremely inconvenient. Very useful to know, though. – Craig Gidney May 30 '16 at 20:07
  • This has nothing to do with WebGL. WebGL is not based on OpenGL. It's based on OpenGL ES. If you're quoting a spec please quote the correct spec. – gman Apr 03 '18 at 07:37
  • You are right @gman, although both specs describe the same behaviour, I changed the spec link to the OpenGL ES one. – ibesora Dec 07 '18 at 11:52
2

Pass it in as a uniform

Instead of trying to make the NaN in GLSL, make it in JavaScript and then pass it in:

shader = ...
    uniform float u_NaN;
    ...

call shader with "u_NaN" set to NaN
Craig Gidney
0

Fool the Optimizer

It seems like the issue is the shader compiler performing an overly aggressive optimization: it replaces the NaN-producing expression with 0.0. I have no idea why it would do that... but it does. Perhaps the spec permits it, since implementations aren't required to support NaNs.

Based on that assumption, I tried making an obfuscated method that produces a NaN:

float makeNaN(float nonneg) {
    // The argument to sqrt is always <= -1.0, so the result is NaN
    // (on implementations that support NaN at all).
    return sqrt(-nonneg - 1.0);
}

...
    float NaN = makeNaN(some_variable_I_know_isnt_negative);

The idea is that the optimizer isn't clever enough to see through this. And, on the test machine that was failing, this works! I also tried simplifying the function to just return sqrt(-1.0), but that brought back the failure (further reinforcing my belief that the optimizer is at fault).

This is a workaround, not a solution.

  1. A sufficiently clever optimizer could see through the obfuscation and start breaking things again.
  2. I only tested it on a couple of machines, and this is clearly something that varies a lot.
Craig Gidney