Float precision on GPU, bugs/features

Expected 123 everywhere; the screenshot is from ANGLE DX11 on Nvidia: https://www.shadertoy.com/view/tlfBRB

Reading float bits from a texture and losing some of those bits on the way:

uint value_2 = floatBitsToUint(uintBitsToFloat(uint(value+iZero))+fZero);
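
A minimal sketch of where the bits go, assuming iZero and fZero are runtime zeros (0u and 0.0 obtained at run time, e.g. from a uniform, so the compiler cannot fold the conversions away); the wrapper name roundTripBits is hypothetical:

uint roundTripBits(uint value, uint iZero, float fZero)
{
    // Adding iZero (a runtime 0u) leaves the bit pattern unchanged but blocks constant folding.
    float asFloat = uintBitsToFloat(value + iZero);
    // Adding fZero (a runtime 0.0) is where bits can get lost: a NaN payload may be
    // canonicalized, a denormal may be flushed to zero, and -0.0 turns into +0.0.
    return floatBitsToUint(asFloat + fZero);
}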

Precompiled floats:

https://www.shadertoy.com/view/wdXGW8 Nvidia
sin, sqrt, and float precision patterns: https://www.shadertoy.com/view/NsBBDW
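
A sketch of how to expose this, with an assumed uniform uZero that holds 0.0 at run time: the plain constant calls can be folded by the shader compiler on the host, while adding the runtime zero forces the GPU to evaluate the same expression, and the two results do not have to match.

uniform float uZero; // assumed to be set to 0.0 by the host

vec2 foldingMismatch()
{
    float sinFolded   = sin(1234.5678);          // candidate for compile-time constant folding
    float sinRuntime  = sin(1234.5678 + uZero);  // evaluated on the GPU at run time
    float sqrtFolded  = sqrt(12345.678);
    float sqrtRuntime = sqrt(12345.678 + uZero);
    // Non-zero components reveal a precision mismatch between compiler and GPU.
    return vec2(sinFolded - sinRuntime, sqrtFolded - sqrtRuntime);
}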

The reason the random numbers are generated on NVIDIA cards but not on AMD is that the sine instruction on AMD GPU architectures actually has a period of 1, not 2*PI. It is still fully deterministic with respect to the input value; it just returns different results on different platforms.
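
For illustration, the classic Shadertoy-style hash below (not necessarily the exact one from the shaders linked above) feeds large arguments into sin(), which is exactly where hardware sine implementations diverge:

float hash12(vec2 p)
{
    // dot(p, ...) quickly grows far beyond [-PI, PI], so the result depends on
    // how the GPU's sine instruction handles large arguments.
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}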

Also remember:

smoothstep returns 0.0 if x <= edge0 and 1.0 if x >= edge1.
Results are undefined if edge0 >= edge1.
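
A two-line sketch of the defined versus undefined case:

float ok  = smoothstep(0.0, 1.0, 0.5); // well defined, evaluates to 0.5
float bad = smoothstep(1.0, 0.0, 0.5); // edge0 >= edge1: undefined, may differ per driver/GPU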

Linear interpolation depends on GPU/API:

https://www.shadertoy.com/view/ftXcW7 texture pixel linear interpolation
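
A sketch of the kind of probe involved, assuming Shadertoy's iChannel0 is bound to a texture with LINEAR filtering: sampling part-way between two texels relies on the hardware's fixed-point filtering weights, which are quantized differently across GPUs and APIs, so the result is not guaranteed to be bit-identical everywhere.

vec4 sampleBetweenTexels(vec2 texel)
{
    // 0.37 of the way from this texel to the next one; the blend weight gets
    // rounded to the hardware's fixed-point filtering precision.
    vec2 uv = (texel + vec2(0.5 + 0.37, 0.5)) / iChannelResolution[0].xy;
    return texture(iChannel0, uv);
}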

The shader compiler may use 32- or 64-bit floats to pre-compile (constant-fold) static code:

https://www.shadertoy.com/view/sllXW8 Nvidia
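
A hedged sketch of one way this can surface, again with an assumed runtime zero uZero: if the driver folds the constant expression in 64-bit doubles, its rounding differs from a pure 32-bit evaluation, so the two values below do not have to agree.

uniform float uZero; // assumed to be 0.0 at run time

float foldingDifference()
{
    float folded  = (0.1 + 0.2) * 10000000.0 - 3000000.0;          // may be constant-folded, possibly in doubles
    float runtime = (0.1 + uZero + 0.2) * 10000000.0 - 3000000.0;  // evaluated in 32-bit floats on the GPU
    return folded - runtime; // not guaranteed to be zero
}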

GPU precision: a value that should never be 0 can still end up exactly 0:

https://www.shadertoy.com/view/ftXSWB Nvidia

Testing this behavior:

if (val3 == 0.0) { fragColor = vec4(1.0); return; }
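
A fuller sketch, with an assumed definition of val3 (the original value lives in the shader linked above): mathematically exp2(-x) is never zero, yet in 32-bit floats it underflows to exactly 0.0 once x gets large enough, so the white branch fires.

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Underflows to exactly 0.0 once fragCoord.x passes roughly 150 (earlier if denormals are flushed).
    float val3 = exp2(-fragCoord.x);
    if (val3 == 0.0) { fragColor = vec4(1.0); return; } // white where the value collapsed to zero
    fragColor = vec4(0.0, 0.0, 0.0, 1.0);
}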

Undefined behavior on CPU and GPU:

The resulting value is undefined if <condition>
