Float precision on GPUs: bugs and features

The expected output is 123 everywhere; the screenshot shows ANGLE DX11 and Nvidia: https://www.shadertoy.com/view/tlfBRB


Reading float bits from a texture and losing float bits:

The numbers on the left side of the screenshot come from:

uint value_2 = floatBitsToUint(uintBitsToFloat(uint(value+iZero))+fZero);
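A fuller sketch of this test (my reconstruction; `iZero` and `fZero` are assumed to be zeros obtained at runtime, e.g. from `iFrame`, so the compiler cannot constant-fold the round trip). The key point: the bit pattern 123 reinterpreted as a float is a denormal, and adding even 0.0 to it may flush it to zero:

```glsl
// Sketch, assuming a Shadertoy-style shader with the iFrame uniform.
// iZero/fZero are 0 at runtime but opaque at compile time.
uint  iZero = uint(min(iFrame, 0)); // always 0u at runtime
float fZero = float(iZero);         // always 0.0 at runtime
uint  value = 123u;

// 123u reinterpreted as a float is a denormal (~1.72e-43).
// Adding fZero forces a real float addition; hardware that flushes
// denormals to zero turns the value into 0.0, so value_2 becomes 0u
// instead of the expected 123u.
uint value_2 = floatBitsToUint(uintBitsToFloat(value + iZero) + fZero);
```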

The right side of the screenshot shows the same bits read back through a texture:
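A sketch of the texture round trip (my assumption of the setup: a Shadertoy float buffer bound to iChannel0; `texelFetch` is used to avoid filtering, which could alter the bits further):

```glsl
// Pass 1 (Buffer A): store the bit pattern 123u as a raw float.
// fragColor.x = uintBitsToFloat(123u); // a denormal value

// Pass 2 (Image): read it back and reinterpret the bits.
float loaded = texelFetch(iChannel0, ivec2(0, 0), 0).x;
uint  bits   = floatBitsToUint(loaded);
// bits may come back as 0u instead of 123u if the texture path
// flushes denormals (or stores at reduced precision) on this GPU/driver.
```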

Floats pre-computed by the shader compiler:

https://www.shadertoy.com/view/wdXGW8 Nvidia

Results of functions such as the trigonometric functions (sin etc.), pow, sqrt and others computed on the CPU may not be bit-equal to the same functions computed on the GPU.
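For example (a sketch; `u_angle` and `u_cpu_sin_bits` are hypothetical uniforms, not from the original shader): compute sin on the CPU, upload its bit pattern, and compare it against the GPU's sin of the same input.

```glsl
uniform float u_angle;        // hypothetical: the input value
uniform uint  u_cpu_sin_bits; // hypothetical: bits of sinf(angle) from the CPU

bool compareSinBits() {
    uint gpu_bits = floatBitsToUint(sin(u_angle));
    // Bit equality is NOT guaranteed: GLSL only specifies precision
    // bounds (ULP ranges) for sin/pow/sqrt, not exact results.
    return gpu_bits == u_cpu_sin_bits;
}
```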

Also remember:

The shader compiler may use 32-bit or 64-bit floats when pre-computing static (constant) code:

https://www.shadertoy.com/view/sllXW8 Nvidia
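A sketch of why this matters (`fZero` is assumed to be a runtime zero derived from `iFrame`, so the second expression cannot be constant-folded):

```glsl
float fZero = float(min(iFrame, 0)); // 0.0 at runtime, opaque to the compiler

// The compiler may fold this whole expression at compile time,
// possibly evaluating in 64-bit doubles and rounding once at the end:
float folded = sin(1.0) * 12345.678;

// The runtime zero forces evaluation on the GPU in 32-bit floats:
float runtime = sin(1.0 + fZero) * 12345.678;

// The two results can differ in the last bits:
uint diff = floatBitsToUint(folded) ^ floatBitsToUint(runtime);
```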

A value that mathematically should never be exactly 0 can still become 0 on the GPU (underflow / denormal flush):

https://www.shadertoy.com/view/ftXSWB Nvidia

Shown in the shader and in the screenshot:

Testing this behavior:
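One way to test it (a sketch, not the shader from the link above): keep halving a small float and see how soon it collapses to 0.0. Under IEEE-754 gradual underflow the value survives as a denormal for many extra steps; hardware that flushes denormals to zero reaches 0.0 immediately.

```glsl
float v = uintBitsToFloat(0x00800000u); // smallest normal float (~1.18e-38)
int zero_step = -1;
for (int i = 0; i < 32; i++) {
    v *= 0.5; // each step goes deeper into the denormal range
    if (v == 0.0) { zero_step = i; break; }
}
// With gradual underflow, v stays nonzero for ~23 halvings;
// with flush-to-zero, zero_step is 0 (the first halving already gives 0.0).
```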


Undefined behavior on CPU and GPU:

GLSL has its own cases of undefined behavior, just like usual CPU-side code.