This page implements a crude simulation of how floating-point calculations could be performed on a chip implementing n-bit floating-point arithmetic. It does not model any specific chip, but rather aims to comply with the OpenGL ES Shading Language specification. Subnormal numbers are flushed to zero.
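The core operation such a simulation needs is rounding an exact result to the nearest value representable in the n-bit format, flushing subnormal results to zero as described above. A minimal sketch in Python, assuming round-to-nearest-even (the function name and rounding choice are my assumptions, not taken from the page's actual code):

```python
import math

def round_to_format(x, exp_bits, mant_bits):
    """Round x to the nearest value representable with exp_bits of
    exponent and mant_bits of stored mantissa; subnormal results
    are flushed to zero, as on the simulated chip."""
    if x == 0.0 or math.isnan(x) or math.isinf(x):
        return x
    bias = (1 << (exp_bits - 1)) - 1
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))            # abs(x) = m * 2**e with 0.5 <= m < 1
    if e - 1 < 1 - bias:                 # below the smallest normal number:
        return sign * 0.0                # flush to zero (no subnormals)
    # Round the significand to mant_bits fractional bits, ties to even.
    scaled = round(m * (1 << (mant_bits + 1)))  # round() is half-to-even
    result = sign * math.ldexp(scaled, e - mant_bits - 1)
    max_finite = math.ldexp(2.0 - math.ldexp(1.0, -mant_bits), bias)
    return sign * math.inf if abs(result) > max_finite else result
```

With 5 exponent and 10 mantissa bits this reproduces IEEE half-precision rounding (minus subnormal support): `round_to_format(0.1, 5, 10)` gives 0.0999755859375, the usual half-precision value of 0.1.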

For more information, see the Wikipedia article on the half-precision floating-point format.
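For reference, half precision packs one sign bit, 5 exponent bits (bias 15), and 10 mantissa bits into 16 bits. A decoder sketch (the function name is my own; subnormals are decoded here even though the simulation above flushes them):

```python
import math

def decode_half(bits):
    """Interpret a 16-bit integer as an IEEE 754 half-precision value."""
    sign = (bits >> 15) & 0x1        # 1 sign bit
    exp  = (bits >> 10) & 0x1F       # 5 exponent bits, bias 15
    frac = bits & 0x3FF              # 10 mantissa bits
    if exp == 0x1F:                  # all-ones exponent: inf or NaN
        value = math.nan if frac else math.inf
    elif exp == 0:                   # subnormal: frac * 2**-24
        value = math.ldexp(frac, -24)
    else:                            # normal: (1 + frac/1024) * 2**(exp - 15)
        value = math.ldexp(1024 + frac, exp - 25)
    return -value if sign else value
```

For example, `decode_half(0x3C00)` is 1.0 and `decode_half(0x7BFF)` is 65504.0, the largest finite half-precision value.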

Exponent bits:

GLSL precision:
- lowp criteria fulfilled
- mediump criteria fulfilled
- ES 1.00 highp criteria fulfilled
- ES 3.00 highp criteria fulfilled
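The checks above can be reduced to two comparisons against the GLSL ES minimum requirements: a magnitude range expressed as a power of two, and a relative precision expressed as a bit count. This is my own simplified reading, not the page's code (lowp's requirement is actually stated as an absolute precision, which this sketch glosses over):

```python
def fulfills(exp_bits, mant_bits, range_exp, prec_bits):
    """True if a format with the given widths reaches magnitudes of
    at least 2**range_exp and keeps prec_bits bits of relative
    precision (a simplified reading of the GLSL ES minimums)."""
    bias = (1 << (exp_bits - 1)) - 1   # largest unbiased exponent equals bias
    return bias >= range_exp and mant_bits >= prec_bits

# Half precision (5 exponent bits, 10 mantissa bits):
checks = {
    'mediump':       fulfills(5, 10, 14, 10),  # range 2**14, precision 2**-10
    'ES 1.00 highp': fulfills(5, 10, 62, 16),  # range 2**62, precision 2**-16
}
```

Half precision passes the mediump check but not the ES 1.00 highp check; single precision (8 exponent, 23 mantissa bits) passes both.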

WebGL getShaderPrecisionFormat would return:

Operand A: (editable value) ≈ (decimal approximation), binary = (bit pattern)

Operand B: (editable value) ≈ (decimal approximation), binary = (bit pattern)