RFC: On compile-time diagnosis of erroneous floating point operations

I see. That is interesting. For clarity, I have listed below a positive case where we need a warning and a slightly tricky negative case where we do not need one. Let me know if my understanding is correct:

Positive example (where tininess results in loss of precision)

Float(0x1.000002p-127)

// Float's significand field has 23 fraction bits, and here the last of them (the LSB) is set. That bit is lost in the subnormal representation, whereas it would be preserved if the exponent were unbounded. So we need a warning here, right?

Negative example (there is a loss of precision here, but it is not due to the tininess of the value)

Float(0x1.000001p-127)

// The significand field has only 23 fraction bits, so even with an unbounded exponent (but a bounded significand) the number would be rounded to 0x1p-127. That value is exactly representable in the subnormal representation, and hence we do not need a warning here, right?

Regarding APFloat

It seems that APFloat does not support this directly, though it can tell us when the underflow flag will be set (which is not exactly what we want, since that flag will also be set in my "negative example"). However, if we extract the significand and exponent bit patterns from APFloat, then, I guess, we could check for this property in the SIL diagnostics phase (e.g. by comparing the significand's precision before and after truncation, taking the subnormal representation into account). Let me know whether this is a reasonable approach to take.