/* It prints 1229999.2. Why does .3 get converted into .2?
.7 also gets converted to .8,
whereas .1 .2 .4 .5 .6 .8 .9 all remain the same. */
You have encountered floating-point rounding.
A Float is represented as a 32-bit value: 1 sign bit, 8 exponent bits that select the binade (essentially the order of magnitude), and 23 significand (aka mantissa) bits that represent the value within that binade.
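As an illustrative sketch (not part of the original answer), you can pull those three fields out of a Float's bit pattern in Swift using the standard `bitPattern` property:

```swift
let x: Float = 1229999.25
let bits = x.bitPattern                  // the raw 32-bit encoding

let sign        = bits >> 31             // 1 sign bit
let exponent    = (bits >> 23) & 0xFF    // 8 biased exponent bits
let significand = bits & 0x7F_FFFF       // 23 significand bits

// 1229999.25 is about 1.17 × 2^20, so the biased exponent is 20 + 127 = 147.
print(sign, exponent, significand)
```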
The significand stores a fixed number of significant digits (in binary), and the last of these bits (the unit in the last place, or ulp) represents the smallest difference that a Float in a given binade can distinguish.
Essentially, a floating-point number is always rounded to a fixed number of binary digits, and those are the only numbers which can be exactly represented as a Float. If you try to store a number which cannot be represented exactly as a Float, the number will be rounded to the nearest representable value.
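To make this concrete (a sketch, not part of the original answer): a Float near 1229999 lies in the binade [2²⁰, 2²¹), where the ulp is 2⁻³ = 0.125, so every representable value is a multiple of 0.125. The literal 1229999.3 is rounded to the nearest such multiple, 1229999.25, and Swift then prints the shortest decimal string that rounds back to that value, which is the 1229999.2 from the question:

```swift
let x: Float = 1229999.3
print(x)                // "1229999.2", as observed in the question

// The stored value is exactly 1229999.25, the nearest representable Float:
print(x == 1229999.25)  // true
print(x.ulp)            // 0.125 — spacing between adjacent Floats in this binade
```

The same reasoning explains .7 becoming .8: 1229999.7 rounds to the stored value 1229999.75, which prints as 1229999.8, while .1, .2, .4, .5, .6, .8, and .9 happen to round to stored values whose shortest round-trip string matches the original digit.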
There is a substantial amount of literature available describing floating-point rounding in great detail, including common mistakes, best practices, and strange edge cases. This is a deep and nuanced area, and if you will be working with floating-point numbers, I strongly encourage you to start reading about how rounding works for them.
Indeed. For some links to specific resources, check out this DevForums post.
Share and Enjoy
Quinn “The Eskimo!” @ DTS @ Apple