Why is Double preferred over Float?

In general, Double is preferred because it's SO MUCH more precise than Float (roughly 15-16 significant decimal digits versus Float's 6-7), and on 64-bit platforms (e.g., everything but watchOS) it's usually as fast (sometimes faster) to work with.

So unless you are on an architecture that's EXTREMELY memory-limited, you should default to using Double so you don't have to keep swatting bugs that come from low precision.

Graphics code in particular is a lot easier to deal with in Double: I was originally using Float for my 3D code, but found that chains of matrix operations would drift pretty far off with Float matrices.
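A minimal sketch of that drift, using a hypothetical 2×2 rotation instead of full 3D matrices to keep it short: a rotation preserves vector length exactly, so after many rotations any deviation of the length from 1 is pure accumulated rounding error. The function names and step count here are illustrative, not from any real codebase.

```swift
import Foundation

// Rotate the unit vector (1, 0) by 0.1 radians `steps` times in Float,
// then return how far the (mathematically invariant) length has drifted from 1.
func floatDrift(steps: Int) -> Float {
    let c = Float(cos(0.1)), s = Float(sin(0.1))
    var x: Float = 1, y: Float = 0
    for _ in 0..<steps {
        (x, y) = (c * x - s * y, s * x + c * y)
    }
    return abs((x * x + y * y).squareRoot() - 1)
}

// The same chain of operations carried out in Double.
func doubleDrift(steps: Int) -> Double {
    let c = cos(0.1), s = sin(0.1)
    var x = 1.0, y = 0.0
    for _ in 0..<steps {
        (x, y) = (c * x - s * y, s * x + c * y)
    }
    return abs((x * x + y * y).squareRoot() - 1)
}

let steps = 100_000
print("Float drift:  \(floatDrift(steps: steps))")
print("Double drift: \(doubleDrift(steps: steps))")
```

On a typical machine the Float version has drifted by several orders of magnitude more than the Double version after the same chain of operations, which is exactly the effect that bites in long matrix pipelines.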

So, yes, if you're allocating a gigantic buffer of a million floating-point numbers, you might stop and think about whether you want Float16, Float, or Double. But if you're just adding up some non-integer numbers in a loop, use a Double!
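The "adding up in a loop" case is easy to demonstrate, since 0.1 has no exact binary representation in either type: summing it a million times in Float lands visibly off from 100,000, while Double stays correct to well under a rounding-relevant margin. (The loop count is just an illustrative choice.)

```swift
// Sum 0.1 one million times in both precisions; the exact answer is 100_000.
var floatSum: Float = 0
var doubleSum: Double = 0
for _ in 0..<1_000_000 {
    floatSum += 0.1
    doubleSum += 0.1
}
print("Float sum:  \(floatSum)")   // off by a clearly visible amount
print("Double sum: \(doubleSum)")  // correct to many decimal places
```

Once the running Float total grows large, each added 0.1 is rounded to the nearest representable value near 100,000, so the error compounds fast; Double has enough spare digits that the same loop stays effectively exact.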

-Wil
