Why is Double preferred over Float?

Hi all, my question is the same as the title of this post. Many thanks for any help!

1 Like

I see that you have posted the answer in your question: it is because of the higher precision of Double. Swift does type inference, and given a choice it will pick the more precise type. Of course, you can use a type annotation to get a Float instead of a Double. The same is true of single characters in double quotes: they default to String rather than Character. More details here
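For example, here is a minimal sketch of how the inference plays out (the variable names are just for illustration):

```swift
let inferred = 3.14            // inferred as Double by default
let annotated: Float = 3.14    // a Float, via explicit type annotation
let text = "a"                 // inferred as String, not Character
let char: Character = "a"      // a Character, via explicit type annotation

print(type(of: inferred))   // Double
print(type(of: annotated))  // Float
print(type(of: text))       // String
print(type(of: char))       // Character
```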

1 Like

Hey, thanks for the answer. But doesn't Double require more memory than Float? To me it would seem logical to prefer the type that requires less memory. :slight_smile: Thanks!

1 Like

You are right, Double does require extra memory. But Swift had to choose between precision and memory, and the default was made in favor of precision. For example, Int16 exists but is not the default for integer literals; for integers that is not about precision, just that there is a trade-off. In the case of Double, the trade-off is between precision and memory, and precision was chosen.
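You can see the memory cost directly with MemoryLayout (a quick sketch; both sizes are fixed by the IEEE 754 formats):

```swift
print(MemoryLayout<Float>.size)   // 4 bytes
print(MemoryLayout<Double>.size)  // 8 bytes
```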

1 Like

Thanks for the clarification.

Note that the docs say (emphasis added):

In situations where either type would be appropriate, Double is preferred

You should already have determined that the memory footprint and precision of both types are acceptable at that point (which covers 99% of cases).

Swift just likes to be opinionated and nudge you toward a common choice. There are merits when most of a program uses the same type: for one, you don't need to bother with conversions between types.

PS
Note also that the preferred choice for integers is Int, not Int64.

3 Likes

Forgive me for asking, but what is the difference between Int and Int64? I assumed they would be the same.

Int is 32-bit on a 32-bit machine and 64-bit on a 64-bit machine. Again, it is usually the right choice if your data range fits. It is the native integer type and is generally the fastest, if there is any noticeable difference in speed between the types.
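A quick sketch to check this on your own machine:

```swift
print(Int.bitWidth)    // 64 on a 64-bit machine, 32 on a 32-bit machine
print(Int64.bitWidth)  // always 64
print(Int.bitWidth == Int64.bitWidth)  // true on 64-bit platforms
```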

The better question is: why do we specifically use Float64 rather than any other choice?

And why, in Swift, when Int is Int64, is Float not Float64? :pensive:

1 Like

Float64 is a typealias for Double, at least according to the Apple documentation. There is also a Float32, which is a typealias for Float. Float and Double are defined by their IEEE 754 formats, implemented in hardware.
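You can confirm the aliasing in a playground (a minimal sketch):

```swift
let x: Float64 = 1.0
let y: Float32 = 1.0
print(type(of: x))  // Double
print(type(of: y))  // Float
print(Float64.self == Double.self)  // true
```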

1 Like

Actually, the answer is probably: "Because C". :slight_smile:

Specifically, in C the natural floating-point type is double, and for an intermediate period in history there were CPUs that had double registers but no float registers. In other words, float was a storage type, not a calculation type, hence the weird C rules that promote float to double in some contexts.

With a blank slate, it would have made sense for Swift's IEEE 64-bit floating point type to be called Float and the 32-bit version Float32, analogously to Int vs. Int32. I guess that was thought to be too confusing for developers coming from C or Obj-C.

Still, there's some reason to prefer that Swift's Float and Double be freestanding basic types rather than related to each other. 32-bit floats are too limited in precision to be a good general-purpose calculation type; they are generally better suited to being a compact storage type.

I'd say that's the real answer to the OP's question. Unless you are prepared to manage error accumulation from loss of precision very carefully, Double is the better choice, because you don't usually have to sweat the details of every floating-point expression.

Of course, that's all too sweeping to be entirely true (Float has its uses as a calculation type too), but I think that's the flavor of why the floating-point types don't follow the integer naming pattern, and why Double is preferred.
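To make the error-accumulation point concrete, here is a small sketch (the loop count is arbitrary, and exact outputs vary slightly by platform):

```swift
// Sum 0.1 one million times in each precision. The value 0.1 is not
// exactly representable in binary floating point, and Float's roughly
// 7 decimal digits of precision let the rounding error snowball.
var singleSum: Float = 0
var doubleSum: Double = 0
for _ in 0..<1_000_000 {
    singleSum += 0.1
    doubleSum += 0.1
}
print(singleSum)  // roughly 100958.34 (nearly 1% off)
print(doubleSum)  // roughly 100000.0000013 (off by about one part in 10^11)
```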

4 Likes

In general Double is preferred because it’s SO MUCH more precise than Float, and on 64-bit platforms (e.g., everything but watchOS) it’s usually just as fast (sometimes faster) to work with.

So unless you are on an architecture that’s EXTREMELY memory-limited, you should default to using Double so you don’t have to keep swatting bugs that come from low precision.

Graphics code, specifically, is a lot easier to deal with using Double: I was originally using Float for my 3D code but found that chains of matrix operations would end up pretty far off with Float matrices.

So, yes, if you’re allocating a gigantic buffer of a million floating-point numbers, you might stop and think about whether you want Float16, Float, or Double. But if you’re just adding up some non-integer numbers in a loop, use a Double!
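As an illustration of the matrix-drift problem, here is a sketch using Apple's simd module (the rotation-chain setup is invented for this example):

```swift
import Foundation  // for cos/sin
import simd        // Apple's SIMD types; assumes an Apple platform

// Rotate a unit vector by 1 degree, 100,000 times, in each precision.
// Mathematically, the vector's length should remain exactly 1.
let theta = Double.pi / 180
let (c, s) = (cos(theta), sin(theta))

let rotF = simd_float2x2([simd_float2(Float(c), Float(s)),
                          simd_float2(-Float(s), Float(c))])
let rotD = simd_double2x2([simd_double2(c, s),
                           simd_double2(-s, c)])

var vF = simd_float2(1, 0)
var vD = simd_double2(1, 0)
for _ in 0..<100_000 {
    vF = rotF * vF
    vD = rotD * vD
}
print(simd_length(vF))  // drifts visibly away from 1.0 with Float
print(simd_length(vD))  // stays very close to 1.0 with Double
```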

-Wil

1 Like

(Going into the weeds a bit.) Double-precision arithmetic is basically just as fast* as single-precision; this is true even on most 32-bit platforms, including all iOS and watchOS hardware that is still supported (and all but the earliest unsupported models, too). There are some extremely limited CPUs where this is not the case, but Swift doesn't target them.

The only real exception is that you can fit more Floats than Doubles into a given amount of space, which applies to both memory and SIMD vectors, so you will get a speedup from reduced cache traffic and from vectorization when it's possible. Simple scenarios like summing two arrays can easily be twice as fast using Float for this reason, while more complex algorithms may see no performance change at all.

[*] The semi-exceptions to this are divide and square root, which have somewhat longer latency for double than for float on many CPUs, but these should be rare in most programs.
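A rough sketch of the memory-bandwidth effect (the array size and the timing approach are arbitrary choices for illustration; compile with -O, and expect the numbers to vary by machine):

```swift
import Foundation

let n = 10_000_000
let floats  = [Float](repeating: 1.5, count: n)
let doubles = [Double](repeating: 1.5, count: n)

// Time a closure and report the elapsed seconds.
func time(_ label: String, _ body: () -> Void) {
    let start = Date()
    body()
    print(label, Date().timeIntervalSince(start), "seconds")
}

var fSum: Float = 0
var dSum: Double = 0
time("Float sum: ") { fSum = floats.reduce(0, +) }
time("Double sum:") { dSum = doubles.reduce(0, +) }
print(fSum, dSum)  // keep the results live so the work isn't optimized away
```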

4 Likes

But whether Swift "targets" the GPU sitting next to those CPUs is true or false depending on your point of view.

Reducing Shader Bottlenecks:

Dropping precision is going to help somewhere on the way to final delivery, but it's not easy to find guidance on exactly where. It's tedious to test, too; better documentation would be great!