Add `Float16`

Are you guys actually doing accumulation in BFloat16? Intel's extension does accumulation in Float, and so do all the other public proposals that I've seen so far.

Edit: for the curious, here's Intel's white paper. They define only three operations: an FMA that accumulates a bfloat16 × bfloat16 product into a float32, and conversions between bfloat16 and float. So all arithmetic is done in float. Every other public proposal I've seen follows exactly the same pattern.
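
To make that concrete, here's a minimal sketch of the pattern, using Swift's proposed Float16 as a stand-in for bfloat16 (which Swift doesn't have): the 16-bit values are only a storage format, and every product is accumulated in Float.

```swift
// Sketch only: Float16 stands in for bfloat16, which Swift does not provide.
// The narrow type is just storage; the multiply and the accumulation happen in Float.
func dot(_ a: [Float16], _ b: [Float16]) -> Float {
    precondition(a.count == b.count)
    var acc: Float = 0
    for (x, y) in zip(a, b) {
        acc += Float(x) * Float(y)  // widen each operand, accumulate in Float
    }
    return acc
}
```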

1 Like

I don't believe that it is publicly documented.

This seems like an interesting idea. However, I don't see general Swift users taking full advantage of it. I understand the use of Float, but most people just use Double. If there are cases where Float16 is genuinely useful, please correct me.

This would require reimplementing log and other math functions, which would lead to a lot of duplicated code. Is gyb used to generate these functions?

As mentioned upthread, Float16 is used heavily in ML and graphics APIs both in Apple's SDKs and on Linux. We would like people to be able to use those APIs from Swift.

No, it doesn't. Those operations are not required for BinaryFloatingPoint conformance, and these types are useful without them for interoperating with the aforementioned APIs (the type would even be useful without any arithmetic at all, but we can and should provide it).

That said, we will eventually provide the usual set of math functions for these types.
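
Until then, a workaround (my own sketch, not part of the proposal) is to widen to Float, call the existing C math function, and narrow the result:

```swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Hypothetical helper, not part of the proposal: evaluate log for Float16
// by widening to Float, calling the C library's logf, and rounding back down.
func log16(_ x: Float16) -> Float16 {
    Float16(logf(Float(x)))
}
```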

In what cases? I've followed the thread, but why would you not just use Float32?

Because being able to transfer or use twice as many weights buys you more accuracy than additional bits per weight does, and because doing computation in Float16 consumes less energy, so you can get more useful work done before you run out of battery. Those are just a few of the reasons why this type gets used pretty heavily now.
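
To put a rough number on the storage half of that argument (the weight count here is made up purely for illustration):

```swift
// Illustration with an arbitrary weight count: the same number of weights
// fits in half the memory when stored as Float16 rather than Float.
let weightCount = 1_000_000
let halfPrecisionBytes = MemoryLayout<Float16>.stride * weightCount   // 2_000_000
let singlePrecisionBytes = MemoryLayout<Float>.stride * weightCount   // 4_000_000
print(halfPrecisionBytes, singlePrecisionBytes)
```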

Ultimately, the exact use cases barely matter. The fact is that there are common APIs that cannot currently be used from Swift, and adding this type makes them usable. That's reason enough to have them.

22 Likes

I support this. I'm also in the Float16 camp (without Half).

What is the interop story with `__fp16` and `_Float16` values coming from C?

1 Like

They use the same underlying LLVM type, so we can make the importer map them both fairly easily.
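
For illustration only, assuming the importer does map both to Float16, a made-up C function such as `_Float16 scale_half(_Float16 x, float factor);` would become callable like this:

```swift
// Assumption: a C header declares
//     _Float16 scale_half(_Float16 x, float factor);
// and the importer maps _Float16 (and __fp16, where it is an arithmetic type)
// to Swift's Float16. The Swift call site would then read:
let scaled: Float16 = scale_half(0.5, 2.0)
```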

1 Like

Any update on this? @scanon

Since it's so widely applicable and would use LLVM intrinsics, I think this would still go in the standard library rather than the numerics library. Is that correct?

1 Like

You also want SIMD support.
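
Something along these lines, assuming Float16 gets SIMDScalar conformance like the other floating-point types:

```swift
// Sketch, assuming Float16 conforms to SIMDScalar the way Float and Double do.
let a = SIMD4<Float16>(1, 2, 3, 4)
let b = SIMD4<Float16>(4, 3, 2, 1)
let sum = a + b                  // elementwise addition
let dotProduct = (a * b).sum()   // elementwise multiply, then horizontal sum
print(sum, dotProduct)
```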

Or ditch Float80 and introduce DoubleAndAHalf?

6 Likes

I'm not sure how serious this suggestion is, but please, no.

1 Like

Well I almost spelled it DoubleAnd½

Since you asked so nicely:

suggestion withdrawn. Please disregard.

8 Likes

+1, it would be nice if a Half typealias were included.

1 Like

Half isn't unheard of, but Float16 is roughly equally widespread, so I don't see a lot of value in adding a second name.

Ok not a big deal :)

What's the harm in adding a nickname?

It seems like if you really wanted to use Half, you could just write `typealias Half = Float16`.

3 Likes

Nicknames cause considerable harm that needs to be offset by more than aesthetic preferences. Having two names for something sows doubt and confusion. Are they the same or different? Does an extension on one appear on the other? Is one the "preferred" one? Should we add an entry in our style guide stating which one to use?

15 Likes

The reason people want Half is that in Metal a 16-bit float is "half", not "float16". So it may actually be more confusing, in a mixed Swift and Metal project, to use Float16 on the Swift side and half on the Metal side.

4 Likes