Add Float16

@Karl, great observations about naming. I think Float16 is the right name.

However, back to the substance of the pitch:

I’ve been writing a fair amount of drawing code lately, and decided that Float32 is most appropriate for my specific task. Float64 provides far more precision than most drawing contexts need, and is thus a waste of memory and processing resources. Even Float32 feels like overkill. I did take a look at implementing Float16, but quickly felt like I’d run into large-magnitude edge cases that would make it a bad choice for me. So Float32 was the right balance in my case.
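To make the edge cases concrete, here’s a minimal sketch of the limits I mean, assuming a toolchain and platform where `Float16` is available (Swift 5.3+ on Apple silicon or Linux; it isn’t available on Intel macOS):

```swift
// Float16's maximum finite value is only 65504 — large canvas
// coordinates overflow straight to infinity.
let big = Float16(70000.0)
print(big.isInfinite)        // true

// Integers are only exact up to 2048; beyond that, gaps appear.
// A coordinate of 2049 silently rounds to 2048.
let x = Float16(2049)
print(x == Float16(2048))    // true — 2049 is not representable
```

For small, normalized coordinate spaces none of this bites, but for general-purpose drawing code it’s easy to wander out of range.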

However, for appropriately constrained problem sets, Float16 is a great fit, and can deliver big performance gains. It should be part of the Standard Library.

Right; there are some specific graphics applications where naive use of Float is insufficient (world-space coordinate transforms, generally). That can be addressed either by going to Double or by structuring the algorithm more carefully in Float. For essentially all other graphics uses, Float is more than sufficient.
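A quick sketch of what “structuring the algorithm more carefully” can look like (the numbers here are illustrative, not from any particular engine): at large world-space magnitudes, Float’s spacing between representable values swamps sub-pixel offsets, but working relative to a local origin keeps the precision you need.

```swift
// Float (32-bit) has ~7 significant decimal digits. Near 1,000,000
// the spacing between adjacent Floats is 0.0625, so a 0.03 offset
// is simply absorbed.
let worldX: Float = 1_000_000
let nudged = worldX + 0.03
print(nudged == worldX)          // true — the offset vanished

// Restructuring: express positions relative to a local origin,
// where Float has plenty of precision for the same offset.
let localX: Float = 0
print(localX + 0.03 == localX)   // false — offset preserved
```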

There are also some graphics-adjacent workloads (computational geometry like meshing) for which naive use of Float also falls over, but so does Double—these domains generally require exact arithmetic.

Most modern rendering engines (WPF, CoreGraphics, Cairo) tend to favour Double over Float, because it's just as fast on modern systems and the extra precision (allegedly) helps hide artifacts from repeated floating-point rounding.

In that case, I'd just assumed that, since S4TF is bringing in lots of developers familiar with Python, they might have chosen Float to mean a Python float (when they meant a Swift Double). If they'd written Float32/Float64, I would have assumed it was deliberate.

So why is CGFloat a Double on 64-bit systems?

I understand now that you are referring to 2D graphics systems for UIs, web pages, etc. These are indeed quite different from 3D graphics systems for games, scientific visualization, etc. Thank you for clarifying.

I am not an expert on 2D graphics systems, but I suspect you may find that any GPU-accelerated rendering back-ends for the systems you list actually use float almost exclusively.

I’m also not a machine learning expert, but I am given to understand that machine learning code is often even more aggressive than 3D graphics about using reduced precision to improve performance. Even Float16 is wastefully large for some machine learning cases.

I don't think that's a concern. In fact, while Python's float is 64-bit, the numeric Python community is quite used to choosing between float16, float32, and float64. Those names are used by NumPy and indeed TensorFlow.
