Say I have some UI widget that takes a floating-point value in 0.0 ... 1.0 representing a sound volume setting. Internally, everything is in terms of CGFloat because that's what UIKit uses. But I wonder: should I be exposing CGFloat in my own API, or use Double instead?
CGFloat is 32-bit on 32-bit platforms, as far as I am aware. If this is only for Apple platforms, then it probably doesn't matter. But if you're interested in supporting something like the Raspberry Pi, older models of which are 32-bit, or WebAssembly, which is going to stay 32-bit for the foreseeable future, then this probably matters a lot. I'm not sure about the Windows situation and whether the Windows port of Swift is going to support 32-bit Windows in any way.
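If you want to verify this on a given target, CGFloat exposes its underlying type; a quick sketch (CGFloat comes from CoreGraphics on Apple platforms, and from swift-corelibs-foundation elsewhere):

```swift
import CoreGraphics

// CGFloat wraps Double on 64-bit targets and Float on 32-bit ones.
print(MemoryLayout<CGFloat>.size)   // 8 on 64-bit platforms, 4 on 32-bit
print(CGFloat.NativeType.self)      // Double on 64-bit, Float on 32-bit
```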
I wish SwiftUI had kept typealias Length = Double, which was in the first few betas. CGFloat is all great, but Double is better, and then there are things like CGSize not conforming to Hashable.
I'm not sure what the connection is, in this discussion, between CGFloat, which can be a Double or a Float depending on the machine architecture, and CGSize, which is a pair of CGFloats. Not quite sure what your point that CGSize doesn't conform to Hashable out of the box has to do with the thread, either.
Float literals don't matter here: all of the floating-point types (Float, Double, CGFloat, NSNumber, etc.) can be initialized with floating-point literals.
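For instance, the same literal type-checks against any of them:

```swift
import Foundation      // NSNumber
import CoreGraphics    // CGFloat

let a: Float = 0.5
let b: Double = 0.5    // also what a plain `let x = 0.5` infers
let c: CGFloat = 0.5
let d: NSNumber = 0.5  // NSNumber adopts ExpressibleByFloatLiteral too
```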
You could even use a Float if you wanted, depending on the precision you need. Are roughly 7 significant figures okay, or do you need 15–16?
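Roughly, that's the difference you'd see:

```swift
let f: Float  = 1.0 / 3.0
let d: Double = 1.0 / 3.0
print(f)  // 0.33333334         (~7 significant decimal digits)
print(d)  // 0.3333333333333333 (~16 significant decimal digits)
```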
Same goes for the U/Int family: a literal defaults to Int, and if you need a different type you'll have to be explicit about that. Basically all literals in Swift behave like that, which is cool because you can create your own types that adopt one or more of the different literal forms. Set, for example, adopts the array literal.
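As a sketch, here's a hypothetical Volume type adopting the float literal, next to Set's array-literal adoption:

```swift
// Hypothetical wrapper type; adopting ExpressibleByFloatLiteral lets
// callers write `let v: Volume = 0.8` directly.
struct Volume: ExpressibleByFloatLiteral {
    var value: Double
    init(floatLiteral value: Double) {
        self.value = min(max(value, 0.0), 1.0)  // clamp to 0.0...1.0
    }
}

let v: Volume = 0.8            // float literal, custom type
let s: Set<Int> = [1, 2, 3]    // array literal, standard library type
```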
Just use CGFloat for your UI and you'll be on the safe side and won't require much conversion.
Personally, as a rule of thumb, I'd go with Double. You can always add an overload for CGFloat if it's a common use-case.
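A minimal sketch of that shape (names are illustrative):

```swift
import CoreGraphics

// Double is the primary entry point; the CGFloat overload just forwards.
func setVolume(_ level: Double) {
    // clamp and hand off to the underlying player...
}

func setVolume(_ level: CGFloat) {
    setVolume(Double(level))
}
```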
Double is the default float-literal type for a reason, and it's probably the floating-point type you'll encounter most often when interacting with 3rd-party libraries. For similar reasons, Swift APIs tend to use Int everywhere (even when negative values are not allowed).
CGFloat is not “Apple float” - it is specifically related to the CoreGraphics rendering framework. Plenty of non-CG Apple frameworks use standard Float and Double types.
I mean, it’s an academic discussion (nobody is going to hear the difference between single- and double-precision volume levels, and CGFloat is just a wrapper around one of them), but if you want to be strictly correct about it, it doesn’t make sense to express the volume in terms of a rendering-related type. The control will likely draw a filled bar or whatever using CGFloats (because that is rendering-related), but the underlying volume level at a programmatic level has nothing to do with how it is presented.
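To make that concrete, a sketch of a control (hypothetical names) that keeps the model in Double and only converts to CGFloat at the drawing boundary:

```swift
import UIKit

final class VolumeControl: UIView {
    // Programmatic API: a plain Double in 0.0...1.0, independent of rendering.
    var volume: Double = 0.5 {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        // Rendering boundary: convert to CGFloat only where CoreGraphics needs it.
        let bar = CGRect(x: rect.minX, y: rect.minY,
                         width: rect.width * CGFloat(volume),
                         height: rect.height)
        UIColor.systemBlue.setFill()
        UIRectFill(bar)
    }
}
```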