Should Double/CGFloat equivalence work between UnsafeMutablePointer<Double>/UnsafeMutablePointer<CGFloat>?

import SwiftUI
import UIKit

extension Color {
    var luminance: Double {
        var (r, g, b, a): (Double, Double, Double, Double) = (0, 0, 0, 0)
        UIColor(self).getRed(&r, green: &g, blue: &b, alpha: &a)    // Cannot convert value of type 'UnsafeMutablePointer<Double>' to expected argument type 'UnsafeMutablePointer<CGFloat>'
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    }
}

Edit: This may be related to Double?/CGFloat? not being interchangeable (the same limitation).
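For reference, the usual workaround (a minimal sketch, assuming the standard UIKit getRed(_:green:blue:alpha:) API) is to declare the temporaries as CGFloat and convert at the end:

import SwiftUI
import UIKit

extension Color {
    var luminance: Double {
        // getRed expects UnsafeMutablePointer<CGFloat>, so use CGFloat temporaries.
        var (r, g, b, a): (CGFloat, CGFloat, CGFloat, CGFloat) = (0, 0, 0, 0)
        UIColor(self).getRed(&r, green: &g, blue: &b, alpha: &a)
        // Scalar CGFloat values convert to Double (implicitly since SE-0307).
        return 0.2126 * Double(r) + 0.7152 * Double(g) + 0.0722 * Double(b)
    }
}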


SE-0307 mentions this issue and considers it minor:

The concern around arrays or pointers to CGFloat turns out to be a minor concern as there are not many APIs that take them.

Personally I would ditch CGFloat and make it a typealias (i.e. choose option #3 in the table of choices in SE-0307's "Motivations" section), but I guess that ship has sailed.


This is unrelated to Optional.

On platforms where Double and CGFloat are not layout compatible, the memory location referenced by a pointer stores either one value or the other; two different representations cannot occupy the same memory location simultaneously. It is logically impossible to make typed pointers to two types with potentially distinct memory representations interchangeable with each other.
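To make that concrete, a small sketch (the sizes shown assume a 32-bit target such as armv7k; on 64-bit targets both print 8):

import CoreGraphics

// On a 32-bit platform CGFloat wraps a Float:
//   MemoryLayout<CGFloat>.size == 4
//   MemoryLayout<Double>.size  == 8
// One memory location cannot hold both representations at once, so
// pointers typed to each cannot be interchangeable.
print(MemoryLayout<CGFloat>.size, MemoryLayout<Double>.size)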


Can you explain what you mean? I thought the whole reason for SE-0307 (Allow interchangeable use of CGFloat and Double types) is that CGFloat and Double are the same: you can substitute Double for CGFloat and vice versa everywhere?

No, the whole reason that we need special compiler support as outlined in the proposal is that the two types are not the same on 32-bit platforms. If the two types were the same, then almost all of that proposal would be unnecessary.
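The platform dependence is visible in CoreGraphics itself, which exposes the wrapped type as CGFloat.NativeType:

import CoreGraphics

// CGFloat.NativeType is Float on 32-bit platforms and Double on 64-bit ones.
let value: CGFloat = 1.5
let native: CGFloat.NativeType = value.native
print(MemoryLayout<CGFloat.NativeType>.size)  // 4 on 32-bit, 8 on 64-bit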


So if we live in a 64-bit only world, we can just do:

typealias CGFloat = Double

?

I think the only 32-bit device left is the Watch, so we're not far from a 64-bit-only world?

Unfortunately, changing CGFloat's definition to a typealias would be source- and ABI-breaking:

  • Because CGFloat is its own type (a struct entirely separate from Float and Double), you can write extensions on CGFloat directly which wouldn't appear on either of those types. Converting it into a typealias would migrate all of those extensions over to Float/Double, which could easily conflict with existing methods, extensions, and protocol conformances; that's source-breaking (see the sketch after this list)
  • And, eliminating the struct CGFloat type would break the ABI for existing binaries expecting it to exist (e.g. think of precompiled binaries that rely on CoreFoundation exposing struct CGFloat, running against a version of CoreFoundation that doesn't expose struct CGFloat)
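A hedged illustration of the first point (clamped01 is an invented name, purely for demonstration):

import CoreGraphics

// Today this extension lives on CGFloat and CGFloat alone:
extension CGFloat {
    func clamped01() -> CGFloat { min(max(self, 0), 1) }
}

// If CGFloat were `typealias CGFloat = Double`, the extension above would
// silently become an extension on Double and collide with any existing
// declaration of the same member, such as:
extension Double {
    func clamped01() -> Double { min(max(self, 0), 1) }
}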

There's more detail on this in some of the original discussion threads, but as @xwu alludes to, part of the reason the discussion on this pitch went on for so long was subtle details like these.


I am not saying we should or shouldn't do it; but if we did, it could be an opt-in setting, along with soft-deprecating CGFloat so that devs are motivated to opt in. Once I flip the relevant switch, it's understandable that something may break: if I have time to fix it, I fix it; otherwise I simply flip the switch back and leave it for some later time when I do have time.

What I think @itaifarber is saying is that it cannot be done (because breaking the ABI is not permissible), regardless of whether anyone thinks we should.

I see, thanks.

A small tangent for those in the know. We have this disparity, most likely because of C legacy:

Float types: Float, Double
Float type aliases: Float32 = Float, Float64 = Double
Int types: Int8, Int16, Int32, Int64, Int (plus unsigned versions)

Would it be more logical to have this symmetric type system:

Float types: Float32, Float64
Float type aliases: Float = Float64 or Float32 (per platform)
Int types: Int8, Int16, Int32, Int64 (plus unsigned versions)
Int type aliases: Int = Int32 or Int64 (per platform)

If we created Swift today, would we do it this way?
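For reference, what exists in Swift today:

// Float32/Float64 are standard-library typealiases, not the primary names:
let a: Float32 = 1.0   // identical to Float
let b: Float64 = 2.0   // identical to Double
let n: Int = 3         // Int is its own type, 32 or 64 bits wide per platform
print(type(of: a), type(of: b), Int.bitWidth)  // Float Double 64 (on 64-bit)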

I come from a scientific and engineering background, and making an alias for Float or Int platform dependent would be a problem. If you are streaming millions of data entities through an algorithm, it is important to be precise about your definitions. Float becoming Float64 on "some platform" doubles the memory usage (and halves the memory-to-CPU bandwidth). Many apps use 32-bit floating-point precision because that's how accurate the data is, not because of what platform they are on.

64-bit addressing is totally separate from 16-, 32-, or 64-bit floating-point data.

It does happen to correlate with which platforms alias CGFloat to Float or Double.
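A quick back-of-the-envelope check of that memory argument:

// Streaming 100 million samples:
let count = 100_000_000
let asFloat32 = count * MemoryLayout<Float>.stride   // 400_000_000 bytes
let asFloat64 = count * MemoryLayout<Double>.stride  // 800_000_000 bytes
// A `Float` that silently widened to 64 bits on some platform would double
// the footprint and halve the effective memory-to-CPU bandwidth.
print(asFloat32, asFloat64)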

I think you're missing the point here. Int is already 32 or 64 bits depending on the platform, and if you need a specific size you'd already use Int32 or Int64 explicitly. Ditto for floats: you use Float or Double (aka Float32 and Float64) specifically when you need a particular size/precision.

The symmetric type system above merely removes Int as a separate type (making it similar to the floating-point types, where there is no separate type that aliases to Float32/Float64), makes Int a mere alias, and swaps which floating-point names are types and which are aliases (making Float an alias and Float32/Float64 the main types).

You'd still be able to do everything as before, although there would be some obvious differences and it might feel inconvenient initially: to use the old Float you'd reach for Float32, for a double-precision float you'd reach for Float64, and if you have no specific preference you'd use the new Float, which would match Float64 on an iPhone and Float32 on a watch. In this scheme of things, CGFloat could have been an alias as well (I mean, if we were creating Swift and/or CoreGraphics today); a sketch follows below.
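A hedged sketch of that hypothetical scheme (the Default* names are stand-ins, since the real Float and Int can't be redefined here):

// Hypothetical symmetric scheme: Float32/Float64 as the primary types,
// with the convenience names resolved per platform.
#if arch(arm64) || arch(x86_64)
typealias DefaultFloat = Float64   // the proposed `Float` on 64-bit targets
typealias DefaultInt = Int64       // the proposed `Int` on 64-bit targets
#else
typealias DefaultFloat = Float32
typealias DefaultInt = Int32
#endif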

I also quite like Java's type nomenclature, where short/int/long are hardcoded to 16/32/64 bits, no ifs no buts; it keeps things simple and the names are aesthetically more pleasant than Int32 or i32.

Int & UInt are only platform dependent because they're specifically tied to the bit width of a pointer. Floating-point values aren't used to represent pointers, so there's no reason for there to be a platform-dependent floating-point type (CGFloat notwithstanding).
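That tie to pointer width is easy to verify:

// Int is defined to be exactly as wide as a pointer on the target platform.
print(MemoryLayout<Int>.size == MemoryLayout<UnsafeRawPointer>.size)  // true
print(Int.bitWidth)  // 32 on 32-bit targets, 64 on 64-bit targets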

Isn't this how C works because it's low level, using int for address arithmetic? Whereas Swift is completely abstracted from the underlying hardware architecture, so there is no need to match Int to the memory address space. It makes more sense to be like Java.

The reason Int matches the architecture’s pointer size is that all the architectures Swift supports (and likely ever will support) use the same registers to store pointers and integers. Swift is intended to be useful as a systems programming language, so its common integer type matches the target architecture’s native integer type.

And that absolutely isn’t going to change now.


Due to various reasons, C's int has nothing to do with pointers (except by coincidence on certain platforms). On current 64-bit hardware, int is actually 32-bit, with the (currently optional) (u)intptr_t taking the place of an integer that can hold a pointer.

The original intent was that int would directly correspond to the natural word size of the CPU's arithmetic instructions, but that largely fell by the wayside with the transition to 64-bit.
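Swift's C interop reflects this:

// C's `int` imports as CInt, a typealias for Int32 even on 64-bit targets,
// while Swift's Int tracks the pointer width instead.
print(MemoryLayout<CInt>.size)  // 4
print(MemoryLayout<Int>.size)   // 8 on a 64-bit target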

Oh, that fell by the wayside long before the transition to 64-bit (-:

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple
