Compilation conditions for word size

There could theoretically be new ABIs where CGFloat doesn't follow the word size.

2 Likes

That would be a breach of contract, though.

The size and precision of this type depend on the CPU architecture. When you build for a 64-bit CPU, the CGFloat type is a 64-bit, IEEE double-precision floating point type, equivalent to the Double type. When you build for a 32-bit CPU, the CGFloat type is a 32-bit, IEEE single-precision floating point type, equivalent to the Float type.

From what you quoted, nothing in that specifically says CGFloat follows the word size. It says it is CPU architecture-dependent; that is, if an architecture changes, it may differ. Yes, it draws correlations, but it does not state them outright.

Nevertheless, I think this points to a complexity we need to think about in the architecture check, and perhaps shows one of the reasons this may have been held back for some time: we need to define precisely what we mean by these checks, and what we expect to be able to do based on them. How flexible do the checks need to be? What problems do we intend to solve, so that this feature meets the requirements of the developers, like yourself, who'll use it?

Definitely pro the idea. I think we just need to be careful to fully understand the goal.

2 Likes

Eh, that's already bogus. CGFloat is Float on arm64_32, which indisputably is "for a 64-bit CPU".

But more to the point, code that isn't the CoreGraphics SDK overlay should not be trying to infer what CGFloat is based on pointer size, CPU, or anything else. It should use CGFloat when necessary, and convert to a type of known-size when necessary, and use MemoryLayout when exact layout is required. Attempting to infer what CGFloat "really is" instead of using the information provided by the SDK is basically always a bug.
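
As an illustration of that discipline, here is a minimal sketch (assuming CoreGraphics is available to import):

import CoreGraphics

// Use CGFloat as-is; convert explicitly at boundaries that need a fixed-size type.
let height: CGFloat = 3.5
let asDouble = Double(height)  // always valid; may widen
let asFloat = Float(height)    // always valid; may lose precision

// When exact layout matters, ask rather than infer:
let cgFloatBytes = MemoryLayout<CGFloat>.size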

10 Likes

This discussion seems to be going a bit into the weeds with the distraction of CGFloat.

It seems like what's needed are independent tests for the bit size of the natural word in the architecture, and the bit size of a pointer.

intBitWidth
wordBitWidth
pointerBitWidth

More generally, and in addition, perhaps a conditional to test the size of a type, such as bitWidth(Int) or bitWidth(CGFloat), would also address the CGFloat issue and similar ones, if needed?
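
To make the shape concrete, here is a purely hypothetical sketch of how such conditionals might be spelled; none of these exist in Swift today:

#if pointerBitWidth == 64   // hypothetical conditional
typealias NativeWord = Int64
#else
typealias NativeWord = Int32
#endif

#if bitWidth(CGFloat) == 64  // hypothetical conditional
// CGFloat is Double-sized on this target
#endif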

1 Like

This sort of information already has the spelling MemoryLayout<T>.size; are you really just asking to be able to use that in an #if context?

1 Like

Yes, that would seem to cover the general case of checking the width of a type, including Int. I don't think it covers the width of a pointer (though _pointer_bit_width does, I guess, if it's public)?

MemoryLayout<UnsafeRawPointer>.size?

Ah yes!

Basing an #if conditional on the size of specific types would be problematic, because it creates a layering problem: #if is evaluated before any imports or semantic analysis happen, so we wouldn't even be able to do name lookup to find out where a type is defined, let alone determine its layout. In principle it could also create circular dependencies, like:

#if sizeof(Foo) == 8
import Foo_is_16 // defines struct Foo { var x, y: Int64 }, making Foo 16 bytes
#elseif sizeof(Foo) == 16
import Foo_is_8  // defines struct Foo { var x: Int64 }, making Foo 8 bytes
#endif

It also seems like it invites misuse since, as in the discussion above, you could check the size of one type and draw inappropriate conclusions about other types from it. Having a conditional that checks a higher-level trait of the platform (like ILP32/LP64/LLP64-ness) makes sense to me, though we should be careful to specify exactly what these mean.

Note that, if you're just trying to conditionalize logic within a function, without changing the types of declarations, an ordinary runtime check like if MemoryLayout<UnsafeRawPointer>.size == 8 already works fine and will get constant-folded away.
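
A minimal sketch of that runtime-check approach; the optimizer folds the branch to a constant for each target:

func pointerWidthDescription() -> String {
    if MemoryLayout<UnsafeRawPointer>.size == 8 {
        return "64-bit pointers"  // only this branch survives on 64-bit targets
    } else {
        return "32-bit pointers"  // only this branch survives on 32-bit targets
    }
}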

5 Likes

I still feel like no case has been made for why intBitWidth or wordBitWidth is needed, merely a statement that this is needed because Foundation uses architecture lists to determine which size to use. It seems that in all of those cases, it's pointer size that they are checking for, and no concrete examples have been given for intBitWidth and wordBitWidth. Did I just happen to completely gloss over that?

For intBitWidth, see https://developer.apple.com/documentation/swift/int, which clearly indicates that Int is supposed to be sizeof(void *); doesn't it seem that intBitWidth would therefore be superfluous?
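
As a quick sanity check of that documented relationship, this assertion holds on today's supported targets:

// Int's width matches the pointer width on all current Swift targets.
assert(Int.bitWidth == MemoryLayout<UnsafeRawPointer>.size * 8)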

1 Like

This actually makes me wonder: in our brave new ASTGen world, will #if be evaluated in the parser or in ASTGen? @rintaro @jansvoboda11

In any case, I wonder if we could support only lookup of the size of types in a different module. That would suffice to get the size of Int or CGFloat without needing to fully parse the surrounding code. We could even require the name to be fully qualified so that it's not sensitive to imports or anything else in the current module:

#if sizeof(Foo_is_16.Foo) == 8    // No way to create circular shenanigans

2 Likes

I'm not sure whether #if is currently evaluated in Parser, but if so, we'll move the evaluation to ASTGen. The idea is that the parser should be strictly concerned only with parsing and any semantic actions should be done at a later stage.

2 Likes

Looking through swift-corelibs-foundation for uses of #if arch, it seems that most cases are not concerned with pointer size as such, but with Int size.

We see it in Data, in NSNumber, in NSRange and in Scanner.

Looking through Github projects, #if arch seems to often be used in tests for testing precision or maximum values.
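
For reference, the pattern in question looks roughly like this (a representative sketch, not a verbatim quote from any of those files):

#if arch(x86_64) || arch(arm64) || arch(powerpc64) || arch(powerpc64le) || arch(s390x)
let maximum = Int(Int64.max)  // the arch list is standing in for "Int is 64 bits"
#else
let maximum = Int(Int32.max)  // 32-bit Int path
#endif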

I'm all in favour of having separate conditionals for Int size and pointer size, and possibly other word sizes as well.

The APIs on Unsafe*Pointer and Int were designed assuming they're the same size, so it'd be a bigger project to separate them.
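
For instance, offsets and counts throughout the pointer APIs are spelled Int, as this small illustrative sketch shows:

let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: 16)  // capacity: Int
defer { buffer.deallocate() }
let offset: Int = 8
buffer.advanced(by: offset).pointee = 42  // advanced(by:) takes Int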

1 Like

If that is the assumption built into Swift, we should perhaps just expose a corresponding conditional. Switching on a growing list of architectures handles neither case.

Meanwhile, TSPL states:

Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size

2 Likes

Yeah. TSPL is the one out of line here; when we hit our first platform with pointers different from "native word size" (arm64_32), we said Int matches pointers.

2 Likes

"Word" has never been an unambiguous term, which is one reason why we don't normally use it in the library or in documentation.

Pointer size is also a surprisingly complex concept: in addition to targets like arm64_32 or x32 with artificially restricted address spaces, there are interesting targets (such as GPUs) with non-unified address spaces, as well as various proposals to use fat pointers for security hardening (which can mean that the range of pointers can be substantially smaller than their storage size).

Regardless, Swift makes an assumption in several places that the target has an acceptable generic address space which class references, UnsafePointers, etc. must fall within. That choice of address space is a core and immutable aspect of the definition of the target, and if we wanted to support e.g. wide pointers on arm64_32 then we would need to provide new language mechanisms to access that wider range.

I have always considered Int to be tied to the range of that generic address space, and I am fairly certain that the library team agrees, and that is reflected throughout Swift's API. I don't know if we've ever said anything about that officially, but if we haven't, I'd be happy to run that statement past the rest of the Core Team, and I don't think we'd hesitate about it. Given that, I don't think we should provide redundant ways to query the range of Int vs. pointer size.

7 Likes

I also don't speak for the core team, but FWIW, I wholeheartedly agree with you and +1 this viewpoint. Int really needs to follow the UnsafePointer size, and if a specific target chooses to use a constrained pointer size, that constraint should apply to Int as well IMO.

-Chris

6 Likes