Windows x86_64 is an LLP64 environment: long is 32 bits, while long long and pointers are 64 bits. This has created a small snag now that the Windows port is far enough along to start doing interesting things. In particular, I am porting Foundation to Windows and hitting cases where we create CFOptionSets with UInt, which actually ends up being 32 bits instead of 64 bits. IIRC, this occurs due to the way that CLong is imported. How do we want to handle this to make code easier to port across targets? It seems unfortunate that Swift did not opt to standardize on size-specific integral types.
When I encountered this, I handled it by just casting the value to the inferred RawValue type using the init(bitPattern:) initialiser. It's a little verbose, but it isn't too terrible a solution.
I do remember it getting particularly ugly with enums. I think that constants (e.g. enum cases) are imported as Int32, whereas the RawValue type for the enum is imported as UInt32 (or possibly the other way around), meaning I had to do this sort of dance:
The enumeration case makes sense. Enumerations are signed by default on Windows, whereas they are unsigned on Linux and macOS. This is going to get more interesting later, I think.
I'm afraid I don't follow what you are suggesting for the cast. That is one of the more annoying things to fix. The hashing breaks due to the type mismatch for __CFHash: the return type, unsigned long, is not treated as conforming to _Pointer (since there is no matching type).
Well, it isn't the Foundation API that would need to change there, but the CoreFoundation APIs. Even that won't help too much, I'm afraid: the importing issues will come up due to behavioural differences in the ABI. We need to figure out how to deal with the signedness conversions as well as the violation of the assumption that Int and UInt are really equivalent to intptr_t and uintptr_t.
I'm not well versed in this area of Swift, so forgive the basic question: why isn't CLong simply an alias for a different type for LLP64 environments? That is, preserve the relationship that long is imported as CLong and that intptr_t is equivalent to Int; for LLP64 environments, make typealias CLong = Int32.
I think I misunderstood what the problem was. What I meant is that you can use .RawValue on RawRepresentable types rather than hard-coding the type in Swift code, to work around differences in how those types are imported.
That assumption should hold even on Windows x64. intptr_t should be equivalent to Int and uintptr_t to UInt on every platform – is that not the case?
That's exactly what the current setup is. Most of it was set up in this PR, and there's some relevant discussion on it as to the best approach for consistency between Win64 and Win32.
Hmm, maybe I misinterpreted something? But, I am definitely seeing a mismatch between Int and Int64 when targeting x86_64-unknown-windows-msvc. This makes me suspicious of the mapping that we have currently.
C:/Users/user/source/repos/swift-corelibs-foundation/Foundation/NSSwiftRuntime.swift:125:53: error: cannot convert value of type 'Int' to expected argument type 'Int64'
return CFHashCode(bitPattern: (cf as! NSObject).hash)
Usually CFHashCode would be defined as unsigned long and therefore imported as UInt. UInt has an init(bitPattern: Int), and since hash is defined as var hash : Int, that normally works.
On Windows, CFHashCode is imported as UInt64, since unsigned long long maps to UInt64. UInt64 has a bitPattern init only for Int64, and not for Int (Int and Int64 aren't implicitly convertible and are different types in Swift, even though they have the same layout). That's why it's prompting to convert the Int to an Int64 (which would be a no-op).
If CFHashCode were instead defined as uintptr_t in C things would (hopefully) work as expected.
That doesn't seem like it would be better to me – it's just trading off one sort of breakage for another. The current implementation effectively means "use machine-width types if you want them to be exposed to Swift as machine-width types". If long long mapped to Int, that means that explicit 64-bit types in other libraries would be Int on Win64 and Int64 everywhere else. That seems like a worse outcome IMO, but I'll let others weigh in.
For Apple platforms, the Foundation APIs use NSInteger which is a pointer-sized integer type, and I suspect that’s the intent for these CF interfaces as well, so they ought to import as Int/UInt independent of the C type system for the host platform. The Clang importer has a table of special typedef-importing rules that we could add the CF typedefs to so that they import consistently with how they do on LP64 platforms. It’s also worth asking whether the corelibs version of CF can be modified if necessary to use consistently sized integer types on Win64—it probably wasn’t written with LLP64 in mind and, if it’s using long, really means pointer-width.
It seems you and @Torust are saying the same thing, then? The relevant code reads as follows:
#if __LLP64__
typedef unsigned long long CFTypeID;
typedef unsigned long long CFOptionFlags;
typedef unsigned long long CFHashCode;
typedef signed long long CFIndex;
#else
typedef unsigned long CFTypeID;
typedef unsigned long CFOptionFlags;
typedef unsigned long CFHashCode;
typedef signed long CFIndex;
#endif
...and you're both saying the problem could be addressed if the LLP64-specific typedefs were simply changed?
Those typedefs look correct. We should have the Clang importer recognize these typedefs and map them to Int/UInt, like it does for NSInteger, size_t and other typedefs that are intended to be pointer sized. It’s likely we haven’t had to for these CF types yet only because nobody’s tried to use CF on an LLP64 platform with Swift yet.
So, this certainly did help a lot. However, I seem to have come across an interesting behavioural change: CF_ENUM types over CFIndex now do not import properly, as the imported type is not believed to conform to BinaryInteger, which means that certain conversions cannot be performed. This is a problem, for example, in Stream.swift in Foundation, where they do:
I'm not sure why that would ever work – is there some sort of implicit conversion from a C enum to that C enum's RawValue type that's not kicking in? I wouldn't expect to be able to do that for enums defined in Swift.
Yes, there is a conversion that is supposed to kick in for CF_ENUM. And yes, that workaround does work, and it is what I had, but it really would be better if this just mapped as expected: it avoids unnecessary changes in swift-corelibs-foundation.
I'm not sure what you mean by "there is a conversion that is supposed to kick in for CF_ENUM". This really does seem like it shouldn't work. CF_ENUMs don't conform to BinaryInteger on Linux or Apple platforms either. It'd be worth doing a -debug-constraints run on Linux to see what overload it's picking for that initializer.