Conditionally compile code based on SDK availability

This is not a pitch, in the sense that I’m not proposing an exact language syntax. But I, and apparently many others, have a need that comes up often, especially during the yearly beta periods: there needs to be a way to gate code that uses new platform features so that it also keeps compiling in older Xcode.

Here is a very specific example which I ran into myself. In the 2026 platform versions, the CloudKit enum CKShare.ParticipantRole adds a new case, administrator. Here is a piece of code that compiles cleanly in Xcode 16, but produces a warning in Xcode 26.
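
Something along these lines, where each … stands in for the real handling code:

switch shareParticipant.role {
case .owner: …                  // … is a placeholder for the actual handling
case .privateUser, .unknown: …
case .publicUser: …
@unknown default: …             // newly added cases land here at runtime
}

The warning in Xcode 26 is because the known case .administrator is only handled through @unknown default.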

What I’d like to do is to put the new case handling behind some conditional compilation. And indeed, it actually is possible with this hack.
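
Roughly like this, leaning on a framework that only exists in this year’s SDKs:

switch shareParticipant.role {
case .owner: …
case .privateUser, .unknown: …
case .publicUser: …
#if canImport(FoundationModels)   // only true when building with the OS 26 SDKs
case .administrator: …
#endif
@unknown default: …
}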

#if canImport(FoundationModels) is an awkward SDK check for this year’s platforms. I’d just like to have a better, cleaner, more obvious check in that place, one which more clearly expresses my intent of checking for the SDK that has this new enum case available. But yes, this is pretty much how I would like to write it.

I think it is a reasonable requirement to have code that keeps compiling cleanly and without warnings in both the current and the n-1 major version of Xcode, working correctly against the previous platform versions in Xcode n-1 while providing access to the new platform versions in the current Xcode. Adding cases to platform enums is a good example/test case of this, and it happens quite frequently throughout the platforms.

Resources:

  • Mastodon thread where @ole suggests some more ways to do the availability checking
  • Previous forum threads on this: 1, 2, 3, 4

For reference, here are the other alternatives for conditional compilation checks I suggested:

  1. Use #if canImport with a new framework Apple has added to this year’s SDK (as shown by @JaanusK above):

    #if canImport(FoundationModels)
    case .administrator: …
    #endif
    

    Downside: the code’s intent is not clear.

  2. Use the compiler version as a stand-in for the SDK version:

    #if compiler(>=6.2)
    case .administrator: …
    #endif
    

    Downside: the code’s intent isn’t clear either. And it might break if Apple happens to release macOS 26 later than iOS 26, so that the final Xcode 26.0 ships with the 6.2 compiler for all platforms but with the old SDK for macOS. This has happened in previous years.

  3. Use the unofficial #if canImport(…, _version: …):

    #if canImport(CloudKit, _version: 2300.0.0)
    case .administrator: …
    #endif
    

    Pros:

    • The clearest way to show the code’s intent (though you may have to document the significance of the version string: "2300 means the OS 26 SDK").

    Cons:

    • Uses an unofficial feature (no big deal though because it has no impact on the resulting binary).
    • You have to go spelunking in the .swiftinterface files in the SDK to find the correct version. Example: I found the relevant CloudKit version in /Applications/Xcode-26.0-beta.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/System/Library/Frameworks/CloudKit.framework/Modules/CloudKit.swiftmodule/arm64e-apple-ios.swiftinterface

I vaguely remember a proposal for #if available syntax (kind of like if #available to check platform version number, but at compile time). I can't find the proposal and I'm somewhat wondering if it existed at all, but it seems like that would work for this case?

switch shareParticipant.role {
// other cases

#if available(iOS 26, *)
case .administrator: // stuff
#endif

@unknown default: // stuff
}

#if available is the topic of the first of the previous forum threads linked at the bottom of the original post.

This comes up every beta season at my company. This year we are using #if canImport(FoundationModels), last year we used #if canImport(Synchronization), and the year before that yet another thing. Lucky for us, Apple has introduced a brand-new framework each year, but what if there is no such new framework next year?

We would really appreciate a proper SDK check.

Side Observation on Inclusivity

Ehh… I would generally not encourage you to refer to engineering you consider to be worse or unclean as "ghetto". Possibly there are some cultural artifacts here depending on where you come from… but for engineers working out of California it can come across as not being fully inclusive. Maybe try something like "suboptimal" or "awkward" or even "clunky".


I totally agree, but the workaround I found is to use an xcconfig file:

OTHER_SWIFT_FLAGS[sdk=iphoneos26.*] = $(inherited) -D CAN_USE_IOS_26_FEATURES
OTHER_SWIFT_FLAGS[sdk=iphonesimulator26.*] = $(inherited) -D CAN_USE_IOS_26_FEATURES
OTHER_SWIFT_FLAGS[sdk=macosx26.*] = $(inherited) -D CAN_USE_IOS_26_FEATURES

And then you can check it in Swift with

#if CAN_USE_IOS_26_FEATURES
// Do Something
#endif

We use this as well, but you cannot use it for Swift Packages. It only works in Xcode targets.

IMO, if #available(iOS 26, *) should just work.

I know that’s supposed to be a runtime check, but I don’t think this check should ever evaluate to true at runtime when compiled against the iOS 18 SDK. If that assumption is correct, this is equivalent to if false, and the optimizer will remove the code anyway.

Could even be a compiler switch --max-deployment-target "iOS 18".

This would make it impossible to build apps with the latest SDK, use newly introduced APIs, and back-deploy those apps to older operating systems. It would defeat the purpose of @available and if #available.

This problem has a range of possible solutions with different tradeoffs:

A compile-time SDK version check directive

Adding support for some directive like #if sdk(iOS >= 26) to the language would be the most straightforward solution to the original problem that many developers face when they ask for this functionality. I think it would also be relatively simple to design and implement. However, the design wouldn't scale in a few ways that seem important to me:

  • Different beta versions of the SDKs can't be differentiated this way since they don't have distinct version numbers.
  • Similarly, for the developers who work with the SDK as it is being developed, it's not flexible enough since the in-development versions of the SDK also don't have distinct version numbers that can be tested this way.
  • It doesn't scale to handling source compatibility with different versions of dependencies that are independent of an SDK (like package dependencies).
  • It's not clear to me whether it would generalize well as Swift support for additional platforms grows.
  • For code that needs to handle the same source compatibility difference in the same dependency but across many different distinct SDKs, the syntax could become very cumbersome since each SDK would need to be checked independently.

A compile-time module version check directive

We could also formalize the existing #if canImport(Module, _version: 1.2) syntax into something like #if canImport(Module >= 1.2). This finer-grained check solves many of the issues I enumerated for a coarse-grained #if sdk(...). For example, these checks work regardless of where the dependency comes from. They also work better for betas and in-development SDKs, because the versions of individual modules do change between intermediate releases of the SDKs. It also has the advantage of being completely implemented already and just needing an official syntax. However, it has some limitations:

  • It's a bit cryptic, since the -user-module-version of an SDK dependency is not usually a concept that developers otherwise interact with.
  • Finding the right module version to test against is cumbersome (perhaps tooling could help with this though).
  • It's still not fine grained enough to succinctly handle concurrent in-development branches of the same dependency. For example, the same new API could be introduced simultaneously in versions 1.1.51 and 2.0.34 of the same dependency. If your code must be compatible with both branches of the dependency as it is under development then checking for #if canImport(Module >= 1.1.51) would be insufficient, since not all versions >= 2.0 have the API.

Despite the problems, formalizing this syntax would be a fairly pragmatic solution to the problem that doesn't preclude better solutions in the future.
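
For the running CKShare example, that formalized spelling might read something like this (hypothetical syntax; 2300 is the CloudKit module version of the OS 26 SDKs mentioned earlier in the thread):

#if canImport(CloudKit >= 2300.0.0)
case .administrator: …
#endif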

A compile-time declaration presence check

Rather than checking for versions at all, we could allow developers to be more precise about exactly what they're trying to do, which is usually to use a new declaration that doesn't exist in all versions of the dependency. This could be spelled like

func foo(_ s: SomeStruct) {
#if hasDeclaration(SomeStruct.someMethod())
  s.someMethod()
#endif
  // ...
}

I think this is clearly the most conceptually powerful solution and solves problems that the other solutions can't. It also has the most complexity and the biggest open questions, though:

  • It could only work for declarations that come from dependencies and not code in the same module, since checking for the presence of declarations in the same module would imply a cycle in the compiler's parsing and type checking stages. It's possible that this should be spelled like #if canImport(Module, declaration: ...) to make this limitation evident.
  • It's not clear to me how every kind of declaration that one might want to test the presence of would be spelled.
  • Similarly, disambiguation would be a challenge. How would you spell a reference to an overload that only differs from another declaration on return type, for example?
  • It would probably be easy to misspell the check and wind up accidentally never compiling the code you wanted to compile.
  • The implementation may be fairly involved on a number of dimensions. I'm not going to try to enumerate them because I think it's more valuable to decide first whether this is the best option and then talk about how much effort it is.

we could allow developers to be more precise about exactly what they're trying to do, which is usually to use a new declaration that doesn't exist in all versions of the dependency

What a developer like me is trying to do is to keep the code idiomatic, working, and compiling warning-free under several SDKs.

Using a new declaration is one case of that, but it’s not the only case. Remember that the original problem was a new SDK introducing a new case to a platform-provided enum. I’m not sure if it counts as a new “declaration”.

Another important case to consider is things (declarations, enum cases) getting deprecated. So under the new SDK I would not want to use some API that I was using previously. Should there then also be #if hasDeclarationAndItIsNotDeprecated(SomeStruct.someMethod())? I can easily imagine branching my code based on an SDK version check, but I’m not sure what the right checks would be to verify that some case exists and is not deprecated.
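
Using today’s underscored check from above as the SDK gate, with hypothetical oldEntryPoint()/newEntryPoint() stand-ins, that branching might end up looking like this:

#if canImport(CloudKit, _version: 2300.0.0)
newEntryPoint()   // the old entry point is deprecated as of the OS 26 SDK
#else
oldEntryPoint()
#endif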

Going back to my original example, here’s a bit of vibing. Let’s imagine we have a more targeted check for enum cases, something like this…

switch shareParticipant.role {
case .owner: …
case .privateUser, .unknown: …
case .publicUser: …
#if hasDeclaration(CKShare.ParticipantRole.administrator)
case .administrator: …
#endif
}

This is a clear expression of intent, but a bit wordy and redundant. I could imagine annotating enum cases like this:

@ifAvailable
case .administrator: …

But that is probably a runtime thing and not conditional compilation.


An enum case is a declaration. I can't think of anything that someone would call a new API that isn't categorized in the language as a declaration. However, I think this is a good point:

To generalize this further, there are a number of ways in which existing declarations in a dependency might be modified in a new version of that dependency and have an effect on compilation. For example, an existing declaration might be updated to be Sendable, and that difference might cause you to want to structure your code differently. I think this is a good argument that even if we were to introduce a syntax for detecting the presence of a declaration with a particular name at compile time, that feature ought still to be complemented by something that allows you to check the version of a module at compile time, which is the most essential and flexible way to differentiate. Adding different flavors of compile-time checks like hasDeclarationAndItIsNotDeprecated doesn't strike me as a great direction because it would scale poorly.
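
As a rough sketch of that kind of difference, using the proposed formalized spelling and a hypothetical SomeModule whose SomeType becomes Sendable in version 2.0:

#if canImport(SomeModule >= 2.0)
// SomeType is Sendable from 2.0 on, so a shared global constant is fine.
let shared = SomeType()
#else
// Against older versions of the module, confine the shared instance to the main actor instead.
@MainActor let shared = SomeType()
#endif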

In my opinion, any syntax that is going to be used to detect a compile time difference in order to entirely exclude some code in a file from compilation (because that code cannot be compiled) should involve some condition that can be checked with #if. I think introducing a new style of syntax that can have the effect of excluding regions of a source file from being part of the AST would need to be very well justified.

I feel that we should just formalize #if canImport(Module, _version: ...) because it was designed to solve this problem. It just needs some syntactic polish and confirmation that it is a feature we really want to elevate. While it isn't the most elegant tool, I think it's flexible and powerful enough that we should just consider it a supported feature so that people can reach for it without worrying about its "unofficial" status.

I also think there's still room for a declaration presence check even if we formalize module version checks. One of the flaws of #if canImport(Module, _version: ...) that I forgot to enumerate above is that sometimes folks want to be able to conditionalize code that uses a new declaration, but they don't have a good way of knowing for sure which version of the module is going to be the first to contain the new declaration. This comes up frequently in my work as a reason why #if canImport(Module, _version: ...) is insufficient, but I feel like that indicates we need a suite of tools, rather than one syntax that works for everything.


Is there a reason that canImport itself could not be (as a future direction) extended to cover declarations just like import itself already does—e.g.: #if canImport(func POSIX.isatty)?


That absolutely seems like a plausible future direction, but I'm unsure if that syntax is sufficiently expressive:


Sure, and import func has a similar limitation—we could imagine both being extended to allow optional argument labels, parameter types, return types, etc.


Would it? When using the latest SDK, nothing would change at compile time (or runtime) since the compiler would know this test might be true (or false).

Maybe I am wrong, but even when back-deploying you need to use an SDK that knows about the API, so you would have to use SDK 26 anyway.

The only case I see where compiling would fail is when you use an API in your code that was marked @available(iOS 18) after SDK 18 was released. This API would be backdeployed when compiling against SDK 26 but would fail to compile when compiling with SDK 18.


Perhaps another reason that supports this argument: I vaguely remember that in the time before Swift, Apple actually recommended doing availability checking on a per-symbol basis instead of checking against framework versions. Example: Apple doc archive > SDK Compatibility Guide > Using SDK-Based Development. (This was for runtime availability checks, not compile-time, but I think the point still stands.)

When they introduced availability checking for Swift, they switched to version-based checks, which I found odd at the time because the symbol-based checks seemed so much clearer. If I remember correctly, one of the reasons given for preferring version-based checks was that some "new" APIs may have already existed in past framework versions, but weren't public. So a symbol-based check for this newly public API could also be true on older SDKs, which is probably not what you want.

I don't know if the same reasoning would apply to compile-time symbol checks in Swift (e.g. does the mangling change when an internal declaration is made public?), but maybe it does.


You're right, I misunderstood what you were saying and this wouldn't really interfere with back deployment. I think there are some more fundamental problems with the idea, though.

First, it conflates parse-time and runtime conditional syntax. Like I said in another post above, I think that allowing syntax other than #if to cause code to be conditionally compiled would need to be very well justified. We don't expect the code inside of an if false { } condition to be ignored by the type checker, even if it would sometimes be convenient for it to work that way, because if is semantically a runtime check. Having the compiler sometimes evaluate if statements before type checking, but only when they happen to be evaluating conditions that can be evaluated at parse time, would be too surprising in my opinion.

Even if we did decide that it would be ok to turn if into a syntax that sometimes functions as pre-processing directive and other times as a runtime check, though, treating these availability checks as if they were known to be false at runtime wouldn't be sound. The syntax if #available(iOS 18, *) represents an inequality which you can read as "if the version of iOS at runtime is at least 18, then ...". In order for the compiler to eliminate that branch entirely at compile time, we would need to statically know an upper bound on the version of iOS. The version of the SDK that you are compiling against does not bound the version of the OS that an app will run on at runtime at all, though. An app built against the iOS 18 SDK may run on iOS 17 and also iOS 26 - there is no restriction implied by the SDK version. The only bound that is ever known for the version of the OS at runtime is specified by the deployment target, which is a lower bound.

I get why it is appealing to reuse the existing if #available syntax to "do what I probably would want it to do" when compiling against older SDKs, but I don't think it's a principled enough design.