[Returned for revision] SE-0376: Function back deployment

Hi everyone,

The review of SE-0376: Function back deployment has concluded and the language workgroup has elected to request the following revisions based on the community's feedback:

  • Change the spelling of the base name from @backDeploy to @backDeployed, for better part-of-speech alignment with other attributes.
  • Change the argument label from before: to upTo:, for alignment with the terminology that already exists in Swift for the "upper bound of an exclusive range", a la PartialRangeUpTo. Both changes are illustrated in the sketch below.
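
For concreteness, here is a rough before/after sketch of the requested renaming. The function name and OS versions are placeholders invented for this example, and the exact syntax is whatever the revised proposal settles on:

    // As reviewed:
    @available(macOS 12, *)
    @backDeploy(before: macOS 13)
    public func sendRequest() { /* ... */ }

    // With the requested revisions:
    @available(macOS 12, *)
    @backDeployed(upTo: macOS 13)
    public func sendRequest() { /* ... */ }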

The functionality provided by this proposal was overall well-received, with some minor questions about how best to integrate it into the language.

  • Some reviewers felt that back deployment wasn't appropriate for inclusion in the language proper at all, because it is primarily an Apple platform concern. Since the existing availability system shares this property, the language workgroup believes that officially adopting @backDeploy as a language feature is reasonable.
  • There was also further discussion about whether "back deploy" appropriately describes this feature, and whether we'd want to introduce this terminology to the language surface as opposed to formalizing existing terminology such as @_alwaysEmitIntoClient. The language workgroup feels that "back deploy" is used commonly enough to describe this functionality that it is clear in terms of semantics, and that alternative terminology such as "emit into client" is overly jargon-y.
  • Reviewers additionally discussed the argument label before:, and various alternatives were raised. The language workgroup agrees with the point raised that being clear about the exclusivity of the bound here is important, and so has adopted the suggestion from review of using Swift's existing upTo: terminology to describe an exclusive range (a short example of that convention follows this list).
  • The language workgroup also agreed with the position of the author that it is appropriate to require authors to always specify @backDeploy explicitly even when it could theoretically be inferred from information in the function body.
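
As a point of reference for the upTo: spelling, the standard library already uses that word for an exclusive upper bound; a minimal example using real APIs:

    let numbers = [10, 20, 30, 40]
    let range: PartialRangeUpTo<Int> = ..<2    // exclusive upper bound: 2 is not included
    let head = numbers.prefix(upTo: 2)         // [10, 20]; the element at index 2 is excluded
    let same = numbers[range]                  // equivalent slice, also [10, 20]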

Once the requested revisions have been made, a second review of the proposal will be kicked off to evaluate the changes in naming. As always, thank you to everyone who participated in the review!

14 Likes

I think this needs some elaboration.

The issue is not that it is "primarily an Apple platform concern" - it is that it is exclusively meaningful for engineers creating Apple's SDK libraries. In that respect, it could not be more different to the availability system.

Joe Programmer, writing code which will run on an Apple platform, needs to know about availability. They may need to conditionalise calling certain APIs using if #available, or annotate their own APIs with @available if they absolutely depend on an availability-limited platform API.
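
For concreteness, that is the full extent of Joe's contact with availability; the API and versions below are placeholders:

    // Conditionalising a call to an API that only exists on newer systems:
    if #available(iOS 16, *) {
        // call the iOS 16-only API here
    } else {
        // fall back to older behaviour
    }

    // Annotating his own API because it absolutely depends on that newer API:
    @available(iOS 16, *)
    func makeFancyChart() {
        // uses the iOS 16-only API unconditionally
    }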

Joe will never have to write @backDeploy. He has no meaningful way to use that feature, because there is no chance whatsoever that code compiled against a newer interface will load an older version of his library at runtime. That is a problem that only Apple's SDK engineers need to worry about.

In fact, does Joe even care that a function was introduced in iOS 16 but must be emitted into his binary if he targets iOS 15? I don't think so! All Joe needs to know is that the function requires at least iOS 15, and whether that call lands on a copy emitted into his binary or a copy already present on the system is none of his business (the same way he doesn't care about @_alwaysEmitIntoClient today). IMO, things like Xcode's generated interfaces should display the effective availability - the availability context which callers (like Joe) need to ensure before using the function, regardless of where it is physically located.
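
To make that concrete, here is a rough sketch (placeholder names and versions, using the @backDeploy spelling the rest of this thread uses):

    // What the SDK author writes:
    @available(iOS 15, *)
    @backDeploy(before: iOS 16)
    public func fancyNewAPI() { /* ... */ }

    // What a generated interface could show Joe instead: just the effective availability.
    @available(iOS 15, *)
    public func fancyNewAPI()

    // And all Joe ever writes:
    if #available(iOS 15, *) {
        fancyNewAPI()
    }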

So Joe should never see the @backDeploy attribute at all, anywhere. He should never write it, and he should never have to read it. It is an implementation detail of the SDK, expressing a distinction that only Apple's SDK engineers need to worry about. For everybody else, it is entirely meaningless. That's the issue.


But you may say, "well, there's no harm in it. Why not stabilise this feature now?"

Once this becomes a formal part of the language, changes must go through swift-evolution. If Apple's internal needs change, for whatever reason, and they need to make additions to this attribute, that will need pitches, proposals, reviews, and all of that. All future language features will also need to consider how they interact with @backDeploy.

And if Apple's SDK engineers find some part of this attribute's behaviour undesirable and in need of adjustment, that then becomes a source-breaking change! Even though Apple's SDK libraries contain the only source code that should be using it, they will need to consider the "other users" of this attribute, who won't really exist.

Stability is a burden. Not, like, in life/in general (although....), but at least in this case. This is an implementation detail of Apple's SDKs, Apple is a very large organisation deploying code to many different platforms of all shapes and sizes, and it is a reasonable expectation that their deployment needs and back-deployment abilities may change with time.

Unless I am mistaken, the interfaces in Apple's SDKs are not guaranteed to work with any compiler except the one that ships as part of the SDK. If this were not a formal language feature, Apple's SDK engineers would be free to make additions or breaking changes to this attribute at any moment, as their needs determine, and those changes roll out in a new version of the SDK with a new compiler that supports them. And nobody else needs to care.

I can't understand why Apple's engineers would sacrifice that flexibility - for what?


So yeah, I think this needs some elaboration. As far as I can tell, making this a formal language feature would be undesirable for all parties. Why does the language workgroup disagree?

3 Likes

We discussed this at some length in the meeting; I started out thinking this way, but I’ve come around. In an ideal world Jane Programmer has no reason to care, but there are all sorts of things that can happen that make it relevant to her.

The one that has the most impact to my mind is understanding the different behavior modes if an SDK bug gets fixed. On back-deployed targets, she has to recompile her app to see the fix, but on new targets she gets the fix automatically.
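
Roughly speaking, the client side behaves like this hand-written approximation (placeholder names and versions; this is a conceptual sketch of the dispatch the proposal describes, not actual compiler output):

    // The SDK function, whose own copy only exists in the library from iOS 16 onwards:
    @available(iOS 16, *)
    public func fancyNewAPI() { /* library implementation */ }

    // Conceptually emitted into Jane's app when she uses the back-deployed API:
    func fancyNewAPI_fallback() {
        // a copy of the body from the SDK Jane built against;
        // an SDK bug fix only reaches this copy when she rebuilds her app
    }

    func fancyNewAPI_thunk() {
        if #available(iOS 16, *) {
            fancyNewAPI()            // new targets: the library's copy, fixes arrive with OS updates
        } else {
            fancyNewAPI_fallback()   // back-deployed targets: the copy frozen into the app
        }
    }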

There are a whole host of other “understanding what happened” and debugging scenarios that make this attribute very relevant to her, even though she will never use it on her own code.

12 Likes

I see. That's an interesting point, thanks for mentioning it.

I agree that these sorts of debugging scenarios can be difficult, and it is a strong argument. Then again, in order to actually get those benefits of improved understanding, Joe/Jane Programmer would need to know exactly when functions are emitted into their binaries. Back-deployed functions can offer more certainty about that, but of course there are other, even more common sources of functions being emitted into binaries:

  • Any use of @inlinable code leaves emission up to the compiler's optimisation heuristics. Maybe it inlined a version of a function including a bug, maybe it didn't and you get the system's copy (including any fixes), maybe it inlined it in some places but not others, etc. (a short sketch follows this list).

  • Libraries (including closed-source libraries) may use @inlinable/back-deployed code from the system, so their behaviour may change, and the conditions under which it changes can be similarly obscure.
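
For example (the library and function here are invented), with @inlinable the outcome is entirely up to the optimiser:

    // In some system library built with library evolution:
    @inlinable
    public func clampToPercent(_ value: Double) -> Double {
        min(max(value, 0), 100)
    }

    // In the client: the optimiser may copy the body above into the app
    // (freezing whatever behaviour the library had at compile time), or it may
    // call the library's exported copy (picking up later fixes), and it may
    // make different choices at different call sites.
    let percent = clampToPercent(123)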

Considering that, I wonder how much tangible benefit there truly is, or whether it is more of an abstract/theoretical improvement.

Moreover, I wonder if it's even desirable to make that kind of promise about how exactly functions will be back-deployed. One could imagine, for instance, that while building an SDK library, the compiler could collect all back-deployed functions and emit them in a separate binary. That separate binary could be a shared library which does get updated. That would reduce the size impact of using back-deployed functions, and allow important bugs/security issues to be patched.

I think flexibility like that is also very important - but to get it, we'd need to not promise that back-deployed functions get emitted into the caller's binary. They might, or they might not, or it might change in a future version of the SDK - again, really more of an SDK implementation detail.

In addition to what Steve said, I think it's a very plausible future direction for Swift to extend all the logic around deployment targets and dynamic version testing to other libraries besides just the target OS. That shouldn't really be a difficult generalization; it's just not something that we've prioritized in development so far, because it's relatively uncommon to have that kind of boundary with anything except the OS. But if somebody working on, say, the binary plugin SDK for UltimatePowerApp wanted to teach the compiler how to gate code on the availability of UltimatePowerApp 2025 Edition (in some reasonable way that didn't require the compiler to hard-code knowledge of their app), there's no reason they shouldn't have the same set of tools available to them as an OS vendor.

8 Likes

Yeah I know, I'm just in favour of crossing bridges when we actually get to them.

Plugin development in particular is something that has changed a lot in recent years. I think it's generally preferred to have them running in self-contained processes these days, and for communication to occur over a more robust IPC interface. That's how we got things like the Language Server Protocol, and it's what SwiftPM's plugins use, and it appears that the compiler's macro plugins will take a similar approach.

I wouldn't presume that binary plugins are still especially relevant in 2022+. They may be, but I wouldn't call it obvious. There may be other things we could do that would be more useful to support application plugins (probably around distributed actors).

1 Like

Well, it's an interesting extension area. What you're saying about loading plugins directly into processes being disfavored is absolutely true, but that doesn't actually mean that plugin APIs don't evolve over releases; it just means that checking for dynamic availability can't just mean checking some sort of metadata in the current process.

If I wanted to have a stable out-of-process plugin SDK for UltimatePowerApp, I'd probably have a package offering a Swift API for what you can do over IPC. Some of the APIs in that package would presumably only work with newer releases of the application, and it'd be nice if clients of my SDK had to write code to account for that. What they care about isn't the version of the SDK package they're building with, but the version of the application that's available at runtime. But it still makes sense to do the following (sketched roughly after the list):

  • be able to mark specific APIs as e.g. @available(UltimatePowerApp 2025)
  • test for whether the new APIs are available with e.g. if #available(UltimatePowerApp 2025)
  • build a specific plugin with a minimum deployment target of e.g. UltimatePowerApp 2022 (which hopefully is also written down in the plugin metadata so that UltimatePowerApp 2019 knows it can't load that plugin)
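
Pulled together, a plugin against that hypothetical SDK might read something like this. None of this compiles today: UltimatePowerApp, its version numbers, and availability domains other than OS platforms are all invented for the sake of the example:

    // Hypothetical: an SDK API gated on a newer release of the application.
    @available(UltimatePowerApp 2025)
    public func configureNewTimelinePanel() { /* talks to the app over IPC */ }

    // Hypothetical: inside a plugin built with a minimum deployment target of UltimatePowerApp 2022.
    func setUpPlugin() {
        if #available(UltimatePowerApp 2025) {
            configureNewTimelinePanel()    // only reachable when a 2025-or-newer app is on the other end
        } else {
            // degrade gracefully against UltimatePowerApp 2022-2024
        }
    }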

The dynamic test used by if #available would have to send some sort of query over the IPC connection (which presumably would then be cached on the client side). But all the concepts are still relevant.

8 Likes