SE-0260: Library Evolution for Stable ABIs

The proposal states (in bold nonetheless):

This build mode will have no impact on libraries built and distributed with an app. Such libraries will receive the same optimization that they have in previous versions of Swift.

What does this mean? This sounds like it's saying that something about the compilation and distribution process must disable or bypass this mode, even if the user explicitly states they want to build their libraries with this mode. Or, is the message that the build system (which build system?) won't pass this flag in this situation? That is, are you requiring new behavior for all Swift build systems, or addressing concerns about whether the existence of this mode may negatively impact an app?

I would certainly hope so. To be clear, what I'm discussing is not a plan-of-record; I'm laying out something that I think we need to be planning for. I think it ought to be part of this proposal that we agree on the fundamentals of some of these future plans (with the understanding that such plans are always subject to revision).

Under my plan, the expectation for a non-ABI-stable binary library would be that everything downstream of the library would have to be recompiled if you updated the library, but that the downstream compiler version would not have to be the same as that used for the library.

Your plan makes sense and provides a clear rule to follow. Do you have any guess as to when it will be possible to build and distribute such a library? In the meantime, it sounds like all binary-distributed libraries will need to use -enable-library-evolution (assuming this proposal is accepted), is that also correct?

Stable interfaces for binary libraries are something we're still working on in general; I wouldn't want to speculate.

This was there to avoid people freaking out about the proposal imposing new requirements on things like Swift packages and built-from-source frameworks in current Xcode projects. We probably shouldn't have made it so alarming; it's just saying that if a library author does not opt into this mode, they don't get any of the flexibility or the costs associated with it.

Right, maybe phrase it as "This proposal will have no impact on libraries that don't use the new build mode, such as those built and distributed with an app".


I've been putting off responding to the "frozen types layout" question because it's complicated. :-) @Slava_Pestov and I thought about a very similar problem quite a bit, so I'm going to explain that problem first and then try to connect the two.

Here's the similar problem. Pretend that CoreGraphics, an Apple framework, forgets to mark CGPoint, a simple struct containing two CGFloats, as frozen. This would be terrible because we do math on points all the time, as well as perhaps storing big collections of them for things like "drawing bézier curves" or "tracking mouse movement". So in dishwasherOS 2, CGPoint gets this new "frozen-after-the-fact" attribute.

// Fake syntax not being proposed, do not criticize
@frozen(dishwasherOS 2)
public struct CGPoint {
  public var x, y: CGFloat
  // …
}
Okay, great. This has a few effects:

  • Because CGPoint wasn't frozen from the start, any existing functions with CGPoint parameters or return values still have to pass them indirectly. This isn't terrible—it's incredibly common in C++ with templates that always take references—but it's a little unfortunate.

  • Any new API introduced in CoreGraphics in dishwasherOS 2 can avoid this indirection in theory. After all, it's never existed in a world where CGPoint wasn't frozen.

  • App targets and packages built from source can avoid this indirection in their own functions if they have a new enough deployment target.

  • There's still lots of other benefits here that don't have to do with parameter passing: any manipulation of local variables can be done in registers if the struct is small enough, Array iteration doesn't need to ask what the stride is at run time, etc.

Now consider a library that uses CGPoint that's not shipped with the OS, but still has library evolution support turned on, i.e. it's important to maintain binary compatibility.

// MoreGraphics.swift
extension CGPoint {
  public var distanceFromOriginSquared: CGFloat { x*x + y*y }
}

Here's the weird thing. If MoreGraphics has a minimum deployment target of dishwasherOS 1, distanceFromOriginSquared has to operate on the CGPoint indirectly. But changing your minimum deployment target should also not break binary compatibility, if you're vending a binary framework, meaning that even with a minimum deployment target of dishwasherOS 2, distanceFromOriginSquared still has to operate on the CGPoint indirectly. What you really need is "what was my original minimum deployment target", which we don't currently have an option to provide.

It actually gets weirder. If I had written this instead

// MoreGraphics.swift
extension CGPoint {
  @available(dishwasherOS 2, *)
  public var distanceFromOriginSquared: CGFloat { x*x + y*y }
}

then distanceFromOriginSquared could potentially manipulate self directly. Why? Because now there's a guarantee that there aren't any clients older than dishwasherOS 2, and there never were—because that would have been a breaking change. But that means making a distinction between "module-wide minimum deployment target" and "availability annotations on individual declarations", when normally we don't bother to do that.

(This also applies to the far-future potential feature of versions that are independent of the OS, discussed in docs/LibraryEvolution.rst.)

Everything that applies to MoreGraphics applies to inlinable code in CoreGraphics itself, by the way, since inlinable code can't assume it's going to be run against the same version of the library. Essentially, the minimum deployment target of a library with evolution support doesn't mean anything to its clients.

So, to close out, everything gets the benefits of the frozen-after-the-fact except for API that already exists in libraries with evolution support—both the library that defines a type and other libraries that use it.

Changing frozen layout has similar issues. Any struct that's explicitly frozen now can't take advantage of the new frozen layout without breaking its binary interface, but neither can we bump the default based on a particular compiler version or minimum deployment target, because people change those for other reasons and that shouldn't break binary compatibility. So we are boxing ourselves in a little.

That said, I don't know if this is ultimately important enough to matter. Yes, the C layout algorithm isn't very good, but at the time when you freeze your type, you can also reorder your stored properties to minimize padding. That's not a great answer because (1) it means you can't use order for presentation unless you hide your storage behind computed properties, and (2) it's a manual process that we don't make transparent. But it's something. It's only if we expected the default frozen layout to do things like packing Bools into another field's spare bits that this would really become important, and I feel like it's okay to leave tradeoffs like that on the table when not explicitly requested.


I see no mention in the proposal of how reflection APIs like Mirror and MemoryLayout may be affected by this mode. Is the metadata of a non-frozen type still available such that MemoryLayout can provide the type's size, or that Mirror can list private variables?

My presumption is that the information would still be available for non-frozen types, but the compiler cannot use the information to perform any optimisations, meaning reflection capabilities would be unaffected.


Yes, these features are still available on frozen and unfrozen types from a library built with library evolution turned on. To see this, you can experiment with standard library types that already use this feature in the 5.0 compiler. Most types are frozen, but there are a few that aren't (Mirror being one of them).
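A quick way to see what these APIs report, using a local stand-in struct (not itself a resilient type, but the same runtime queries apply to types from evolution-enabled libraries):

```swift
// Hypothetical struct standing in for a library type with private storage.
struct Point {
    var x: Double
    var y: Double
    private var cached: Bool = false
}

// MemoryLayout still answers size/stride/alignment questions at run time.
print(MemoryLayout<Point>.size)    // 17 on 64-bit platforms
print(MemoryLayout<Point>.stride)  // 24

// Mirror still enumerates all stored properties, including private ones.
for child in Mirror(reflecting: Point(x: 1, y: 2)).children {
    print(child.label ?? "_", child.value)
}
```

For a non-frozen type these answers come from runtime metadata rather than being baked into the client at compile time, which is exactly why the compiler can't exploit them for optimization.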


Thank you, this was an excellent response.

That may be true, but I think this needs discussion, not just summary judgment. Let me try to restate your conclusion. We have language features that are required to produce a stable ABI when applied to specific declarations. We can always use the ABI rules from Swift 5 (or whenever the feature was introduced), which might be suboptimal for some declarations if we could've improved the ABI rules in the future, but it's easier to use a single set of rules than to try to get everyone to agree about what the best possible rules were when the declaration was introduced, especially since the latter might require knowing similar kinds of information about other libraries.

To me, this does seem very limiting. Mistakes — or even just conservatism or indecision — in early releases of a library will have permanent and unsolvable consequences for the performance of the library. Similarly, mistakes — or conservatism — in Swift's implementation design will essentially never be correctable, even as applied to APIs introduced far in the future. If this were a necessary limitation, well, so be it. Fortunately, I think there's a solution which allows us to have the necessary information while also addressing a number of related problems:

  • The interface description for an ABI-stable library should include a complete history of the library's dependencies, including when the minimum target versions were bumped.

Now, that would be pretty ridiculous information to have to maintain in source code. But it's already going to be ridiculous to maintain things like "when was this API introduced" and "when was this attribute added" in source code. Therefore:

  • The interface description for the previous version of an ABI-stable library should be an input to its build.

The compiler can then just take the history from the old interface description and append the current dependency information in order to create the history for the new interface description.

This will, of course, also allow the compiler to check that the new interface of the library is compatible with the old interface directly as part of the build. (It should be straightforward to also use this to build a tool which can verify that a new interface description is a valid successor to an old one, which can be used as a deployment safety check that the build was configured properly.)

  • It is not guaranteed to use the same layout as a C struct with a similar "shape". If such a struct is necessary, it should be defined in a C header and imported into Swift.
  • The fields are not guaranteed to be laid out in declaration order. The compiler may choose to reorder fields, for example to minimize padding while satisfying alignment requirements.

This is a huge -1 for me. Currently Swift is very convenient to use for applications like computer graphics, where it's necessary to pass large blocks of structured memory to the GPU. For instance, in Swift I can currently declare a datatype to be used in shaders like this:

struct Vertex {
    let position: Vec3
    let normal: Vec3
    let textureUV: Vec2
}

And then simply pass an array of vertices to the graphics API as an opaque pointer.
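The pattern being described can be sketched as follows; uploadVertices is a made-up stand-in for a real graphics entry point (e.g. glBufferData or MTLDevice.makeBuffer), and the Vec types are plain-float placeholders:

```swift
// Placeholder vector types; a real renderer would use e.g. SIMD types.
struct Vec3 { var x, y, z: Float }
struct Vec2 { var u, v: Float }

struct Vertex {
    let position: Vec3
    let normal: Vec3
    let textureUV: Vec2
}

// Stand-in for a C-style graphics API that consumes raw bytes.
func uploadVertices(_ bytes: UnsafeRawPointer, _ length: Int) {
    print("uploading \(length) bytes")
}

let vertices = [Vertex](
    repeating: Vertex(position: Vec3(x: 0, y: 0, z: 0),
                      normal: Vec3(x: 0, y: 1, z: 0),
                      textureUV: Vec2(u: 0, v: 0)),
    count: 3)

// Hand the array's contiguous storage to the API as an opaque pointer.
vertices.withUnsafeBytes { raw in
    uploadVertices(raw.baseAddress!, raw.count)
}
```

This only works as long as the GPU and the Swift compiler agree on the in-memory layout of Vertex, which is the crux of the objection.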

It is extremely powerful to be able to do this in pure Swift, and breaking this and requiring the overhead of moving GPU-types to C headers will make Swift non-viable for a whole class of use-cases involving computer graphics and GPGPU programming.

In my opinion this change would be a huge mistake for the language unless it were possible to opt out of this feature.

Library evolution is an opt-in feature.

But I will say, your graphics programming example is probably a little unsafe in Swift to begin with. It might be the case now, but I'm not sure that Swift explicitly guarantees how layout is going to be done.

IIRC the general guidance I've seen is that if you absolutely must have a specific layout for structs, then they should be declared in C, especially if you're going to be interacting directly with C functions.


Yeah, I understand that memory layouts are not explicitly guaranteed in Swift. But since C-style layouts are currently reliable, and since this is a "feature" that people are currently taking advantage of (for instance, as discussed by @Torust in this thread), I think it's worth understanding the costs and benefits of breaking it.

As far as the opt-in nature of library evolution, I still think it makes sense to be able to opt out on a per-struct basis. It's not clear to me why libraries declaring datatypes to be consumed by the GPU and libraries using library evolution should be mutually exclusive.

There's precedent for this in other languages which IMO would map fairly naturally to Swift. For example, C# has the [StructLayout(LayoutKind.Sequential)] attribute, which could map to a @structLayout(sequential) attribute. Of course, this is largely a moot issue until the compiler's layout algorithm changes.

By the way, you currently can't map trivially to GPU structs in many cases because the struct's size isn't rounded up to the maximum alignment of the stored properties. Assuming a struct of the format above (the Vertex example), with Vec3 being an alignment-16 type, the struct definition needs an extra let padding : Vec2 = Vec2(0) property at the end to make it compatible with GPU structs under the current layout algorithm.


That seems like a nice solution.

That's a good point, but it's still a far more tractable problem to need to add padding in some cases than it is to have the fields laid out in unpredictable order in memory.


Ultimately I would love to see more tools to explicitly define memory layouts in the language, but to have them be opaquely determined by the compiler would be a step backwards IMO.

That's exactly what is being proposed.


I'm very much against this. I don't think it's acceptable to tie the correctness of an ABI to preserving a history of released versions, particularly for potentially branching revision histories. A library author may not have to create or edit this information manually, but they do have to check it into their repos, integrate it across branches, and decide when to update it.

I do think we should eventually have a tool for checking that you didn't make mistakes, but I don't think that tool needs to have inputs any more complicated than "another release's interface file". Not the immediate previous release, just a release you want to be compatible with. And if there's a break, it doesn't change the output of the compiler.

I do like your point that just because this is how many other languages/environments do it doesn't mean we have to do it that way. But I don't think a historical record that serves as a build input is the way to do it, and I don't have another solution in mind either. Even given that, though, I don't think we should block the simpler feature while trying to solve the problem.

(And to be fair, it's technically not mutually exclusive; even if we go with the historical record thing, we can say that "has a baseline but it's empty" is different from "does not have a baseline at all".)


I’m not sure why branching would be a hard problem for this. Stable-ABI libraries generally follow a regular schedule of periodic releases. At the beginning of each cycle, when you bump the current version, you also commit the interface from the release you just made. You don’t need your CI system to be committing intermediate interfaces from daily builds.

That “previous public release” interface is also exactly the interface you’d want to check compatibility with. If you check compatibility with older releases, you are of course failing to verify that you didn’t change anything about API introduced in the last public release. And I think you should be checking compatibility continuously, not just when someone remembers to manually run some specialized checker tool.


SE-0260 has been accepted.

I think this conversation about library versioning and development has been very interesting, but the right venue for it now is probably a separate, non-review thread under Discussion. Any other discussion about SE-0260 should be taken to the announcement thread; I will be closing this one.