If you bundle a binary with your app, that binary doesn't need to be ABI stable, because there is no circumstance where the library will be replaced without the app being recompiled. So in this respect Carthage only needs module stability.
Where it gets dicey with the Carthage use case is if the binary itself uses other libraries that the app also uses. Then that third library needs to be ABI-stable. For example, if app A calls a method in binary library B, passing in an array of a struct from library C, it's critical that both A and B have the same notion of the size of the struct stored in the array.
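A minimal sketch of that diamond (all module, type, and function names here are hypothetical; this spans three modules, so it's not a single compilable file):

```swift
// Module C — shared dependency of both the app and the binary framework.
public struct Point {
    public var x: Double
    public var y: Double
    // Adding a stored property here changes Point's size and stride.
}

// Module B — a binary framework, compiled against some version of C.
import C
public func centroid(of points: [Point]) -> Point {
    let n = Double(points.count)
    let sx = points.reduce(0) { $0 + $1.x }
    let sy = points.reduce(0) { $0 + $1.y }
    return Point(x: sx / n, y: sy / n)
}

// App A — if A was compiled against a *different* version of C than B was,
// the two disagree about the element stride of [Point], and this call
// reads garbage (or crashes) rather than failing cleanly.
let c = centroid(of: [Point(x: 0, y: 0), Point(x: 2, y: 2)])
```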
Even then, source is going to be preferable. We don't have cross-module optimization yet, but once we do, you'll get much better optimization from sources of different packages all compiled together than you would from the hand-crafted @inlinable annotations you get today.
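For reference, the hand-crafted annotation being compared against looks like this (hypothetical function); the attribute serializes the body into the module so clients can inline and specialize it across the module boundary:

```swift
// Without cross-module optimization, marking this @inlinable is the only
// way a client built against this library can see the body and get a
// specialized, inlined copy of the generic at its own call sites.
@inlinable
public func clamp<T: Comparable>(_ value: T, to range: ClosedRange<T>) -> T {
    min(max(value, range.lowerBound), range.upperBound)
}
```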
Good to know. Is there by any chance a plan of record or timeline for module stability? Organizations that ship SDK frameworks and are not willing to publish the source will be keenly interested in this.
This makes sense. Assuming most open source libraries don’t choose to support binary stability, this effectively means a binary SDK won’t be able to have open source dependencies. It’s good to know this. Thank you for clarifying.
Agreed. However, as noted above sometimes source is simply not an option. It’s not always a choice available to engineers.
Thanks, Ben and Slava, this is immensely helpful. Much of this might become a blog post when the feature lands, especially the “you probably don't want to use this feature” headline. I suspect that many of us library vendors will reflexively feel like it's something we ought to support.
Even if the app launches an external, sandboxed process and executes the plugin there, the app usually still wants to provide the plugin with some API. If you deliver that functionality via a runtime-loaded library with library evolution enabled, you can ship a new application version with an evolved plugin API without requiring all plugins to be recompiled.
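A sketch of that setup (names hypothetical): the host app ships a `PluginSupport` framework built with `-enable-library-evolution`, and plugins link against it.

```swift
// PluginSupport — built with -enable-library-evolution.
// Because the library is resilient, a later app version can add members,
// reorder stored properties, or change implementations without
// recompiling plugins that were built against this version.
public protocol Plugin {
    init()
    func activate(host: HostServices)
}

public class HostServices {
    public init() {}

    // Plugins call this through a resilient entry point, so its body
    // (and any stored state behind it) can evolve freely.
    public func log(_ message: String) {
        print("[host] \(message)")
    }
}
```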
It's also important to remember that it's not just about evolving the API but also evolving the implementation. Without library evolution enabled, you're extremely limited in what you can change without breaking ABI, and so far we haven't even documented the narrower guarantees you get in this case.
Thank you for this insight. I had thought that (assuming module stability is in place), if you didn't change any constructs with the public/open access modifier, you would not break ABI.
With this proposal, does the compiler only adhere to @inlinable with -enable-library-evolution? Or are there other cross-module optimisations at play here, as hinted by the following paragraph?
This build mode will have no impact on libraries built and distributed with an app. Such libraries will receive the same optimization that they have in previous versions of Swift.
Module stability builds on top of library evolution support. Textual module interface files cannot describe "non-resilient" Swift interfaces, since those would include private members of types, which cannot be represented in parsable source.
Even without -enable-library-evolution, the compiler will only inline and specialize @inlinable functions across module boundaries. However, other optimizations are made without -enable-library-evolution, such as assumptions about struct layouts (even private members) and class vtables (even private methods). This is why you cannot change implementation details in an ABI-stable way, even if you don't touch the API.
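A concrete illustration of that, with a hypothetical library built without `-enable-library-evolution`:

```swift
// Version 1 of the library:
public struct User {
    public var id: Int
    private var cachedHash: Int = 0  // private, but still part of the layout
}

// Version 2 deletes `cachedHash`. The public API is untouched, yet clients
// compiled against version 1 hard-coded a two-word User — its size, stride,
// and the offset of `id` may all have been baked into their call sites —
// so substituting the new binary without recompiling them is not ABI-safe.
```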
Re: module stability: To build on what Slava said, we/I owe you all a real update, but all the work we've been doing is in the 5.1 branch. That doesn't guarantee we'll finish in 5.1 (well, as much as any new compiler behavior supported until the end of the language is ever "finished"), but I'm optimistic about what we have so far.
If the willSet / didSet are not inlinable, you end up with a function call for setting but not getting. This is technically feasible, but I think it complicates the model, and I'd rather start without it and add it later if we need it.
I wouldn't mind adding it for private properties, and internal properties…but @usableFromInline starts getting tricky. And anything you can do with willSet / didSet, you can do with an explicit computed property…except for a handful of places where the _read accessor would be more efficient, and we'll get there too.
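For example, what would have been an observed stored property can be spelled today as an explicit computed property over a `@usableFromInline` stored one (a sketch; the type and member names are hypothetical):

```swift
@frozen
public struct Counter {
    // @usableFromInline so the @inlinable accessors below may reference it.
    @usableFromInline
    internal var _value: Int = 0

    public init() {}

    // Plays the role of `public var value: Int { didSet { ... } }`:
    // the getter can be inlined into clients, while the observer logic
    // stays behind an internal function call that can evolve.
    @inlinable
    public var value: Int {
        get { _value }
        set {
            _value = newValue
            didSetValue()
        }
    }

    @usableFromInline
    internal func didSetValue() {
        // observer logic goes here
    }
}
```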
In my first draft of this proposal last year, I didn't want to extend it to classes just yet because we'd have two similar annotations, "frozen"/"fixed-contents" and the existing final. There are also two different things about a class that can change and grow: the stored properties, and the vtable (the dynamic dispatch table for overridable members). I think we agreed that the overhead of a flexible vtable for an open class is likely small enough in practice that we wouldn't need to lock it down, but still, it's less obvious that for a class, "frozen" means the instance layout is frozen but not the set of overridable members. Additionally, we'd have to think through whether a "frozen" class can have non-frozen superclasses (I think no) or non-frozen subclasses (I think yes?), and therefore I think for the purposes of discussing this proposal it's worth subsetting out.
I think the existence of the prohibition complicates the model :) Both conceptually (you have one more restriction to learn) and also from an implementation standpoint (we have to add code for enforcing it).
The simple solution to this dilemma would be, of course, to prohibit both (with the caveat that a subclass in a non-resilient module does not count as a 'frozen subclass').
Ok, that's fine. The only reason I can think of to tackle it all at once is that it allows us to just implement this proposal by allowing FrozenAttr everywhere that FixedContentsAttr is allowed today, with FixedContentsAttr emitting a warning unconditionally.
As a user of these from the very beginning, +1 to the name and concept.
Perhaps explicitly note that this could be a temporary restriction to limit scope, one that could be expanded in the future, as you talk about later when @Slava_Pestov and @jrose discuss the details.
Thank you for this table. A simple table is worth a thousand words.
What about changing the name? Or rather, would renaming be allowed under (future work) versioning? If so, what else?
Why? (I know why, but I think it's worth mentioning that we've decided on a certain layout convention for these).
Perhaps it would be beneficial to address the data layout impact of @frozen in an additional table.
I think it would be especially enlightening to show the calling convention for these. The CC of public API taking @frozen structs is ABI and deserves some mention. Currently, everything can feel abstract and like we're reasoning through some pretty deep interpretation. I find seeing the CC to be hugely illustrative.
Since you also say
This would imply that the "sort" here is the tie-breaking function for such a packing algorithm. Not sure if you wanted to be more explicit, but as written it does read as though the two are in contradiction.
Good question. Renaming an ABI-public field is not allowed; renaming a non-ABI-public field is allowed. These rules do not change between frozen and non-frozen structs.
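To make that concrete (a hypothetical struct; as noted, the same rules apply whether or not it is @frozen):

```swift
public struct Size {
    // ABI-public: the accessor symbols are mangled with this name,
    // so renaming it is an ABI-breaking change.
    public var width: Double

    // Not ABI-public: clients can never name it, so it can be renamed
    // freely (and, in a non-frozen struct, even removed).
    private var cachedDescription: String?

    public init(width: Double) {
        self.width = width
    }
}
```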
Both of these should just say "at compile time". We're not talking about the layout algorithm in this case, and there is no run-time data layout impact of @frozen.
There are questions about how the ABI changes when something that didn't start off as frozen becomes frozen later. Specifically, you can't change the calling convention for existing ABI-public functions. So being frozen-up-front would still get you more performance than being frozen-in-a-later-release, and coming up with a way to mark new additions as "this can use the more efficient calling convention" turns into some kind of discussion about availability.
That's about as far as these discussions have gotten. It gets even trickier when you have a client framework that's also compiled with library evolution.