Design Priorities for the Swift 6 Language Mode

I’m sure the Standard Library engineers are on top of the upcoming Unicode changes, but Swift’s philosophy so far has been that Unicode support comes from the platform, not the language. Changing that philosophy would probably require a sufficiently motivated proposal.

The link is not to a Unicode change in the usual sense (the way new code points are assigned, say), but to a very recent draft of a new set of recommendations.

It's certainly possible that Swift is already represented in the Unicode Source Code Working Group; I don't know. But if we're going to make this sort of very rare release where breaking changes are on the table, I think it's a good idea to check.


Example: the grammar page says our line breaks are CR, LF, and CRLF. However, this new draft recommendation has a section with "specific recommendations for language designers", and it says:

It is recommended that all computer languages meet requirement UAX31-R3a Pattern_White_Space Characters, which specifies the characters to be interpreted as end of line and horizontal space, as well as ignorable characters to be allowed between lexical elements, but not treated as spaces.

Using the specified end of line characters prevents spoofing issues; see Section 1.2.1, Line Break Spoofing. Note that the line terminators listed in UAX31-R3a will be interpreted as line terminators by any editor that implements the Unicode Line Breaking Algorithm. See Unicode Standard Annex #14, Unicode Line Breaking Algorithm [UAX14].

The latest draft of UAX31 has two definitions for R3a:

  • R3a-1, which includes CR and LF, but also vertical tab, form feed, U+0085 NEXT LINE, U+2028 LINE SEPARATOR, etc.

  • R3a-2, which allows a custom profile, but comes with a warning:

    Note: Failing to interpret all characters listed in item 1 of UAX31-R3a-1 as line terminators would lead to spoofing issues; see Unicode Technical Standard #55, Unicode Source Code Handling.

The grammar page says that we consider vertical tabs and form feeds to be whitespace, but not line breaks. These new recommendations suggest that we should treat them as line breaks. Perhaps we should take this opportunity to reconsider, given that the Unicode Consortium is making an effort to help language designers by producing this set of recommendations.
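To make the gap concrete, here is a small Swift sketch of the line-terminator set that the draft UAX31-R3a-1 specifies, as I read it (the function name is mine, purely for illustration):

```swift
// Illustrative sketch (the name is mine, not from the Swift grammar):
// the characters draft UAX31-R3a-1 asks languages to treat as line terminators.
func isUAX31LineTerminator(_ scalar: Unicode.Scalar) -> Bool {
    switch scalar {
    case "\u{000A}", // LINE FEED        - a line break in Swift today
         "\u{000D}", // CARRIAGE RETURN  - a line break in Swift today
         "\u{000B}", // VERTICAL TAB     - whitespace, not a line break, in Swift today
         "\u{000C}", // FORM FEED        - whitespace, not a line break, in Swift today
         "\u{0085}", // NEXT LINE (NEL)  - not recognized by Swift's grammar at all
         "\u{2028}", // LINE SEPARATOR
         "\u{2029}": // PARAGRAPH SEPARATOR
        return true
    default:
        return false
    }
}
```

Under the recommendation, every scalar in that switch would end a line; under Swift's current grammar, only the first two (plus the CRLF pair) do.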

Similarly, our definition of allowed identifiers is difficult to interpret (a list of unnamed code points); perhaps it, too, should be aligned with something from these recommendations.


I think this has changed, because it sounds like Unicode support does indeed come from the language now, at least according to:

I’m not sure what the alternative to @_exported or @inline(__always) is. Usage of them should obviously be kept to a minimum, but oftentimes they are unavoidable.

For @_exported specifically, I think a lot of the issues with it boil down to us not being able to gate it with @available, so @_exported tends to become “sticky”, which I imagine is why it is so disliked by many. If we could add an @available to @_exported, I think we could get along just fine with packages that use a lot of @_exported.
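A sketch of what that might look like, using entirely hypothetical syntax (attaching @available to an import is not valid Swift today, and FooCore is a made-up module name):

```swift
// HYPOTHETICAL - not valid Swift today. The idea: let a re-export carry
// availability/deprecation information so it can stop being "sticky" and
// eventually be retired without breaking downstream clients all at once.
@available(*, deprecated, message: "import FooCore directly instead")
@_exported import FooCore
```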

It will be enabled when the overhead is sufficiently low. A good way to think of this is to look way back to the introduction of memory exclusivity into the language---it took a little while to get to the point where we could enable it everywhere. I expect the same here.

Oh, yes please!

That's not something that should require the new language mode, but yes, reviewing and finalizing non-standard language features is good for the long term.

It's reasonable to consider adopting newer recommendations for Unicode handling of source code. If the actual source incompatibilities are rare in practice, I'd rather do this as part of adopting the new Swift parser than tie it to a language mode. I don't know of anyone who is looking at this, though.



I'd be interested to see an exploration of the removal of @objc, @objcMembers, etc. I've just created a pitch to Move @NSManaged from Swift to CoreData, but now that I think about it, a complete review of the Objective-C keywords may be in order.


I’m not sure what you’re imagining would replace @objc. The Using Swift From C++ vision puts this pretty well, I think: there are deep reasons why interoperating with Objective-C requires code to be generated a particular way in the defining module, and we don’t want to do that by default, so we need a spelling for the explicit request to do it.
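For readers following along, the spelling in question is the existing per-declaration request (a minimal sketch with made-up names; it compiles only on Apple platforms, where the Objective-C runtime is available):

```swift
import Foundation

class Downloader: NSObject {
    // @objc is the explicit request: generate an Objective-C entry point and
    // selector for this method, and add it to the class's method list, so it
    // is reachable from Objective-C as -[Downloader cancel].
    @objc func cancel() {
        // ...
    }
}
```

The point of the vision document is that this code generation must happen in the defining module, and is costly enough that it should not happen by default, so some explicit spelling is needed.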

More generally, we’re really not going looking for arbitrary things to revisit. Something’s got to be causing significant problems before we consider changing it.


One possible approach would be using declaration macros to have this functionality as part of a separate library rather than including it in the standard library.


If I understand correctly, macros cannot generate anything that is not representable in the AST, and calling conventions and other low-level details are not representable without custom attributes like @objc (the canonical way to represent that in the AST today is to attach an attribute to the declaration).


It would still need some way to tell the compiler to emit an Objective-C method and link to it from a class/category method list.

This leads into a complex subject that we haven’t talked much about. Macros are a very powerful feature. In principle, almost everything the compiler does could be recast as a macro. After all, at the end of the day, the compiler simply emits some functions and global variables; maybe that’s all it should do, and everything else should be built up by a macro system.

This is a very powerful approach. It is also very constrained by that power. The upside is that you can add new language features easily, without waiting for a new release of the compiler, and that is a very important benefit. But the downside is that those features can’t outgrow what you can do with macros:

  • Different features with complex macro transformations can interact with each other poorly.
    • Often this means that applying those features to the same declaration needs to be treated as an error. Diagnosing that cleanly in a macro system is remarkably hard! The macros may come from completely different places and have no reason to know about each other.
    • Even when it works, it can be subtle. The language will have rules for the order it applies macros, and that might not always be the “right” order for the features; it might well depend on something like the order of attributes at the use site.
  • Features tend to end up with “macro-friendly” ABIs rather than the best ABI they could hope for.
    • Collaborating with “peers” is difficult in a macro, so different uses of the macro tend to do their own “locally optimal” thing. For example, if you did virtual dispatch in a macro, the easiest thing to do would be to store a function pointer per method in every instance, rather than building a shared virtual table.
    • The macro ultimately generates normal user code, which can be a pretty major limitation for an ABI. Anything “weird” you want to do (relative addressing, pointer authentication, explicit sections) will have to be developed as a general feature first.
    • The macro expands the ABI directly, so if you want to produce platform-specific code or evolve the ABI based on the target, all of that logic has to go in the macro, and the runtime cannot assume it was done with any consistency.
  • Macros are generally not well-suited to do a complex static analysis.
    • If there’s something that would be useful to diagnose statically, but it would take a complex analysis to do it, it probably won’t ever be diagnosed by the compiler; programmers will have to use secondary linting tools.
    • High-level optimization is generally not going to be possible except for what can be done in a very local way in the macro. If you did virtual dispatch in a macro, for example, devirtualization would generally be off the table.
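The virtual-dispatch example above can be made concrete. This hand-written sketch (no actual macro involved; all names are illustrative) shows the “locally optimal” shape a macro expansion would tend to produce: a stored function pointer per method in every instance, instead of one shared virtual table per type.

```swift
// Hand-written sketch of what a dispatch-via-macro expansion might look like:
// every instance pays for its own table of function pointers.
struct PerInstanceDispatch {
    // One stored closure per "method", duplicated in each instance...
    var describe: () -> String
    var area: () -> Double
}

// ...whereas a native class would get a single virtual table shared by the
// whole type, which the compiler can also see through for devirtualization.
let unitSquare = PerInstanceDispatch(
    describe: { "unit square" },
    area: { 1.0 }
)
```

Each instance here is two closures wide regardless of how many instances exist, and because the dispatch goes through opaque stored closures, the optimizer has no shared structure to devirtualize through.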

There are plenty of features that can live with these trade-offs and are excellent choices for a macro. But it is not the right choice for every feature, and some things should continue to be done with direct compiler support.


I think it's one of the paradoxes of major releases that they're the only time we get to remove functionality from a product or project; features can be, and very much have been, added in point releases.

With this in mind I'm suggesting that, far from being arbitrary, we should be taking the time to ensure that features such as @objc are still required in the core language rather than being removed, or at least moved into the standard library. It may well be that @objc stays, but I do think @NSManaged should go (it looks for all the world like a property wrapper with less functionality).


This still counts as “looking for things to revisit.” Unless you point to some specific problem that @objc has caused and which would be solved by removing it?

I don't have any horse in the @objc race in particular, but I'd like to ask regarding these comments:

As @jjrscott points out, major language versions are few and far between, and are therefore hard-won opportunities to make breaking changes of any sort that may bring value to the language. Is the idea that, even though technically we can introduce breaking changes in a major version update, there's still a strong pressure to keep those breakages minimal, and that in the case of Swift 6 the bar is that the breakage must directly solve a problem, and not just be for the purpose of compiler hygiene, for example?

If that's the bar then so be it, it's not my place to argue it. I'll offer the testimony that I am personally more affected by compiler bugs (of which I've distilled and filed many over the years) than I would be by having to accommodate a handful of extra breaking changes once in a while, but I know that I'm not necessarily representative of the Swift user base at large by any means.


Hi @jeremyabannister, the criteria we will apply for Swift 6 source-breaking changes are in fact the entire subject and rationale of the original post in this thread. It is a long post, but if we could answer this question more succinctly, the post would just have been shorter.

Here are some of the points from that post that bear reiterating, though:

  • targeted set of source-incompatible changes
  • must not change the fundamental shape of the language, nor make sweeping changes without purpose
  • only changes that serve a specific goal that benefits the entire Swift ecosystem, not achievable while maintaining source compatibility
  • should address one or more of these areas: data race safety by default, performance predictability, package ecosystem scalability

As noted in the post, we will evaluate other proposals on a case-by-case basis, but we are not looking for or actively soliciting other areas to add to the list above.


I would just add that these language changes do not generally remove complexity from the compiler, because the same compiler has to continue to support old language versions. They can speed up the compiler, if they allow it to apply rules that are easier to check, but they don’t simplify it.

The only thing that would actually remove complexity would be dropping an old language mode, and we are very reluctant to do that.


Perhaps I can provide more motivation here, but speaking more for myself than for the language workgroup as a whole. The strong pressure to keep incompatibilities minimal comes from a desire to ensure that the Swift ecosystem can reasonably move forward to Swift 6. Swift 6 can't be so different from Swift 5 that the code is not recognizable as being the same language, or it requires so much effort to adopt that few people do. The programming-languages landscape is rife with cautionary tales, including our own:

  • Swift 2 -> 3 was really rough. The feel of the language changed significantly, existing tutorials and blog posts got invalidated, and you needed to migrate an entire program at once. That was more than 6 years ago and the pain still lingers.
  • Perl 5 -> 6 just didn't happen. Perl 6 started more than 20 years ago, and got renamed to Raku when it was clear that "Perl 6" was so different that there wasn't an evolutionary path.
  • Python 2 -> 3 has been more than a decade in the making, with users often having dual Python installations and lots of developer pain as folks try to write their code so it works with both.
  • Scala 2 -> 3 is still unfolding (it's been less than 2 years since Scala 3 was released), but it's clear that many folks are hanging back on Scala 2 for now, and both Scala 2.x and 3.x are supported with different toolchains.

There are languages like C and C++ that very rarely change or remove anything in a source-incompatible way. The priorities outlined by the language workgroup already imply more than that level of change, so I don't think there's a useful comparison there.




Thank you for the added detail and the history, fascinating!

Regarding this idea:

I'd like to clarify a little:

I'll describe a scenario that seems perfectly plausible to me and which I think would contradict the above idea. I'm sure the most likely truth is that I'm just ignorant of some important concepts and therefore wrong, and I'm looking forward to understanding how:

Shorter description of question:

The single sentence question is: Is it not the case that removing or reimplementing some special-case-y kind of feature that was added in the early days could conceivably enable improvements to the underlying implementation of some other (probably newer) compiler features, and that therefore some current bugs in those other (newer) features could be inadvertently solved specifically for users of Swift 6, at least under the condition that they fully commit to the Swift 6 language mode and do not use any of the compiler's backward compatibility features?

Longer description of question:

Imagine a feature of Swift that is "special-case-y" (which I guess concretely means that it requires its own special code in the compiler rather than being one of a set of features that fall out of some more general code). This feature was originally developed as a positive "feature", but has now been seen to not really pull its weight, because (for example) implementations of newer Swift features have been undesirably influenced by the need to account for this special case.

Now let's say that one of these newer features (primary associated types on protocols, for example) suffers from a bug in some obscure situation. For example, the compiler emits an error where it should compile, because I've written some expression that should still be inferable to have a particular primary associated type, but somehow it got dropped along the way. In such a situation, the undesirable influence from the special-case feature might ultimately be shown to be more or less the culprit of the associated-type bug, or at least a large contributing factor to the complexity that gave space for the bug to live.

But imagine that we haven't found and fixed the bug yet. Imagine that we do, however, decide to remove the special-case feature, or to provide nearly the same functionality but implemented using one of the latest tools (e.g., macros). It is conceivable that the obscure associated-type bug is fixed by this, no? Surely at least if I fully commit to the Swift 6 language mode, this is a plausible scenario? In which case, my stated personal preference about the philosophy on breaking changes in Swift 6 is at least coherent/rational? (Even if it isn't representative of the best interests of the community at large?)

Thanks for any and all clarifications and explanations!


One other thing I wanted to add is that I only just absorbed this:

and I think it might be relevant to bring up this:

Any thoughts on how to factor Swift 7 into this discussion? Presumably, Swift 7 would be yet another opportunity to introduce a new set of breaking changes, perhaps some of which will have had community interest for Swift 6 but weren't right based on requirements specific to the Swift 6 release, like the ones quoted here?

I think your story is perfectly plausible, but I don’t think it holds water as a justification for any particular source-breaking change without at least a demonstration that it is indeed the cause of such bugs. The mere fact that some vestigial feature might be causing latent bugs is a pretty small hypothetical win when weighed against the certain cost of adding yet another item to the list of required migrations for Swift 6.

Moreover, if it were demonstrated that the inherent complexity of a feature (and its interaction with the rest of the language) is so high that it is actually causing recurring bugs that do not seem like they can reasonably be addressed simply with implementation improvements, then IMO that would (potentially) pass muster as a demonstration of “significant problems” of the kind John mentions up-thread:

But I expect such a showing to be quite difficult, because I expect that in many cases such problems can be addressed by sufficient implementation effort without resorting to a source break.

Swift 7 should probably not factor into this discussion. I wouldn’t expect it to be any easier to introduce source breaking changes in a later language version (however many years away it may be).

The short list of priorities here isn’t a list of general priorities for the Swift project that just so happen to narrow the scope of source breaking changes—rather, it is the result of careful consideration of what the highest priorities for source breaking changes are specifically in order to narrow the scope and prevent the new language version being treated as an opportunity for arbitrary source breakages that are just ‘nice to have.’


Speaking just based on my own understanding, Swift's source compatibility guarantee should hold unless there's some reason it absolutely cannot. Swift 6 has such a reason (concurrency checking), so that presents an opportunity to slip other source breaks in, but I would not anticipate that being true of any other major version of Swift.

The normal policy for massively widely deployed OS platform languages and libraries is "you don't get to remove anything, basically ever". This often comes as a shock to people used to more relaxed environments that follow a more typical "deprecate, wait, remove" cycle.