Pitch: Unicode Equivalence for Swift Source

...hmm...

It will not affect the ABI of anything ASCII, since ASCII is invariant under every normalization form. At least the API surface area of the standard, core, and system libraries restricts itself to ASCII at the moment, right? Are there internals that don’t? If not, then this would be theoretically ABI‐breaking, but break nothing in practice, since nothing that is declared ABI‐stable uses the affected functionality?

Only thinking out loud. None of those are facts I am absolutely sure of.

Normalizing Unicode names could lose user data in NSCoding archives. :-(

I do think we should normalize for typo-correction purposes, but I don't think it's worth slowing down the compiler for something that most users will not encounter anyway.

EDIT: a programming language is by nature a parseable format, and while it's a parseable format for humans I feel like it's valid to be stricter about it than text. We don't want to start accepting U+037E GREEK QUESTION MARK as a statement delimiter even though its canonical form is a semicolon.
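That particular equivalence is easy to check. Here is a small illustration using Python’s `unicodedata` (used only to show what the Unicode data says, not anything about Swift’s lexer):

```python
import unicodedata

# U+037E GREEK QUESTION MARK has a canonical (singleton) decomposition to
# U+003B SEMICOLON, so both NFC and NFD map it to a plain semicolon.
greek_question_mark = "\u037e"
print(unicodedata.normalize("NFC", greek_question_mark) == ";")  # True
print(unicodedata.normalize("NFD", greek_question_mark) == ";")  # True
```

So under full canonical equivalence, `;` and U+037E really are "the same character", which is exactly why identifier normalization and token-level strictness are separate questions.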

1 Like

Right, we can fix this because none of the core libraries are using identifiers with multiple representations. Note that there are several different normalizations we could use; ideally we would use the most compact, but it might be more prudent to use a normalization that matches what Xcode (sorry for the platform bias, but it probably needs to be Xcode) currently outputs.

ASCII sequences are always canonical, so I wouldn't expect this to significantly slow down the compiler; we can very cheaply remember during identifier-lexing whether we saw a non-ASCII character, and we can put redundant entries in the identifier table for non-canonical strings.
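A sketch of that fast path, in Python for illustration (the helper name is made up; `unicodedata` stands in for whatever normalization routine the compiler would actually use):

```python
import unicodedata

def canonical_identifier(ident: str) -> str:
    """Return the canonical spelling of an identifier.

    ASCII strings are already canonical under every Unicode normalization
    form, so the common case costs only a byte scan; only identifiers
    containing non-ASCII characters pay for normalization.
    """
    if ident.isascii():  # the cheap flag the lexer would remember
        return ident
    return unicodedata.normalize("NFC", ident)

# An NFD spelling and an NFC spelling of the same name map to one entry:
nfd = "cafe\u0301"   # 'cafe' + U+0301 COMBINING ACUTE ACCENT
nfc = "caf\u00e9"    # precomposed 'é'
print(canonical_identifier(nfd) == canonical_identifier(nfc))  # True
```

The redundant-entry idea would then just mean interning both the raw and canonical spellings against the same symbol.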

I agree that we don't necessarily have to normalize non-identifiers during lexing, although I'm not sure this would really be particularly problematic.

This has been discussed extensively in a few prior threads. I would encourage you to read through them to get a sense of where things stand.

In short, there is a thorough set of rules already laid out in UAX#31 on how to normalize identifiers in programming languages. Several of us have written several versions of a proposal to adopt it, but each time it has failed because of issues with emoji. Recent versions of Unicode now have more robust classifications for emoji, so the proposal can be resurrected with better luck now, probably. No need to start from scratch; feel free to build on the work that we’ve already done.

All of this applies only to identifiers. Literals should never be messed with by the compiler. They are, after all, supposed to be literal.

2 Likes

Thanks. I had searched, but I guess I picked the wrong search terms. Knowing there must be some threads and being more persistent allowed me to find several. I’ll post links here in chronological order for others who land here. (I actually haven’t read the threads yet. I will now.)

1 Like

While the first one started with the same premise, the other threads seem to be much wider in scope and become largely about other issues, like which characters should be operators and which should be identifiers. None of that matters to me. I only care about the compiler correctly recognizing matching tokens. But I defer to those who have already done much heavier lifting. Thanks for the hard work, @xwu.

If we don't normalize and we later add reflection APIs, we'll have to deal with the possibility of two distinct properties having names that compare equal under String equality. But then maybe backward compatibility requires allowing that.
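To illustrate the hazard (again in Python for the Unicode mechanics; the property name is made up):

```python
import unicodedata

# Two declarations a programmer would read as the same property name,
# spelled with different code point sequences:
name_nfc = "r\u00e9sum\u00e9"    # precomposed 'é' (NFC)
name_nfd = "re\u0301sume\u0301"  # 'e' + U+0301 COMBINING ACUTE ACCENT (NFD)

# As raw code point sequences they are distinct, so a compiler that does
# not normalize accepts both as separate identifiers...
print(name_nfc == name_nfd)  # False

# ...but a canonical-equivalence-aware String comparison (which is what
# Swift's String `==` performs) reports them as equal:
print(unicodedata.normalize("NFC", name_nfd) == name_nfc)  # True
```

A reflection API keyed on String names would then see two "equal" property names on one type.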

Here's an idea to sort-of deprecate non-normalized identifiers without forbidding them outright: have the compiler normalize all identifiers except those in `backticks`. That way, any deliberately non-normalized spelling has to be written in a form that makes readers suspect something unusual is going on. I believe this should be relatively easy to implement. The migrator could also automatically detect such symbols and migrate them to whatever the developer chooses.

3 Likes

We would welcome a fix for the normalization-of-names issue. The rules about what exactly is an operator are a completely separate issue and should not block a normalization fix, which as you say is ABI-affecting and therefore should be fixed ASAP before there's a bunch of code relying on libraries with stable ABIs using inconsistent normalizations.

As a general matter, ABI issues can be fixed, you just have to be aware of the practical impact of the fix.

3 Likes

Since APIs that differ only in normalization form would arguably be a bug, would you consider that narrow change alone to require a proposal?

I wouldn't, but I can raise that question to the rest of the Core Team.

1 Like

A targeted fix here just for identifiers would be great. I have no interest in normalising anything else, and would consider it a bug if string literals were normalised.

2 Likes

+1, completely agreed.

-Chris

3 Likes

Important enough to delay Swift 5.0 and iOS 12.2? It seems to me that the release dates of these two should be open to a delay to fit in this ABI-breaking change you really, really do not want people to rely on.
I would be quite puzzled if that cannot happen (there are business-driven decisions behind the next iOS point release, but getting the ABI right at the start seems to be a worthy goal).

On the other hand, I fear this is ultimately a mistake: although an essentially ASCII-only grammar seems non-inclusive, it does provide a simple, easy-to-parse, and easy-to-share solution. A lingua franca does have some advantages, as Anglocentric as that may sound, but that is quite off topic on a boat that has already sailed :).

No, we cannot delay Swift releases over a relatively minor bug that we have no reason to think that we cannot fix in a future release.

3 Likes

I, too, think that this is a bug and that the `backticks` form should preserve the encoding. I also see this bug as non-critical, so the following is more for later reference:

I think, due to character duplication (“Å” (U+00C5) vs. “Å” (U+212B)), the compiler has to use NFC internally. These characters differ in NFC, but are the same in NFD. Code points that do not exist in NFD should be disallowed outside `identifiers`.

It might be necessary to allow `\u{…}` inside backticked identifiers so that NFD users can spell all identifiers. Otherwise, anyone using an NFD editor would have no way of using a module that uses any of those characters if the module was written using NFC.

They are the same in either.

NFC always does NFD first—decomposing and reordering (NFD) before it recomposes in a possibly different way.

In this case NFD transforms both into U+0041 + U+030A. NFC then recombines that into U+00C5.

Maybe you are confusing it with this fact: the first time NFD or NFC is performed, it will “lose information” by removing the “distinction” between the ångström unit and the Scandinavian letter Å. The ångström will never reappear in NF‐anything. But after initial normalization there is only one NFC representation for any NFD and vice versa, so no information is lost no matter how many times data gets converted back and forth.


> Implementations may skip or refactor this step for performance reasons when it is known that doing so will not alter the result.
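The round trip described above can be checked directly (Python’s `unicodedata` once more, purely as an illustration of the Unicode data):

```python
import unicodedata

angstrom_sign = "\u212b"  # ANGSTROM SIGN
a_with_ring   = "\u00c5"  # LATIN CAPITAL LETTER A WITH RING ABOVE

# NFD decomposes both to A + U+030A COMBINING RING ABOVE...
print(unicodedata.normalize("NFD", angstrom_sign) == "A\u030a")  # True
print(unicodedata.normalize("NFD", a_with_ring) == "A\u030a")    # True

# ...and NFC recomposes that pair to U+00C5 in both cases. The ångström
# sign never reappears, but after the first normalization the NFC and NFD
# forms convert back and forth without further loss.
print(unicodedata.normalize("NFC", angstrom_sign) == a_with_ring)  # True
print(unicodedata.normalize("NFC", "A\u030a") == a_with_ring)      # True
```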

1 Like

So it would be a mere bug fix then? Or was that not what Chris’ answer meant, and is the core team’s answer still pending?

@xwu, is there already some implementation associated with the previous broader discussions that could be used as a starting point to factor out the targeted bug fix? Or would it be better to start from scratch?

With the proviso that we'd want to be able to run an implementation through as much compatibility testing as we could, yes, consensus on the Core Team was that this would just be a bug-fix.

8 Likes