My fear is that these non-ASCII symbols won't work in some contexts (terminal windows, console logs, etc.). They are also harder to type (e.g. when filtering results in a console log).
Something else is weird about those subscript letters: the digits are fine and some letters are fine, but other letters look "off" (relative to each other and to the digits), and some letters (like C or F) are absent altogether. (Unicode's subscript digits ₀–₉ form a complete, uniform block, but its subscript Latin letters are a scattered, incomplete set, with no subscript c or f at all.)
It is absolutely something we'll keep an eye on. It's a hard problem because, as I think this example shows, neither choosing what counts as "political" nor enforcing restrictions against "politics" can ever be an apolitical decision. But this thread is not the right place for a long conversation about that, so let's leave it there for now.
To repeat: I don't want to get too deep into bikeshedding the description formats -- this isn't something we can reasonably expect to reach a consensus on.
That said,
I don't hate that! This was one of the formats I tried. I think I disliked the way the 8 and 16 blended with the two offset values: my eyes kept focusing on the 16+3 in 23@utf16+3. I do think the +1 format works well for transcoded offsets, and I wouldn't want to mess with it.
It isn't that bad, though (and transcoded offsets tend to pop up relatively rarely):
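To make the trade-off concrete, here is a minimal sketch of a formatter for the kind of description being discussed. This is hypothetical illustration code, not the stdlib's actual implementation: `describeIndex` is a made-up helper, and the bracketed `[utf16]` separator is an assumption chosen to avoid the `16+3` blending mentioned above.

```swift
// Hypothetical sketch: render a storage offset, its encoding, and an
// optional transcoded offset, e.g. "23[utf16]+3". Not real stdlib API.
func describeIndex(offset: Int, encoding: String, transcodedOffset: Int = 0) -> String {
  var result = "\(offset)[\(encoding)]"
  // Transcoded offsets are relatively rare, so the "+n" suffix only
  // appears when one is present.
  if transcodedOffset != 0 {
    result += "+\(transcodedOffset)"
  }
  return result
}
```

For example, `describeIndex(offset: 23, encoding: "utf16", transcodedOffset: 3)` yields `"23[utf16]+3"`, while a plain index with no transcoded offset renders as just `"15[utf8]"`.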
Since Swift 5.8, lldb has been shipping a built-in data formatter for String.Index that produces the displays this pitch proposed. In the roughly two years these formatters have been in production, the descriptions have proved immensely useful when developing String algorithms.
I think it's time to properly ship this in the stdlib, too! I submitted a proposal to finish this effort, officially adding the missing CustomStringConvertible conformance:
(We'll also want to do the same for AttributedString.Index in Foundation -- however, I think that can be done in a separate process.)