The Swift type Character is indeed a bit funny because its underlying data structure needs to be similar to a string (since graphemes can be made of many code points). But that's an implementation detail. Surely nobody would oppose a BigInt type initializable from an integer literal, even though its underlying implementation likely uses an array of smaller integers. I think the same principle should hold for a Character made of multiple code points.
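To make the analogy concrete, here's a minimal sketch (standard library only, nothing hypothetical) showing a single Character whose storage necessarily spans several Unicode scalars, much like a BigInt spanning several machine words:

```swift
// "e" followed by U+0301 COMBINING ACUTE ACCENT: two code points,
// but a single grapheme, so it is a valid Character.
let eAcute: Character = "e\u{301}"
print(eAcute)                        // é
print(eAcute.unicodeScalars.count)   // 2 — the "array of smaller units" inside

// Canonical equivalence: the decomposed form compares equal to
// the precomposed single code point U+00E9.
print(String(eAcute) == "\u{E9}")    // true
```

The literal syntax hides the multi-scalar storage, which is exactly the point being argued.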
I do like that character literals are meant to represent "characters" in the very general sense of a "unit of a string". If you're working at the grapheme level, a "character" is a grapheme. If you're operating at the code point level (like in an XML parser), a "character" is a code point for the purposes of that algorithm. If you're working with ASCII or some binary format, a "character" is a one-byte code point. Regardless of which level I'm working at, it's likely I'll do things like firstIndex(of: ':'), and it'd be a bit strange for the character literal to have a different syntax depending on whether or not I'm working at the grapheme level.
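For illustration, here's a sketch of that same "find the colon" operation at all three levels, using only the standard String views; the point is that the search looks the same even though the element type differs each time:

```swift
let header = "Content-Type: text/plain"

// Grapheme level: the element type is Character.
let graphemeIndex = header.firstIndex(of: ":")

// Code point level: the element type is Unicode.Scalar.
let scalarIndex = header.unicodeScalars.firstIndex(of: ":")

// Byte level: the element type is UInt8.
let byteIndex = header.utf8.firstIndex(of: UInt8(ascii: ":"))

// All three find the colon at offset 12.
print(header.distance(from: header.startIndex, to: graphemeIndex!))  // 12
```

Only the byte-level search needs the explicit UInt8(ascii:) conversion today; a unified character-literal syntax would smooth over exactly that seam.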
That said, I agree there's some logic in reserving 'x' for code points only, because of the semantic discontinuity between a code point and a Character: the former behaves like an integer while the latter does not. If it's important to express this difference in the literal syntax, then Character could keep the double quotes and single quotes could be reserved for code points. But I'm not sure the integer behavior of code points is important enough to justify that.
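A quick sketch of the discontinuity in question, again using only standard library APIs: a Unicode.Scalar exposes its numeric value directly, while Character offers at most an optional asciiValue:

```swift
// A code point behaves like an integer: it has a numeric value,
// and you can do arithmetic on that value.
let colon: Unicode.Scalar = ":"
print(colon.value)                                 // 58

// Construct the next scalar numerically (failable init on UInt32).
let semicolon = Unicode.Scalar(colon.value + 1)!
print(semicolon)                                   // ;

// A Character has no .value member; its only numeric accessor is
// the optional asciiValue, which is nil outside ASCII.
let ch: Character = ":"
print(ch.asciiValue as Any)                        // Optional(58)
```

Whether that gap warrants two literal syntaxes is the open question above.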