As I've been going through SBS doing clean-up work, I have noticed that our string benchmarks are in some places trying to cover a wider range of scripts, but the selection of test strings is rather ad hoc. Basically, we test text written in the Latin, Cyrillic and CJK scripts. That is all.
StringWalk workload variants
But it looks like we are relying on them to fine-tune the String's ABI and UTF-8 … I'm not so sure we are backing such a crucial decision with enough data.
I propose we start migrating existing benchmarks and write all new string performance tests against a single text corpus that more systematically covers the various scripts represented in Unicode. I think the ideal document for this purpose is the Universal Declaration of Human Rights: a plethora of official translations is available from the Unicode Consortium, which lets us create a text corpus of semantically equivalent information in various scripts and languages. I believe that would enable us to make much more sensible relative performance comparisons between various scripts and unburden benchmark authors from the need to reinvent the wheel.
We can start small, with the two sentences of Article 1. If this proves useful, we can expand the corpus in the future to include more (or all) of the articles, possibly even the preamble. It could start as a simple string and grow over time; we could also store the list of ranges for the individual articles, for extraction of smaller substrings.
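To make the idea concrete, here is a minimal sketch of what such a corpus entry could look like: one string per translation plus article ranges for cheap substring extraction. The `UDHRCorpus` type and the two-sentence stand-in text are my assumptions, not an existing API.

```swift
import Foundation

// Hypothetical sketch: a translation stored as one string, with ranges
// marking the individual articles for substring extraction.
struct UDHRCorpus {
    let text: String                      // full translation text
    let articles: [Range<String.Index>]   // one range per article

    /// Returns the given article (1-based) as a non-copying Substring.
    func article(_ number: Int) -> Substring {
        return text[articles[number - 1]]
    }
}

// Usage, with two sentences standing in for two articles:
let body = "All human beings are born free and equal in dignity and rights. " +
           "Everyone is entitled to all the rights and freedoms."
let split = body.range(of: "Everyone")!.lowerBound
let corpus = UDHRCorpus(
    text: body,
    articles: [body.startIndex..<split, split..<body.endIndex])
let second = corpus.article(2)
```

Storing `Range<String.Index>` rather than copies keeps the extraction step out of the measured work, since `Substring` shares storage with the original string.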
On top of this we could build parsing tests, by having language-specific parsing rules (strings to split the text into articles, paragraphs etc.), or string interpolation benchmarks (e.g. HTML formatting: filling templates with articles and paragraphs).
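A parsing-plus-interpolation benchmark body could be as simple as the sketch below. The delimiter `"Article"` and both function names are illustrative assumptions; each translation would carry its own language-specific rules.

```swift
import Foundation

// Sketch: split a translation into articles using a language-specific
// delimiter, then fill an HTML template via string interpolation.
func parseArticles(_ text: String, delimiter: String) -> [String] {
    return text.components(separatedBy: delimiter)
        .map { $0.trimmingCharacters(in: .whitespacesAndNewlines) }
        .filter { !$0.isEmpty }
}

func htmlFormat(articles: [String]) -> String {
    var html = "<ol>\n"
    for article in articles {
        html += "  <li><p>\(article)</p></li>\n"   // interpolation under test
    }
    return html + "</ol>"
}

let text = """
Article 1. All human beings are born free and equal.
Article 2. Everyone is entitled to all the rights and freedoms.
"""
let articles = parseArticles(text, delimiter: "Article")
let page = htmlFormat(articles: articles)
```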
Scripts and Languages
I think all questions around Unicode are potentially politically sensitive, and we should be making decisions about strings very carefully and with all interested parties in mind. I hope you have already done all this internally at Apple, and can enlighten me in my ignorance. I wonder what the impact of switching from UTF-16 to UTF-8 is for languages whose scripts didn't draw lucky cards and are not located at the beginning of the Basic Multilingual Plane. How are they impacted by the switch to a variable-length encoding that strongly favors ASCII and the Latin script?
The UDHR in Unicode project contains, in addition to the super useful table of translations that lists the scripts and language codes used, a page with aggregates. I think our testing goal would be pretty well covered by choosing the first article in the most used scripts. We just need to estimate the number of speakers whose languages use a given script; we can probably skip the most esoteric ones.
I thought it might be useful to start from the List of languages by total number of speakers on Wikipedia, which draws on data from Ethnologue. Before trimming this down, I've compiled a bigger set of 29 samples.
For fun, and to maintain coverage parity with the existing StringWalk, I have even created this expressionistic translation of Article 1 into Emoji + math symbols:
(Shoutout to @codafi for indulging my procrastinations about emoji and math notation.)
See the gist of the corpus prototype.
What's the performance impact of switching from UTF-16 to UTF-8 on string processing algorithms for languages written in various scripts? Hard to say until we do proper benchmarking. For now, I'll leave here the number of elements in the character, unicode scalar, utf16 and utf8 views. Given that the amount of information in the text is the same, the utf8 count is the number of bytes this information gets represented in, as a combination of the language, the script's Unicode mapping and the UTF-8 encoding.
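The counts can be gathered directly from String's views, as in the sketch below. The English and Russian sentences are the Article 1 text used for illustration; the exact figures per language belong in the table, so I print rather than hardcode them.

```swift
// Measure element counts of String's views and the encoded sizes.
// Each UTF-16 code unit is two bytes (a surrogate pair is two units,
// i.e. four bytes), so UTF-16 size in bytes is 2 × the utf16 count;
// the utf8 count is already the size in bytes.
let samples = [
    ("eng", "All human beings are born free and equal in dignity and rights."),
    ("rus", "Все люди рождаются свободными и равными в своем достоинстве и правах."),
]
for (lang, s) in samples {
    let utf16Bytes = 2 * s.utf16.count
    let utf8Bytes = s.utf8.count
    let delta = utf8Bytes - utf16Bytes
    print(lang, s.count, s.unicodeScalars.count, s.utf16.count, s.utf8.count,
          utf16Bytes, utf8Bytes, delta)
}
```

For ASCII-only text all four counts coincide; for Cyrillic the utf8 count roughly doubles relative to utf16, while the UTF-8 byte size still comes in at or below the UTF-16 byte size.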
Element Counts and Encoded Size
| lang | char | scal | utf16 | utf8 | UTF-16 (B) | UTF-8 (B) | 𝚫 | 𝚫% |
|------|------|------|-------|------|------------|-----------|---|----|
Does this approach make sense, or are we already covered elsewhere?