Unicode didn't arrive complete and fully formed in the early 1990s; it has grown (and continues to grow), and we have learned a lot about the problem space. When the first versions of the standard were released, software companies (such as Microsoft and NeXT) were eager to add support for it, and the idea was that we could essentially just expand the character size from 8 to 16 bits and otherwise keep fixed-width characters and random-access strings. That was UCS-2: fixed-width, 16-bit characters.
I find the summary document Unicode 88 and the Wikipedia article on Han unification to be interesting reading...
Nothing comes for free, and the price of Unicode's fixed-length 16-bit character code design is the twofold expansion of ASCII (or other 8-bit-based) text storage, as seen in the figure on the previous page. This initially repugnant consequence becomes a great deal more attractive once the alternative is considered.
The only alternative to fixed-length encoding is a variable-length scheme using some sort of flags to signal the length and interpretation of subsequent information units. Such schemes require flag-parsing overhead effort to be expended for every basic text operation, such as get next character, get previous character, truncate text, etc. Any number of variable-length encoding schemes are possible (this fact itself being a major drawback); several that have been implemented are described in a later section.
By contrast, a fixed-length encoding is flat-out simple, with all of the blessings attendant upon that virtue. The format is unambiguous, unique, and not susceptible to debate or revision. It is a logical consequence of the fundamental notion of character stream. Since it requires no flag parsing overhead, it makes all text operations easier to program, more reliable, and (mainly) faster.
Anyway, it turns out that 16 bits weren't even close to being enough. Not only did Unicode massively underestimate the needs of CJK languages, there was also a need to catalogue historical texts (for instance, how else could you write a history book which uses those texts?), to allow for round-tripping with legacy encodings, and more.
And so Unicode invented the UTF-16 encoding, which reserves part of the 16-bit code space for special flag values known as "surrogates". In UTF-16, a lone surrogate is not a valid character.
When it became increasingly clear that 2^16 characters would not suffice, ISO introduced a larger 31-bit space and an encoding, UCS-4, that would require 4 bytes per character. This was resisted by the Unicode Consortium, both because 4 bytes per character wasted a lot of memory and disk space, and because some manufacturers were already heavily invested in 2-byte-per-character technology. The UTF-16 encoding scheme was developed as a compromise and introduced with version 2.0 of the Unicode standard in July 1996.
In the UTF-16 encoding, code points less than 2^16 are encoded with a single 16-bit code unit equal to the numerical value of the code point, as in the older UCS-2. The newer code points greater than or equal to 2^16 are encoded by a compound value using two 16-bit code units. These two 16-bit code units are chosen from the UTF-16 surrogate range 0xD800–0xDFFF, which had not previously been assigned to characters. Values in this range are not used as characters, and UTF-16 provides no legal way to code them as individual code points.
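The surrogate-pair arithmetic is simple enough to sketch in a few lines (a Python illustration; `encode_utf16_pair` is a hypothetical helper name, not a standard API):

```python
def encode_utf16_pair(code_point: int) -> tuple[int, int]:
    """Split a code point >= 0x10000 into a UTF-16 surrogate pair."""
    assert 0x10000 <= code_point <= 0x10FFFF
    v = code_point - 0x10000           # 20 bits remain after the offset
    high = 0xD800 + (v >> 10)          # high (lead) surrogate: top 10 bits
    low = 0xDC00 + (v & 0x3FF)         # low (trail) surrogate: bottom 10 bits
    return high, low

# U+1F600 (the grinning-face emoji) becomes the pair 0xD83D, 0xDE00
print([hex(u) for u in encode_utf16_pair(0x1F600)])
```

Because the high and low halves come from disjoint sub-ranges (0xD800–0xDBFF and 0xDC00–0xDFFF), a decoder landing on any code unit can immediately tell whether it is a single-unit character, the start of a pair, or the middle of one.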
In other words: UTF-16 is a variable-width encoding.
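A quick way to see the variable width at work (a small Python check; the code-unit counts follow directly from the UTF-16 definition above):

```python
# Encode without a BOM ("utf-16-le"); each 16-bit code unit is 2 bytes.
for s in ["A", "\u00e9", "\u4f60", "\U0001F600"]:
    units = len(s.encode("utf-16-le")) // 2
    print(f"U+{ord(s):04X} -> {units} code unit(s)")
# 'A', 'é', and '你' each take one code unit; '😀' takes two.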
And that's really what it comes down to - UTF-16 was always a compatibility thing. UCS-2 was a mistake, but interfaces which worked in terms of 16-bit strings were built for it. It was necessary to retrofit a larger character space on to them somehow.
And ultimately, compatibility is also the reason why Swift's String initially chose UTF-16 as its native encoding - for compatibility with NSString, NeXTSTEP's UCS-2 string type.
And that's also why UTF-8 is better - if you're paying the cost of a variable-width encoding anyway, there's no longer any point to 16-bit code units.
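To make the trade-off concrete, here is a small Python comparison of encoded sizes (the sample strings are arbitrary choices, not from the original text):

```python
samples = {
    "ASCII": "hello, world",
    "Greek": "\u03b3\u03b5\u03b9\u03ac \u03c3\u03bf\u03c5",      # γειά σου
    "CJK":   "\u3053\u3093\u306b\u3061\u306f",                    # こんにちは
    "Emoji": "\U0001F600\U0001F600",
}
for name, s in samples.items():
    u8 = len(s.encode("utf-8"))
    u16 = len(s.encode("utf-16-le"))
    print(f"{name:6} utf-8: {u8:3} bytes   utf-16: {u16:3} bytes")
# ASCII text doubles in size under UTF-16; BMP CJK text is actually
# smaller in UTF-16 (2 bytes/char) than in UTF-8 (3 bytes/char);
# astral characters like emoji take 4 bytes in both.
```

So UTF-16 still wins on size for some scripts; the argument for UTF-8 is that once every basic text operation must parse flags anyway, ASCII compatibility and a single universal byte-oriented format matter more than those savings.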