I wasn’t proposing adding “Byte,” etc. now; I was asking why the originators didn’t go for width-agnostic names for the base numeric types way back when, instead of what we do have. I know that nowadays the only violators would be embedded systems that start off using 16 or 32 bits for everything. We’re more likely to move up to 128 bits than to any non-power-of-2 width in the future.
(I don’t know what names we would use besides the “Byte” and “Int” ones. Since the “Int” type is supposed to move up with processor improvements, I guess we should go with “Short,” “ShortShort,” etc. instead of multiple “long”s like C++ did.)
If we had value-based generic arguments, we could have had something like a set of “Integer&lt;width: Int&gt;” types, with “exact” and “at least” variants. We would still need “Byte” and “Int” unless we provide constants for the environment’s minimum and optimized bit widths (and even then, the type aliases would be handy).
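A rough sketch of what that could look like. This is hypothetical syntax, not valid Swift — the language has no value-based generic arguments, and the names “Integer,” “IntegerAtLeast,” “minimumWidth,” and “optimizedWidth” are all invented here for illustration:

```swift
// Hypothetical: integers parameterized by a bit-width value.
struct Integer<width: Int> { /* exactly `width` bits */ }
struct IntegerAtLeast<width: Int> { /* narrowest native type with at least `width` bits */ }

// Hypothetical environment constants:
// let minimumWidth: Int    // e.g. 8 on octet machines, 9 on a 36-bit word machine
// let optimizedWidth: Int  // e.g. 64 on current desktop processors

// The familiar names would survive as the handy aliases mentioned above:
// typealias Byte = Integer<width: minimumWidth>
// typealias Int  = IntegerAtLeast<width: optimizedWidth>
```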
On Jun 19, 2016, at 1:04 AM, Chris Lattner <email@example.com> wrote:
On Jun 17, 2016, at 1:01 PM, Daryle Walker via swift-evolution <firstname.lastname@example.org> wrote:
When I first looked into Swift, I noticed that the base type was called “UInt8” (and “Int8”) and not something like “Byte.” I know modern computers have followed the bog-standard 8/16/32(/64) architecture for decades, but why hard-code it into the language/library? Why should 36-bit processors with 9-bit bytes, or processors that start at 16 bits, be excluded right off the bat? Did you guys see a problem with how (Objective-)C(++) had to define its base types in a mushy way to accommodate the possibility of non-octet bytes?
Given that there are no 9-bit byte targets supported by Swift (or LLVM), it would be impossible to test that configuration, and it is well known that untested code doesn’t work. As such, introducing a Byte type which is potentially not 8 bits in size would only add cognitive overload. Any promised portability benefit would simply mislead people.
Mac, Internet, and Video Game Junkie
darylew AT mac DOT com