Found this difference:
let x: UInt64 = 1
print(type(of: 1 + x)) // UInt64
print(type(of: 1 << x)) // Int
Is the second outcome a bug or a feature? I expected to get UInt64
there as well.
Feature! The bits in a shifted value come entirely from the left-hand side, and the limits on the shift also come from the left-hand side, so the result type always matches the left-hand side. This lets you use Int8 fixed fields to describe shifts on UInt64 values, which is perfectly fine since the maximum useful shift amount there is “64”.
This happens because the plus operator is defined in the standard library as:
static func + (lhs: Self, rhs: Self) -> Self
While the left shift operator is defined in the standard library as:
static func << <RHS>(lhs: Self, rhs: RHS) -> Self where RHS : BinaryInteger
For the plus operator: lhs, rhs, and the operator's result must have the same type, so 1 + x is inferred to have the type UInt64 since x is of type UInt64.
For the left shift operator: rhs doesn't have to have the same type as lhs and the function's result, so 1 << x is not inferred to have the type UInt64. Instead, 1 defaults to being an Int, so the type of 1 << x is Int.
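To spell that out with a quick sketch: you can get UInt64 back by pinning down the left-hand side (or the result type), since Self is what determines the result type:
let x: UInt64 = 1
print(type(of: 1 << x))             // Int: the literal lhs defaults to Int
print(type(of: (1 as UInt64) << x)) // UInt64: lhs pinned to UInt64
let y: UInt64 = 1 << x              // also UInt64, inferred from the result type
print(type(of: y))                  // UInt64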
It took me quite a few readings of this to grok it, for whatever reason. For the benefit of anyone else that has the same mental stumble, @jrose is just saying that if your value is e.g. UInt64 then you typically don't need to represent a shift larger than 64 - the number of bits in UInt64 - so it's convenient that you don't have to use a whole UInt64 to store that shift amount; you can use an Int8 and save space. Int8 can still represent values larger than 64, so it's not perfectly sized in that sense, but it's still better than a larger integer type.
Re. memory usage you could always just have used e.g. Int8 and upcast it to UInt64 for use as the shift amount, but since Swift doesn't have implicit integer conversions that would be annoying.
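As a small sketch of that convenience: because of the heterogeneous signature, a stored Int8 count can be applied to a UInt64 directly, with no conversion at the call site:
let count: Int8 = 12           // a small stored shift amount
let value: UInt64 = 0xFF
let shifted = value << count   // heterogeneous shift: UInt64 << Int8, no widening needed
print(type(of: shifted))       // UInt64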
“How big the shift count can be” is a little bit of a red herring. The real thing in favor of heterogeneous shifts is that when you’re doing arithmetic to generate shift counts, it is generally most natural for the counts to be typed Int (both because Swift privileges Int somewhat, but also because you generally want them to be signed, either because you need to reason about negative shifts, or just to prevent intermediate computations from trapping). Being able to use smaller types for stored counts directly is just a nice perk.
Swift also makes << do the right thing for negative shift counts, which often makes logic around variable shifts much easier; you don’t need to write a branch in your source code to handle the other alternative.
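For example, with the standard library's "smart shift" semantics a negative count just shifts the other way:
let v: UInt8 = 0b0001_0000
print(v << 2)            // 64
print(v << -2)           // 4, same as v >> 2
print(v << -2 == v >> 2) // true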
I was also thinking of shifts being a sort of low-level thing that showed up when trying to implement algorithms exactly as specified, and one of the ways that comes up is dealing with binary encodings that try very hard to save space—so, less about the in-memory use than a canonical file format. Even then Steve’s right that you could use Int as you unpack from such a binary encoding though; as long as you do it near the shift, the compiler will see the limited range regardless.
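A rough sketch of that unpacking pattern (the two-byte "format" here is made up purely for illustration): widen the stored count to Int right next to the shift, and its stored width stops mattering:
let encoded: [UInt8] = [3, 0xF0]   // hypothetical layout: [shift count, payload]
let shift = Int(encoded[0])        // widen the stored count near the shift
let payload = UInt64(encoded[1])
print(payload >> shift)            // 30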
Sorry for a possibly naĂŻve question: why didn't we do the same for all other ops like +, -, etc.?
Because the shift count and the thing being shifted are different sorts of things in general, so there’s no reason for them to have a same-type constraint. Things being added or subtracted are usually the same sort of thing.
And to elaborate, it seems like it's merely in keeping with Swift's design choice to not do implicit casts.
If you could add e.g. Int and Int8 implicitly, you're essentially upcasting implicitly (just in a haphazard way that only works in the exact cases where someone happened to define a compliant operator overload).
Or potentially worse - if you always force conformance to the left-hand side - you might be implicitly downcasting (e.g. from Int to Int8), which is usually dangerous and incorrect.
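A quick sketch of what that looks like today, since Swift has no implicit integer conversions:
let a: Int = 1
let b: Int8 = 2
// let c = a + b    // does not compile: no usable '+' accepts (Int, Int8)
let c = a + Int(b)  // the widening must be written out explicitly
print(c)            // 3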
That said, Swift isn't completely consistent about this, because it does allow (essentially) implicit type conversions for some operators, like BinaryInteger's >. I assume the judgement there was that those are okay because they have no failure modes (e.g. comparing two binary integers cannot fail, irrespective of their relative sizes, and it's essentially automatically part of the implementation anyway, because in the generic case it just involves comparing their words, which is a 'normalised' unlimited-width form).
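For reference, those heterogeneous comparisons look like this (a small sketch):
let i: Int = -1
let u: UInt64 = 1
print(i < u)          // true: mixed-type comparison, correct even across signedness
print(UInt8.max > i)  // true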
I remember older computers could do "Int16 * Int16" to get Int32 results more efficiently than "Int32 * Int32"; is this no longer the case nowadays, such that it's not worth baking such peculiarities into general-purpose programming languages like Swift?
The rationale for allowing it for comparisons is that there is an unambiguous result type for such a comparison (Bool). For arithmetic operators, there isn't always a result type capable of representing all possible results, nor is there an obvious rule for how you would choose one automatically even in cases where such a type exists (what should the result type of Int + UInt be?), nor is it clear that we would want to try to encode such a rule via generic constraints (C and C++ fall back on the "usual arithmetic conversions," but those are (a) a notorious source of bugs (b) hardcoded into the compiler, rather than expressed in a generics system; most of the ways you might try to do this with generic constraints would become an even more notorious source of bugs in the face of user-defined conformances).
I suspect that we will end up adding heterogeneous arithmetic operations, because it's not always trivial to implement them correctly yourself, and experience has shown that having checked heterogeneous arithmetic is occasionally very valuable for correctness reasons (see clang's __builtin_add_overflow and friends). But I don't think that we'll use the operators to do so (they would either be methods or use a macro), and we would require you to specify the result type (explicitly or implicitly).
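A rough sketch of what a checked heterogeneous operation could look like if written by hand today; the name and shape here are made up for illustration, not a proposed API:
// Hypothetical helper: add two integers of possibly different types and
// produce the result in an explicitly chosen type, or nil on overflow.
func addingExactly<A: BinaryInteger, B: BinaryInteger, R: FixedWidthInteger>(
    _ a: A, _ b: B, as _: R.Type
) -> R? {
    guard let a = R(exactly: a), let b = R(exactly: b) else { return nil }
    let (sum, overflow) = a.addingReportingOverflow(b)
    return overflow ? nil : sum
}
print(addingExactly(Int8(100), UInt64(200), as: Int16.self) as Any)  // Optional(300)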
When you write:
let a: Int16 = ...
let b: Int16 = ...
let c = Int32(a) * Int32(b)
the compiler knows that the two operands are sign-extended from Int16 and can use the cheaper multiplication operation when targeting a platform where one exists.
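For what it's worth, FixedWidthInteger also provides multipliedFullWidth(by:), which yields the double-width product without writing the conversions yourself (a small sketch):
let p: Int16 = 300
let q: Int16 = 300
let full = p.multipliedFullWidth(by: q)             // (high: Int16, low: UInt16)
let wide = Int32(full.high) << 16 | Int32(full.low) // recombine (fine here; care needed for negative products)
print(wide)                                         // 90000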
I do see some appeal, though, in permitting let c: Int32 = a * b.
Admittedly, in the absence of an explicit type for c it'd then be ambiguous as to what the author's intent is.
However, I think there's a good argument to be made that it should in fact be Int32, because that makes the arithmetic safe. If the author wants Int16, having to explicitly specify that - and thus at least somewhat opt into the possibility of crashing - seems not just reasonable but strictly better than what we have today, where there is no opt-in. Currently you can only opt out with e.g. wrapping versions (where applicable), which lots of people don't know about, and even those that do sometimes abuse them (e.g. thinking they're more efficient because they don't check for overflow, which is not necessarily the case, and not necessarily the right thing to do if their algorithm isn't actually tolerant of wrapping).
(this is presumably just musing, as I assume this is too late to change in Swift)
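To make the trade-off being described concrete, here's today's behaviour in a small sketch:
let a: Int16 = .max
// a + 1              // would overflow Int16: checked arithmetic traps rather than wrapping
let wrapped = a &+ 1  // the wrapping '&+' silently wraps instead
print(wrapped)        // -32768 (Int16.min)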
Int32 would definitely be worse than what we have today (because then operations on that Int32 would promote to Int64, that would statically promote to Int128, and there'd be a huge loss of efficiency that went with that). Any set of static non-value-aware rules for arithmetic type promotion is pretty much doomed to either accepting wrapping/trapping or exponential blowup.
Most languages that do not have systems programming as a goal should "simply" use arbitrary-precision integers by default and avoid this mess. The best idiom(s) for programming languages intended for constrained environments is an interesting research problem. Swift has a pragmatic solution, but I'm not sure I would argue that it's the best possible.
Yep. Some languages promote to unsigned in this case but that's arbitrary. Another option is to leave it ambiguous. Ditto for Int8 + Int, etc., or perhaps even for Int + Int:
var x: Int16 = ...
var y: UInt16 = ...
let z = x * y // 🛑
let z: Int16 = x * y // âś… could trap
let z: UInt16 = x * y // âś… could trap
let z: Int64 = x * y // âś… can't trap
var w: Int16 = ...
let z = x + w // 🛑
let z: Int16 = x + w // âś… could trap
let z: Int32 = x + w // âś… can't trap
I tried to model the above "heterogeneous" approach. With the following setup:
protocol N { associatedtype D }
struct MyInt16: N { typealias D = MyInt32 }
struct MyInt32: N { typealias D = MyInt64 }
struct MyInt64 {}
var i16 = MyInt16()
var i32 = MyInt32()
func * (lhs: MyInt32, rhs: MyInt32) -> MyInt32 { fatalError("A") }
func * (lhs: MyInt32, rhs: MyInt32) -> MyInt64 { fatalError("B") }
let x = i32 * i32 // 🛑 Ambiguous use of operator '*' ✅
I am getting the expected (and desired) ambiguity error.
However with a generic version:
func * <T: N> (lhs: T, rhs: T) -> T { fatalError("C") }
func * <T: N> (lhs: T, rhs: T) -> T.D { fatalError("D") }
let y = i16 * i16 // 🤔 ?!
I am not getting an ambiguity error... as if the compiler doesn't care which of the two implementations to choose. Is this expected behaviour or a bug?
Which version does it actually use?
Only if the default, safest paths were used. That's simply following the principle that Swift itself tends to follow, that code should be safe unless explicitly annotated otherwise (albeit with a stronger definition of 'safe' than Swift typically uses currently).
The unsafe (crashy) versions would still be available, one way or another. Either as overloads or by requiring explicit downcasting. e.g.:
let a: Int16 = 1
let b: Int16 = 2
let c = a + b // Inferred as Int32. Completely safe.
let d: Int16 = a + b // Unsafe - crashes on overflow - but still available.
let e = Int16(a + b) // Equivalent to the above.
It could also compose well, I suspect, with more explicit safety markings for arithmetic. e.g.:
let f = try! a + b // Inferred as Int16, since only that overload can throw.
(for a hypothetical world where the unsafe arithmetic operators throw instead of crashing)
I doubt Swift will ever consider changes of this magnitude, irrespective of their merits, so I'm not trying to pitch it. I'm just hoping to see this pursued in successor languages.
It's a wild-west Russian roulette:
// as originally written
func * <T: N> (lhs: T, rhs: T) -> T { fatalError("C") } // âś…
func * <T: N> (lhs: T, rhs: T) -> T.D { fatalError("D") }
let y = i16 * i16
// the two lines in different order:
func * <T: N> (lhs: T, rhs: T) -> T.D { fatalError("D") }
func * <T: N> (lhs: T, rhs: T) -> T { fatalError("C") }
let y = i16 * i16 // 🛑 Ambiguous use of operator '*'
// if remove ": N" from the `T,T -> T` version:
func * <T> (lhs: T, rhs: T) -> T { fatalError("C") }
func * <T: N> (lhs: T, rhs: T) -> T.D { fatalError("D") } // âś…
// ditto if the two lines are in a different order in this case
let y = i16 * i16
func * <T> (lhs: T, rhs: T) -> T { fatalError("B") }
func * <T: N> (lhs: T, rhs: T) -> T { fatalError("C") } // âś…
func * <T: N> (lhs: T, rhs: T) -> T.D { fatalError("D") }
let y = i16 * i16
The third is expected from the rule that “more constrained = better choice”. The fourth is equivalent to the first under that same rule. But the first and second should definitely be consistent and they are not.
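For reference, here's the "more constrained = better choice" rule in isolation, separate from the operator overloads above (a minimal sketch):
func describe<T>(_ x: T) -> String { "unconstrained" }
func describe<T: Equatable>(_ x: T) -> String { "constrained" }
print(describe(1))   // "constrained": the more-constrained overload is preferred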