Has this always worked in Swift or has some change been introduced recently?
var x: UInt16 = 0xffff
var y: Int16 = -1
if x == y // I expected a compiler error here
{
    print("x == y")
}
else
{
    print("x != y")
}
In Playground, the above prints x != y because, obviously, 65535 is not -1. However, I am writing a 16-bit emulator at the moment, and quite often I want -1 to be the same as 65535.
Now, I’m not saying that the comparison behaviour should change. I’m quite happy to have to write
if x == UInt16(bitPattern: y)
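As a quick sketch of what that conversion does, the bitPattern initializers just reinterpret the same 16 bits in both directions:

let unsigned = UInt16(bitPattern: y)        // 0xffff, i.e. 65535
let roundTrip = Int16(bitPattern: unsigned) // -1 again: the same 16 bits, read two ways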
What irritates me is that I could swear that, in previous versions of Swift, forgetting the type conversion would flag an error at compile time.
I’m also interested in the performance implications. For the first comparison to work, presumably there must be conversion code to convert both variables to a type that can contain both values.
It is pretty efficient, especially in concrete contexts, partially because you do not actually have to convert both to a common type to compare them. For example, in your UInt16 == Int16 example there are at least two strategies that a compiler can use:
1. Promote both to Int32 and compare. This would be equivalent to the Swift code:
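func equal(a: UInt16, b: Int16) -> Bool {
    // Int32 can represent every UInt16 and every Int16 value exactly,
    // so widening both operands before comparing is always correct.
    Int32(a) == Int32(b)
}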
2. Observe that if the Int16 is negative, it cannot be equal to any UInt16 value, and if it is non-negative, then it is also representable as UInt16. This is equivalent to the following:
func equal(a: UInt16, b: Int16) -> Bool {
    b >= 0 && a == UInt16(bitPattern: b)
}
My recollection is that the compiler mostly uses #2 currently, but there are some circumstances where #1 would be a better option. Happily, neither one of these is very expensive in practice (modern CPUs have such wide integer execution resources that they are often nearly free), and the second approach does not require the existence of any type not used in the original expression. There are probably some opportunities for further optimization work here, but the wins will mostly be fairly small.
All that said, homogeneous comparisons are simpler and should be preferred when doing so doesn't introduce any additional program complexity.
I would note that long experience shows that when a language does not provide heterogeneous comparisons, people will implement them themselves and get them subtly wrong. There is also the C and C++ situation, where the language permits them, but the integer promotion rules mean that they do not produce a (mathematically) correct comparison result, which is probably the worst of all possible choices.
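To make that concrete, here's a minimal sketch of a heterogeneous integer comparison that gets the mathematics right, written generically over BinaryInteger. The function name and formulation are illustrative only, not the standard library's actual implementation:

func equal<T: BinaryInteger, U: BinaryInteger>(_ a: T, _ b: U) -> Bool {
    // T(exactly:) fails precisely when b's value is not representable in T,
    // and a value outside T's range can never equal any value of type T.
    guard let converted = T(exactly: b) else { return false }
    return converted == a
}

equal(UInt16.max, Int16(-1))   // false: -1 is not representable as UInt16
equal(UInt16(42), Int16(42))   // true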
It's quite "shallow", and, being an adherent of the ninth commandment, I'd rather we didn't have that feature at all. Each of these is a compilation error:
var x: UInt16 = 0xffff
var y: Int16 = -1
(x, x) == (y, y) // ❌
x + y // ❌
These examples all seem to be premised on thinking that UInt16 and Int16 are structural subtypes of some Equatable super type. That would certainly be a reasonable design (if inexpressible in current Swift). But you can also define == without conforming to Equatable or even having a common type, so there's no real reason to expect any of these to work.
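For instance, here's a minimal sketch (the Meters/Centimeters types are made up purely for illustration) of a heterogeneous == defined as a plain free function, with no Equatable conformance and no common type involved:

struct Meters { var value: Int }
struct Centimeters { var value: Int }

// Neither type conforms to Equatable; == here is just an ordinary free function.
func == (lhs: Meters, rhs: Centimeters) -> Bool {
    lhs.value * 100 == rhs.value
}

Meters(value: 1) == Centimeters(value: 100)   // true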
I.e. there is no false witness. You may be coveting your neighbor's type system.
Sure, although there is something "off" when there's an == without a corresponding Equatable, and exceptions like these add a great deal to the language's complexity...
Would this compile?
[1 as Int16] == [1 as UInt16]
or this?
class C {
    func foo(_ v: Int16) {}
}
class D: C {
    override func foo(_ v: UInt16) {}
}
Is the answer different if we change the types to Double and CGFloat?
I won't bet my kidney before actually trying those...