Unlike in some other languages, Swift's Bool is not Comparable. Why is that?
I'm not questioning the current behavior as I agree that it doesn't make sense to compare true to false in an ordinal manner, I'm just looking for a valid professional explanation.
Actually, I would very much like Bools to be Comparable.
When I'm comparing two objects, sometimes I like to give preference to the one for which a property is true, so I would like true > false.
For instance, it's nice to sort messages by whether they are flagged, to list the flagged ones first.
I find this especially useful when sorting on multiple criteria. Swift's standard library even provides comparison operators for tuples whose components are Comparable. So sorting messages on, say, (isFlagged, dateOfMessage) would be simple and very useful.
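To make the wish concrete, here is a minimal sketch of the flagged-first sort described above. `Message` is a hypothetical type invented for illustration; since Bool isn't Comparable, the flag is compared by hand:

```swift
import Foundation

// Hypothetical Message type, for illustration only.
struct Message {
    var isFlagged: Bool
    var dateOfMessage: Date
}

let messages = [
    Message(isFlagged: false, dateOfMessage: Date(timeIntervalSince1970: 200)),
    Message(isFlagged: true, dateOfMessage: Date(timeIntervalSince1970: 100)),
    Message(isFlagged: true, dateOfMessage: Date(timeIntervalSince1970: 50)),
]

// Flagged messages first, then oldest first within each group.
// If Bool were Comparable (with false < true), the first branch could
// simply be `a.isFlagged > b.isFlagged` via a tuple comparison instead.
let sortedMessages = messages.sorted { a, b in
    if a.isFlagged != b.isFlagged {
        return a.isFlagged  // a is flagged, b is not: a comes first
    }
    return a.dateOfMessage < b.dateOfMessage
}
```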
Yeah, you could make the same argument that it doesn't make sense to compare strings in an ordinal manner.
"apple" < "orange"
"Apple" < "apple"
For many data types, the order is basically arbitrary, but it can still be useful to define a default way of sorting a list of values.
(And by the way, String's comparison function has nothing to do with human/cultural expectations -- that's a whole other mountain of complexity called collation)
to me, this is a problem because of generics. a type like A<T> can only conditionally conform to Comparable once, which rules out the possibility of something like A<Bool>. you can try defining a custom boolean-backed enum type that is Comparable, but that's just a really bad idea in practice.
IMHO we should go one step further and have a default autogenerated Comparable conformance for a struct whose fields are all Comparable. In many cases that would give the desired outcome; in the cases where it doesn't, we'd override "static func <".
Yes. I've done that and also just extended Bool - depending on the context.
Still, I'm curious about the argument for not making it Comparable in the first place.
in my opinion, not really, for at least two reasons:
a struct can't be (easily) switched upon; you need to replicate the shape of the enum with static vars, which clutter the API and aren't readily discoverable. and you won't get exhaustive switching.
a struct doesn't benefit from a compiler-synthesized Comparable conformance.
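The struct-based workaround described above might look like the following sketch (the `Flag` type and its member names are hypothetical). It shows both drawbacks: static vars mimicking enum cases, and a switch that can never be exhaustive:

```swift
// A boolean-backed struct mimicking a two-case enum via static vars.
struct Flag: Comparable, Hashable {
    private var rawValue: Bool

    static let off = Flag(rawValue: false)
    static let on = Flag(rawValue: true)

    static func < (lhs: Flag, rhs: Flag) -> Bool {
        !lhs.rawValue && rhs.rawValue  // off < on
    }
}

func describe(_ f: Flag) -> String {
    switch f {
    case .off: return "off"
    case .on: return "on"
    // Unlike an enum, the compiler cannot prove these two cases are
    // exhaustive, so a default is required.
    default: return "unreachable"
    }
}
```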
I guess this problem is also related to the fact that we currently do not provide compiler-synthesized Comparable conformance for structs.
Imagine we provided one, the same as Codable, when all the properties are Comparable:
struct A: Comparable {
var a: Int
var b: Double
}
Since Int and Double are both Comparable, we synthesize the implementation here.
But when we add a Bool property, which is trivial and common, the synthesized implementation disappears, because Bool does not conform to Comparable.
What's the sort order, for Bool? Is true greater or less than false?
For the other examples - strings, numbers, etc - there's a prior convention that's far more general than Swift, or even programming. For numerics it's by definition - 1 is less than 2 - and for strings, while there is some complexity around localisation and some character sets, there's essentially also strong definitions (just many of them, mostly but not entirely compatible).
For booleans, though, I can't recall any convention, let alone hard rules, from my distant academic studies in mathematics. Simply, what's the sort order between heads and tails? Left and right? On and off?
I'm not trying to oppose the pitch, to be clear, I'm just pointing out that booleans seem distinct from existing Comparable primitive types.
It seems like booleans are more context-sensitive in this regard than things like numbers. I wonder if the problem isn't Comparable conformance, but Swift's arrangement that comparability is tied to types to begin with, rather than uses of those types?
I generally like to argue this (that Bool is inherently symmetrical) and a colleague finally pinned me down by pointing out that the relative precedence of && and || suggests that && is more multiplication-like and || is more addition-like, which in turn implies that true, the identity for &&, is more 1-like, and false, the identity for ||, is more 0-like. Which, conveniently, is how most lower-level environments like to define them, including C (but excluding shell scripts, unfortunately, due to error codes having good reason to use 0 as success). So if Bool were Comparable, I'd expect false < true for that reason, even though I suspect the actual reason would be "we're using 0 and 1 underneath, and also diverging from C here would have to have some really strong reasons".
…though note that I've also heard it argued that -1 is a better true, since in two's-complement representation it is the bitwise negation of 0.
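A quick illustration of the identity elements mentioned above, plus the two's-complement aside:

```swift
let xs = [true, false, true]

// `true` is the identity for && (multiplication-like, so 1-like);
// `false` is the identity for || (addition-like, so 0-like).
let conjunction = xs.reduce(true) { $0 && $1 }   // false
let disjunction = xs.reduce(false) { $0 || $1 }  // true

// The aside: in two's complement, -1 is the bitwise negation of 0.
let allOnes: Int8 = ~0  // -1
```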
I, personally, am overall neutral on whether Bool should be Comparable. If it is, there will always be times you want to invert the ordering, and having it not be Comparable forces you to think about that. But also, having it be Comparable makes it easier to impose a total order on composite keys when it doesn't really matter which comes first, which is relevant for the common boilerplate struct implementation of "put all stored properties in a tuple and compare them that way".
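That tuple boilerplate reads roughly like this (`Event` is a hypothetical type); note that it stops compiling the moment a Bool stored property joins the tuple:

```swift
struct Event: Comparable {
    var priority: Int
    var name: String
    // var isUrgent: Bool  // adding this to the tuples below would not compile

    static func < (lhs: Event, rhs: Event) -> Bool {
        // The standard library defines `<` for tuples of Comparable
        // elements (up to arity 6), comparing lexicographically.
        (lhs.priority, lhs.name) < (rhs.priority, rhs.name)
    }
}
```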
IMO, Bool.true is greater than Bool.false on most platforms Swift lives on. (Because we use 1 and 0 to represent them under the hood, which is compatible with C.)
For any future Swift platform where we use -1 to represent Bool.true, that's a new topic to discuss.
Which is not the case, right? So we are talking about two "missing" features: Bool conformance to Comparable and Comparable auto-synthesising for structs.
There are precedents in other languages that have "false < true" (Pascal springs to mind).
Yes. I mean the "missing" feature of Comparable auto-synthesis for structs. Even if we had such a feature, we would still be blocked in many cases by the fact that Bool does not conform to Comparable.
Synthesizing Comparable for structs has been discussed in the past, both in the context of SE-0185 and I believe also during SE-0266. If I recall, the main point of contention was around surprising behavior caused by reordering stored properties. For Equatable and Hashable conformance, the order of the properties doesn't matter with respect to correctness:
Reordering properties may allow a synthesized Equatable implementation to become more efficient (by putting a slower-to-compare property toward the end), but ultimately those operations are order-insensitive.
The exact hash values computed by a synthesized Hashable implementation may be order-sensitive but except in degenerate cases the order should not affect the correctness of the overall set of possible values.
Comparable, on the other hand, is by definition order-sensitive.
There are no other (or vanishingly few other) places in Swift where reordering stored properties (a source-compatible modification*) causes an observable change in runtime behavior. It would be very surprising (and the source of potentially hard-to-find bugs) for a simple refactoring operation to have that effect.
* The order of arguments to a synthesized memberwise initializer comes to mind as an exception, but even that would cause a build failure rather than a silent change in runtime behavior.
With Swift now supporting macros, a better approach might be to have a macro that can synthesize Comparable, but requiring the client to explicitly list, in order, the properties that should be compared.
...and this kind of explicit list would likely require that the type of each listed property is Comparable, which further suggests that Bool should be.
The ordering of true/false, and of composite types that include a Boolean, depends heavily on the design choices made about what the Boolean value represents. Defining a type's Comparable conformance requires understanding its meaning in a way that Equatable does not.
For example, is a default-generated Comparable conformance for this type correct? What if the author had chosen to model isNegative instead?
struct SignedNumber<T: UnsignedInteger>: Comparable {
var magnitude: T
var isPositive: Bool
}
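For contrast, one hand-written conformance that respects the intended semantics of this type might look like the following sketch (note it still treats "+0" and "-0" as distinct values, a wrinkle a real implementation would need to address):

```swift
struct SignedNumber<T: UnsignedInteger>: Comparable {
    var magnitude: T
    var isPositive: Bool

    static func < (lhs: Self, rhs: Self) -> Bool {
        switch (lhs.isPositive, rhs.isPositive) {
        case (false, true):  return true   // any negative < any positive
        case (true, false):  return false
        case (true, true):   return lhs.magnitude < rhs.magnitude
        case (false, false): return lhs.magnitude > rhs.magnitude  // reversed
        }
    }
}
```

A field-order synthesis comparing magnitude first would get this wrong: it would rank a value of magnitude 5 with isPositive == false (i.e. -5) above one of magnitude 3 with isPositive == true (i.e. +3).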
This is tricky even without Boolean values. In this type, a "smaller" (earlier) birthday value represents a larger semantic age value:
struct Age: Comparable {
var birthday: Date
var currentAge: Int { /* some calculation */ }
}
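A matching hand-written conformance for the Age example, where an earlier birthday means a greater age, might look like this sketch:

```swift
import Foundation

struct Age: Comparable {
    var birthday: Date

    static func < (lhs: Age, rhs: Age) -> Bool {
        // An earlier birthday means an older person, so the
        // comparison on the stored property is reversed.
        lhs.birthday > rhs.birthday
    }
}
```

Here a synthesized conformance that simply compared `birthday` ascending would order ages backwards, which is exactly the semantic trap described above.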