We have a bunch of model structs that mostly contain other models, Doubles, and Ints.
With Swift 4.1 we get automatic Equatable implementations, which is great. But a bunch of our unit tests now fail because comparing two Doubles with == fails, even though their values are as equal as is practically possible given the circumstances in which they are produced.
To loosen the equality requirement a bit, I would like to do something like this:
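The snippet itself isn't shown here; judging by the 0.000001 threshold discussed in the answers, it was presumably a tolerance-based `==` on `Double` along these lines (a hypothetical reconstruction, not the asker's actual code; the tolerance value is assumed):

```swift
import Foundation

// Hypothetical reconstruction: shadow the standard == for Double
// with a fixed absolute tolerance (the 0.000001 value is assumed).
// A definition in the current module takes precedence over the
// standard library's ==, which is why this "works" in unit tests.
extension Double {
    static func == (lhs: Double, rhs: Double) -> Bool {
        return abs(lhs - rhs) < 0.000001
    }
}
```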
I am sure this is a bad idea for a number of reasons. It does not help in a playground but it actually seems to make the unit tests pass.
One solution is to manually implement Equatable on all the model structs, but that is also a bad idea, for all the reasons that motivated the automatic synthesis of Equatable in the first place.
Any ideas on a smarter route?
For reference a couple of our model structs below:
struct GeoCircle: Codable, Equatable {
    let center: GeoPoint
    let radius: Double
}

struct GeoPoint: Codable, Equatable {
    let latitude: Double
    let longitude: Double
}
You could define your own type for those comparisons:
struct Double00001: Equatable, ExpressibleByFloatLiteral {
    var value: Double

    static func == (lhs: Double00001, rhs: Double00001) -> Bool {
        return abs(lhs.value - rhs.value) < 0.000001
    }

    init(floatLiteral value: Double) {
        self.value = value
    }
}

let a: Double00001 = 0.1
let b: Double00001 = 0.09999
print(a == b) // false: the difference (0.00001) exceeds the 0.000001 tolerance
Sadly, you can't use constants as generic parameters yet -- this would make it quite convenient to declare floating point types that can be compared with a custom tolerance.
We considered the solution of wrapping in a struct. But that would require every use site to add a “.value”, which makes the code less readable and a bit obscure.
We also considered a custom operator, but again, we need to compare deeply nested structures, arrays, and dictionaries.
I can't help but ask: what is the significance of "now" in the above statement? Did existing tests start failing when Equatable was synthesized in 4.1? I can't think of a reason why that should be so.
Similarly, I wonder: is this humorous understatement? This would be a really, really terrible idea. Apart from the fact that none of your code could compare small values for equality (in other contexts where small differences matter), it could fail with large values, if the representation inaccuracy was above the 0.000001 threshold.
You could try testing the significand rather than the absolute value, but that would leave edge cases in the neighborhood of 0 and 1.
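One way to read "testing the significand" is as a relative-error comparison, where the tolerance scales with the magnitude of the operands. A sketch (the `roughlyEqual` name and the `relTolerance` default are assumptions):

```swift
import Foundation

// Sketch: compare by relative error instead of absolute difference.
// The tolerance scales with the operands' magnitude, so it behaves
// sensibly for both large and small values -- except near zero,
// where the relative error blows up (one of the edge cases noted above).
func roughlyEqual(_ a: Double, _ b: Double,
                  relTolerance: Double = 1e-9) -> Bool {
    if a == b { return true }  // exact matches, including 0 == -0
    let scale = max(abs(a), abs(b))
    return abs(a - b) <= relTolerance * scale
}
```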
But even that only takes into account the representation inaccuracy (of fixed-width floating point representing real-world values). It still would be wrong if the calculation or measurement error exceeded your threshold.
The reason that fixed-width floating point doesn't come with a built-in comparison threshold is that it's an insoluble problem in general. I don't see that you have any choice but to define a custom threshold test for each of your custom types.
(Also, personally, I wouldn't use a threshold, but rather round values to a fixed-size grid, then test for equal grid positions. But I don't know the details of your use-case.)
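The grid idea might be sketched like this (the grid size of 1e-6 is an assumption):

```swift
import Foundation

// Sketch: snap each value to a fixed grid, then compare grid cells.
// Unlike a tolerance-based ==, this comparison is transitive.
func gridCell(_ value: Double, gridSize: Double = 1e-6) -> Double {
    return (value / gridSize).rounded()
}

func sameGridCell(_ a: Double, _ b: Double) -> Bool {
    return gridCell(a) == gridCell(b)
}
```

The trade-off is that two values straddling a grid boundary compare unequal even when they are arbitrarily close.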
Building on what @QuinceyMorris wrote: beyond the fact that this particular way of applying the tolerance is a bit crude, making == use a tolerance is always a really bad idea, because:
a) it makes == no longer transitive.
b) it makes == not imply substitutability.*
These are assumptions that a lot of code depends on. Breaking them causes all sorts of trouble.
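To make (a) concrete, here is a sketch of how a tolerance breaks transitivity, using the 0.000001 threshold from the question (the specific values are chosen for illustration):

```swift
import Foundation

let t = 0.000001
let a = 0.0
let b = 0.00000099   // within t of a
let c = 0.00000198   // within t of b, but not of a

// Under a tolerance-based ==, a == b and b == c would hold,
// yet a == c would not -- so == is no longer transitive.
print(abs(a - b) < t, abs(b - c) < t, abs(a - c) < t)
// prints "true true false"
```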
If you want to compare with a tolerance, use an explicit function, not ==.
[*] == already doesn't quite imply substitutability for floats because of +/-0, but in practice this distinction almost never matters. The breakage from == having a tolerance is a lot worse. I'm waving my hands a bit here, but it's really much, much worse.
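For the GeoPoint struct from the question, such an explicit function might look like this (a sketch; the `isClose(to:tolerance:)` name and the tolerance default are assumptions):

```swift
import Foundation

struct GeoPoint: Codable, Equatable {
    let latitude: Double
    let longitude: Double
}

extension GeoPoint {
    // Explicit, named tolerance check; the synthesized == stays exact.
    func isClose(to other: GeoPoint, tolerance: Double = 1e-6) -> Bool {
        return abs(latitude - other.latitude) < tolerance
            && abs(longitude - other.longitude) < tolerance
    }
}
```

In tests you would then write XCTAssertTrue(a.isClose(to: b)) instead of XCTAssertEqual(a, b), keeping the synthesized Equatable intact everywhere else.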