The root of the problem: Swift's type checker can't handle + (or any operator) chained across more than about four terms of mixed literals; past that, the compiler gives up with "unable to type-check this expression in reasonable time". Use an array plus reduce() to get around this (sketched below).
Use Measurement/UnitDuration for type-safe, readable time-duration calculations.
To get a Date from some time interval, use Calendar/DateComponents.
Read the thread for details
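A minimal sketch of those three takeaways (assuming only Foundation; the values are illustrative):

import Foundation

// 1. reduce() feeds the compiler one + at a time over an already typed
//    array, so no giant literal chain ever has to be type-checked:
let seconds = [3600.0, 1800.0, 90.0, 5.5].reduce(0, +)

// 2. Measurement/UnitDuration keeps the units explicit and type-safe:
let workout = Measurement(value: 90, unit: UnitDuration.minutes)
let asSeconds = workout.converted(to: .seconds)   // 5400.0 s

// 3. Calendar/DateComponents turns an interval into a Date without
//    hand-rolled seconds arithmetic:
let deadline = Calendar.current.date(byAdding: DateComponents(day: 3, hour: 12),
                                     to: Date())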
===================================
Original post:
No, there are some literals there that must be inferred to be Double ("from Int"), or I guess one could say there are some operators that need to be inferred to be (Double, Double) rather than, e.g., (Int, Int).
To be more precise, the more integer literals you mix in, the slower it gets, until it hits the "the compiler is unable to type-check" threshold... ok, that may not be a good takeaway. Still, try to be careful around +, -, and *. It's quite a dense and complex space, and you might need to help guide it along.
There are so many ways I want to break it, like throwing in some Double variable.
Anyway, IIRC, there's a short circuit for all-integer-literal and all-float-literal expressions. I think there's some problem with mixing float and int literals, but I'll let someone more familiar than me explain.
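For illustration (whether a given chain actually times out depends on the compiler version; these exact expressions are a sketch):

// Homogeneous literal chains resolve quickly:
let allInts = 1 + 2 + 3 + 4 + 5 + 6          // every + is (Int, Int) -> Int
let allDoubles = 1.0 + 2.0 + 3.0 + 4.0 + 5.0 // every + is (Double, Double) -> Double

// Mixing the two kinds is where the search space grows; long chains
// like this may type-check slowly, or not at all:
let mixed = 1 + 2.0 + 3 + 4.0 + 5 + 6.0 + 7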
Actually, the array literal's type is inferred to be the most-specific supertype of all the members involved. E.g. this is possible:
class A {}; class B {}      // two unrelated classes
print(type(of: [A(), B()])) // => Array<Any>
Interestingly, binding the same literal to a variable yields an error:
Untitled.swift:10:13: error: heterogeneous collection literal could only be inferred to '[Any]'; add explicit type annotation if this is intentional
let array = [A(), B()]
^~~~~~~~~~
as [Any]
That's not the problem. The expression is trivially known to consist of only numeric literals. The hard question to answer is: what concrete type should be initialized from all these literals?
You know that, and I know that. But the compiler doesn't know that. All it has is a bunch of overloads for the operators, and it has to weed out the combinations that don't work. We know that there are only definitions with homogeneous arguments, but the compiler doesn't know that in the sense of that meaning anything to the type checker.
There is no shortcut taken, and I expect that's because it's perfectly possible for someone to create an overload that takes (Double, Int), and if the compiler short-circuited the type checking and auto-magically promoted all the literals to Double, that overload would not be selected when it should be.
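For instance, with a hypothetical user-defined overload like this in scope:

// A hypothetical heterogeneous overload; nothing stops a user from writing it.
func + (lhs: Double, rhs: Int) -> Double {
    return lhs + Double(rhs)   // the inner + is the usual (Double, Double)
}

// The literal 2 can now stay at its default type, Int, and you'd expect
// this to resolve to the (Double, Int) overload rather than promoting 2:
let x = 1.5 + 2   // 3.5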
Yes, if I understand the issue correctly, the time it takes to type-check increases exponentially with the number of operators and their overloads, which is why, e.g., the following will compile quickly:
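(A reconstruction of the kind of snippet meant here, not the original; with concrete types on every operand, there is far less for the solver to explore at each step:)

let a = 1.0, b = 2.0, c = 3.0, d = 4.0, e = 5.0, f = 6.0
let sum = a + b + c + d + e + f   // every operand is already a Double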
I wonder if it wouldn’t be possible to introduce a shortcut so the compiler first tries “same type” operators, and if it is able to successfully type-check an expression using only same-type operators, then it finishes without ever needing to consider mixed-type ones.
That would be a bad choice in this situation. The literals without a decimal point type check as integers (Int) first. If there's an operator overload that takes (Double, Int), you'd expect the compiler to choose it. If you short-circuited the literal typing and made them all Double, you'd violate the principle of least surprise.
I guess it could work if the compiler could check that all possible overloads take homogeneous types, and combine that with the type of the variable to discern that the only valid typing is that type. Then the compiler could check that all literals can be promoted to that type, or fail if not.
I wonder if it wouldn’t be possible to introduce a shortcut so the compiler first tries “same type” operators
This would change the semantics of the language (which isn't to say that we can't consider it, but it's a source-breaking change, so the bar is high).
The last time I dug into this, it seemed that part of the problem was the +/- operators on Strideable, which introduce non-homogeneity into the overload set (SIMD operators are the other stdlib feature that does this, but IIRC we already have some special handling for those that makes it less of a problem).
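Roughly the shape in question (a sketch, not the exact stdlib declarations):

// T and T.Stride are different types in general (e.g. Date strides by
// TimeInterval), so an overload like this makes the + set non-homogeneous:
func + <T: Strideable>(lhs: T, rhs: T.Stride) -> T {
    return lhs.advanced(by: rhs)
}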
There are definitely a lot of promising ideas about how to resolve this (people have proposed a bunch over the years, the compiler team has a lot of good ideas themselves, and many of them can be found on this and related threads). The limitation is really not ideas about how to address it, but rather engineering time to tackle the problem in a systematic fashion, to avoid grafting on more ad-hoc shortcuts.
Correct me if I'm wrong, but I was under the impression that the promise of source compatibility was explicitly not a promise that semantics will remain consistent version-to-version. I.e., if we could change the overload resolution rules in such a way that all existing code compiles, but perhaps resolves operators in some circumstances to different choices than before, we would not have to meet the usual requirements for a source compatibility break.
Changing the meaning of source is a worse break than making formerly-valid code invalid. There's a little bit of handwaving implicit here in that we want to reserve the possibility of fixing "obvious" bugs in the compiler or standard library, but sweeping changes to language semantics should be considered source-breaking.
(Note: I am not on the core team, and am not speaking for them. I'm wearing my standard library contributor hat here.)
Yeah, that definitely makes sense. I have a vague memory of a post from a Core Team member on the forums that implied there was more leeway for "semantic breaks" than there was for source breaks, precisely because of the handwaving you mention. I won't speculate further since I can't remember any specifics.
I like this method. But once you add a fifth term, the compiler hangs for a long time and then gives up with the same "unable to type-check this expression in reasonable time" error:
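(The failing expression isn't preserved above; a representative five-term mix, and the reduce() rewrite from the summary, would be:)

// Five terms mixing integer and floating-point literals; depending on
// the compiler version, this can hit the same type-check timeout:
let interval = 1 + 2.5 + 3 + 4.5 + 5

// The workaround: the array literal is inferred as [Double] up front,
// so each step of reduce is a single (Double, Double) addition:
let total = [1, 2.5, 3, 4.5, 5].reduce(0, +)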