This is a random idea off the top of my head, so it might be utter nonsense, but what about a fallback mode specifically designed to provide diagnostics? If you hit the "cannot check in a reasonable number of steps" limit, you fall back to this other mode, which could do something similar to what users do when faced with that issue: remove parts of the code in a binary-type search and try again, probably with a much shallower limit, until it can at least track down something like "I don't know what the issue is, but I know that it's on this line" or "I know it's in this block".
This could even be something you click to initiate when you get the compiler timeout error, rather than something automatic. It could even keep refining while you watch, with some kind of live display of what it has ruled out and where the issue might still be, so that if you spot where it's headed and find the issue yourself, you can just stop it.
Of course there is always the possibility that the code is actually well formed but the type checker just can't find the proof in time, but I don't think I've ever actually seen that in practice.
I have no idea tbh how practical that idea is but hey, it's an idea!
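For concreteness, the bisection loop might look something like this. This is a toy sketch, not real compiler machinery: the `pieces` array, the `typeChecks` callback, and the function name are all invented for illustration.

```swift
// Toy sketch of the proposed fallback (all names invented, not compiler API).
// Given the pieces of an offending body, repeatedly re-run the checker on
// half of them, narrowing the blamed region; each attempt would run with a
// much shallower step limit than the normal mode.
func bisectDiagnostic(pieces: [String], typeChecks: ([String]) -> Bool) -> Range<Int> {
    var lo = 0, hi = pieces.count
    while hi - lo > 1 {
        let mid = (lo + hi) / 2
        // If the first half alone still fails to check, the problem is in it;
        // otherwise blame the second half.
        if !typeChecks(Array(pieces[lo..<mid])) {
            hi = mid
        } else {
            lo = mid
        }
    }
    return lo..<hi
}
```

Real code would need to re-check each candidate in context, since a failure can come from an interaction between both halves, which is exactly why a shallow per-attempt limit would matter.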
"" + s + "" + s + "" + s + "" + n + ""
-> String + "" + s + "" + s + "" + n + ""
-> String + s + "" + s + "" + n + ""
-> String + "" + s + "" + n + ""
-> String + s + "" + n + ""
-> String + "" + n + ""
-> String + n + ""
---^-- unexpected expression of type Int
Q2: What are the advantages of using a constraint solver?
Swift is primarily used to build apps on Apple platforms.
SwiftUI is the Apple-promoted way to build apps, is becoming more popular, and is the future.
Therefore, the Swift compiler should prioritize working well in SwiftUI.
And these facts:
Currently, in SwiftUI, the compiler is often about as useful as Siri.
AppKit, UIKit, and other frameworks have code that is often much simpler for the compiler to digest (compared to SwiftUI). The expressions are shorter and chunk-ified. There are no result builders.
A bias towards doing nothing out of fear might be appropriate if performance were currently okay. Performance is not currently okay.
I'm not sure about result builders, but at least for the "" + s ... + i case, Kotlin doesn't have an equivalent to ExpressibleBy*Literal, so it can just see that there's no overload for String + Int and stop there. There's no exponential retry with every possible type for "" because there's no other type it could be.
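For contrast, here is why Swift can't take the same shortcut: any type can adopt ExpressibleByStringLiteral and overload +, so a "" literal is not necessarily a String. A minimal illustration (the Wrapper type here is invented):

```swift
// Because any type can adopt ExpressibleByStringLiteral and overload `+`,
// the literal "count: " below is not necessarily a String, so the solver
// cannot stop at "there is no String + Int overload".
struct Wrapper: ExpressibleByStringLiteral {
    var text: String
    init(stringLiteral value: String) { self.text = value }
}

func + (lhs: Wrapper, rhs: Int) -> Wrapper {
    Wrapper(stringLiteral: lhs.text + String(rhs))
}

let n = 42
let w: Wrapper = "count: " + n   // legal: "count: " is a Wrapper here
```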
But isn't that what tests are for? Sure, there aren't enough tests to account for all the implications of a change, but if, after some change is made, we start to get a lot of complaints about something that is now worse, we know what to test; we reiterate, try another approach, and so on…
It is true that if your + operator has exactly two overloads and they have these concrete types, there is an easy strategy for figuring out the problem. In fact, if you declare your own operator, you will see the type checker solves it in linear time. Try adding repetitions of "" ++ s ++ at the beginning or ++ s ++ "" at the end:
infix operator ++: AdditionPrecedence
func ++(lhs: String, rhs: String) -> String { return lhs + rhs }
func ++(lhs: Int, rhs: Int) -> Int { return lhs + rhs }
func f() {
    let s = ""
    let n = 0
    let url = "" ++ s ++ "" ++ s ++ "" ++ s ++ "" ++ n ++ ""
    // error: cannot convert value of type 'Int' to expected argument type 'String'
}
Polymorphic literals don't work the way you're imagining; there's no enumeration of concrete types conforming to ExpressibleByStringLiteral. That is, we don't magically deduce that T has to be S here:
protocol P {}
struct S: ExpressibleByStringLiteral, P {
init(stringLiteral: String) {}
}
func f<T: P>(_: T) {}
f("") // error: global function 'f' requires that 'String' conform to 'P'
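Repeating the declarations for completeness, the error goes away once the concrete type is spelled out, because the coercion is what triggers the ExpressibleByStringLiteral conformance:

```swift
protocol P {}
struct S: ExpressibleByStringLiteral, P {
    init(stringLiteral: String) {}
}
func f<T: P>(_: T) {}

f("" as S)               // OK: the literal is coerced to S explicitly
f(S(stringLiteral: ""))  // OK: calling the initializer directly
```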
I am happy to see this brought up here as I love Swift and would trade a great many language features for fast, accurate, and reliable type checking, which is not what we have today.
Real-world effects and experience:
Just last night, things were going smoothly during my session until suddenly compilation slowed to a crawl on my M4 Max MBP. It was something simple like the forId/withID difference mentioned here, and Xcode was unable to tell me that, so I lost half an hour or so backtracking, commenting things out, etc.
It feels to me like Swift (which again I love!) offers me flexible and adaptable language features always at my disposal, but constantly charges me in time and stress for the fact that the features exist whether or not I make use of them.
Hoping and looking for solutions:
Without any clear instructions on best practices for helping the type checker out (should I explicitly type things? Should I not? Only sometimes?), I find relief in blog posts like this one, which give some benchmarks and tips like "don't use .init()" and "use String literals without an explicit String type declaration" that hopefully should speed things up and help me and the type checker stay out of each other's way.
One part of this that I hope comes across: I'm willing to change the way I write code to make things run faster! But, like many in this thread, I think I've already broken my code down, made the simplest types possible, and been explicit everywhere. So all that's left are educated guesses like the ones linked, to try to bring some reliable speed and smoothness back to the experience.
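For anyone who hasn't seen such tips in action, this is roughly the shape of rewrite they describe. The Point type is my own toy example, and whether the second form measurably helps will vary; it's the kind of change those posts recommend, not a guarantee:

```swift
// Toy example of the kind of rewrite such tips suggest (names invented).
struct Point { var x: Double; var y: Double }

// Reportedly heavier on the solver: `.init` plus literals whose
// contextual type must be inferred from the annotation.
let a: [Point] = [.init(x: 0, y: 0), .init(x: 1, y: 1)]

// Reportedly lighter: name the type at each use site and let
// inference flow outward from it.
let b = [Point(x: 0, y: 0), Point(x: 1, y: 1)]
```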
The real issue, from what I'm understanding, is that String + String doesn't always return String.
If that weren't the case, we wouldn't need a Sudoku algorithm. Instead, we'd just walk through a first pass at O(n) and test each element pair for compatibility. With that we'd immediately see that Int + String doesn't exist and fail early. But given that each operation can theoretically produce a different type, we are stuck with essentially brute force, at best testing the most likely paths first.
I'm trying to think of cases where we really need A + A -> B. I'm sure they exist, but maybe this presents an acceptable "opt out" (e.g. a compiler flag) that could reduce complexity a lot?
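One family of cases sometimes cited for homogeneous inputs with a heterogeneous result is units of measure. A hedged illustration, with types invented here rather than taken from any real library:

```swift
// (A, A) -> B in the wild: multiplying two lengths yields an area,
// not another length. Types are invented for illustration.
struct Meters { var value: Double }
struct SquareMeters { var value: Double }

func * (lhs: Meters, rhs: Meters) -> SquareMeters {
    SquareMeters(value: lhs.value * rhs.value)
}

let area = Meters(value: 3) * Meters(value: 4)
// area.value == 12
```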
Can you elaborate? The link points to (_: A, _: A) -> B but I’m not sure what the exact use case is.
I wonder how ergonomic it would be to spell those cases differently (e.g. as function calls), assuming a hypothetical language mode that disallows what I'll call heterogeneous operator types, for lack of a better term.
Ok. I’d need to think about whether that case could be excluded from the logic or not. My gut feeling tells me that A + B -> A has different implications for a fast path than A + A -> B but I’m not immediately sure.
Either way, that particular case feels very un-Swifty to me tbh. Especially because we can’t even do Int16 + Int32 in Swift. Personally I’ve always used .advanced(by: Int).
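To make that last point concrete: Swift has no implicit numeric conversions, so mixed-width arithmetic needs an explicit widening step.

```swift
let a: Int16 = 1
let b: Int32 = 2
// let bad = a + b   // error: binary operator '+' cannot be applied to
//                   // operands of type 'Int16' and 'Int32'
let c = Int32(a) + b // explicit widening makes the operands homogeneous
```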
Shouldn't the compiler be able to instantly determine the result of String + String? How is it different from any other function that takes two arguments?
But if you have Type1 + Type2 + Type3 + Type4, even if all of them are string literals, it’s not immediately obvious that Type3 + Type4 is truly String + String or if one of the previous additions has resulted in a different type.
I wonder if using more specific operator symbols would be an acceptable workaround for these kinds of issues. For example ⋅ would be a common option here.
The issue that I at least have been unable to solve easily is how to spell these operators without having to reach for copy-paste. On Apple devices you can set up letter combinations that resolve to others (like (c) for the copyright symbol), so maybe something like that. But it’s a bit of a stretch to standardise things like that for a whole team / open source project with existing tooling.
Correct me if I am wrong, but I believe the problem is not with operators but with function overloading in general. When types need to be inferred (which mostly occurs with literals) for many possible overloads, things explode. So limiting operators will not solve the issue in general. The problem with operators, I believe, is that they happen to be overloaded for a lot of types.
Possible solutions are:
Allow type inference only in the simplest expressions (initializers, etc.).
Introduce typed literals with suffixes, like 1f or 1i64, and allow users to define them for types that satisfy an "ExpressibleBySomeLiteral" protocol. (I know we have the as keyword, but in long expressions it creates unnecessary noise; most of the code in the expression ends up working around the type checker instead of expressing the actual computation. Expressions like 1f are much more ergonomic and familiar from other languages.)
Produce warnings/errors when type inference is required for overloaded functions, and suggest adding type hints. (Often it is enough to add a hint to a single literal, and the rest is easily inferred.)
Add some syntactic sugar that allows specifying the type of literals in the expression. (For example, in the famous var x: Double = 1 + 1 - (-1) ... + (1... + 1), the moment the compiler knows that 1 is a Double, it becomes trivial.)
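A small illustration of how one hint collapses the search (my own minimal example): once a single operand is pinned, every other literal in the chain is forced to the same type.

```swift
// Each literal could be Int, Double, Float, Int8, ... so the solver has
// many combinations to consider before the annotation settles them:
let x: Double = 1 + 1 + 1 + 1

// Pinning one operand directly forces the rest; the annotation on `y`
// is no longer needed and inference is cheap:
let y = (1 as Double) + 1 + 1 + 1
```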
You don't necessarily need to do it with such a big hammer as a compiler flag. I had an attempt at a PR a couple of years ago that would check whether all possible operator uses for the possible types involved in an expression were all (A, A) -> A, and if so, then all the types had to be identical and you could take a major shortcut (bind all the type variables together, so deciding on one decided them all). If you saw any possibilities where the operators weren't symmetrical, you'd fall back to the current algorithm.
I don't remember the reason why it didn't get entirely merged; I think other pieces did. And I frankly haven't looked at this part of the compiler in a long while. There have been these types of optimizations in there for a long time, and it might already be in there at this point.
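The eligibility check being described can be sketched in a few lines. This is an invented toy representation, nothing like the compiler's actual data structures: an operator chain qualifies for the shortcut only if every candidate overload, for every operator in the chain, has the shape (A, A) -> A.

```swift
// Toy model of the fast-path eligibility check (representation invented).
struct Overload { let lhs: String; let rhs: String; let result: String }

// True only if every candidate for every operator is (A, A) -> A,
// in which case all type variables can be bound together and solved once.
func allSymmetric(_ candidatesPerOperator: [[Overload]]) -> Bool {
    candidatesPerOperator.allSatisfy { candidates in
        candidates.allSatisfy { $0.lhs == $0.rhs && $0.rhs == $0.result }
    }
}

let plus = [Overload(lhs: "String", rhs: "String", result: "String"),
            Overload(lhs: "Int", rhs: "Int", result: "Int")]
let mixed = [Overload(lhs: "Meters", rhs: "Meters", result: "SquareMeters")]
// allSymmetric([plus, plus]) allows the shortcut; any `mixed` candidate
// in the chain forces the fallback to the full algorithm.
```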
Is it possible to replace the current type-inference algorithm with one that produces the correct answer in 80-90% of use cases, fails (instead of producing a wrong answer) in the others, and does so in reasonable polynomial time, while producing meaningful and helpful error messages (like pointing at the exact part of the expression whose type could not be determined), being deterministic or at least reproducible, and being optimized to work fast for common use cases? Because it still fails in a non-negligible number of cases, in an unreasonable amount of time (often several minutes), without giving any helpful hints.
I wonder about one thing: which particular set of features requires the current constraint-solver implementation? Could there feasibly be a Swift dialect that would only require a "classic" unidirectional inferencer?
If we could sacrifice something small, e.g. overloads by return type and dot shorthand, to get stable, fast, and non-frustrating type inference, then at minimum I think it is worth discussing.