Oh sure, it'd be a huge pain in that regard. However in terms of the type checker, I imagine just always treating the intermediates as Double/Float for the purposes of constraint solving, then rewriting as CGFloat later, would produce the "correct" code in all but the strangest cases because, e.g., == is exactly the same.
I realize I'm jumping to specific technical solutions, so I can shush.
It's probably *all* deeply nested closures that the type checker has trouble with. Problem is, up until SwiftUI, no one wrote deeply nested closures, so the performance problem only became evident recently.
Well, no one except whoever bothered to call sgemm_:
// Gross simplification here - rows == row stride == columns == m
// Two fewer levels of closure
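// Single-character BLAS flags packed into one buffer; opts+2 ("N") and opts+4 ("T")
// become the transa/transb arguments of the sgemm_ call below.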
let blasopts = Array<CChar>("ULNVTCFE".utf8CString.dropLast())
return a.withUnsafeBytes { (a : UnsafeRawBufferPointer) in
return withUnsafePointer(to: m){ m in let m = UnsafeMutablePointer(mutating: m)
return withUnsafePointer(to: F(1)){ fone in let fone = UnsafeMutablePointer(mutating: fone)
return withUnsafePointer(to: F(0)){ fzero in let fzero = UnsafeMutablePointer(mutating: fzero)
return blasopts.withUnsafeBufferPointer { opts in let opts = UnsafeMutablePointer(mutating: opts.baseAddress!)
a.withMemoryRebound(to: F.self) {
let a = $0.baseAddress!
// result += a * a'
sgemm_(opts+2, opts+4, m, m, m, fone, a, m, a, m, fone, resultmat.baseAddress!, m)
}
}
}
}
}
}
Make a mistake and the type checker will point at the top-level withUnsafeBytes. Another case where a human may eyeball the problem sooner than the checker can.
validation-test/Sema/SwiftUI/too_complex_source_location.swift:27:25: error: the compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions
25 | Picker(selection: $selection) {
26 | ForEach(["a", "b", "c"], id: \.self) {
27 | Text($0)
| `- error: the compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions
28 | .foregroundStyl(.red) // Typo is here
Of course, this is far from foolproof, but in some cases it might produce the diagnostic closer to the location of the actual problem.
What timing to encounter this post, after spending several days chasing an infinite-loop hang in swift::GenericSpecializer that only occurs in whole-module optimization. In Xcode 16.4 it would hang indefinitely, but in Xcode 26.1 it does not. I assume that's due to the improvements you mentioned in Swift 6.2..?
In any case, I consider this the most important work across the whole Swift toolchain, and it's very encouraging to see it given considerable time. Thank you.
This sounds like a bug in the optimizer, so it's not really related to type checking. I'm glad to hear you're no longer seeing the problem in Xcode 26, but if you see something like this again, please file a bug.
One other type-related diagnostic it would be nice to see less of (though it might not be strictly related to overall type-checking work) is "Type of expression is ambiguous without a type annotation", especially when that diagnostic shows up on code, like a function call, that can't be given a type annotation in the first place. And usually, even when type information is provided, this diagnostic is really just a cover for some lower-level issue in a closure. So anything that clarifies that the issue is in the closure, rather than in the function that takes the closure, would be good, though pointing at the real issue in the closure would be best.
To my (limited) knowledge, a key difference between an overloaded function/operator and a generic function/operator is that, in the former case, if an argument has a non-concrete type, such as if it uses the "leading dot" syntax, the compiler will enumerate all possible types when trying to infer its type. The compiler not doing so in the latter case is why SE-0299 needed to be implemented.
For example, suppose A conforms to Equatable, and A is the only type with a static member .a. Then .a == .a works in the former case, where the compiler can pick ==(_: A, _: A) from the overload set. But it wouldn't work in the latter case, where the compiler can only see the generic version ==<T: Equatable>(_: T, _: T), and thus would only be able to find a solution if it enumerated all Equatable types and saw that A is the only one with a static member .a.
Expressions like .a == .a are inadvisable anyway. Operators like == have so many overloads that the overload set is effectively infinite, making operators "generic-like" in a sense. I think it could be beneficial to change how operators behave in overload resolution, by making operators "behave like" generics.
A maximally restrictive solution would be to require both operator arguments to have a concrete type. However, that would break perfectly reasonable expressions like A() == .a.
Another solution would be to require at least one operator argument to have a concrete type, and if the other argument has a non-concrete type, the compiler would guess that both arguments have the same type. However, that would be a special case, and wouldn't accommodate operators whose arguments have different types.
Personally, I think the best solution would be to require operator arguments to be concrete by default, but allow specific declarations of operator functions to "opt in" to being unconditionally included in the overload set. For example, suppose Equatable.== were declared like this:
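Something along these lines, where @infer is made-up syntax for the proposed marker, not a real attribute:

protocol Equatable {
    // Hypothetical attribute: always consider this operator during overload
    // resolution, even when neither argument has a concrete type yet.
    @infer static func == (lhs: Self, rhs: Self) -> Bool
}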
The @infer annotation would tell the compiler to unconditionally include ==<T: Equatable>(_: T, _: T) in the overload set, even if at least one argument has a non-concrete type. Basically, @infer would mark an operator overload as "important" enough for the compiler to always consider. That is, Equatable.== is "important" enough that the compiler needs to consider it whenever it sees ==. The @infer annotation would not automatically apply to implementations (witnesses): even though Equatable.== is marked @infer, Int.== is not, so ==(_: Int, _: Int) is not included in the overload set unless both arguments have the concrete type Int.
Furthermore, suppose that Strideable supported the + operator:
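Again just a sketch, with the same made-up @infer attribute:

protocol Strideable: Comparable {
    associatedtype Stride: SignedNumeric, Comparable

    // Hypothetical: + between a Strideable value and its Stride is always
    // considered, even if only one side has a concrete type.
    @infer static func + (lhs: Self, rhs: Stride) -> Self
}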
Because of the @infer marker, whenever the compiler sees the + operator, it would unconditionally include +<T: Strideable>(_: T, _: T.Stride) in the overload set. For example, with the expression ptr + .zero, the compiler knows that ptr has the concrete type UnsafePointer, so it would infer that .zero has the type UnsafePointer.Stride (aka Int). But expressions where the compiler would need to enumerate all Strideable types, like .init(bitPattern: 1)! + Int(0), would fail to type-check.
The second bolded part has always been what I thought the Swift type checker was missing.
In my mind, it should be possible to eliminate impossible options between adjacent nodes without fully solving either node by simply intersecting the broadest possible set of types along each edge and immediately eliminating disjunctions whose types can't be intersected. No solving, just brute domain intersections.
In short: arc consistency.
In problems like the URL example, this approach should fairly quickly eliminate all possibilities. AC-3 is worst-case O(nm^3) (n is nodes, m is disjunctions), but if it's just used to reduce the problem space and find inconsistencies, O(nm^2) would be enough, followed by running the regular constraint solver on the remainder. It feels like the work already done in CSSimplify.cpp could be extended to do this.
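Roughly the kind of pruning I have in mind, as a toy sketch over finite string "domains" (nothing to do with the actual solver's data structures):

struct Node { var domain: Set<String> }   // candidate types for one node

struct Edge {
    let from: Int
    let to: Int
    // Stand-in for whatever relation the constraint between the two nodes imposes.
    let compatible: (String, String) -> Bool
}

// Drop candidates of `from` that have no supporting candidate in `to`.
func revise(_ nodes: inout [Node], _ edge: Edge) -> Bool {
    let before = nodes[edge.from].domain
    nodes[edge.from].domain = before.filter { a in
        nodes[edge.to].domain.contains { b in edge.compatible(a, b) }
    }
    return nodes[edge.from].domain.count != before.count
}

// AC-3 main loop: keep revising arcs until a fixpoint; an empty domain means
// the system is unsatisfiable, and we can stop before any real solving happens.
func enforceArcConsistency(_ nodes: inout [Node], _ edges: [Edge]) -> Bool {
    let arcs = edges + edges.map { e in
        Edge(from: e.to, to: e.from, compatible: { a, b in e.compatible(b, a) })
    }
    var queue = arcs
    while let arc = queue.popLast() {
        if revise(&nodes, arc) {
            if nodes[arc.from].domain.isEmpty { return false }
            // Re-examine arcs that point at the node whose domain just shrank.
            queue += arcs.filter { $0.to == arc.from && $0.from != arc.to }
        }
    }
    return true
}

// Toy usage: two nodes joined by a "must be the same type" edge collapse to the
// single shared candidate without any search.
var nodes = [Node(domain: ["Int", "Double", "CGFloat"]), Node(domain: ["Double", "String"])]
_ = enforceArcConsistency(&nodes, [Edge(from: 0, to: 1, compatible: { $0 == $1 })])
// nodes[0].domain == ["Double"], nodes[1].domain == ["Double"]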
The upcoming work from @xedin sounds more like making the best positive choices, whereas this would be about pruning negative ones. I assume no one else is looking in this direction?
Everything would indeed be simpler if we could determine complete, or even bounded, sets of types for each type variable. Unfortunately we cannot, because we cannot enumerate all the types that conform to a given protocol or protocol composition, or even determine whether such a composition has any member types at all. That's why disjunctions drive solving at the moment (especially when it comes to literals, because they are represented as a protocol conformance), and why we cannot utilize well-known constraint-optimization algorithms.
we cannot enumerate all the types that conform to a given protocol or protocol composition, or even determine whether such a composition has any member types at all
Okay, that's frustrating. Needing to reduce infinity to a finite number is tough.
Itâs provably impossible to fix in the fully general case, but the expectation is that fewer and fewer real world cases will be problematic in practice.
We know that we cannot solve all of the cases but we are trying to make all of the reasonably complex code (with or without errors) out there type-check fast.
I don't think this is going to help, since it's still a literal. The fundamental problem here is that a type variable that represents a literal can assume any type that conforms to the `ExpressibleByIntegerLiteral` protocol, and that type can come from anywhere in an operator chain. So unless there is an implicit conversion from the default type (i.e. Int) to {U}Int* or to any user-defined type that conforms to `ExpressibleByIntegerLiteral`, the binding for such a literal cannot be established eagerly. Such conversions bring similar performance issues to the ones we observed with the limited use case of the Double <-> CGFloat implicit conversion.
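For example (a toy type, purely to illustrate why the literal's binding cannot be picked eagerly):

struct Meters: ExpressibleByIntegerLiteral, AdditiveArithmetic {
    var value: Int
    init(integerLiteral value: Int) { self.value = value }
    static var zero: Meters { Meters(integerLiteral: 0) }
    static func + (lhs: Meters, rhs: Meters) -> Meters { Meters(integerLiteral: lhs.value + rhs.value) }
    static func - (lhs: Meters, rhs: Meters) -> Meters { Meters(integerLiteral: lhs.value - rhs.value) }
}

// The only thing that forces 1, 2, and 3 to become Meters rather than the
// default Int is the declared type at the very end of the chain, so binding
// the literals eagerly to Int would pick the wrong solution.
let distance: Meters = 1 + 2 + 3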