In the code below, it appears that the compiler infers the C() in C().id to be of type C<Int>. Without the id member access attached, it fails to make the inference and generates an error.
Is there a valid basis for the compiler to draw that inference? What is it about the id member that causes the compiler to make it?
Or is this a bug?
protocol P { var id: String { get } }  // (r1)
extension P { var id: String { "P" } } // (i1)
protocol Q: P {}
extension Q { var id: String { "Q" } }

class C<T>: P {
    var id: String { "C" }
}

extension C: Q where T == Int {}

print(C<String>().id) // "C"
print(C<Int>().id)    // "C"
print(C().id)         // "Q" -- How does this compile? Is it inferred that T is Int?
// print(C())         // Error: Generic parameter 'T' could not be inferred
Also, note that C().id accesses the implementation of id provided by Q rather than the one provided by C. I'm guessing that behavior is a by-product of whatever is happening here with generic type inference.
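For background, here is a minimal sketch (hypothetical names, separate from the example above) of the usual dispatch rule that might be in play: protocol requirements dispatch dynamically through the witness table, while members that exist only in a protocol extension are statically dispatched from the compile-time type:

protocol R { var required: String { get } } // requirement: dynamically dispatched
extension R {
    var required: String { "R.required" } // default for the requirement
    var extra: String { "R.extra" }       // extension-only: statically dispatched
}
struct S: R {
    var required: String { "S.required" } // witnesses the requirement
    var extra: String { "S.extra" }       // shadows the extension member
}
let s: R = S()
print(s.required) // "S.required" -- via the witness table
print(s.extra)    // "R.extra" -- from the static type R
print(S().extra)  // "S.extra" -- from the static type S

What makes the example at the top strange is that id is a requirement of P, yet C().id lands on Q's extension implementation anyway.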
This is new to me. I do not know what is happening.
Playing around with it a bit, here’s what I observe in Swift 5.1.3 (in Xcode 11.3.1):
print((C() as C).id) // "Q"
print((C() as C<Int>).id) // "C"
print((C() as Q).id) // "C"
print((C() as C & Q).id) // Compiler crash
print((C() as C<Int> & Q).id) // "C"
let a = C() as C // Error
let b = C() as C<Int> // Success, and b.id is "C"
let c = C() as Q // Success, and c.id is "C"
let d = C() as C & Q // Success, and d.id is "C"
let e = C() as C<Int> & Q // Success, and e.id is "C"
Repeating the last block with “let a: T” style type annotations instead of “as T” gives the same results.
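Spelled out, the annotation-style version looks like this (reportedly the same results as the as-cast versions above):

// let a: C = C()       // Error, as with C() as C
let b: C<Int> = C()     // b.id is "C"
let c: Q = C()          // c.id is "C"
let d: C & Q = C()      // d.id is "C"
let e: C<Int> & Q = C() // e.id is "C"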
I don’t know what type inference is doing, because I can’t find a type with the matching behavior.
I’d love to trace through how the compiler is analyzing this one. Unfortunately, I’m not currently set up to debug the compiler, and the overhead of getting set up is no fun.
Exactly. It must be a bug. I suspect something is amiss in the confluence of the code that resolves overloads and the code that infers generic types.
The bigger question is: in what other subtle ways might this bug be manifesting itself? That is why I suggest this issue may be worthy of investigation.
I think this is correct behavior; otherwise we wouldn't be able to provide default implementations in the standard library that include a default associated type, and that's important for protocols that have associated types that are mostly just implementation details. cc Ben Cohen, Karoy Lorentey
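As a rough sketch of that pattern (hypothetical names, not actual standard library code): a default implementation in a protocol extension can pin down an associated type, so conforming types never need to spell it out:

protocol Task {
    associatedtype Result
    func perform() -> Result
}
extension Task {
    // Default witness; a conforming type that omits perform()
    // gets Result inferred as Void from this implementation.
    func perform() { print("default work") }
}
struct Noop: Task {} // compiles; Result == Void is an implementation detail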
Right – this is the correct current behavior and is used to provide functionality for types like Collection. Changes to it would need to be evolution pitches rather than jiras.
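For instance (a minimal sketch relying on the standard library's Collection defaults), a type can conform by supplying only the index bounds, index(after:), and a subscript; the Iterator, Indices, and SubSequence associated types are all filled in by default implementations:

struct Countdown: Collection {
    let start: Int
    var startIndex: Int { 0 }
    var endIndex: Int { start + 1 }
    func index(after i: Int) -> Int { i + 1 }
    subscript(position: Int) -> Int { start - position }
}

// Iterator is inferred as IndexingIterator<Countdown>;
// makeIterator() comes from a constrained Collection extension.
for n in Countdown(start: 3) { print(n) } // 3, 2, 1, 0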
Hmm. Need to understand the semantics behind this mechanism.
The associated type inference is pretty straightforward. The compiler considers all possibilities, and if it finds any that are valid then it ranks them and chooses the best (or balks when there’s ambiguity).
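A minimal sketch of that ranking (hypothetical names): when both the conforming type and a protocol extension offer a candidate witness, the type's own member outranks the default and determines the associated type:

protocol Greeter {
    associatedtype Greeting
    func greet() -> Greeting
}
extension Greeter {
    // Candidate witness; on its own, it would fix Greeting == String.
    func greet() -> String { "hello" }
}
struct Quiet: Greeter {} // only the default is viable: Greeting == String
struct Loud: Greeter {   // its own greet() outranks the default: Greeting == Int
    func greet() -> Int { 42 }
}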
That does not explain the behavior from the example at the top of this thread. Specifically, it does not explain why C().id prints “Q” even though C<Int>().id prints “C”.
After going back ... and re-reading threads about the inference aspect of it, from last year, ... I feel like I'm stuck in an endless loop.
But yes, you are correct. We should not lose sight of the inconsistent dispatch behavior, which appears to be more of a protocol-dispatching issue. Bug report?
Yes, I think this is worthy of at least two separate bug reports:
1. The original example printing “Q” is a bug.
2. “(C() as C & Q).id” crashing the compiler is a distinct bug, especially since it does not crash when split over two lines (e.g. “let x = C() as C & Q; x.id”).
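For reference, the crashing form versus the reportedly safe two-line form:

print((C() as C & Q).id) // reported: compiler crash

let x = C() as C & Q // reported: compiles
print(x.id)          // "C", per the observations above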