So is the rule, "don't have different definitions of a protocol conformance running around for a type"?
If so, which module author broke the rule, and when?
Also, why shouldn't the compiler have prevented this from happening?
I think it does. As I said in my previous posting:
So even mentioning B.uniqueXs from D would be an error.
Again, the "conformance table" was to be part of the implementation model, and is really only relevant at runtime. At compile time we have conformance sets, and the set for X in module D conflicts with the set in module B, so they don't get to exchange Xs.
Edit:
To simplify thinking about this, you can completely ignore my ideas about conformance sets and tables.
Without a scoped conformances feature, what I'm saying amounts to this rule: you don't get to handle a type in any scope where that type has conflicting conformances.
I think this rule is enough to prevent conflicting conformances from "biting you," and if you disagree I'd really like to see a counterexample.
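Concretely, with the hypothetical A/B/C setup used throughout this thread (A declares X; B and C each add their own X: Equatable), the rule would reject a sketch like this (not meant to compile):

import A
import B
import C

let x = X()          // error under this rule: X has conflicting conformances in this scope
print(B.uniqueXs)    // also an error: the expression's type involves X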
It seems to me that:
You have said you are planning to make some code fail to compile that "works" today in the presence of conflicting conformances.
I have shown how some other code can create a soundness problem in the presence of conflicting conformances, and propose that it also fails to compile.
Since conflicting conformances are still rare, the conservative thing to do is to start by closing the door on both problems. We can always loosen the rules later.
Workaround: If you are forced to import conflicting conformances into a file and you want to handle a type that would have conflicting conformances in that file, you have to resolve it by creating another file that only imports one of the conformances.
Should that compile? I'm not sure. The == needs an implementation that takes operands of type X.
Is either of the conformances of X to Equatable visible in this second file? Perhaps the conformances to Equatable should not be visible, because they have not been imported, in which case that code should not compile.
If that does compile, which conformance to Equatable should be used? I posit that it is indeterminate, and so an ambiguity error should be raised on the == operator at compile time (assuming that it hadn't already failed on account of the absence of a conformance to Equatable).
[FWIW: Under the current implementation, that actually does compile. Making X visible by importing A appears to drag into the visible scope at least one of the Equatable conformances. In my case, it happened to be the one from Module B, and it appeared that that conformance was used regardless of the order of the imports in the first file in Module D.]
Yeah, I've seen variants of this problem before. It would be awesome if you could file a bug with a reduced case that focuses just on this problem.
IMO it should not compile because no X: Equatable conformance is in scope; otherwise, what does it mean that imports are written in particular files?
I think it should work if you import just one of B or C, and as I noted earlier, I don't think it should be possible to use X at all in a context that imports both of them.
Presumably D broke the rule because it moves instances of X between modules that have different notions of Equatable for X. I'll answer your last question later.
This is an even tighter constraint than I thought you were going for, although I guess it's one I understand mechanically. It says that the presence of any conflicting protocol conformance for X in a source file in module D makes effectively all of the APIs that traffic in X unusable in that source file. That's true even if the source file never directly uses one of the conflicting conformances, because you never know whether that conflicting conformance is holding up some invariant. That's a very wide net to cast, because it means people are paying for (in compiler error messages) something that is very unlikely to cause a problem in practice.
It tends to crash at runtime, but you can dodge the raindrops. There is a model here that's partially supported in the implementation.
I consider the collateral damage of your solution---that the mere presence of a conflicting conformance anywhere poisons a type completely---to be greater than the problem you're trying to solve.
Static type systems do not catch all bugs, nor should they. One hard thing to balance in a statically-typed language is when to back off on the type system, because the cost of appeasing the type checker exceeds the benefits. Having an unrelated conflicting conformance make a library type unusable creates busywork and frustration for users.
I disagree, because I think the model you are proposing is not the right one for Swift. The half-implemented model that exists produces errors for outright conflicts, but won't burden the user with errors about irrelevant conflicts. It's the right balance for the language.
Perhaps not entirely relevant to precisely "unrelated conflicting conformances" but anyway: As a user, most if not all of my busywork and frustration comes from:
Lack of succinct and complete documentation of "easy-to-explain" intended behavior.
Swift's current tendency of "luring" me into thinking that I can express this or that (great) abstraction, secretly leading me into a rabbit hole of not knowing whether it will or should work or not. I've spent so much time here, even though I actively try to avoid it, but it's hard to not cross a line that is so blurry.
I'd (perhaps naively) prefer a strict compiler that stops me from introducing conflicts and ambiguities, over a forgiving (as in eg html parsers) one.
What exactly is the difference between outright and irrelevant conflicts?
An outright conflict is one where code I wrote directly relies on the conflicting requirement. If I'm calling
import A
import B
import C
func f<T: Equatable>(_: T) { }
f(X())
and there are conflicting X: Equatable conformances visible from that call, that's an outright conflict. The conformance is required by the code I wrote, so it's easy to understand: the compiler should tell me why I need the X: Equatable conformance (it's right there in the declaration of f), and where the two conflicting X: Equatable conformances come from (modules B and C).
An irrelevant conflict is when I write:
import A
import B
import C
print(B.uniqueXs)
and the compiler rejects my code because there are two conflicting X: Equatable conformances. I haven't tried to compare two X's, or use X: Equatable in any way, so why is my program rejected? The story we have to tell here---that maybe someone somewhere is depending on X: Equatable in a way that might break something somewhere, and you're the first person that stumbled on combining the two modules---is very abstract.
That is, as a user, I'd like to be informed that I've imported two modules whose definitions of what it means for X to be Equatable are in conflict. I don't think I'd like to be working/reasoning/debugging[1] in such a context. Are such contexts common and/or necessary? I.e., are irrelevant conflicts common?
[1] For example, I guess my program could print two results, both individually accepted by the compiler, though they depend on different X: Equatable conformances. I'd see that the two printed results contradicted each other, and I'd have to start debugging. Had I been informed about or stopped from entering this context, I could have saved a lot of time.
But, to be clear, as soon as you do something interesting with B.uniqueXs, like call containsDuplicates() on it, you've crossed into the land of having a relevant conflict and an error. Surely, @Jens will do something interesting, so, in practice, he should bump into some sort of error via the compiler or at runtime.
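For instance, here's a rough sketch of crossing that line, assuming containsDuplicates() is constrained on Equatable elements as in the running example:

import A
import B
import C

print(B.uniqueXs)  // fine under the model being described: no X: Equatable is needed here

let dupes = B.uniqueXs.containsDuplicates()
// a relevant conflict: this call requires X: Equatable, and two conflicting
// conformances (B's and C's) are visible from this file, so it's an error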
Well, the change moving X instances between modules that have different notions of Equatable for X came long after there were "different definitions of a protocol conformance running around for a type." So did I state the rule correctly above, or is it really "don't move instances between modules (or, presumably, files) that have conflicting conformances for those instances"? I really wanna know.
Note that in this case:
C added its conformance in an update, long after D was already importing C. Did D break the rules of the programming model by allowing C to be updated?
D used code completion to find doSomething() on uniqueXs, and read all the documentation that popped up when they did that. Did D break the rules of the programming model by not tracking down the source of the symbol and noticing that it came from C and that C had added a conflicting conformance?
Yep, gloriously straightforward, right?
Also, true: as the compiler, you never know whether a conformance is holding up some invariant. I hope we can all agree that upholding invariants is fundamental in supporting the creation of large, reliable systems. A huge part of this burden necessarily falls on humans, who must document and read API contracts. But if the language rules make it possible for a type to mean two different things and don't prevent me from mixing those things up, it undermines the whole effort. Note that even languages like Python, that have no static checking, don't have this issue: a type only ever means one thing at a time.
You may remember, from C++, that these can be the worst kinds of problems.
I consider the collateral damage of your solution---that the mere presence of a conflicting conformance anywhere poisons a type completely---to be greater than the problem you're trying to solve.
Really, you'd take "it's going to bite you eventually" over "you have to put some code in separate files to disambiguate"?
Static type systems do not catch all bugs, nor should they. One hard thing to balance in a statically-typed language is when to back off on the type system, because the cost of appeasing the type checker exceeds the benefits. Having an unrelated conflicting conformance make a library type unusable creates busywork and frustration for users.
You know I'm well aware of these tradeoffs, which is part of why we still don't have static constraints on which errors can be thrown in Swift. I also am painfully aware of where we've been too enthusiastic for static checking, which is why I have to sprinkle try liberally through code where the sources of errors are irrelevant (if the effect on serialization code is annoying, you should see what it does to a BGL-style breadth-first search or just about anything that uses lots of closures---so much harder to ignore the noise when it isn't all stacked uniformly at the beginning of every line). The reasons this is different are:
Unlike error propagation, conflicting conformances are extremely rare
When not extremely carefully managed, as you say they're "going to bite you eventually."
The contexts they'll bite you in are unlimited, rather than just areas where you need to be careful anyway (where invariants are being broken).
Moving code into a separate file where the conformances are unambiguous is a relatively light burden that clarifies rather than obfuscates (a sketch follows this list).
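As a rough sketch of what that split looks like, reusing the hypothetical A/B/C modules from the running example (the file name and helper function are made up):

// Dedup.swift --- a file in module D that imports only B, so exactly one
// X: Equatable conformance is visible here and there is no ambiguity.
import A
import B

func deduplicated(_ xs: [X]) -> [X] {
    var seen: [X] = []
    for x in xs where !seen.contains(x) {   // uses B's notion of == for X
        seen.append(x)
    }
    return seen
}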
Edit (full disclosure): until I saw @mattrips' post yesterday I had forgotten that files in the same module (supposedly?) have their own sets of imports, which should be hidden from one another, so one could argue that my simple rule about not mentioning X in a context where its conformances are ambiguous is incomplete. We should consider preventing X from being exchanged between files in the same module that have conflicting conformances. I can see how that might be considered overly protective for individual module developers, but it might be necessary to support larger teams working on the same module; the implications should be thought through carefully.
Note that this situation we can create today has all the same implications as having scoped conformances (except that only scoped conformances let us reduce code bloat).
I'd just like to echo what Jens said above. I'm probably naive as well, but having the conflicting requirements be allowed (as long as I'm not using them) sounds like a huge foot-gun hiding under the bed waiting to jump out at me 6 months later when I decide to add some more code in this context.
FWIW I also used to spend a lot of time re-architecting half-finished designs that I discovered couldn't actually implement the semantics I wanted. I eventually made a list of "things that I shouldn't use because they're likely to bite me later", and this definitely sounds like something that would bite me later but defies an easy-to-remember rule. "Don't import modules" isn't really viable.
Sure, but since we are bolding things, not every invariant must be handled by the type system.
Not the same ballpark as Argument Dependent Lookup, sorry.
Not everything can be easily separated by files. You can't put the stored properties of a type, or the overridable methods of a class, or the requirements of a protocol, into different files. With this rule you propose, users would have to try to defeat the static type checking to get an X, a type from B, and a type from C into the same place. Either they succeed with some contortions (special wrapper types with private fields scattered into different source files, maybe?), in which case the type checking wasn't actually as sound as you want, or they fail and you've prevented a probably-correct program from being written.
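For instance, the stored-property case (a minimal, self-contained illustration; Pair is a made-up type):

struct Pair {
    var first: Int
    // A second stored property has to be declared right here; it cannot be
    // moved into an extension in another file.
}

extension Pair {
    // error: extensions must not contain stored properties
    // var second: Int
}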
In this thread we've learned a lot about what the intended implementation model that exists in the compiler/runtime actually is, and it's a whole lot closer to what you're proposing than we all thought originally. The primary cases that matter would be caught by the model that Swift was designed for as well as the model you're proposing. The difference between the two, to me, is a deal breaker: if the Swift compiler cannot point at the place I needed X: Equatable when it complains about conflicting X: Equatable conformances, users will be confused and seriously annoyed at the compiler, and that place does not exist in your model because it is at best invisible (buried in function implementations you can't see) and at worst wholly theoretical.
I hope that some of the enlightenment about the currently-designed model makes it into some documentation, somewhere, so others don't have to go through the same discovery. I also hope that it can be useful to tackle some of the other problems that, to me, are far more important than the delta between the two models: namely, the ability for more-specialized witnesses to be chosen at witness table instantiation time, and providing a reasonable dynamic semantics for as? that fits within that model.
With that thought in mind, I've been taking notes, and trying to harmonize, correlate and synthesize. As part of that process, I've been applying what has been discussed in this thread to past examples of difficult-to-reason-about conformance and dispatch behavior.
On the theme of choosing more-specialized witnesses at witness-table instantiation time, one of the examples I've been trying to reason about is set forth below.
protocol P {
    func a() -> String
    func b() -> String
}

extension P {
    func a() -> String { "slow a()" }
    func b() -> String { "doing general stuff with \(self.a())" }
}

protocol Q: P, Equatable {}

extension Q {
    func a() -> String { "fast a()" }
    func b() -> String {
        let x = self.a() // Since self must conform to Q, shouldn't
                         // self.a() always dispatch either to Q.a() or to a
                         // customization of a() on the concrete type?
        return "doing special stuff with \(x)"
    }
}

extension Int: Q {}
print(Int(1).b()) // As expected, does special stuff with fast a().

struct R<T>: P { let value: T }
extension R: Equatable where T: Equatable {}
extension R: Q where T: Equatable {}
print(R(value: 1).b()) // Does SPECIAL stuff, but with SLOW a(). Is that right?
If I'm understanding the intended model correctly, in the context in which self.a() is called, self is known to always conform to Q. Since Q is the most general possible form of self, the witness table entry for the a() requirement should point to Q.a(). Am I misunderstanding how the intended model works?
[EDIT: For clarity, I removed the where Self: Equatable conditional conformance on protocol Q, and replaced it with the more direct declaration that Q inherits from Equatable. Also, removed the unnecessary implementation of ==, in favor of automatic synthesis.]
Wow, when you disconnect that bolded bit from the rest of the paragraph, it's almost like you're telling me something I don't know! Seriously, this is both taking my statements completely out of context and completely missing the point, to wit:
A language shouldn't create the conditions where the standard manual means of upholding invariants (e.g. documenting stuff, reading the docs, developing best practices, following the implied rules) break down.
I assert that where we've done that, the language needs to change. A secondary point you also missed was:
It's about what you can easily express unintentionally, not what a type system "handles."
That's why I mentioned that Python---a language with no static type system---does not create these conditions.
Not the same ballpark as Argument Dependent Lookup, sorry.
Look, I don't want to get into details about C++ here. I'll just say this: when I build the mental "feature comparison chart" for the effects both have on the programming model, their checked rows have significant overlap. I'm not assuming you've dismissed the comparison out of hand, but will encourage you to at least do the same mental exercise if you haven't.
OK, fair enough; I don't think the consequences of my rules are as bad as you imply, but TBH we're both guessing, since we don't have any real examples to work with. Suppose we compromise and consider alternative remedies with fewer potential downsides?
Under today's rules, the only possible conformance conflicts in programs that compile are retroactive and all retroactive conformances are public. We should give people a way to opt out of creating conflicts unintentionally, so I propose:
Retroactive conformances become internal by default, after a release in which you're asked to be explicit about whether they're public. A retroactive conformance is defined as one that could conflict under today's model.
These protections would fall on the API vendor side rather than on the client side. I also have ideas for things we can do on the client side that fall well short of "poisoning types," but let's discuss those separately.
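To make the vendor-side story concrete, here is roughly how C's retroactive conformance from the running example reads today (the id property is a hypothetical stand-in for however C actually compares Xs):

// In module C today: neither X (from A) nor Equatable (from the standard
// library) is declared here, so this is a retroactive conformance, and it
// is implicitly public.
import A

extension X: Equatable {
    public static func == (lhs: X, rhs: X) -> Bool {
        lhs.id == rhs.id  // hypothetical stand-in for C's notion of equality
    }
}

// Under the proposal, this conformance would become internal by default, and
// C's author would have to mark it public explicitly for clients like module
// D to see it.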
(I'll address the rest of your post, which is important, in a separate message)
I think the question of how to handle the documentation is important enough that it deserves its own thread, which I'll launch in the next couple of days. Before I do that, though, would you kindly help me map the quoted text from the space of implementation details into the space of something a Swift programmer might understand, such as an example (if not a description of the intended language semantics)?
I suspect you mean you want the following program to print "Q Q".
protocol P { var id: String { get } }

extension P {
    var id: String { "P" }
    var id2: String { id }
}

struct X<T>: P {}

protocol Q: P {}
extension Q { var id: String { "Q" } }
extension X: Q where T: Equatable {}

print(X<Int>().id, X<Int>().id2) // "Q P" today.
I think your example can be simplified further, without changing its essence, by removing the Q: Equatable refinement and associated extension on R. Having just analyzed the code, if I understand your question correctly, it amounts to the same one I'm asking with this smaller example. Would you agree?
Very similar, but the two examples are substantively and purposively different.
I interpret your example and question to @Douglas_Gregor to be focussed on changing the model. My example and question are focussed on understanding whether certain observed behavior is a bug or consistent with the intended behavior.
One key difference between the two examples is your id2 is not a protocol requirement while my b() is a protocol requirement. So, under the existing model (as I understand it), id2 is handled statically at compile time, while b() is handled dynamically at run time.
Another key difference is, in the scope of your id2, it is perfectly possible that self might be a type that does not conform to Q while, in the scope of my Q.b(), self must be a type that conforms to Q.
Finally, as I understand the intended behavior of the existing model, in your example, the output is expected and correct while, in my example, the intended output might be unexpected and incorrect.
Our two examples serve different purposes. My example would be better placed in the documentation thread that you propose. Your example makes sense, here, in this future directions thread. Still, I fully expect the two paths to merge as unintended behavior explored in the documentation thread may impact the future directions thread.
You can reason about it and understand it, but it is a more complex mental model, and it is not clear to me whether the goal of being one language to rule them all (all domains) is worth more complex dispatching rules.
If you told the people who really care about every ounce of CPU performance to write the most critical code paths in C++---like they used to do in the Objective-C days, or like people do on Android daily---in favour of a simpler dispatching model and (maybe it is even stupid to mention here) faster compilation times... would people drop Swift en masse? I think not, if you can bridge between the two languages easily.