[Proposal] Explicit Synthetic Behaviour

See, this is another flawed assumption; you are assuming that omitting a custom implementation of == is always intentional rather than an oversight, which is not guaranteed. This is one of my gripes with the retroactive change to Equatable, as it is currently impossible to omit an implementation.

Again, this applies equally to the addition of _any_ default implementation. And again, such changes don’t even require Swift Evolution approval.

So what? Because the Swift Evolution process is currently deficient we should just give up on discussing problems with features and the language altogether?

I don't claim that it's a deficiency; I claim it's reflective of Swift's opinionated take on default implementations. Are you, after all, saying that you have a problem with the addition of _any_ default implementation to an existing protocol? If so, this conversation isn't about synthesis/reflection at all.

No, and you should know that by now. I suggest actually reading some of what I have written as I am sick of repeating myself.

And precisely what kind of "evidence" am I expected to give? This is a set of features that do not exist yet, I am trying to argue in favour of an explicit end-developer centric opt-in rather than an implicit protocol designer centric one. Yet no-one seems interested in the merits of allowing developers to choose what they want, rather than having implicit behaviours appear potentially unexpectedly.

Both options were examined for Codable and for Equatable/Hashable. The community and core team decided to prefer the current design. At this point, new insights that arise which could not be anticipated at the time of review could prompt revision. However, so far, you have presented arguments already considered during review.

And so far all I have heard about this is how it was "decided"; no-one seems interested in showing how any of these concerns were addressed (if at all), so as far as I can tell they were not, or they were wilfully ignored.

They were addressed by being considered.

And yet no-one can apparently summarise what those "considerations" might be, suggesting that they were either not considered at all, or that the "consideration" was so weak that no-one is willing to step forward to defend it. Either way it is not sufficient by any reasonable measure.

If I were to run over your foot in my car, would you be happy to accept that I "considered" it first?

How do you mean? People wrote in with their opinions. Then, taking into account the community's response, the proposal was approved.

I mean because not once have you summarised what these alleged "considerations" were; if they exist then you should be able to do so, yet all I am hearing is "it was considered", which frankly is not an argument at all as it is entirely without substance.

If it was genuinely considered then someone should be able to say what points were considered and what conclusions were reached and why. And even if there was an earlier decision, that doesn't necessarily make it right. We are discussing it now, and it is clear that any decision that has been made has been made poorly at best.

And if you're talking about the discussion on Equatable/Hashable specifically, I'm afraid your memory of the "considerations" is radically different to mine: the concerns I raised were essentially ignored, and not a single person gave a justification more substantial than "but, but Codable!", which frankly isn't a justification at all.

Therefore, your argument reduces to one about which default implementations generally ought or ought not to be provided--that is, that they ought to be provided only when their correctness can be guaranteed for all (rather than almost all) possible conforming types. To which point I sketched a rebuttal above.

If a protocol defines something, and creates a default implementation based only upon those definitions, then it must by its very nature be correct. A concrete type may later decide to go further, but that is a feature of the concrete type, not a failure of the protocol itself, which can function correctly within the context it created. You want to talk evidence, yet there has been no example given that proves otherwise; thus far only Itai has attempted to do so, but I have already pointed out the flaws with that example.

The simple fact is that a default implementation may either be flawed or not within the context of the protocol itself; but a reflective or synthetic implementation by its very nature goes beyond what the protocol defines and so is automatically flawed, because it does not rely on the end-developer to confirm correctness, not when provided implicitly at least.

Again, if it applies generally, it must apply specifically. What is "automatically flawed" about the very reasonable synthesized default implementation of ==?

It makes the assumption that every equatable property of a type is necessarily relevant to its equality.

Not necessarily, only provisionally and rebuttably. If it’s not the case, override the default.

So… entirely unlike standard default implementations which cannot "provisionally" assume something is relevant at all,

Why not?

Because they can only act upon properties/methods that they themselves (or a parent protocol) define. FFS, what is so unclear about that? Or are you arguing on this subject without ever having actually used a protocol before?

thereby making them entirely different from synthesised/reflective implementations!

I'm sorry, but you keep trying to argue that they're the same, but then admitting that they're not. You can't have it both ways.

Well, certainly, synthesized default implementations differ from non-synthesized ones in key respects. However, they do not differ in terms of the user experience of conforming to the protocol and having to override the default.

Except that that's not true at all, is it?

Synthesised default implementations go much further in how they attempt (and potentially fail) to implement those defaults, and in the specific case of Equatable/Hashable they are fully implementing a protocol without a single property or method being raised as a requirement; they are utterly different at a fundamental level, and no amount of mental contortion changes that fact.

Consider for example if a type stores a collection index for performance reasons; this isn't an intrinsic part of the type, nor relevant to testing equality, yet this default implementation will treat it as such because it knows nothing about the concrete type's properties. If a protocol does not define a property then any action taken upon such a property is necessarily based upon an assumption; just because it might be fine some of the time, does not make it any less flawed.
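To make that scenario concrete, here is a minimal sketch (the type and property names are hypothetical, not taken from any real proposal or code base):

struct SearchState : Equatable {
  var elements:[String]
  var cachedIndex:Int = 0 // purely a performance cache, irrelevant to equality

  // The implementation the developer would need to remember to write by hand;
  // without it, a synthesized == would compare cachedIndex as well.
  static func ==(lhs:SearchState, rhs:SearchState) -> Bool {
    return lhs.elements == rhs.elements // deliberately ignores cachedIndex
  }
}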

The big difference here between explicit and implicit synthetic implementations is where this assumption originates; if a method is synthesised implicitly then the assumption is made by the protocol designer alone, with no real involvement by the end developer. If I explicitly opt-in to that default however I am signalling to the protocol that it is okay to proceed. In the former case the assumption is unreasonable, in the latter it is explicitly authorised. It is a difference between "I want to make the decision on what's correct" and "I am happy for you (the protocol designer) to decide".

Right now, when I conform to Equatable, it is a declaration of "I will implement this", but with this retroactive implicit change it is now a declaration of "implement this for me"; these are two entirely different things. Consider: what if I'm working on a piece of code that requires types to be Equatable, but one of the types I'm using currently isn't, so I quickly throw Equatable conformance onto it and go back to what I was doing, with the intention of completing conformance later. With this change that type may now receive a default implementation that is wrong, and I've lost the safety net that currently exists.

Right now, it still wouldn’t compile, so I don’t see why you would do that. In the future, if you want to make it not compile, there is nothing stopping you from conforming to a non-existent “NotYetEquatable”. This was something that you asked about earlier and it was answered.

So your solution is to intentionally write invalid code to work around the fact that a feature is being implemented badly?

You stated a use case where you *want* the compiler to stop your code from compiling by stating a conformance to Equatable without implementing its requirements. You then stated that the major problem you have with synthesized `==` is that the compiler will now use a default implementation that you might forget about instead of stopping compilation. Therefore, I demonstrated how you could continue to have the compiler stop your code from compiling. It's not my solution that is intentionally writing invalid code; your stated aim was to be able to do so.

My stated aim was nothing of the sort.

I was pointing out that right now conforming to Equatable means something entirely different from what it will mean in future if this idiotic change makes it into release. Please actually read what I write before deciding for yourself what my 'stated aim' is.

I am not asking for workarounds to circumvent a ridiculously flawed change to the language, I am arguing why it is flawed and must be changed. If I wanted a workaround I'd do what I'm now seriously considering, which is ditching Swift completely, as I will not use a language if I can no longer trust the team developing it or the decisions that they make.

A non-synthesised/reflective implementation cannot strictly be incorrect, because as long as it is implemented properly it will always be correct within the context of the protocol itself. It may not go quite as far as an end developer might want, but that is because they want to add something onto the protocol, not because the protocol is wrong.

A synthesised/reflective implementation differs because if it goes too far it is wrong not only within the context of the concrete type, but also within that of the protocol itself; it is simply incorrect.

Again, this is an assertion that misses the mark. If the default implementation is unsuitable for a type, it’s unsuitable whether it “doesn’t go quite as far” or “goes too far.”

Because not going quite far enough is not a failure of the protocol, as protocols by their very nature can only go as far as what they define. If a protocol Foo defines two properties and a method which uses those two properties correctly, then the method is correct. A developer of a concrete type might want to add more information or tailor the behaviour, but that doesn't make the default implementation incorrect, it's just considering the type only within the context of being an instance of Foo.

Going too far is the opposite; it's the protocol designer messing around with stuff they do not define at all. It's only ever right by chance, as it's operating within the context of the concrete type, about which the protocol does not know anything with certainty.

Yes, you have defined "not going far enough" and "going too far" based on whether an implementation uses only protocol requirements or not. However, you haven't at all demonstrated why this distinction is at all meaningful in terms of the issue you describe with a user conforming to a protocol. If there is a default implementation, either it returns the expected result for the conforming type or it does not--those are the only two choices. Are you arguing that, empirically, the default implementation for Equatable will more often be unsuitable for conforming types? If so, what's your evidence?

What's yours? If this issue was as "considered" as you constantly claim then where is the evidence that there is no meaningful distinction? Surely such evidence exists, or else the issue hasn't been considered at all, has it?

Frankly I am sick of being asked to provide evidence when you are seemingly unwilling to do anything in return, especially when you have conveniently ignored every single example that I have already given.

It cuts both ways; you claim that "going too far" and "not going far enough" are the same thing? Well prove it.

You state but do not give any rationale for the claim that the former is not wrong in some context while the latter is always wrong.

By this line of argumentation, you’d be perfectly content if instead we simply had the default implementation of == as “return true” because it would be somehow not wrong.

Only if return true were a reasonable default to give in the context of the protocol, which it clearly is not, as it's not performing any kind of comparison of equality.

Sure it is; `return true` satisfies all the semantic requirements for equality: reflexivity, symmetry, transitivity; and, in the context of the protocol which only provides for this one facility (determination of equality or inequality), any two instances that compare equal _are_ completely interchangeable "within the context of the protocol itself," as you would say.
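Spelled out, that strawman is expressible today as an ordinary, non-synthesized default in a protocol extension (purely illustrative; nobody is proposing to ship this):

extension Equatable {
  // A "default" == that satisfies reflexivity, symmetry and transitivity
  // while performing no comparison whatsoever.
  static func ==(lhs:Self, rhs:Self) -> Bool {
    return true
  }
}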

The purpose of Equatable is to identify types that can be compared for equality; returning true does not satisfy that aim because no such comparison is occurring, so your example is intentionally ridiculous. Even a less contrived example such as comparing memory addresses doesn't fulfil the purpose of Equatable, which is all about comparing distinct instances that may nonetheless be equal.

Put another way, what the proposal about synthesizing implementations for Equatable and Hashable was about can be thought of in two parts: (a) should there be default implementations; and (b) given that it is impossible to write these in Swift, should we use magic? Now, as I said above, adding default implementations isn't (afaik) even considered an API change that requires review on this list. Really, what people were debating was (b), whether it is worth it to implement compiler-supported magic to make these possible. Your disagreement has to do with (a) and not (b).

Wrong. The use of magic in this case produces something else entirely; that's the whole point. It is not the same, otherwise it wouldn't be needed at all. It doesn't matter if it's compiler magic, some external script or a native macro, ultimately they are all doing something with a concrete type that is currently not possible.

And once again; I am not arguing against a default implementation that cuts boilerplate, I am arguing against it being implicit. What I want is to be the one asking for it, because it is not reasonable to assume that just throwing it in there is always going to be fine, because it quite simply is not.

If you have to ask for it, then it's not a default. You *are* against a default implementation.

A default implementation is an implementation that I, as the concrete type developer, do not have to provide myself. If you want default to mean only "automatic" then your attempt to pigeon-hole what I am arguing is incorrect, because what I am arguing is then neither about default implementations nor the means of actually implementing it, but something else entirely.

But as far as I'm concerned it is still absolutely a default implementation whether it is requested or not; the difference is that I, as the end developer, am able to refine what types of defaults I want.

The word “default” indicates something that arises in the absence of a user indication otherwise.

Then this proposal is just for a different mechanism for "indicating otherwise".

You keep trying to argue that a synthesised/reflective default implementation is the same as a normal default implementation, yet you seem to be consistently forgetting that even if that is true without this proposal, the very proposal itself is to change that, effectively causing a category of default implementation to become explicitly opted-into, rather than implicitly. They're still implementations that will be provided automatically, just only when they are permitted to do so.

So to be clear, you are *against* them being the *default*: you wish them to be the *otherwise*.

You seem to be insisting upon a narrow definition of default; what I want is control over which types of default implementations are provided. Just because they must be opted-into explicitly does not stop them being "default", as they are still implementations that I myself do not need to implement. The difference is that I actually have to want them, rather than having them provided through potentially flimsy assumptions made by a protocol designer. Just because there's an extra step doesn't make them any less automatic, otherwise having to conform to a protocol in the first place would also prevent them from being defaults.

Asking for something is more like a middle-ground between the two; the synthetic implementations are still possible defaults, they just aren't provided unless you allow them, while omitting the necessary keyword/attribute prevents them being used.

···

On 13 Sep 2017, at 03:26, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
On Tue, Sep 12, 2017 at 11:43 AM, Haravikk via swift-evolution <swift-evolution@swift.org> wrote:

On 12 Sep 2017, at 12:08, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Mon, Sep 11, 2017 at 06:03 Haravikk via swift-evolution <swift-evolution@swift.org> wrote:

On 9 Sep 2017, at 23:17, Gwendal Roué <gwendal.roue@gmail.com> wrote:

All right, I'll be more positive: our science, IT, is a *constructive* science, by *essence*. If there is a problem, there must be a way to show it.
If you can't, then there is no problem.

You mean just as I have asked for examples that prove non-synthetic/reflective default implementations are as dangerous as synthetic/reflective ones? Plenty have suggested this is the case yet no reasonable examples of that have been given either.

However, examples highlighting problems with the synthesised behaviour are simple:

struct Foo : Equatable { var data:String } // Currently an error, won't be in future

Or something a bit more substantial:

struct KeyPair : Equatable {
  static var count:Int = 0

  var count:Int
  let key:String // This is the only property that should be equatable
  var value:String

  init(key:String, value:String) {
    let count = KeyPair.count &+ 1
    KeyPair.count = count; self.count = count
    self.key = key; self.value = value
  }
}

Here the only important property in the key pair is the key, the value isn't important (only the keys are to be considered unique) and the count is just a throwaway value. The synthesised default implementation for this concrete type will therefore be completely wrong, likewise for Hashable, which will likely produce radically different results for instances that should be the same.
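For contrast, here is the implementation the example intends (comparing only the key); if this is forgotten, the synthesized == silently compares count and value as well:

extension KeyPair {
  static func ==(lhs:KeyPair, rhs:KeyPair) -> Bool {
    return lhs.key == rhs.key // value and count intentionally ignored
  }
}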

I notice that despite asking endlessly for examples, the ones I've given are being ignored. In future I shall remind people asking for examples where they can shove them.

And once again, totally ignored. You seem to love asking for "evidence" but why exactly should I bother giving anything if you ignore it when I try to?

>> There is this sample code by Thorsten Seitz with a cached property
which is quite simple and clear :
[swift-evolution] [Proposal] Explicit Synthetic Behaviour
>>
>> This is the sample code that had me enter the "worried" camp.
>>
> I really like Thorsten's example, because it actually proves that
requiring explicit derivation is NOT the correct approach here. (Let's set
aside the fact that Optionals prevent synthesis because we don't have
conditional conformances yet, and assume that we've gotten that feature as
well for the sake of argument.)
>
> Let's look at two scenarios:
>
> 1) Imagine I have a value type with a number of simple Equatable
properties. In a world where synthesis is explicit, I tell that value type
to "derive Equatable". Everything is fine. Later, I decide to add some
cache property like in Thorsten's example, and that property just happens
to also be Equatable. After doing so, the correct thing to do would be to
remove the "derive" part and provide my custom implementation. But if I
forget to do that, the synthesized operator still exists and applies to
that type. If you're arguing that "derive Equatable" is better because its
explicitness prevents errors, you must also accept that there are possibly
just as many cases where that explicitness does *not* prevent errors.

It looks like it is true, but it is not. The implicit world is harder to
deal with. And this is because in the implicit world, you never know if
conformance has been synthesized or not. In the explicit world, you know:
it's plain written down.

To see this point, please play the game I've proposed to Xiaodi at
[swift-evolution] [Proposal] Explicit Synthetic Behaviour

Since your game involves the specific question of a property that should
not be considered for Equatable synthesis, and since SE-0185 explicitly
calls out the notion of transient properties as a future direction to solve
that specific problem, my solution is "I declare the property as
transient."

Surely this is an acceptable solution? It achieves the goal you set, and
more concisely/quickly than the ways you proposed in that post. It doesn't
require me to go fishing through code; the act of adding the property and
making it transient is completely localized to one place (it's one line!).
If there's a synthesized implementation of Equatable, then the property
gets ignored as desired. If there's a handwritten implementation, then the
new property is already ignored because it wasn't there to begin with, but
the transient declaration still provides valuable information to the reader
of the code about the intention.

If the rebuttal to that is going to be "a developer may not know about
transient", then where do we draw the line at expecting users to know how
to use the features of their language? It's a significant leap to go from
"developers might do the wrong thing" to "so this specific approach is the
only right way to fix it."

···

On Tue, Sep 12, 2017 at 8:10 AM Gwendal Roué <gwendal.roue@gmail.com> wrote:

> 2) Imagine I have a value type with 10 Equatable properties and one
caching property that also happens to be Equatable. The solution being
proposed here says that I'm better off with explicit synthesis because if I
conform that type to Equatable without "derive", I get an error, and then I
can provide my own custom implementation.

Yes.

> But I have to provide that custom implementation *anyway* to ignore the
caching property even if we don't make synthesis explicit. Making it
explicit hasn't saved me any work—it's only given me a compiler error for a
problem that I already knew I needed to resolve.

Oh, so compiler errors and warnings bring no information to you? :-) I'm
impressed, but I'll keep on focusing on designing a language for average
developers, not only geniuses.

Replay your argument and imagine that compiler outputs are useful to you.
This will change your conclusion.

Gwendal

>
>> There is this sample code by Thorsten Seitz with a cached property
which is quite simple and clear :
[swift-evolution] [Proposal] Explicit Synthetic Behaviour
>>
>> This is the sample code that had me enter the "worried" camp.
>
> Sorry, I fail to see what the problem is in that example. A method was
invoked that changed a stored property of one instance. Therefore, it’s no
longer equal to the other instance. If you want a custom notion of
equality, you should implement it yourself. In the absence of such, the
_default_ notion of equality is pretty clear.

Yes, I agree with you. As long as you reason in this way, there is no
problem at all, just a programmer error.

I'll try to show that you need to play some moves further before problems
appear. Don't look at the code as a static thing, but as a living thing.

Please follow me: will you play a little programmer game? Here is a
snippet of code. It is embedded in this Xcode project which, as you can
see, contains hundreds of files. Your mission is to add an extra property
which should not be considered for Equatable adoption. How do you do that?

        struct S : Equatable {
                var x: Int
        }

The answer is:

1. Find if there is an extension somewhere that implements Equatable
conformance.
2. If you are eventually sure that there is no explicit conformance yet,
write the correct one.

You may know that proving that something does not exist is harder than the
opposite, so step 1 above seems dubious to you. You are right. You
thus use this smarter technique:

1. Write a dummy explicit conformance to Equatable, compile.
2. If compiler does not complain about duplicate implementation, conclude
that there is no explicit conformance yet, and write the correct one.

Problem solved. And nice display of smartness and funny little compiler
tricks. </irony>
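For what it's worth, the "dummy explicit conformance" probe looks like this in practice (illustrative only):

extension S {
  // If the compiler reports an invalid redeclaration of ==, an explicit
  // implementation already exists somewhere in the project; otherwise this
  // placeholder must now be replaced with the correct implementation
  // (e.g. one that ignores the newly added property).
  static func ==(lhs:S, rhs:S) -> Bool {
    return lhs.x == rhs.x
  }
}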

----

No. Please look at the topic from above, and look at how developers are
working. Troubles in synthesized conformance can happen for several reasons:

1. Lack of knowledge. As you tend to say, nobody is supposed to ignore the
law, and the bug is the fault of the programmer.
2. The developer creates the type, plans to add a specific implementation,
and forgets.
3. The developer eventually adds a property and misses the fact that a
synthesized conformance exists (and is now incorrect).
4. The developer eventually adds a property and misses the fact that the
synthesized conformance is now incorrect.
5. The developer eventually adds a property, plans to fix the synthesized
conformance, and forgets.

Cases 2, 3 and 5 are examples of good faith mistakes.

In none of those cases does the compiler emit any warning. It's thus easy to
forget or miss the problem, and hard to fix it (you'll need a runtime
failure to spot it, or a thorough code review).

I hope you agree with this last sentence. This imbalance between how easy
the mistake is to make and how hard it is to fix should ring a bell for
language designers.

Suppose instead this were about a protocol named Fooable and a requirement
called foo() that has a default implementation. Everything you just talked
about would apply equally. Am I to understand that you are opposed to
default implementations in general? If so, then that’s got nothing to do
with synthesized Equatable conformance. If not, then you’ll have to justify
why.
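For reference, that hypothetical reads something like this (Fooable and its default come from the paragraph above; the conforming type is made up):

protocol Fooable {
  func foo() -> String
}

extension Fooable {
  // Ordinary, non-synthesized default provided by the protocol author.
  func foo() -> String { return "default foo" }
}

struct Widget : Fooable {
  // Conforming "for now" compiles silently; the author can forget that the
  // default is being used in place of a bespoke implementation.
}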

···

On Tue, Sep 12, 2017 at 09:53 Gwendal Roué <gwendal.roue@gmail.com> wrote:

> On 12 Sep 2017, at 12:01, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

Beyond that, it's very hard to improve one's skills in the face of those
programmer errors. They have little value beyond blind punishment. You
can't learn from them. I'm the last one who wishes to see a weird cultural
paranoia against Equatable: look at how "smart" and versed in Swift
subtleties you need to be just to add a simple property to an innocuous
struct!

Gwendal

Good arguments, Tony, you have convinced me on all points. Transient is
the way to go. Thank you for your patience!

On many points, I agree with Tony, but I disagree that "transient"
addresses the issue at hand. The challenge being made is that, as Gwendal
puts it, it's _unwise_ to have a default implementation, because people
might forget that there is a default implementation. "Transient" only works
if you remember that there is a default implementation, and in that case,
we already have a clear syntax for overriding the default.

As others point out, there's a temptation here to write things like
"transient(Equatable)" so as to control the synthesis of implementations on
a per-protocol basis. By that point, you've invented a whole new syntax for
implementing protocol requirements. (Ah, you might say, but it's hard to
write a good hashValue implementation: sure, but that's adequately solved
by a library-supplied combineHashes() function.)
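For illustration, such a helper could be as small as this (the name combineHashes comes from the message above; the mixing function is an arbitrary choice):

func combineHashes(_ hashes:Int...) -> Int {
  // Simple multiply-and-add combiner using wrapping arithmetic.
  return hashes.reduce(5381) { ($0 &* 31) &+ $1 }
}

// Hypothetical use in a handwritten Hashable conformance:
// var hashValue: Int { return combineHashes(key.hashValue, value.hashValue) }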

···

On Tue, Sep 12, 2017 at 9:58 AM, Thorsten Seitz via swift-evolution < swift-evolution@swift.org> wrote:

-Thorsten

On 12 Sep 2017, at 16:38, Tony Allevato via swift-evolution <swift-evolution@swift.org> wrote:

On Mon, Sep 11, 2017 at 10:05 PM Gwendal Roué <gwendal.roue@gmail.com> wrote:

This doesn't align with how Swift views the role of protocols, though.
One of the criteria that the core team has said they look for in a protocol
is "what generic algorithms would be written using this protocol?"
AutoSynthesize doesn't satisfy that—there are no generic algorithms that
you would write with AutoEquatable that differ from what you would write
with Equatable.

And so everybody has to swallow implicit and non-avoidable code
synthesis and shut up?

That's not what I said. I simply pointed out one of the barriers to
getting a new protocol added to the language.

Code synthesis is explicitly opt-in and quite avoidable—you either don't
conform to the protocol, or you conform to the protocol and provide your
own implementation. What folks are differing on is whether there should
have to be *two* explicit switches that you flip instead of one.

No. One does not add a protocol conformance on a whim. One adds a protocol
conformance out of need. So the conformance to the protocol is a *given* in
our analysis of the consequences of code synthesis. You cannot say "just don't
adopt it".

As soon as I type the protocol name, I get synthesis. That's the reason
why the synthesized code is implicit. The synthesis is explicitly written
in the protocol documentation, if you want. But not in the programmer's
code.

I did use "non-avoidable" badly, you're right: one can avoid it, by
providing a custom implementation.

So the code synthesis out of a mere protocol adoption *is* implicit.

Let's imagine a pie. The whole pie is the set of all Swift types. Some
slice of that pie is the subset of those types that satisfy the conditions
that allow one of our protocols to be synthesized. Now that slice of pie
can be sliced again, into the subset of types where (1) the synthesized
implementation is correct both in terms of strict value and of business
logic, and (2) the subset where it is correct in terms of strict value but
is not the right business logic because of something like transient data.

Yes.

What we have to consider is, how large is slice (2) relative to the whole
pie, *and* what is the likelihood that developers are going to mistakenly
conform to the protocol without providing their own implementation, *and*
is the added complexity worth protecting against this case?

That's quite a difficult job: do you think you can evaluate this
likelihood?

Explicit synthesis has big advantage: it avoids this question entirely.

Remember that the main problem with slice (2) is that developers cannot
*learn* to avoid it.

For each type in slice (2) there is a probability that it comes into
existence with a forgotten explicit protocol adoption. And this probability
will not go down as people learn Swift and discover the existence of slice
(2). Why? Because this probability is driven by unavoidable human behaviors:
- the developer doesn't see the problem (a programmer mistake)
- the developer plans to add an explicit conformance later and happens to
forget (carelessness)
- a developer extends an existing type with a transient property, and
doesn't add the explicit protocol conformance that has become required.

Cases 2 and 3 bite even experienced developers. And they can't be improved
by learning.

Looks like the problem is better defined as an ergonomics issue, now.

If someone can show me something that points to accidental synthesized
implementations being a significant barrier to smooth development in Swift,
I'm more than happy to consider that evidence. But right now, this all
seems hypothetical ("I'm worried that...") and what's being proposed is
adding complexity to the language (an entirely new axis of protocol
conformance) that would (1) solve a problem that may not exist to any great
degree, and (2) does not address the fact that if that problem does indeed
exist, then the same problem just as likely exists with certain
non-synthesized default implementations.

There is this sample code by Thorsten Seitz with a cached property which
is quite simple and clear: https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20170911/039684.html

This is the sample code that had me enter the "worried" camp.

I really like Thorsten's example, because it actually proves that
requiring explicit derivation is NOT the correct approach here. (Let's set
aside the fact that Optionals prevent synthesis because we don't have
conditional conformances yet, and assume that we've gotten that feature as
well for the sake of argument.)

Let's look at two scenarios:

1) Imagine I have a value type with a number of simple Equatable
properties. In a world where synthesis is explicit, I tell that value type
to "derive Equatable". Everything is fine. Later, I decide to add some
cache property like in Thorsten's example, and that property just happens
to also be Equatable. After doing so, the correct thing to do would be to
remove the "derive" part and provide my custom implementation. But if I
forget to do that, the synthesized operator still exists and applies to
that type. If you're arguing that "derive Equatable" is better because its
explicitness prevents errors, you must also accept that there are possibly
just as many cases where that explicitness does *not* prevent errors.

2) Imagine I have a value type with 10 Equatable properties and one
caching property that also happens to be Equatable. The solution being
proposed here says that I'm better off with explicit synthesis because if I
conform that type to Equatable without "derive", I get an error, and then I
can provide my own custom implementation. But I have to provide that custom
implementation *anyway* to ignore the caching property even if we don't
make synthesis explicit. Making it explicit hasn't saved me any work—it's
only given me a compiler error for a problem that I already knew I needed
to resolve. If we tack on Hashable and Codable to that type, then I still
have to write a significant amount of boilerplate for those custom
operations. Furthermore, if synthesis is explicit, I have *more* work
because I have to declare it explicitly even for types where the problem
above does not occur.

So, making derivation explicit is simply a non-useful dodge that doesn't
solve the underlying problem, which is this: Swift's type system currently
does not distinguish between Equatable properties that *do* contribute to
the "value" of their containing instance vs. Equatable properties that *do
not* contribute to the "value" of their containing instance. It's the
difference between behavior based on a type and additional business logic
implemented on top of those types.

So, what I'm trying to encourage people to see is this: saying "there are
some cases where synthesis is risky because it's incompatible with certain
semantics, so let's make it explicit everywhere" is trying to fix the wrong
problem. What we should be looking at is *"how do we give Swift the
additional semantic information it needs to make the appropriate decision
about what to synthesize?"*

That's where concepts like "transient" come in. If I have an
Equatable/Hashable/Codable type with 10 properties and one cache property,
I *still* want the synthesis for those first 10 properties. I don't want
the presence of *one* property to force me to write all of that boilerplate
myself. I just want to tell the compiler which properties to ignore.

Imagine you're a stranger reading the code to such a type for the first
time. Which would be easier for you to quickly understand? The version with
custom implementations of ==, hashValue, init(from:), and encode(to:) all
covering 10 or more properties that you have to read through to figure out
what's being ignored (and make sure that the author has done so correctly),
or the version that conforms to those protocols, does not contain a custom
implementation, and has each transient property clearly marked? The latter
is more concise and "transient" carries semantic weight that gets buried in
a handwritten implementation.

Here's a fun exercise—you can actually write something like "transient"
without any additional language support today:
https://gist.github.com/allevato/e1aab2b7b2ced72431c3cf4de71d306d. A big drawback to this
Transient type is that it's not as easy to use as an Optional because of
the additional sugar that Swift provides for the latter, but one could
expand it with some helper properties and methods to sugar it up the best
that the language will allow today.
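The gist itself isn't reproduced here, but the general shape of such a wrapper is easy to sketch (this is an assumption about its contents, not a copy of it):

struct Transient<Wrapped> : Equatable, Hashable {
  var value:Wrapped

  // The wrapped value never participates in equality or hashing, so a
  // synthesized conformance on the containing type effectively ignores it.
  static func ==(lhs:Transient, rhs:Transient) -> Bool { return true }
  var hashValue:Int { return 0 }
}

// Hypothetical usage: `cache` no longer affects the synthesized == of Entry.
struct Entry : Equatable, Hashable {
  var key:String
  var cache:Transient<[String]>
}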

I would wager that this concept, either as a wrapper type or as a built-in
property attribute, would solve a significant majority of cases where
synthesis is viewed to be "risky". If we accept that premise, then we can go
back to our slice of pie and all we're left with in terms of "risky" types
are "types that contain properties that conform to a certain protocol but
are not really transient but also shouldn't be included verbatim in
synthesized operations". I'm struggling to imagine a type that fits that
description, so if they do exist, it's doubtful that they're a common
enough problem to warrant introducing more complexity into the protocol
conformance system.

Gwendal


Maybe something like this as a middle ground:

protocol Equatable {
    @synthetic static func ==(_ lhs: Self, _ rhs: Self) -> Bool
}

The protocol itself contains a default implementation, but without a real body.
Instead the function is marked to indicate that the real body is generated by
the compiler. There is an explicit mention of the default implementation (by
compiler magic), but it does not affect users, as they would still use the
protocol in the normal way:

struct Foo: Equatable { .... }

Ondrej B.

···

On Wed, Sep 13, 2017 at 4:14 PM, Haravikk via swift-evolution < swift-evolution@swift.org> wrote:

On 13 Sep 2017, at 03:26, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Tue, Sep 12, 2017 at 11:43 AM, Haravikk via swift-evolution <swift- > evolution@swift.org> wrote:

On 12 Sep 2017, at 12:08, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Mon, Sep 11, 2017 at 06:03 Haravikk via swift-evolution < >> swift-evolution@swift.org> wrote:

See, this is another flawed assumption; you are assuming that omitting a
custom implementation of == is always intentional rather than an oversight,
which is not guaranteed. This is one of my gripes with the retroactive
change to Equatable, as it is currently *impossible* to omit an
implementation.

Again, this applies equally to the addition of _any_ default
implementation. And again, such changes don’t even require Swift Evolution
approval.

So what? Because the Swift Evolution process is currently deficient we
should just give up on discussing problems with features and the language
altogether?

I don't claim that it's a deficiency; I claim it's reflective of Swift's
opinionated take on default implementations. Are you, after all, saying
that you have a problem with the addition of _any_ default implementation
to an existing protocol? If so, this conversation isn't about
synthesis/reflection at all.

No, and you should know that by now. I suggest actually reading some of
what I have written as I am sick of repeating myself.

And precisely what kind of "evidence" am I expected to give? This is a set

of features that *do not exist yet*, I am trying to argue in favour of
an explicit end-developer centric opt-in rather than an implicit protocol
designer centric one. Yet no-one seems interested in the merits of allowing
developers to choose what they want, rather than having implicit behaviours
appear potentially unexpectedly.

Both options were examined for Codable and for Equatable/Hashable. The
community and core team decided to prefer the current design. At this
point, new insights that arise which could not be anticipated at the time
of review could prompt revision. However, so far, you have presented
arguments already considered during review.

And so far all I have heard about this is how it was "decided"; no-one
seems interested in showing how any of these concerns were addressed (if at
all), so as far as I can tell they were not, or they were wilfully ignored.

They were addressed by being considered.

And yet no-one can apparently summarise what those "considerations" might
be, suggesting that they were either *not* considered at all, or that
the "consideration" was so weak that no-one is willing to step forward to
defend it. Either way it is not sufficient by any reasonable measure.

If I were to run over your foot in my car, would you be happy to accept
that I "considered" it first?

How do you mean? People wrote in with their opinions. Then, taking into
account the community's response, the proposal was approved.

I mean because not once have you summarised what these alleged
"considerations" were; if they exist then you should be able do so, yet all
I am hearing is "it was considered", which frankly is not an argument at
all as it is entirely without substance.

If it was genuinely considered then someone should be able to say what
points were considered and what conclusions were reached and why. And even
if there *was* an earlier decision, that doesn't necessarily make it
right. We are discussing it now, and it is clear that any decision that has
been made has been made poorly at best.

And if you're talking about the discussion on Equatable/Hashable
specifically, I'm afraid your memory of the "considerations" is radically
different to mine; as the concerns I raised were essentially ignored, as
not a single person gave a justification more substantial than "but, but
Codable!" which frankly isn't a justification at all.

Therefore, your argument reduces to one about which default

implementations generally ought or ought not to be provided--that is, that
they ought to be provided only when their correctness can be guaranteed for
all (rather than almost all) possible conforming types. To which point I
sketched a rebuttal above.

If a protocol defines something, and creates a default implementation
based only upon those definitions then it must by its very nature be
correct. A concrete type may later decided to go further, but that is a
feature of the concrete type, not a failure of the protocol itself which
can function correctly within the context it created. You want to talk
evidence, yet there has been no example given that proves otherwise; thus
far only Itai has attempted to do so, but I have already pointed out the
flaws with that example.

The simple fact is that a default implementation may either be flawed
or not within the context of the protocol itself; but a reflective or
synthetic implementation by its very nature goes beyond what the protocol
defines and so is automatically flawed because as it does not rely on the
end-developer to confirm correctness, not when provided implicitly at least.

Again, if it applies generally, it must apply specifically. What is
"automatically flawed" about the very reasonable synthesized default
implementation of ==?

It makes the assumption that every equatable property of a type is
necessarily relevant to its equality.

No necessarily, only provisionally and rebuttably. If it’s not the case,
override the default.

So… entirely unlike standard default implementations which *cannot* "provisionally"
assume something is relevant at all,

Why not?

Because they can only act upon properties/methods that they themselves (or
a parent protocol) define. FFS, what is so unclear about that? Or are you
arguing on this subject without every having actually used a protocol
before?

thereby making them entirely different from synthesised/reflective

implementations!

I'm sorry, but you keep trying to argue that they're the same, but then
admitting that they're not. You can't have it both ways.

Well, certainly, synthesized default implementations differ from
non-synthesized ones in key respects. However, they do not differ in terms
of the user experience of conforming to the protocol and having to override
the default.

Except that that's not true at all, is it?

Synthesised default implementations go much further in how they attempt
(and potentially fail) to implement those defaults, and in the specific
case of Equatable/Hashable they are fully implementing a protocol without a
single property of method being raised as a requirement; they are utterly
different at a fundamental level, no amount of mental contortion changes
that fact.

Consider for example if a type stores a collection index for performance

reasons; this isn't an intrinsic part of the type, nor relevant to testing
equality, yet this default implementation will treat it as such because it
*knows nothing about the concrete type's properties*. If a protocol
does not define a property then any action taken upon such a property is
necessarily based upon an assumption; just because it might be fine some of
the time, does not make it any less flawed.

The big difference here between explicit and implicit synthetic
implementations is where this assumption originates; if a method is
synthesised implicitly then the assumption is made by the protocol designer
alone, with no real involvement by the end developer. If I explicitly
opt-in to that default however I am signalling to the protocol that it is
okay to proceed. In the former case the assumption is unreasonable, in the
latter it is explicitly authorised. It is a difference between "I want to
make the decision on what's correct" and "I am happy for you (the protocol
designer) to decide".

Right now, when I conform to Equatable, it is a declaration of "I will
implement this", but with this retroactive implicit change it is now a
declaration of "implement this for me", these are two entirely different
things. Consider; what if I'm working on a piece of code that requires
types to be Equatable, but one of the types I'm using currently isn't, so I
quickly throw Equatable conformance onto it and go back to what I was
doing, with the intention of completing conformance later. With this change
that type may now receive a default implementation that is wrong, and I've
lost the safety net that currently exists.

Right now, it still wouldn’t compile, so I don’t see why you would do
that. In the future, if you want to make it not compile, there is nothing
stopping you from conforming to a non-existent “NotYetEquatable”. This was
something that you asked about earlier and it was answered.

So your solution is to intentionally write invalid code to work around
the fact that a feature is being implemented badly?

You stated a use case where you *want* the compiler to stop your code from
compiling by stating a conformance to Equatable without implementing its
requirements. You then stated that the major problem you have with
synthesized `==` is that the compiler will now use a default implementation
that you might forget about instead of stopping compilation. Therefore, I
demonstrated how you could continue to have the compiler stop your code
from compiling. It's not my solution that is intentionally writing invalid
code; your stated aim was to be able to do so.

My stated aim was nothing of the sort.

I was pointing out that right now conforming to Equatable means something
entirely different from what it will mean in future if this idiotic change
makes it into release. Please actually read what I write before deciding
for yourself what my 'stated aim' is.

I am *not* asking for workarounds to circumvent a ridiculously flawed
change to the language, I am arguing why it is flawed and must be changed.
If I wanted a workaround I'd do what I'm now seriously considering, which
is ditching Swift completely, as I will not use a language if I can no
longer trust the team developing it or the decisions that they make.

A non-synthesised/reflective implementation cannot strictly be incorrect,

because as long as it is implemented properly it will always be correct
within the context of the protocol itself. It may not go quite as far as an
end developer might want, but that is because they want to add something
onto the protocol, not because the protocol is wrong.

A synthesised/reflective implementation differs because if it goes too
far it is wrong not only within the context of the concrete type, but also
the protocol itself, it is simply incorrect.

Again, this is an assertion that misses the mark. If the default
implementation is unsuitable for a type, it’s unsuitable whether it
“doesn’t go quite as far” or “goes too far.”

Because not going quite far enough is not a failure of the protocol, as
protocols by their very nature can only go as far as what they define. If a
protocol Foo defines two properties, a method which uses those two
properties correctly, then the method is correct. A developer of a concrete
type might want to add more information or tailor the behaviour, but that
doesn't make the default implementation incorrect, it's just considering
the type only within the context of being an instance of Foo.

Going too far is the opposite; it's the protocol designer messing around
with stuff they do not define at all. It's only ever right by chance, as
it's operating within the context of the concrete type, about which the
protocol does not know anything with certainty.

Yes, you have defined "not going far enough" and "going too far" based on
whether an implementation uses only protocol requirements or not. However,
you haven't at all demonstrated why this distinction is at all meaningful
in terms of the issue you describe with a user conforming to a protocol. If
there is a default implementation, either it returns the expected result
for the conforming type or it does not--those are the only two choices. Are
you arguing that, empirically, the default implementation for Equatable
will more often be unsuitable for conforming types? If so, what's your
evidence?

What's yours? If this issue was as "considered" as you constantly claim
then where is the evidence that there is no meaningful distinction? Surely
such evidence exists, or else the issue hasn't been considered at all, has
it?

Frankly I am sick of being asked to provide evidence when you are
seemingly unwilling to do anything in return, especially when you have
conveniently ignored every single example that I have already given.

It cuts both ways; you claim that "going too far" and "not going far
enough" are the same thing? Well prove it.

You state but do not give any rationale for the claim that the former is

not wrong in some context while the latter is always wrong.

By this line of argumentation, you’d be perfectly content if instead we
simply had the default implementation of == as “return true” because it
would be somehow not wrong.

Only if return true were a reasonable default to give in the context of
the protocol, which it clearly is not, as it's not performing any kind of
comparison of equality.

Sure it is; `return true` satisfies all the semantic requirements for
equality: reflexivity, symmetry, transitivity; and, in the context of the
protocol which only provides for this one facility (determination of
equality or inequality), any two instances that compare equal _are_
completely interchangeable "within the context of the protocol itself," as
you would say.

The purpose of Equatable is to identify types that can be compared for
equality; returning true does not satisfy that aim because no such
comparison is occurring, so your example is intentionally ridiculous. Even
a less contrived example such as comparing memory addresses doesn't fulfil
the purpose of Equatable, which is all about comparing equality of
different instances that might still be the same.

Put another way, what the proposal about synthesizing implementations for

Equatable and Hashable was about can be thought of in two parts: (a) should
there be default implementations; and (b) given that it is impossible to
write these in Swift, should we use magic? Now, as I said above, adding
default implementations isn't (afaik) even considered an API change that
requires review on this list. Really, what people were debating was (b),
whether it is worth it to implement compiler-supported magic to make these
possible. Your disagreement has to do with (a) and not (b).

Wrong. The use of magic in this case produces something else entirely;
that's the whole point. It is *not the same*, otherwise it wouldn't be
needed at all. It doesn't matter if it's compiler magic, some external
script or a native macro, ultimately they are all doing something with a
concrete type that is currently not possible.

And once again; *I am not arguing against a default implementation
that cuts boilerplate*, I am arguing against it being implicit. What I
want is to be the one asking for it, because it is not reasonable to assume
that just throwing it in there is always going to be fine, because it quite
simply is not.

If you have to ask for it, then it's not a default. You *are* against a
default implementation.

A default implementation is an implementation that I, as the concrete
type developer, do not have to provide myself. If you want default to mean
only "automatic" then your attempt to pigeon-hole what I am arguing is
incorrect, because what I am arguing is then neither about default
implementations nor the means of actually implementing it, but something
else entirely.

But as far as I'm concerned it still absolutely still a default
implementation whether it is requested or not; the difference is I, as the
end developer, am able to refine what type of defaults that I want.

The word “default” indicates something that arises in the absence of a
user indication otherwise.

Then this proposal is just for a different mechanism for "indicating
otherwise".

You keep trying to argue that a synthesised/reflective default
implementation is the same as a normal default implementation, yet you seem
to be consistently forgetting that even if that is true without this
proposal, that the very proposal itself is to change that, effectively
causing a category of default implementation to become explicitly
opted-into, rather than implicitly. They're still implementations that will
be provided automatically, just only when they are permitted to do-so.

So to be clear, you are *against* them being the *default*: you wish them
to be the *otherwise*.

You seem to be insisting upon a narrow definition of default; what I want
is control over which types of default implementations are provided. Just
because they must be opted-into explicitly does not stop them being
"default", as they are still implementations that I myself do not need to
implement. The difference is that I want to actually *want* them rather
than have provided through potentially flimsy assumptions made by a
protocol designer. Just because there's an extra step doesn't make them any
less automatic, otherwise having to conform to a protocol in the first
place would also prevent them from being defaults.

Asking *for* something is more like a middle-ground between the two; the
synthetic implementations are still possible defaults, they just aren't
provided unless you allow them, while omitting the necessary
keyword/attribute prevents them being used.

On 9 Sep 2017, at 23:17, Gwendal Roué <gwendal.roue@gmail.com> wrote:

All right, I'll be more positive: our science, IT, is a *constructive*
science, by *essence*. If there is a problem, there must be a way to show
it.
It you can't, then there is no problem.

You mean just as I have asked for examples that prove
non-synthetic/reflective default implementations are as dangerous as
synthetic/reflective ones? Plenty have suggested this is the case yet no
reasonable examples of that have been given either.

However, examples highlighting problems with the synthesised behaviour
are simple:

struct Foo : Equatable { var data:String } // Currently an error, won't
be in future

Or something a bit more substantial:

struct KeyPair : Equatable {
static var count:Int = 0

var count:Int
let key:String // This is the only property that should be equatable
var value:String

init(key:String, value:String) {
let count = KeyPair.count &+ 1
KeyPair.count = count; self.count = count
self.key = key; self.value = value
}
}

Here the only property that actually matters in the key pair is the key;
the value isn't important (only the keys are to be considered unique) and
the count is just a throwaway value. The synthesised default implementation
for this concrete type will therefore be completely wrong, and likewise for
Hashable, which will likely produce radically different results for
instances that should be considered the same.
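
For contrast, a minimal sketch of the hand-written implementation this type
presumably wants, comparing only the key (an illustrative addition, not part
of the original example):

extension KeyPair {
    // Only `key` participates in equality; `value` and `count` are ignored.
    // A synthesised == would compare all three stored properties.
    static func == (lhs: KeyPair, rhs: KeyPair) -> Bool {
        return lhs.key == rhs.key
    }
}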

I notice that despite asking endlessly for examples, the ones I've given
are being ignored. In future I shall remind people asking for examples
where they can shove them.

And once again, totally ignored. You seem to love asking for "evidence"
but why exactly should I bother giving anything if you ignore it when I try
to?


FWIW, I fully agree with Haravikk. Just want to add my 2 cents. We hear the opinion that a protocol with auto-synthesis of requirements should be treated as a normal protocol with a default implementation, i.e. that this is just a variant of default implementation.

Actually I can agree with this. *BUT.* Such a *well-declared* protocol, which uses macros/reflection/whatever to access a *type's fields* in order to implement the default methods, should IMO also require an explicit, 'deriving'-like keyword at the conformance site before those methods may be synthesized. As a first thought, such a protocol could be marked with some kind of @synthesizable directive, and the compiler could probably provide some helpers for implementing synthesizable defaults. Yes, one could probably implement a protocol that uses macros/reflection without the @synthesizable directive, but that should be considered a not-well-formed protocol.
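
As a rough sketch of the shape this could take (note that neither @synthesizable nor deriving exists in Swift; this is purely hypothetical syntax for illustration):

// Hypothetical syntax -- not valid Swift today.
@synthesizable
protocol Equatable {
    static func == (lhs: Self, rhs: Self) -> Bool
}

// The conforming type must explicitly ask for the synthesized members:
struct Point: deriving Equatable {
    var x: Int
    var y: Int
}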

Vladimir.

···

On 13.09.2017 17:13, Haravikk via swift-evolution wrote:

Well, certainly, synthesized default implementations differ from non-synthesized ones in key respects. However, they do not differ in terms of the user experience of conforming to the protocol and having to override the default.

Except that that's not true at all, is it?

Synthesised default implementations go much further in how they attempt (and potentially fail) to implement those defaults, and in the specific case of Equatable/Hashable they are fully implementing a protocol without a single property or method being raised as a requirement; they are utterly different at a fundamental level, and no amount of mental contortion changes that fact.

See, this is another flawed assumption; you are assuming that omitting a
custom implementation of == is always intentional rather than an oversight,
which is not guaranteed. This is one of my gripes with the retroactive
change to Equatable, as it is currently *impossible* to omit an
implementation.

Again, this applies equally to the addition of _any_ default
implementation. And again, such changes don’t even require Swift Evolution
approval.

So what? Because the Swift Evolution process is currently deficient we
should just give up on discussing problems with features and the language
altogether?

I don't claim that it's a deficiency; I claim it's reflective of Swift's
opinionated take on default implementations. Are you, after all, saying
that you have a problem with the addition of _any_ default implementation
to an existing protocol? If so, this conversation isn't about
synthesis/reflection at all.

No, and you should know that by now. I suggest actually reading some of
what I have written as I am sick of repeating myself.

And precisely what kind of "evidence" am I expected to give? This is a set
of features that *do not exist yet*, I am trying to argue in favour of
an explicit end-developer centric opt-in rather than an implicit protocol
designer centric one. Yet no-one seems interested in the merits of allowing
developers to choose what they want, rather than having implicit behaviours
appear potentially unexpectedly.

Both options were examined for Codable and for Equatable/Hashable. The
community and core team decided to prefer the current design. At this
point, new insights that arise which could not be anticipated at the time
of review could prompt revision. However, so far, you have presented
arguments already considered during review.

And so far all I have heard about this is how it was "decided"; no-one
seems interested in showing how any of these concerns were addressed (if at
all), so as far as I can tell they were not, or they were wilfully ignored.

They were addressed by being considered.

And yet no-one can apparently summarise what those "considerations" might
be, suggesting that they were either *not* considered at all, or that
the "consideration" was so weak that no-one is willing to step forward to
defend it. Either way it is not sufficient by any reasonable measure.

If I were to run over your foot in my car, would you be happy to accept
that I "considered" it first?

How do you mean? People wrote in with their opinions. Then, taking into
account the community's response, the proposal was approved.

I mean because not once have you summarised what these alleged
"considerations" were; if they exist then you should be able do so, yet all
I am hearing is "it was considered", which frankly is not an argument at
all as it is entirely without substance.

Of course it is not an argument at all. It is a factual statement. The
objections which you mentioned were also mentioned prior to a decision
about SE-0185. The community and the core team had an opportunity to view
those objections. After that time, a decision was made, having considered
all the stated pros and cons which included the ones that you are now
repeating. What "considerations" are you looking for?

If it was genuinely considered then someone should be able to say what
points were considered and what conclusions were reached and why. And even
if there *was* an earlier decision, that doesn't necessarily make it
right.

No, but it does mean that discussion of this topic has concluded on Swift
Evolution.

We are discussing it now, and it is clear that any decision that has been
made has been made poorly at best.

And if you're talking about the discussion on Equatable/Hashable
specifically, I'm afraid your memory of the "considerations" is radically
different to mine; as the concerns I raised were essentially ignored, as
not a single person gave a justification more substantial than "but, but
Codable!" which frankly isn't a justification at all.

Therefore, your argument reduces to one about which default
implementations generally ought or ought not to be provided--that is, that
they ought to be provided only when their correctness can be guaranteed for
all (rather than almost all) possible conforming types. To which point I
sketched a rebuttal above.

If a protocol defines something, and creates a default implementation
based only upon those definitions then it must by its very nature be
correct. A concrete type may later decide to go further, but that is a
feature of the concrete type, not a failure of the protocol itself which
can function correctly within the context it created. You want to talk
evidence, yet there has been no example given that proves otherwise; thus
far only Itai has attempted to do so, but I have already pointed out the
flaws with that example.

The simple fact is that a default implementation may either be flawed
or not within the context of the protocol itself; but a reflective or
synthetic implementation by its very nature goes beyond what the protocol
defines and so is automatically flawed, because it does not rely on the
end developer to confirm correctness, at least not when provided implicitly.

Again, if it applies generally, it must apply specifically. What is
"automatically flawed" about the very reasonable synthesized default
implementation of ==?

It makes the assumption that every equatable property of a type is
necessarily relevant to its equality.
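
For concreteness, the synthesised default under discussion behaves as though the compiler had written a memberwise comparison over every stored property, roughly like this (a sketch of the effective behaviour, not the compiler's actual output):

struct Point: Equatable {
    var x: Int
    var y: Int

    // Roughly what SE-0185 synthesis provides: every stored property
    // is assumed to be relevant to equality.
    static func == (lhs: Point, rhs: Point) -> Bool {
        return lhs.x == rhs.x && lhs.y == rhs.y
    }
}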

Not necessarily, only provisionally and rebuttably. If it's not the case,
override the default.

So… entirely unlike standard default implementations which *cannot* "provisionally"
assume something is relevant at all,

Why not?

Because they can only act upon properties/methods that they themselves (or
a parent protocol) define. FFS, what is so unclear about that? Or are you
arguing on this subject without ever having actually used a protocol
before?

There is no guarantee whatsoever that any particular default implementation
is suitable for your particular conforming type. If it were so guaranteed,
it would not need to be a dynamically dispatched default; it could just be
a protocol extension member. Every default implementation is provisional in
some way or other.
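
A minimal sketch of that point, using an invented protocol: the default below is written purely in terms of the protocol's own requirement, yet it is still only provisionally suitable for any particular conforming type, which is exactly why it is a dynamically dispatched requirement that can be overridden:

protocol Named {
    var name: String { get }
    var displayName: String { get }   // default provided below
}

extension Named {
    // Built purely from the protocol's own requirements...
    var displayName: String { return name }
}

struct Country: Named {
    var name: String
    var code: String

    // ...and yet this particular type still needs to override it.
    var displayName: String { return "\(name) (\(code))" }
}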

thereby making them entirely different from synthesised/reflective
implementations!

I'm sorry, but you keep trying to argue that they're the same, but then
admitting that they're not. You can't have it both ways.

Well, certainly, synthesized default implementations differ from
non-synthesized ones in key respects. However, they do not differ in terms
of the user experience of conforming to the protocol and having to override
the default.

Except that that's not true at all, is it?

Synthesised default implementations go much further in how they attempt
(and potentially fail) to implement those defaults, and in the specific
case of Equatable/Hashable they are fully implementing a protocol without a
single property or method being raised as a requirement; they are utterly
different at a fundamental level, no amount of mental contortion changes
that fact.

You keep repeating this; but repeated assertion doesn't make it true. The
author of a type who conforms it to a protocol has to determine whether the
default implementations make sense for that type. They either do or they do
not; both are possible, and if they are not suitable, it does not matter
one bit whether this is because it "goes too far" or "not far enough."

Consider for example if a type stores a collection index for performance
reasons; this isn't an intrinsic part of the type, nor relevant to testing
equality, yet this default implementation will treat it as such because it
*knows nothing about the concrete type's properties*. If a protocol
does not define a property then any action taken upon such a property is
necessarily based upon an assumption; just because it might be fine some of
the time, does not make it any less flawed.
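
A sketch of the kind of type being described here (the names are invented for illustration):

struct Selection: Equatable {
    var items: [String]
    var selectedItem: String

    // Cached purely for performance; not part of the value.
    var cachedIndex: Int? = nil

    // The comparison the author intends ignores the cache...
    static func == (lhs: Selection, rhs: Selection) -> Bool {
        return lhs.items == rhs.items && lhs.selectedItem == rhs.selectedItem
    }
    // ...whereas a fully synthesised == would also compare cachedIndex,
    // because synthesis knows nothing about which properties matter.
}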

The big difference here between explicit and implicit synthetic
implementations is where this assumption originates; if a method is
synthesised implicitly then the assumption is made by the protocol designer
alone, with no real involvement by the end developer. If I explicitly
opt-in to that default however I am signalling to the protocol that it is
okay to proceed. In the former case the assumption is unreasonable, in the
latter it is explicitly authorised. It is a difference between "I want to
make the decision on what's correct" and "I am happy for you (the protocol
designer) to decide".

Right now, when I conform to Equatable, it is a declaration of "I will
implement this", but with this retroactive implicit change it is now a
declaration of "implement this for me", these are two entirely different
things. Consider; what if I'm working on a piece of code that requires
types to be Equatable, but one of the types I'm using currently isn't, so I
quickly throw Equatable conformance onto it and go back to what I was
doing, with the intention of completing conformance later. With this change
that type may now receive a default implementation that is wrong, and I've
lost the safety net that currently exists.

Right now, it still wouldn’t compile, so I don’t see why you would do
that. In the future, if you want to make it not compile, there is nothing
stopping you from conforming to a non-existent “NotYetEquatable”. This was
something that you asked about earlier and it was answered.

So your solution is to intentionally write invalid code to work around
the fact that a feature is being implemented badly?

You stated a use case where you *want* the compiler to stop your code from
compiling by stating a conformance to Equatable without implementing its
requirements. You then stated that the major problem you have with
synthesized `==` is that the compiler will now use a default implementation
that you might forget about instead of stopping compilation. Therefore, I
demonstrated how you could continue to have the compiler stop your code
from compiling. It's not my solution that is intentionally writing invalid
code; your stated aim was to be able to do so.

My stated aim was nothing of the sort.

I was pointing out that right now conforming to Equatable means something
entirely different from what it will mean in future if this idiotic change
makes it into release. Please actually read what I write before deciding
for yourself what my 'stated aim' is.

I am *not* asking for workarounds to circumvent a ridiculously flawed
change to the language, I am arguing why it is flawed and must be changed.
If I wanted a workaround I'd do what I'm now seriously considering, which
is ditching Swift completely, as I will not use a language if I can no
longer trust the team developing it or the decisions that they make.

You are certainly free to choose to write in any language you please; but
it is certainly not appropriate to attack proposals with words such as
'idiot.' The core team reviewed the changes in light of all comments and
came to this decision. If you want to persuade them otherwise, you'll need
to do more than to repeat the same comments you've already made.

A non-synthesised/reflective implementation cannot strictly be incorrect,
because as long as it is implemented properly it will always be correct
within the context of the protocol itself. It may not go quite as far as an
end developer might want, but that is because they want to add something
onto the protocol, not because the protocol is wrong.

A synthesised/reflective implementation differs because if it goes too
far it is wrong not only within the context of the concrete type, but also
the protocol itself; it is simply incorrect.

Again, this is an assertion that misses the mark. If the default
implementation is unsuitable for a type, it’s unsuitable whether it
“doesn’t go quite as far” or “goes too far.”

Because not going quite far enough is not a failure of the protocol, as
protocols by their very nature can only go as far as what they define. If a
protocol Foo defines two properties, a method which uses those two
properties correctly, then the method is correct. A developer of a concrete
type might want to add more information or tailor the behaviour, but that
doesn't make the default implementation incorrect, it's just considering
the type only within the context of being an instance of Foo.

Going too far is the opposite; it's the protocol designer messing around
with stuff they do not define at all. It's only ever right by chance, as
it's operating within the context of the concrete type, about which the
protocol does not know anything with certainty.

Yes, you have defined "not going far enough" and "going too far" based on
whether an implementation uses only protocol requirements or not. However,
you haven't demonstrated why this distinction is at all meaningful
in terms of the issue you describe with a user conforming to a protocol. If
there is a default implementation, either it returns the expected result
for the conforming type or it does not--those are the only two choices. Are
you arguing that, empirically, the default implementation for Equatable
will more often be unsuitable for conforming types? If so, what's your
evidence?

What's yours? If this issue was as "considered" as you constantly claim
then where is the evidence that there is no meaningful distinction? Surely
such evidence exists, or else the issue hasn't been considered at all, has
it?

Firstly, it is by definition not possible to prove a negative; secondly, as
you are the one who created this distinction, the onus is certainly on you
to provide convincing evidence of it, which you have not yet done.

Frankly I am sick of being asked to provide evidence when you are seemingly
unwilling to do anything in return, especially when you have conveniently
ignored every single example that I have already given.

It cuts both ways; you claim that "going too far" and "not going far
enough" are the same thing? Well prove it.

You state but do not give any rationale for the claim that the former is
not wrong in some context while the latter is always wrong.

By this line of argumentation, you’d be perfectly content if instead we
simply had the default implementation of == as “return true” because it
would be somehow not wrong.

Only if return true were a reasonable default to give in the context of
the protocol, which it clearly is not, as it's not performing any kind of
comparison of equality.

Sure it is; `return true` satisfies all the semantic requirements for
equality: reflexivity, symmetry, transitivity; and, in the context of the
protocol which only provides for this one facility (determination of
equality or inequality), any two instances that compare equal _are_
completely interchangeable "within the context of the protocol itself," as
you would say.
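
For reference, the deliberately degenerate default being discussed would look something like this (a sketch only; nothing like it exists in the standard library):

extension Equatable {
    // Reflexive, symmetric and transitive -- and useless, because it
    // never distinguishes any two values.
    static func == (lhs: Self, rhs: Self) -> Bool {
        return true
    }
}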

The purpose of Equatable is to identify types that can be compared for
equality; returning true does not satisfy that aim because no such
comparison is occurring, so your example is intentionally ridiculous.

Of course it is intentionally ridiculous. It is intended to show that
_your_ claim that "not going far enough" is still "correct within the
context of the protocol itself" and therefore OK for a default
implementation can produce absurd results, and that these results (though
they fill your criteria for "correctness") are far poorer defaults than
synthesized `==`, therefore demonstrating that "not going far enough" is
certainly not somehow necessarily less "wrong" than "going too far."

Even a less contrived example such as comparing memory addresses doesn't
fulfil the purpose of Equatable, which is all about comparing equality of
different instances that might still be the same.

Put another way, what the proposal about synthesizing implementations for
Equatable and Hashable was about can be thought of in two parts: (a) should
there be default implementations; and (b) given that it is impossible to
write these in Swift, should we use magic? Now, as I said above, adding
default implementations isn't (afaik) even considered an API change that
requires review on this list. Really, what people were debating was (b),
whether it is worth it to implement compiler-supported magic to make these
possible. Your disagreement has to do with (a) and not (b).

Wrong. The use of magic in this case produces something else entirely;
that's the whole point. It is *not the same*, otherwise it wouldn't be
needed at all. It doesn't matter if it's compiler magic, some external
script or a native macro, ultimately they are all doing something with a
concrete type that is currently not possible.

And once again; *I am not arguing against a default implementation
that cuts boilerplate*, I am arguing against it being implicit. What I
want is to be the one asking for it, because it is not reasonable to assume
that just throwing it in there is always going to be fine, because it quite
simply is not.

If you have to ask for it, then it's not a default. You *are* against a
default implementation.

A default implementation is an implementation that I, as the concrete
type developer, do not have to provide myself. If you want default to mean
only "automatic" then your attempt to pigeon-hole what I am arguing is
incorrect, because what I am arguing is then neither about default
implementations nor the means of actually implementing it, but something
else entirely.

But as far as I'm concerned it is still absolutely a default
implementation whether it is requested or not; the difference is that I, as
the end developer, am able to refine which types of default I want.

The word “default” indicates something that arises in the absence of a
user indication otherwise.

Then this proposal is just for a different mechanism for "indicating
otherwise".

You keep trying to argue that a synthesised/reflective default
implementation is the same as a normal default implementation, yet you seem
to be consistently forgetting that even if that is true without this
proposal, the whole point of the proposal is to change that, turning this
category of default implementation into something explicitly opted into
rather than implicit. They're still implementations that will be provided
automatically, just only when they are permitted to be.

So to be clear, you are *against* them being the *default*: you wish them
to be the *otherwise*.

You seem to be insisting upon a narrow definition of default; what I want
is control over which types of default implementations are provided. Just
because they must be opted-into explicitly does not stop them being
"default",

Yes, they do stop being "default"; that is what the word means.

as they are still implementations that I myself do not need to implement.

There must be adjectives to describe this, but "default" is not it. Bottom
line is: you don't agree with SE-0185 providing a synthesized default
implementation for Equatable and Hashable requirements.


First - I can go either way on this issue, since proper traits/mixins and hygienic macros both appear to be strong influencers which will drive evolution in ways we cannot evaluate yet.

In my opinion, the difference between most protocol default implementations and Equatable/Hashable is that most protocol default implementations are based on the contract of the protocol itself. I can understand that Sequence#contains works by default because a Sequence implementation has a required makeIterator() function that meets the Sequence contract (and returns an Iterator that meets the iterator contract).

There is nothing in the Equatable contract saying "two instances of a value type are considered equal when all the properties on the type are Equatable and compare as being equal." That is a behavioral assumption made by the default implementation.
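
That contrast can be made explicit: a default like contains can be written purely in terms of what the protocol itself requires, with no knowledge of a conforming type's stored properties (a simplified sketch, renamed so it doesn't clash with the standard library's own method, and not its actual implementation):

extension Sequence where Element: Equatable {
    // Uses only iteration, which every Sequence is required to provide.
    func containsElement(_ target: Element) -> Bool {
        for element in self where element == target {
            return true
        }
        return false
    }
}

A memberwise == for an arbitrary conforming type cannot be written this way, because Equatable declares no properties for it to use.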

The current usage of traits in Swift (in my experience) can be a bit deficient in that there is an assumption of a single behavior. For example, I might be able to default quite a bit of a view's delegate if I was willing to make assumptions, such as each row being a fixed size with a fixed (set of) views. But this is inappropriate as a default behavior for all table view delegates.

We also have the disadvantage that we haven't solved the problem of indicating a method is meant to implement a protocol, so typos and incorrect function signatures can result in the wrong behavior at runtime.

-DW

···

On Sep 12, 2017, at 11:00 AM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

Suppose instead this were about a protocol named Fooable and a requirement called foo() that has a default implementation. Everything you just talked about would apply equally. Am I to understand that you are opposed to default implementations in general? If so, then that’s got nothing to do with synthesized Equatable conformance. If not, then you’ll have to justify why.

I think that it's reasonable to assume that the protocol would inform developers of the use of transient, but this comes back to the original topic of this thread; if the developer didn't ask for the synthesised behaviour then is it reasonable to assume they'll have properly marked their properties as transient?

My argument would (perhaps unsurprisingly) be no; a default implementation that can make use of attributes is IMO something that a developer should opt into explicitly as a convenient alternative to providing a full implementation themselves.
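
For reference, the kind of marking being discussed would look something like this (@transient here is a hypothetical attribute along the lines of SE-0185's "future directions", not something that exists in Swift):

struct Player: Equatable {
    var name: String
    var score: Int

    // Hypothetical: excluded from any synthesised == or hashValue.
    @transient var cachedRank: Int? = nil
}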

···

On 12 Sep 2017, at 16:29, Tony Allevato via swift-evolution <swift-evolution@swift.org> wrote:

Since your game involves the specific question of a property that should not be considered for Equatable synthesis, and since SE-0185 explicitly calls out the notion of transient properties as a future direction to solve that specific problem, then my solution is "I declare the property as transient."

Surely this is an acceptable solution? It achieves the goal you set, and more concisely/quickly than the ways you proposed in that post. It doesn't require me to go fishing through code; the act of adding the property and making it transient is completely localized to one place (it's one line!). If there's a synthesized implementation of Equatable, then the property gets ignored as desired. If there's a handwritten implementation, then the new property is already ignored because it wasn't there to begin with, but the transient declaration still provides valuable information to the reader of the code about the intention.

If the rebuttal to that is going to be "a developer may not know about transient", then where do we draw the line at expecting users to know how to use the features of their language? It's a significant leap to go from "developers might do the wrong thing" to "so this specific approach is the only right way to fix it."

Your answer is totally valid in a world where properties can be declared transient.

So you may now be aware that in a world where there are no transient properties, implicit synthesis of Equatable and Hashable conformance has problems that explicit synthesis does not. Making more people aware of this is my only goal in this thread.

Gwendal

···

On 12 Sep 2017, at 17:26, Tony Allevato <tony.allevato@gmail.com> wrote:

Since your game involves the specific question of a property that should not be considered for Equatable synthesis, and since SE-0185 explicitly calls out the notion of transient properties as a future direction to solve that specific problem, then my solution is "I declare the property as transient."

Surely this is an acceptable solution? It achieves the goal you set, and more concisely/quickly than the ways you proposed in that post. It doesn't require me to go fishing through code; the act of adding the property and making it transient is completely localized to one place (it's one line!). If there's a synthesized implementation of Equatable, then the property gets ignored as desired. If there's a handwritten implementation, then the new property is already ignored because it wasn't there to begin with, but the transient declaration still provides valuable information to the reader of the code about the intention.

If the rebuttal to that is going to be "a developer may not know about transient", then where do we draw the line at expecting users to know how to use the features of their language? It's a significant leap to go from "developers might do the wrong thing" to "so this specific approach is the only right way to fix it."

In none of those cases does the compiler emit any warning. It's thus easy to forget or miss the problem, and hard to fix it (you'll need a runtime failure to spot it, or a thorough code review).

I hope you agree with this last sentence. This imbalance between the ease of making the mistake and the difficulty of fixing it should ring a bell for language designers.

Suppose instead this were about a protocol named Fooable and a requirement called foo() that has a default implementation. Everything you just talked about would apply equally. Am I to understand that you are opposed to default implementations in general? If so, then that’s got nothing to do with synthesized Equatable conformance. If not, then you’ll have to justify why.

Sounds like a good argument, until one realises that if a protocol does not provide a default implementation for a method, it may be because a default implementation is impossible to provide (the most usual case), or because it would be unwise to do so.

And indeed, the topic currently discussed is not whether we should remove default implementations. Instead, the question is: is it wise or not to provide an *implicit* default Equatable/Hashable/XXX implementation?

The proponents of explicit synthesis are attempting to say that it would be unwise to do so. Please don't make us repeat ourselves again; that would be disrespectful.

BTW, Happy Keynote to everybody!
Gwendal

Good arguments, Tony, you have convinced me on all points. Transient is
the way to go. Thank you for your patience!

On many points, I agree with Tony, but I disagree that "transient"
addresses the issue at hand. The challenge being made is that, as Gwendal
puts it, it's _unwise_ to have a default implementation, because people
might forget that there is a default implementation. "Transient" only works
if you remember that there is a default implementation, and in that case,
we already have a clear syntax for overriding the default.

Right—I hope it hasn't sounded like I'm conflating the two concepts
completely. The reason I brought up "transient" is because nearly all of
the "risky" examples being cited so far have been of the variety "I have a
type where some properties happen to be Equatable but shouldn't be involved
in equality", so my intention has been to show that if we have a better
solution to that specific problem (which is related to, but not the same as,
the question at hand), then there aren't enough risky cases left to warrant
adding this level of complexity to the protocol system.

As others point out, there's a temptation here to write things like
"transient(Equatable)" so as to control the synthesis of implementations on
a per-protocol basis. By that point, you've invented a whole new syntax for
implementing protocol requirements. (Ah, you might say, but it's hard to
write a good hashValue implementation: sure, but that's adequately solved
by a library-supplied combineHashes() function.)
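
A minimal sketch of the kind of library-supplied helper being referred to (the name combineHashes and the mixing constants are illustrative, not an actual standard library API):

// Folds a list of hash values into one, mixing the bits so that
// different orderings of the same values are unlikely to collide.
func combineHashes(_ hashes: [Int]) -> Int {
    var result = 17
    for hash in hashes {
        result = result &* 31 &+ hash
    }
    return result
}

// Example use in a hand-written Hashable implementation:
// var hashValue: Int { return combineHashes([key.hashValue, value.hashValue]) }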

I totally agree with this. A design that would try to annotate "transient"
with a protocol or list of protocols is missing the point of the semantics
that "transient" is supposed to provide. It's not a series of switches to
that can be flipped on and off for arbitrary protocols—it's a semantic tag
that assigns additional meaning to properties and certain protocols (such
as Equatable, Hashable, and Codable, but possibly others that haven't been
designed yet) would have protocol-specific behavior for those properties.

To better explain what I've been poking at, I'm kind of extrapolating this
out to a possible future where it may be possible to more generally (1)
define custom @attributes in Swift, like Java annotations, and then (2) use
some metaprogramming constructs to generate introspective default
implementations for a protocol at compile-time just as the compiler does
"magically" now, and the generator would be able to query attributes that
are defined by the same library author as the protocol and handle them
accordingly.

In a world where that's possible, I think it's less helpful to think in
terms of "I need to distinguish between conforming to X and getting a
synthesized implementation and conforming to X and avoiding the synthesized
implementation because the default might be risky", but instead to think in
terms of "How can I provide enough semantic information about my types to
remove the risk?"

In other words, the switches we offer developers to flip shouldn't be about
turning on/off entire features, but about giving the compiler enough
information to make it smart enough that we never need to turn it off in
the first place. As I alluded to before, if I have 10 properties in a type
and only 1 of those needs to be ignored in ==/hashValue/whatever, writing
"Equatable" instead of "derives Equatable" isn't all that helpful. Yes, it
spits out an error message where there wouldn't have been one, but it
doesn't reduce any of the burden of having to provide the appropriate
manual implementation.

But all that stuff about custom attributes and metaprogramming
introspection is a big topic of its own that isn't going to be solved in
Swift 5, so this is a bit of a digression. :)

···

On Tue, Sep 12, 2017 at 7:10 PM Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Tue, Sep 12, 2017 at 9:58 AM, Thorsten Seitz via swift-evolution <swift-evolution@swift.org> wrote:

-Thorsten

On 12.09.2017 at 16:38, Tony Allevato via swift-evolution <swift-evolution@swift.org> wrote:

On Mon, Sep 11, 2017 at 10:05 PM Gwendal Roué <gwendal.roue@gmail.com> wrote:

This doesn't align with how Swift views the role of protocols, though.
One of the criteria that the core team has said they look for in a protocol
is "what generic algorithms would be written using this protocol?"
AutoSynthesize doesn't satisfy that—there are no generic algorithms that
you would write with AutoEquatable that differ from what you would write
with Equatable.

And so everybody has to swallow implicit and non-avoidable code
synthesis and shut up?

That's not what I said. I simply pointed out one of the barriers to
getting a new protocol added to the language.

Code synthesis is explicitly opt-in and quite avoidable—you either don't
conform to the protocol, or you conform to the protocol and provide your
own implementation. What folks are differing on is whether there should
have to be *two* explicit switches that you flip instead of one.

No. One does not add a protocol conformance on a whim. One adds a protocol
conformance out of need. So the conformance to the protocol is a *given* in our
analysis of the consequences of code synthesis. You cannot say "just don't
adopt it".

As soon as I type the protocol name, I get synthesis. That's the reason
why the synthesized code is implicit. The synthesis is explicitly written
in the protocol documentation, if you want. But not in the programmer's
code.

I did use "non-avoidable" badly, you're right: one can avoid it, by
providing its custom implementation.

So the code synthesis out of a mere protocol adoption *is* implicit.

Let's imagine a pie. The whole pie is the set of all Swift types. Some
slice of that pie is the subset of those types that satisfy the conditions
that allow one of our protocols to be synthesized. Now that slice of pie
can be sliced again, into the subset of types where (1) the synthesized
implementation is correct both in terms of strict value and of business
logic, and (2) the subset where it is correct in terms of strict value but
is not the right business logic because of something like transient data.

Yes.

What we have to consider is, how large is slice (2) relative to the
whole pie, *and* what is the likelihood that developers are going to
mistakenly conform to the protocol without providing their own
implementation, *and* is the added complexity worth protecting against this
case?

That's quite a difficult job: do you think you can evaluate this
likelihood?

Explicit synthesis has big advantage: it avoids this question entirely.

Remember that the main problem with slice (2) is that developers cannot
*learn* to avoid it.

For each type in slice (2) there is a probability that it comes into
existence with a forgotten explicit protocol adoption. And this probability
will not go down as people learn Swift and discover the existence of slice
(2). Why? Because this probability is driven by unavoidable human behaviors:
- the developer doesn't see the problem (a programmer mistake)
- the developer plans to add the explicit conformance later and happens to
forget (carelessness)
- a developer extends an existing type with a transient property, and
doesn't add the explicit protocol conformance that has become required.

Cases 2 and 3 bite even experienced developers. And they can't be
improved by learning.

Looks like the problem is better defined as an ergonomics issue, now.

If someone can show me something that points to accidental synthesized
implementations being a significant barrier to smooth development in Swift,
I'm more than happy to consider that evidence. But right now, this all
seems hypothetical ("I'm worried that...") and what's being proposed is
adding complexity to the language (an entirely new axis of protocol
conformance) that would (1) solve a problem that may not exist to any great
degree, and (2) does not address the fact that if that problem does indeed
exist, then the same problem just as likely exists with certain
non-synthesized default implementations.

There is this sample code by Thorsten Seitz with a cached property which
is quite simple and clear:
[swift-evolution] [Proposal] Explicit Synthetic Behaviour

This is the sample code that had me enter the "worried" camp.

I really like Thorsten's example, because it actually proves that
requiring explicit derivation is NOT the correct approach here. (Let's set
aside the fact that Optionals prevent synthesis because we don't have
conditional conformances yet, and assume that we've gotten that feature as
well for the sake of argument.)

Let's look at two scenarios:

1) Imagine I have a value type with a number of simple Equatable
properties. In a world where synthesis is explicit, I tell that value type
to "derive Equatable". Everything is fine. Later, I decide to add some
cache property like in Thorsten's example, and that property just happens
to also be Equatable. After doing so, the correct thing to do would be to
remove the "derive" part and provide my custom implementation. But if I
forget to do that, the synthesized operator still exists and applies to
that type. If you're arguing that "derive Equatable" is better because its
explicitness prevents errors, you must also accept that there are possibly
just as many cases where that explicitness does *not* prevent errors.

2) Imagine I have a value type with 10 Equatable properties and one
caching property that also happens to be Equatable. The solution being
proposed here says that I'm better off with explicit synthesis because if I
conform that type to Equatable without "derive", I get an error, and then I
can provide my own custom implementation. But I have to provide that custom
implementation *anyway* to ignore the caching property even if we don't
make synthesis explicit. Making it explicit hasn't saved me any work—it's
only given me a compiler error for a problem that I already knew I needed
to resolve. If we tack on Hashable and Codable to that type, then I still
have to write a significant amount of boilerplate for those custom
operations. Furthermore, if synthesis is explicit, I have *more* work
because I have to declare it explicitly even for types where the problem
above does not occur.
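
To make scenario (2) concrete, the hand-written Equatable implementation looks roughly like this (a sketch with invented property names, and fewer of them than the ten above, for brevity); similar boilerplate then repeats for Hashable and Codable:

struct Profile: Equatable {
    var id: Int
    var name: String
    var email: String
    var address: String
    var phone: String
    var cachedSummary: String? = nil   // should not affect equality

    // All of this exists only so that `cachedSummary` is left out.
    static func == (lhs: Profile, rhs: Profile) -> Bool {
        return lhs.id == rhs.id &&
            lhs.name == rhs.name &&
            lhs.email == rhs.email &&
            lhs.address == rhs.address &&
            lhs.phone == rhs.phone
    }
}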

So, making derivation explicit is simply a non-useful dodge that doesn't
solve the underlying problem, which is this: Swift's type system currently
does not distinguish between Equatable properties that *do* contribute to
the "value" of their containing instance vs. Equatable properties that *do
not* contribute to the "value" of their containing instance. It's the
difference between behavior based on a type and additional business logic
implemented on top of those types.

So, what I'm trying to encourage people to see is this: saying "there are
some cases where synthesis is risky because it's incompatible with certain
semantics, so let's make it explicit everywhere" is trying to fix the wrong
problem. What we should be looking at is *"how do we give Swift the
additional semantic information it needs to make the appropriate decision
about what to synthesize?"*

That's where concepts like "transient" come in. If I have an
Equatable/Hashable/Codable type with 10 properties and one cache property,
I *still* want the synthesis for those first 10 properties. I don't want
the presence of *one* property to force me to write all of that boilerplate
myself. I just want to tell the compiler which properties to ignore.

Imagine you're a stranger reading the code to such a type for the first
time. Which would be easier for you to quickly understand? The version with
custom implementations of ==, hashValue, init(from:), and encode(to:) all
covering 10 or more properties that you have to read through to figure out
what's being ignored (and make sure that the author has done so correctly),
or the version that conforms to those protocols, does not contain a custom
implementation, and has each transient property clearly marked? The latter
is more concise and "transient" carries semantic weight that gets buried in
a handwritten implementation.

Here's a fun exercise—you can actually write something like "transient"
without any additional language support today:
https://gist.github.com/allevato/e1aab2b7b2ced72431c3cf4de71d306d. A big
drawback to this Transient type is that it's not as easy to use as an
Optional because of the additional sugar that Swift provides for the
latter, but one could expand it with some helper properties and methods to
sugar it up the best that the language will allow today.
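
The linked gist isn't reproduced here, but the core idea can be sketched in a few lines: a wrapper whose equality and hash ignore the wrapped value entirely, so that a synthesized == on a containing type effectively skips it (an illustrative sketch using the Swift-4-era hashValue requirement, not the gist's actual code):

struct Transient<Wrapped>: Hashable {
    var value: Wrapped

    // Any two Transient values compare equal, so this property
    // contributes nothing to a containing type's synthesized ==.
    static func == (lhs: Transient, rhs: Transient) -> Bool {
        return true
    }

    // Likewise it contributes a constant to hashing.
    var hashValue: Int { return 0 }
}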

I would wager that this concept, either as a wrapper type or as a
built-in property attribute, would solve a significant majority of cases
where synthesis is viewed to be "risky". If we accept that premise, then we
can go back to our slice of pie and all we're left with in terms of "risky"
types are "types that contain properties that conform to a certain protocol
but are not really transient but also shouldn't be included verbatim in
synthesized operations". I'm struggling to imagine a type that fits that
description, so if they do exist, it's doubtful that they're a common
enough problem to warrant introducing more complexity into the protocol
conformance system.

Gwendal


Maybe something like this as middle ground.

protocol Equatable {
    @synthetic static func == (_ lhs: Self, _ rhs: Self) -> Bool
}

The protocol itself contains a default implementation, but without a real body. Instead the function is marked so that the real body is generated by the compiler.
There is an explicit mention of the default implementation (by compiler magic), but it does not affect users, as they would still use the protocol in the normal way:

struct Foo: Equatable { .... }

Yes, I also thought about this. And personally, for me it is also a good solution, as long as `struct S: Equatable { /* nothing */ }` will *still* lead to a compiler error, or at least a warning, about unimplemented requirements.
So I'd be explicit regarding my intention: do I want the requirements to be auto-generated, or do I want to implement them manually?

But still. If you see

struct S: Equatable, Codable {
   // a lot of lines
}

you can't say right now whether the requirements for Equatable and/or Codable were implemented manually or will be auto-generated without checking all the code of the type. Knowing this can help to solve issues related to comparison/archiving faster.
So for me the best solution is still a 'deriving'-like keyword, which adds clarity and shows intention without any boilerplate code:

struct S: Equatable, deriving Codable {
   // all clear:
   // manually implemented Equatable
   // auto-generated Codable

   // a lot of lines
}

Vladimir.

···

On 13.09.2017 19:08, Ondrej Barina via swift-evolution wrote:

Ondrej B.

On Wed, Sep 13, 2017 at 4:14 PM, Haravikk via swift-evolution <swift-evolution@swift.org> wrote:

    On 13 Sep 2017, at 03:26, Xiaodi Wu <xiaodi.wu@gmail.com >> <mailto:xiaodi.wu@gmail.com>> wrote:

    On Tue, Sep 12, 2017 at 11:43 AM, Haravikk via >> swift-evolution<swift-evolution@swift.org <mailto:swift-evolution@swift.org>>wrote:

        On 12 Sep 2017, at 12:08, Xiaodi Wu <xiaodi.wu@gmail.com >>> <mailto:xiaodi.wu@gmail.com>> wrote:

        On Mon, Sep 11, 2017 at 06:03 Haravikk via swift-evolution >>>> <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

            See, this is another flawed assumption; you are assuming that
            omitting a custom implementation of == is always intentional rather
            than an oversight, which is not guaranteed. This is one of my gripes
            with the retroactive change to Equatable, as it is
            currently*impossible* to omit an implementation.

        Again, this applies equally to the addition of _any_ default
        implementation. And again, such changes don’t even require Swift Evolution
        approval.

        So what? Because the Swift Evolution process is currently deficient we
        should just give up on discussing problems with features and the language
        altogether?

    I don't claim that it's a deficiency; I claim it's reflective of Swift's
    opinionated take on default implementations. Are you, after all, saying that
    you have a problem with the addition of _any_ default implementation to an
    existing protocol? If so, this conversation isn't about synthesis/reflection at
    all.

    No, and you should know that by now. I suggest actually reading some of what I
    have written as I am sick of repeating myself.

            And precisely what kind of "evidence" am I expected to give? This
            is a set of features that*do not exist yet*, I am trying to argue
            in favour of an explicit end-developer centric opt-in rather than
            an implicit protocol designer centric one. Yet no-one seems
            interested in the merits of allowing developers to choose what they
            want, rather than having implicit behaviours appear potentially
            unexpectedly.

            Both options were examined for Codable and for Equatable/Hashable.
            The community and core team decided to prefer the current design. At
            this point, new insights that arise which could not be anticipated
            at the time of review could prompt revision. However, so far, you
            have presented arguments already considered during review.

            And so far all I have heard about this is how it was "decided";
            no-one seems interested in showing how any of these concerns were
            addressed (if at all), so as far as I can tell they were not, or they
            were wilfully ignored.

        They were addressed by being considered.

        And yet no-one can apparently summarise what those "considerations" might
        be, suggesting that they were either *not* considered at all, or that the
        "consideration" was so weak that no-one is willing to step forward to
        defend it. Either way it is not sufficient by any reasonable measure.

        If I were to run over your foot in my car, would you be happy to accept
        that I "considered" it first?

    How do you mean? People wrote in with their opinions. Then, taking into account
    the community's response, the proposal was approved.

    I mean because not once have you summarised what these alleged "considerations"
    were; if they exist then you should be able do so, yet all I am hearing is "it
    was considered", which frankly is not an argument at all as it is entirely
    without substance.

    If it was genuinely considered then someone should be able to say what points
    were considered and what conclusions were reached and why. And even if there
    *was* an earlier decision, that doesn't necessarily make it right. We are
    discussing it now, and it is clear that any decision that has been made has been
    made poorly at best.

    And if you're talking about the discussion on Equatable/Hashable specifically,
    I'm afraid your memory of the "considerations" is radically different to mine; as
    the concerns I raised were essentially ignored, as not a single person gave a
    justification more substantial than "but, but Codable!" which frankly isn't a
    justification at all.

                Therefore, your argument reduces to one about which default
                implementations generally ought or ought not to be
                provided--that is, that they ought to be provided only when
                their correctness can be guaranteed for all (rather than almost
                all) possible conforming types. To which point I sketched a
                rebuttal above.

                If a protocol defines something, and creates a default
                implementation based only upon those definitions then it must by
                its very nature be correct. A concrete type may later decided to
                go further, but that is a feature of the concrete type, not a
                failure of the protocol itself which can function correctly
                within the context it created. You want to talk evidence, yet
                there has been no example given that proves otherwise; thus far
                only Itai has attempted to do so, but I have already pointed out
                the flaws with that example.

                The simple fact is that a default implementation may either be
                flawed or not within the context of the protocol itself; but a
                reflective or synthetic implementation by its very nature goes
                beyond what the protocol defines and so is automatically flawed
                because as it does not rely on the end-developer to confirm
                correctness, not when provided implicitly at least.

            Again, if it applies generally, it must apply specifically. What is
            "automatically flawed" about the very reasonable synthesized default
            implementation of ==?

            It makes the assumption that every equatable property of a type is
            necessarily relevant to its equality.

        No necessarily, only provisionally and rebuttably. If it’s not the case,
        override the default.

        So… entirely unlike standard default implementations
        which*cannot* "provisionally" assume something is relevant at all,

    Why not?

    Because they can only act upon properties/methods that they themselves (or a
    parent protocol) define. FFS, what is so unclear about that? Or are you arguing
    on this subject without every having actually used a protocol before?

        thereby making them entirely different from synthesised/reflective
        implementations!

        I'm sorry, but you keep trying to argue that they're the same, but then
        admitting that they're not. You can't have it both ways.

    Well, certainly, synthesized default implementations differ from
    non-synthesized ones in key respects. However, they do not differ in terms of
    the user experience of conforming to the protocol and having to override the
    default.

    Except that that's not true at all, is it?

    Synthesised default implementations go much further in how they attempt (and
    potentially fail) to implement those defaults, and in the specific case of
    Equatable/Hashable they are fully implementing a protocol without a single
    property of method being raised as a requirement; they are utterly different at a
    fundamental level, no amount of mental contortion changes that fact.

            Consider for example if a type stores a collection index for
            performance reasons; this isn't an intrinsic part of the type, nor
            relevant to testing equality, yet this default implementation will
            treat it as such because it*knows nothing about the concrete type's
            properties*. If a protocol does not define a property then any action
            taken upon such a property is necessarily based upon an assumption;
            just because it might be fine some of the time, does not make it any
            less flawed.

            The big difference here between explicit and implicit synthetic
            implementations is where this assumption originates; if a method is
            synthesised implicitly then the assumption is made by the protocol
            designer alone, with no real involvement by the end developer. If I
            explicitly opt-in to that default however I am signalling to the
            protocol that it is okay to proceed. In the former case the
            assumption is unreasonable, in the latter it is explicitly
            authorised. It is a difference between "I want to make the decision
            on what's correct" and "I am happy for you (the protocol designer) to
            decide".

            Right now, when I conform to Equatable, it is a declaration of "I
            will implement this", but with this retroactive implicit change it is
            now a declaration of "implement this for me", these are two entirely
            different things. Consider; what if I'm working on a piece of code
            that requires types to be Equatable, but one of the types I'm using
            currently isn't, so I quickly throw Equatable conformance onto it and
            go back to what I was doing, with the intention of completing
            conformance later. With this change that type may now receive a
            default implementation that is wrong, and I've lost the safety net
            that currently exists.

        Right now, it still wouldn’t compile, so I don’t see why you would do
        that. In the future, if you want to make it not compile, there is nothing
        stopping you from conforming to a non-existent “NotYetEquatable”. This was
        something that you asked about earlier and it was answered.

        So your solution is to intentionally write invalid code to work around the
        fact that a feature is being implemented badly?

    You stated a use case where you *want* the compiler to stop your code from
    compiling by stating a conformance to Equatable without implementing its
    requirements. You then stated that the major problem you have with synthesized
    `==` is that the compiler will now use a default implementation that you might
    forget about instead of stopping compilation. Therefore, I demonstrated how you
    could continue to have the compiler stop your code from compiling. It's not my
    solution that is intentionally writing invalid code; your stated aim was to be
    able to do so.

    My stated aim was nothing of the sort.

    I was pointing out that right now conforming to Equatable means something
    entirely different from what it will mean in future if this idiotic change makes
    it into release. Please actually read what I write before deciding for yourself
    what my 'stated aim' is.

    I am *not* asking for workarounds to circumvent a ridiculously flawed change to
    the language, I am arguing why it is flawed and must be changed. If I wanted a
    workaround I'd do what I'm now seriously considering, which is ditching Swift
    completely, as I will not use a language if I can no longer trust the team
    developing it or the decisions that they make.

            A non-synthesised/reflective implementation cannot strictly be
            incorrect, because as long as it is implemented properly it will
            always be correct within the context of the protocol itself. It may
            not go quite as far as an end developer might want, but that is
            because they want to add something onto the protocol, not because the
            protocol is wrong.

            A synthesised/reflective implementation differs because if it goes
            too far it is wrong not only within the context of the concrete type,
            but also within that of the protocol itself; it is simply incorrect.

        Again, this is an assertion that misses the mark. If the default
        implementation is unsuitable for a type, it’s unsuitable whether it
        “doesn’t go quite as far” or “goes too far.”

        Because not going quite far enough is not a failure of the protocol, as
        protocols by their very nature can only go as far as what they define. If a
        protocol Foo defines two properties and a method which uses those two
        properties correctly, then the method is correct. A developer of a concrete
        type might want to add more information or tailor the behaviour, but that
        doesn't make the default implementation incorrect; it's just considering
        the type only within the context of being an instance of Foo.
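
        For instance, a sketch of such a Foo (the names are illustrative); the
        default uses nothing beyond the protocol's own requirements:

            protocol Foo {
                var width: Int { get }
                var height: Int { get }
            }

            extension Foo {
                // Correct for any conforming type, because it touches only
                // what Foo itself defines.
                var area: Int { return width * height }
            }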

        Going too far is the opposite; it's the protocol designer messing around
        with stuff they do not define at all. It's only ever right by chance, as
        it's operating within the context of the concrete type, about which the
        protocol does not know anything with certainty.

    Yes, you have defined "not going far enough" and "going too far" based on
    whether an implementation uses only protocol requirements or not. However, you
    haven't demonstrated why this distinction is at all meaningful in terms
    of the issue you describe with a user conforming to a protocol. If there is a
    default implementation, either it returns the expected result for the
    conforming type or it does not--those are the only two choices. Are you arguing
    that, empirically, the default implementation for Equatable will more often be
    unsuitable for conforming types? If so, what's your evidence?

    What's yours? If this issue was as "considered" as you constantly claim then
    where is the evidence that there is no meaningful distinction? Surely such
    evidence exists, or else the issue hasn't been considered at all, has it?

    Frankly I am sick of being asked to provide evidence when you are seemingly
    unwilling to do anything in return, especially when you have conveniently ignored
    every single example that I have already given.

    It cuts both ways; you claim that "going too far" and "not going far enough" are
    the same thing? Well prove it.

        You state but do not give any rationale for the claim that the former is
        not wrong in some context while the latter is always wrong.

        By this line of argumentation, you’d be perfectly content if instead we
        simply had the default implementation of == as “return true” because it
        would be somehow not wrong.

        Only if return true were a reasonable default to give in the context of the
        protocol, which it clearly is not, as it's not performing any kind of
        comparison of equality.

    Sure it is; `return true` satisfies all the semantic requirements for equality:
    reflexivity, symmetry, transitivity; and, in the context of the protocol which
    only provides for this one facility (determination of equality or inequality),
    any two instances that compare equal _are_ completely interchangeable "within
    the context of the protocol itself," as you would say.
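
    For concreteness, the degenerate default being described would be something
    like the following sketch; it satisfies the letter of the requirements while
    comparing nothing:

        extension Equatable {
            // Trivially reflexive, symmetric and transitive.
            static func == (lhs: Self, rhs: Self) -> Bool { return true }
        }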

    The purpose of Equatable is to identify types that can be compared for equality;
    returning true does not satisfy that aim because no such comparison is occurring,
    so your example is intentionally ridiculous. Even a less contrived example such
    as comparing memory addresses doesn't fulfil the purpose of Equatable, which is
    all about comparing distinct instances that may nonetheless be equal.

                Put another way, what the proposal about synthesizing
                implementations for Equatable and Hashable was about can be
                thought of in two parts: (a) should there be default
                implementations; and (b) given that it is impossible to write
                these in Swift, should we use magic? Now, as I said above,
                adding default implementations isn't (afaik) even considered an
                API change that requires review on this list. Really, what
                people were debating was (b), whether it is worth it to
                implement compiler-supported magic to make these possible. Your
                disagreement has to do with (a) and not (b).

                Wrong. The use of magic in this case produces something else
                entirely; that's the whole point. It is *not the same*, otherwise
                it wouldn't be needed at all. It doesn't matter if it's compiler
                magic, some external script or a native macro, ultimately they
                are all doing something with a concrete type that is currently
                not possible.

                And once again: *I am not arguing against a default implementation
                that cuts boilerplate*, I am arguing against it being implicit.
                What I want is to be the one asking for it, because it is not
                reasonable to assume that just throwing it in there is always
                going to be fine, because it quite simply is not.

            If you have to ask for it, then it's not a default. You *are* against
            a default implementation.

            A default implementation is an implementation that I, as the concrete
            type developer, do not have to provide myself. If you want default to
            mean only "automatic" then your attempt to pigeon-hole what I am
            arguing is incorrect, because what I am arguing is then neither about
            default implementations nor the means of actually implementing it, but
            something else entirely.

            But as far as I'm concerned it is still absolutely a default
            implementation whether it is requested or not; the difference is that
            I, as the end developer, am able to refine what kind of defaults I want.

        The word “default” indicates something that arises in the absence of a
        user indication otherwise.

        Then this proposal is just for a different mechanism for "indicating
        otherwise".

        You keep trying to argue that a synthesised/reflective default
        implementation is the same as a normal default implementation, yet you seem
        to be forgetting that even if that is true without this proposal, the very
        proposal itself is to change that, making this category of default
        implementation explicitly opted into rather than implicit. They're still
        implementations that will be provided automatically, only when they are
        permitted to do so.

    So to be clear, you are *against* them being the *default*: you wish them to be
    the *otherwise*.

    You seem to be insisting upon a narrow definition of default; what I want is
    control over which types of default implementations are provided. Just because
    they must be opted into explicitly does not stop them being "default", as they
    are still implementations that I myself do not need to write. The difference
    is that I want to actually *want* them, rather than have them provided through
    potentially flimsy assumptions made by a protocol designer. Just because there's
    an extra step doesn't make them any less automatic; otherwise having to conform
    to a protocol in the first place would also prevent them from being defaults.

    Asking *for* something is more like a middle ground between the two: the
    synthetic implementations are still possible defaults, they just aren't provided
    unless you allow them, while omitting the necessary keyword/attribute prevents
    them from being used.

            On 9 Sep 2017, at 23:17, Gwendal Roué <gwendal.roue@gmail.com> wrote:

            All right, I'll be more positive: our science, IT, is a
            *constructive* science, by *essence*. If there is a problem, there
            must be a way to show it.
            If you can't, then there is no problem.

            You mean just as I have asked for examples that prove
            non-synthetic/reflective default implementations are as dangerous as
            synthetic/reflective ones? Plenty have suggested this is the case yet
            no reasonable examples of that have been given either.

            However, examples highlighting problems with the synthesised behaviour
            are simple:

                // Currently an error; won't be in future:
                struct Foo: Equatable { var data: String }

            Or something a bit more substantial:

                struct KeyPair: Equatable {
                    static var count: Int = 0

                    var count: Int
                    let key: String // This is the only property that should be equatable
                    var value: String

                    init(key: String, value: String) {
                        let count = KeyPair.count &+ 1
                        KeyPair.count = count; self.count = count
                        self.key = key; self.value = value
                    }
                }

            Here the only important property in the key pair is the key; the value
            isn't important (only the keys are to be considered unique) and the
            count is just a throwaway value. The synthesised default
            implementation for this concrete type will therefore be completely
            wrong, and likewise for Hashable, which will likely produce radically
            different results for instances that should be considered the same.
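
            For contrast, the implementation actually wanted for this type is a
            one-liner over the key alone (a sketch):

                extension KeyPair {
                    static func == (lhs: KeyPair, rhs: KeyPair) -> Bool {
                        return lhs.key == rhs.key // value and count are ignored
                    }
                }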

        I notice that despite endless requests for examples, the ones I've given
        are being ignored. In future I shall remind people asking for examples
        where they can shove them.

    And once again, totally ignored. You seem to love asking for "evidence" but why
    exactly should I bother giving anything if you ignore it when I try to?


Ones with proof that they were ever made! Once again you are stating that these issues were "considered", yet you show not a single shred of proof that that was the case. You're asking me to take you at your word, but I have no reason to trust that the problem has been as carefully considered as you claim.
I was involved in one such discussion, and the response from the core team was frankly pitiful; they did not provide any justification whatsoever.

But since it's clear that you have no intention of ever responding substantively, I will not dignify your messages with any further responses, as doing so is nothing more than a waste of my time; you hypocritically accuse me of repetition while you ignore direct questions and any point that doesn't fit your "shut up about this" viewpoint. If that is all you have to say then you've said it a dozen times over, so kindly stop doing so.

···

On 14 Sep 2017, at 02:12, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Wed, Sep 13, 2017 at 09:13 Haravikk via swift-evolution <swift-evolution@swift.org> wrote:
I mean because not once have you summarised what these alleged "considerations" were; if they exist then you should be able do so, yet all I am hearing is "it was considered", which frankly is not an argument at all as it is entirely without substance.

Of course it is not an argument at all. It is a factual statement. The objections which you mentioned were also mentioned prior to a decision about SE-0185. The community and the core team had an opportunity to view those objections. After that time, a decision was made, having considered all the stated pros and cons which included the ones that you are now repeating. What "considerations" are you looking for?

>> In none of those cases does the compiler emit any warning. It's thus easy
to forget or miss the problem, and hard to fix it (you'll need a runtime
failure to spot it, or a thorough code review).
>>
>> I hope you agree with this last sentence. This imbalance between the ease
of making the mistake and the ease of fixing it should ring a bell for
language designers.
>
> Suppose instead this were about a protocol named Fooable and a
requirement called foo() that has a default implementation. Everything you
just talked about would apply equally. Am I to understand that you are
opposed to default implementations in general? If so, then that’s got
nothing to do with synthesized Equatable conformance. If not, then you’ll
have to justify why.

Sounds like a good argument, until one realises that if a protocol does
not provide a default implementation for a method, it may be because a
default implementation is impossible to provide (the most usual case), or
because it would be unwise to do so.

And indeed, the topic currently discussed is not whether we should remove
default implementations. Instead, the question is: is it wise or not to
provide an *implicit* default Equatable/Hashable/XXX implementation?

Right, _that_ is the question. It was asked during review for the proposal,
and the agreed upon answer is _yes_.

The tenet of explicit synthesis is precisely that it would be unwise to do
so. Please don't make us repeat ourselves again; that would be disrespectful.

BTW, Happy Keynote to everybody!
Gwendal

···

On Tue, Sep 12, 2017 at 2:30 PM, Gwendal Roué <gwendal.roue@gmail.com> wrote:

Good arguments, Tony, you have convinced me on all points. Transient is
the way to go. Thank you for your patience!

On many points, I agree with Tony, but I disagree that "transient"
addresses the issue at hand. The challenge being made is that, as Gwendal
puts it, it's _unwise_ to have a default implementation, because people
might forget that there is a default implementation. "Transient" only works
if you remember that there is a default implementation, and in that case,
we already have a clear syntax for overriding the default.
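
That "clear syntax" is simply writing the operator yourself; a quick sketch,
with an illustrative type:

struct IndexedEntry: Equatable {
    var key: String
    var cachedIndex: Int

    // Writing == by hand means the synthesized default is never used.
    static func == (lhs: IndexedEntry, rhs: IndexedEntry) -> Bool {
        return lhs.key == rhs.key
    }
}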

Right—I hope it hasn't sounded like I'm conflating the two concepts
completely. The reason I brought up "transient" is because nearly all of
the "risky" examples being cited so far have been of the variety "I have a
type where some properties happen to be Equatable but shouldn't be involved
in equality", so my intention has been to show that if we have a better
solution to that specific problem (which is related to, but not the same as,
the question at hand), then there aren't enough risky cases left to warrant
adding this level of complexity to the protocol system.

As others point out, there's a temptation here to write things like
"transient(Equatable)" so as to control the synthesis of implementations on
a per-protocol basis. By that point, you've invented a whole new syntax for
implementing protocol requirements. (Ah, you might say, but it's hard to
write a good hashValue implementation: sure, but that's adequately solved
by a library-supplied combineHashes() function.)
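
As an aside, combineHashes() is not an existing standard-library function; a
hypothetical helper of that sort might look roughly like this:

// Hypothetical hash-combining helper (illustration only).
func combineHashes(_ hashes: Int...) -> Int {
    return hashes.reduce(5381) { ($0 << 5) &+ $0 &+ $1 } // djb2-style mix
}

// Used in a handwritten conformance, e.g.:
// var hashValue: Int { return combineHashes(key.hashValue, value.hashValue) }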

I totally agree with this. A design that would try to annotate "transient"
with a protocol or list of protocols is missing the point of the semantics
that "transient" is supposed to provide. It's not a series of switches that
can be flipped on and off for arbitrary protocols—it's a semantic tag that
assigns additional meaning to properties, and certain protocols (such as
Equatable, Hashable, and Codable, but possibly others that haven't been
designed yet) would have protocol-specific behavior for those properties.

To better explain what I've been poking at, I'm kind of extrapolating this
out to a possible future where it may be possible to more generally (1)
define custom @attributes in Swift, like Java annotations, and then (2) use
some metaprogramming constructs to generate introspective default
implementations for a protocol at compile-time just as the compiler does
"magically" now, and the generator would be able to query attributes that
are defined by the same library author as the protocol and handle them
accordingly.

In a world where that's possible, I think it's less helpful to think in
terms of "I need to distinguish between conforming to X and getting a
synthesized implementation and conforming to X and avoiding the synthesized
implementation because the default might be risky", but instead to think in
terms of "How can I provide enough semantic information about my types to
remove the risk?"

In other words, the switches we offer developers to flip shouldn't be
about turning on/off entire features, but about giving the compiler enough
information to make it smart enough that we never need to turn it off in
the first place. As I alluded to before, if I have 10 properties in a type
and only 1 of those needs to be ignored in ==/hashValue/whatever, writing
"Equatable" instead of "derives Equatable" isn't all that helpful. Yes, it
spits out an error message where there wouldn't have been one, but it
doesn't reduce any of the burden of having to provide the appropriate
manual implementation.

But all that stuff about custom attributes and metaprogramming
introspection is a big topic of its own that isn't going to be solved in
Swift 5, so this is a bit of a digression. :)

That said, we could have enums EquatingKeys and HashingKeys, a la
CodingKeys... That may not be a huge leap to propose and implement.
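
A hypothetical shape for that idea, by analogy with CodingKeys (nothing like
this exists today, so the nested enum below would currently have no effect):

struct KeyPair: Equatable, Hashable {
    let key: String
    var value: String

    // Hypothetically, only the listed properties would feed == and hashValue.
    private enum EquatingKeys {
        case key
    }
}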

···

On Tue, Sep 12, 2017 at 22:07 Tony Allevato <tony.allevato@gmail.com> wrote:

On Tue, Sep 12, 2017 at 7:10 PM Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Tue, Sep 12, 2017 at 9:58 AM, Thorsten Seitz via swift-evolution < >> swift-evolution@swift.org> wrote:

-Thorsten

Am 12.09.2017 um 16:38 schrieb Tony Allevato via swift-evolution < >>> swift-evolution@swift.org>:

On Mon, Sep 11, 2017 at 10:05 PM Gwendal Roué <gwendal.roue@gmail.com> >>> wrote:

This doesn't align with how Swift views the role of protocols, though.
One of the criteria that the core team has said they look for in a protocol
is "what generic algorithms would be written using this protocol?"
AutoSynthesize doesn't satisfy that—there are no generic algorithms that
you would write with AutoEquatable that differ from what you would write
with Equatable.
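
For reference, the kind of alternative being alluded to is a marker protocol
along these lines (AutoEquatable is hypothetical and carries no special
meaning today):

protocol AutoEquatable: Equatable {}

// The idea: conforming to the marker, rather than to Equatable directly,
// would be what requests the synthesized ==.
struct Point: AutoEquatable {
    var x: Int
    var y: Int
}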

And so everybody has to swallow implicit and non-avoidable code
synthesis and shut up?

That's not what I said. I simply pointed out one of the barriers to
getting a new protocol added to the language.

Code synthesis is explicitly opt-in and quite avoidable—you either
don't conform to the protocol, or you conform to the protocol and provide
your own implementation. What folks are differing on is whether there
should have to be *two* explicit switches that you flip instead of one.

No. One does not add a protocol conformance by whim. One adds a
protocol conformance by need. So the conformance to the protocol is a
*given* in our analysis of the consequence of code synthesis. You can not
say "just don't adopt it".

As soon as I type the protocol name, I get synthesis. That's the reason
why the synthesized code is implicit. The synthesis is explicitly written
in the protocol documentation, if you want. But not in the programmer's
code.

I did use "non-avoidable" badly, you're right: one can avoid it, by
providing its custom implementation.

So the code synthesis out of a mere protocol adoption *is* implicit.

Let's imagine a pie. The whole pie is the set of all Swift types. Some
slice of that pie is the subset of those types that satisfy the conditions
that allow one of our protocols to be synthesized. Now that slice of pie
can be sliced again, into the subset of types where (1) the synthesized
implementation is correct both in terms of strict value and of business
logic, and (2) the subset where it is correct in terms of strict value but
is not the right business logic because of something like transient data.

Yes.

What we have to consider is, how large is slice (2) relative to the
whole pie, *and* what is the likelihood that developers are going to
mistakenly conform to the protocol without providing their own
implementation, *and* is the added complexity worth protecting against this
case?

That's quite a difficult job: do you think you can evaluate this
likelihood?

Explicit synthesis has big advantage: it avoids this question entirely.

Remember that the main problem with slice (2) is that developers cannot
*learn* to avoid it.

For each type in slice (2) there is a probability that it comes into
existence with a forgotten explicit protocol adoption. And this probability
will not go down as people learn Swift and discover the existence of slice
(2). Why? Because this probability is driven by unavoidable human behaviors:
- the developer doesn't see the problem (a programmer mistake)
- the developer plans to add the explicit conformance later and happens to
forget (carelessness)
- a developer extends an existing type with a transient property, and
doesn't add the explicit protocol conformance that has become required.

Cases 2 and 3 bite even experienced developers. And they can't be
improved by learning.

Looks like the problem is better defined as an ergonomics issue, now.

If someone can show me something that points to accidental synthesized
implementations being a significant barrier to smooth development in Swift,
I'm more than happy to consider that evidence. But right now, this all
seems hypothetical ("I'm worried that...") and what's being proposed is
adding complexity to the language (an entirely new axis of protocol
conformance) that would (1) solve a problem that may not exist to any great
degree, and (2) does not address the fact that if that problem does indeed
exist, then the same problem just as likely exists with certain
non-synthesized default implementations.

There is this sample code by Thorsten Seitz with a cached property
which is quite simple and clear:
[swift-evolution] [Proposal] Explicit Synthetic Behaviour

This is the sample code that had me enter the "worried" camp.

I really like Thorsten's example, because it actually proves that
requiring explicit derivation is NOT the correct approach here. (Let's set
aside the fact that Optionals prevent synthesis because we don't have
conditional conformances yet, and assume that we've gotten that feature as
well for the sake of argument.)

Let's look at two scenarios:

1) Imagine I have a value type with a number of simple Equatable
properties. In a world where synthesis is explicit, I tell that value type
to "derive Equatable". Everything is fine. Later, I decide to add some
cache property like in Thorsten's example, and that property just happens
to also be Equatable. After doing so, the correct thing to do would be to
remove the "derive" part and provide my custom implementation. But if I
forget to do that, the synthesized operator still exists and applies to
that type. If you're arguing that "derive Equatable" is better because its
explicitness prevents errors, you must also accept that there are possibly
just as many cases where that explicitness does *not* prevent errors.

2) Imagine I have a value type with 10 Equatable properties and one
caching property that also happens to be Equatable. The solution being
proposed here says that I'm better off with explicit synthesis because if I
conform that type to Equatable without "derive", I get an error, and then I
can provide my own custom implementation. But I have to provide that custom
implementation *anyway* to ignore the caching property even if we don't
make synthesis explicit. Making it explicit hasn't saved me any work—it's
only given me a compiler error for a problem that I already knew I needed
to resolve. If we tack on Hashable and Codable to that type, then I still
have to write a significant amount of boilerplate for those custom
operations. Furthermore, if synthesis is explicit, I have *more* work
because I have to declare it explicitly even for types where the problem
above does not occur.

So, making derivation explicit is simply a non-useful dodge that doesn't
solve the underlying problem, which is this: Swift's type system currently
does not distinguish between Equatable properties that *do* contribute to
the "value" of their containing instance vs. Equatable properties that *do
not* contribute to the "value" of their containing instance. It's the
difference between behavior based on a type and additional business logic
implemented on top of those types.

So, what I'm trying to encourage people to see is this: saying "there
are some cases where synthesis is risky because it's incompatible with
certain semantics, so let's make it explicit everywhere" is trying to fix
the wrong problem. What we should be looking at is *"how do we give
Swift the additional semantic information it needs to make the appropriate
decision about what to synthesize?"*

That's where concepts like "transient" come in. If I have an
Equatable/Hashable/Codable type with 10 properties and one cache property,
I *still* want the synthesis for those first 10 properties. I don't want
the presence of *one* property to force me to write all of that boilerplate
myself. I just want to tell the compiler which properties to ignore.

Imagine you're a stranger reading the code to such a type for the first
time. Which would be easier for you to quickly understand? The version with
custom implementations of ==, hashValue, init(from:), and encode(to:) all
covering 10 or more properties that you have to read through to figure out
what's being ignored (and make sure that the author has done so correctly),
or the version that conforms to those protocols, does not contain a custom
implementation, and has each transient property clearly marked? The latter
is more concise and "transient" carries semantic weight that gets buried in
a handwritten implementation.

Here's a fun exercise—you can actually write something like "transient"
without any additional language support today:
https://gist.github.com/allevato/e1aab2b7b2ced72431c3cf4de71d306d. A
big drawback to this Transient type is that it's not as easy to use as an
Optional because of the additional sugar that Swift provides for the
latter, but one could expand it with some helper properties and methods to
sugar it up the best that the language will allow today.
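
A minimal sketch of that kind of wrapper (not the gist's actual code, and
assuming SE-0185-style synthesis for the containing type):

struct Transient<Wrapped>: Hashable {
    var value: Wrapped
    // Equality and hashing ignore the wrapped value entirely.
    static func == (lhs: Transient, rhs: Transient) -> Bool { return true }
    var hashValue: Int { return 0 }
}

struct CachedRecord: Equatable {
    let key: String
    var lastIndex: Transient<Int> // never affects the synthesized ==
}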

I would wager that this concept, either as a wrapper type or as a
built-in property attribute, would solve a significant majority of cases
where synthesis is viewed to be "risky". If we accept that premise, then we
can come back to our slice of pie, and all we're left with in terms of "risky"
types are "types that contain properties that conform to a certain protocol
but are not really transient but also shouldn't be included verbatim in
synthesized operations". I'm struggling to imagine a type that fits that
description, so if they do exist, it's doubtful that they're a common
enough problem to warrant introducing more complexity into the protocol
conformance system.

Gwendal


But all that stuff about custom attributes and metaprogramming introspection is a big topic of its own that isn't going to be solved in Swift 5, so this is a bit of a digression. :)

Is it really a digression, though? Seems like with source-compatibility being essentially required going forward, it's important to nail this stuff down sooner rather than later if we want a nice, consistent language for The Future™.

I mean, the idea of writing "@adjective var noun: Type" to indicate that a certain variable shouldn't take part in code synthesis seems fairly safe to me, but proving $Idea1 is generalizable without stepping on $Idea2 is outside my area of expertise.

Speaking of which, what do you suppose the hit/miss ratio would be WRT synthesized `Equatable`, etc., if we introduced "trivial" or "simple" types ("trivial struct Foo {}", "trivial class Bar {}") -- meaning that all the stored properties (or associated values, for enums) are either all trivial value types or trivial reference types -- and only performed the code synthesis for such trivial types? We'd probably need some mechanism of telling the compiler that for this purpose, a struct counts as a reference type (or the other way around)... mostly I'm thinking that the Unsafe*Pointer types would break the semantic contract, since they kinda have both (from a certain point of view).

trivial struct Foo : Equatable { // both x and y are value semantics all the way down, so "==" can be synthesized
var x: Int
var y: Int
}
struct FooWithHistory : Equatable { // Arrays use references under the covers; this can't be `trivial`, nothing is synthesized, and this wouldn't compile without the user supplying a `==` function
  var x: Int { didSet { xHistory.append(oldValue) } }
  var y: Int { didSet { yHistory.append(oldValue) } }
  var xHistory: [Int] = []
  var yHistory: [Int] = []
}

(I tried to come up with a simple example that was all references "all the way down", but you've eventually gotta answer the question "a reference to what?", at which point the type could no longer be "trivial", so I'm not sure that's possible, at least in a useful manner. I suppose maybe the "bottom" property could be a pointer... that might do it...)

It seems to me that there might be implications here for auto-parallelization, too. If so, this would be a great time in Swift's evolution (:smiley:) to explore the idea.

- Dave Sweeris

···

On Sep 12, 2017, at 8:07 PM, Tony Allevato via swift-evolution <swift-evolution@swift.org> wrote:
On Tue, Sep 12, 2017 at 7:10 PM Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
On Tue, Sep 12, 2017 at 9:58 AM, Thorsten Seitz via swift-evolution <swift-evolution@swift.org> wrote:

> Maybe something like this as middle ground.
>
> protocol Equatable {
>     @synthetic static func ==(_ lhs: Self, _ rhs: Self) -> Bool
> }
>
> The protocol itself contains the default implementation, but without a real
> body. Instead, the function is marked so that the real body is generated by
> the compiler. There is an explicit mention of the default implementation (by
> compiler magic), but it does not affect users, as they would still use the
> protocol in the normal way:
>
> struct Foo: Equatable { .... }

Yes, I also thought about this. And personally for me it is also a good
solution, as long as `struct S: Equatable {/*nothing*/}` will *still* lead
to a compiler error, or at least a warning about unimplemented requirements.
So I'll be explicit regarding my intention: do I want the requirements to be
auto-generated, or do I want to implement them manually?

But still. If you see

struct S: Equatable, Codable {
   // a lot of lines
}

you can't say right now whether the requirements for Equatable and/or Codable
were implemented manually or will be auto-generated without checking all the
code of the type. This knowledge can help to solve issues related to
comparison/archiving faster.
So for me the best solution is still a 'deriving'-like keyword, which adds
clarity and shows intention without any boilerplate code:

The sentences above apply equally to non-synthesized default protocol
implementations:

struct S: Foo {
  // a lot of lines
}

I can't say if the requirements for Foo were implemented manually by S or
by a default implementation in Foo (which could be in a different module
that I don't have source access to) without checking all the code for S. So
this can't be used as a basis to rationalize special-casing synthesized
implementations.

···

On Wed, Sep 13, 2017 at 10:21 AM Vladimir.S via swift-evolution <swift-evolution@swift.org> wrote:

On 13.09.2017 19:08, Ondrej Barina via swift-evolution wrote:

struct S: Equatable, deriving Codable {
   // all clear:
   // manually implemented Equatable
   // auto-generated Codable

   // a lot of lines
}

Vladimir.

>> I'm sorry, but you keep trying to argue that they're the same,
but then
>> admitting that they're not. You can't have it both ways.
>>
>>
>> Well, certainly, synthesized default implementations differ from
>> non-synthesized ones in key respects. However, they do not differ
in terms of
>> the user experience of conforming to the protocol and having to
override the
>> default.
>
> Except that that's not true at all, is it?
>
> Synthesised default implementations go much further in how they
attempt (and
> potentially fail) to implement those defaults, and in the specific
case of
> Equatable/Hashable they are fully implementing a protocol without a
single
> property of method being raised as a requirement; they are utterly
different at a
> fundamental level, no amount of mental contortion changes that fact.
>
>>>> Consider for example if a type stores a collection index
for
>>>> performance reasons; this isn't an intrinsic part of the
type, nor
>>>> relevant to testing equality, yet this default
implementation will
>>>> treat it as such because it*knows nothing about the
concrete type's
>>>> properties*. If a protocol does not define a property
then any action
>>>> taken upon such a property is necessarily based upon an
assumption;
>>>> just because it might be fine some of the time, does not
make it any
>>>> less flawed.
>>>>
>>>> The big difference here between explicit and implicit
synthetic
>>>> implementations is where this assumption originates; if a
method is
>>>> synthesised implicitly then the assumption is made by the
protocol
>>>> designer alone, with no real involvement by the end
developer. If I
>>>> explicitly opt-in to that default however I am signalling
to the
>>>> protocol that it is okay to proceed. In the former case
the
>>>> assumption is unreasonable, in the latter it is explicitly
>>>> authorised. It is a difference between "I want to make
the decision
>>>> on what's correct" and "I am happy for you (the protocol
designer) to
>>>> decide".
>>>>
>>>> Right now, when I conform to Equatable, it is a
declaration of "I
>>>> will implement this", but with this retroactive implicit
change it is
>>>> now a declaration of "implement this for me", these are
two entirely
>>>> different things. Consider; what if I'm working on a
piece of code
>>>> that requires types to be Equatable, but one of the types
I'm using
>>>> currently isn't, so I quickly throw Equatable conformance
onto it and
>>>> go back to what I was doing, with the intention of
completing
>>>> conformance later. With this change that type may now
receive a
>>>> default implementation that is wrong, and I've lost the
safety net
>>>> that currently exists.
>>>
>>>
>>> Right now, it still wouldn’t compile, so I don’t see why you
would do
>>> that. In the future, if you want to make it not compile, there
is nothing
>>> stopping you from conforming to a non-existent
“NotYetEquatable”. This was
>>> something that you asked about earlier and it was answered.
>>
>> So your solution is to intentionally write invalid code to work
around the
>> fact that a feature is being implemented badly?
>>
>>
>> You stated a use case where you *want* the compiler to stop your
code from
>> compiling by stating a conformance to Equatable without
implementing its
>> requirements. You then stated that the major problem you have with
synthesized
>> `==` is that the compiler will now use a default implementation
that you might
>> forget about instead of stopping compilation. Therefore, I
demonstrated how you
>> could continue to have the compiler stop your code from compiling.
It's not my
>> solution that is intentionally writing invalid code; your stated
aim was to be
>> able to do so.
>
> My stated aim was nothing of the sort.
>
> I was pointing out that right now conforming to Equatable means
something
> entirely different from what it will mean in future if this idiotic
change makes
> it into release. Please actually read what I write before deciding
for yourself
> what my 'stated aim' is.
>
> I am *not* asking for workarounds to circumvent a ridiculously
flawed change to
> the language, I am arguing why it is flawed and must be changed. If
I wanted a
> workaround I'd do what I'm now seriously considering, which is
ditching Swift
> completely, as I will not use a language if I can no longer trust
the team
> developing it or the decisions that they make.
>
>>>> A non-synthesised/reflective implementation cannot
strictly be
>>>> incorrect, because as long as it is implemented properly
it will
>>>> always be correct within the context of the protocol
itself. It may
>>>> not go quite as far as an end developer might want, but
that is
>>>> because they want to add something onto the protocol, not
because the
>>>> protocol is wrong.
>>>>
>>>> A synthesised/reflective implementation differs because
if it goes
>>>> too far it is wrong not only within the context of the
concrete type,
>>>> but also the protocol itself, it is simply incorrect.
>>>
>>>
>>> Again, this is an assertion that misses the mark. If the
default
>>> implementation is unsuitable for a type, it’s unsuitable
whether it
>>> “doesn’t go quite as far” or “goes too far.”
>>
>> Because not going quite far enough is not a failure of the
protocol, as
>> protocols by their very nature can only go as far as what they
define. If a
>> protocol Foo defines two properties, a method which uses those
two
>> properties correctly, then the method is correct. A developer
of a concrete
>> type might want to add more information or tailor the
behaviour, but that
>> doesn't make the default implementation incorrect, it's just
considering
>> the type only within the context of being an instance of Foo.
>>
>> Going too far is the opposite; it's the protocol designer
messing around
>> with stuff they do not define at all. It's only ever right by
chance, as
>> it's operating within the context of the concrete type, about
which the
>> protocol does not know anything with certainty.
>>
>>
>> Yes, you have defined "not going far enough" and "going too far"
based on
>> whether an implementation uses only protocol requirements or not.
However, you
>> haven't at all demonstrated why this distinction is at all
meaningful in terms
>> of the issue you describe with a user conforming to a protocol. If
there is a
>> default implementation, either it returns the expected result for
the
>> conforming type or it does not--those are the only two choices. Are
you arguing
>> that, empirically, the default implementation for Equatable will
more often be
>> unsuitable for conforming types? If so, what's your evidence?
>
> What's yours? If this issue was as "considered" as you constantly
claim then
> where is the evidence that there is no meaningful distinction?
Surely such
> evidence exists, or else the issue hasn't been considered at all,
has it?
>
> Frankly I am sick of being asked to provide evidence when you are
seemingly
> unwilling to do anything in return, especially when you have
conveniently ignored
> every single example that I have already given.
>
> It cuts both ways; you claim that "going too far" and "not going far
enough" are
> the same thing? Well prove it.
>
>>> You state but do not give any rationale for the claim that the
former is
>>> not wrong in some context while the latter is always wrong.
>>>
>>> By this line of argumentation, you’d be perfectly content if
instead we
>>> simply had the default implementation of == as “return true”
because it
>>> would be somehow not wrong.
>>
>> Only if return true were a reasonable default to give in the
context of the
>> protocol, which it clearly is not, as it's not performing any
kind of
>> comparison of equality.
>>
>>
>> Sure it is; `return true` satisfies all the semantic requirements
for equality:
>> reflexivity, symmetry, transitivity; and, in the context of the
protocol which
>> only provides for this one facility (determination of equality or
inequality),
>> any two instances that compare equal _are_ completely
interchangeable "within
>> the context of the protocol itself," as you would say.
>
> The purpose of Equatable is to identify types that can be compared
for equality;
> returning true does not satisfy that aim because no such comparison
is occurring,
> so your example is intentionally ridiculous. Even a less contrived
example such
> as comparing memory addresses doesn't fulfil the purpose of
Equatable, which is
> all about comparing equality of different instances that might still
be the same.
>
>>>>> Put another way, what the proposal about synthesizing
>>>>> implementations for Equatable and Hashable was about
can be
>>>>> thought of in two parts: (a) should there be default
>>>>> implementations; and (b) given that it is impossible
to write
>>>>> these in Swift, should we use magic? Now, as I said
above,
>>>>> adding default implementations isn't (afaik) even
considered an
>>>>> API change that requires review on this list.
Really, what
>>>>> people were debating was (b), whether it is worth it
to
>>>>> implement compiler-supported magic to make these
possible. Your
>>>>> disagreement has to do with (a) and not (b).
>>>>
>>>> Wrong. The use of magic in this case produces
something else
>>>> entirely; that's the whole point. It is*not the
same*, otherwise
>>>> it wouldn't be needed at all. It doesn't matter if
it's compiler
>>>> magic, some external script or a native macro,
ultimately they
>>>> are all doing something with a concrete type that is
currently
>>>> not possible.
>>>>
>>>> And once again;*I am not arguing against a default
implementation
>>>> that cuts boilerplate*, I am arguing against it being
implicit.
>>>> What I want is to be the one asking for it, because
it is not
>>>> reasonable to assume that just throwing it in there
is always
>>>> going to be fine, because it quite simply is not.
>>>>
>>>>
>>>> If you have to ask for it, then it's not a default. You
*are* against
>>>> a default implementation.
>>>
>>> A default implementation is an implementation that I, as
the concrete
>>> type developer, do not have to provide myself. If you want
default to
>>> mean only "automatic" then your attempt to pigeon-hole
what I am
>>> arguing is incorrect, because what I am arguing is then
neither about
>>> default implementations nor the means of actually
implementing it, but
>>> something else entirely.
>>>
>>> But as far as I'm concerned it still absolutely still a
default
>>> implementation whether it is requested or not; the
difference is I, as
>>> the end developer, am able to refine what type of defaults
that I want.
>>>
>>>
>>> The word “default” indicates something that arises in the
absence of a
>>> user indication otherwise.
>>
>> Then this proposal is just for a different mechanism for
"indicating
>> otherwise".
>>
>> You keep trying to argue that a synthesised/reflective default
>> implementation is the same as a normal default implementation,
yet you seem
>> to be consistently forgetting that even if that is true without
this
>> proposal, that the very proposal itself is to change that,
effectively
>> causing a category of default implementation to become
explicitly
>> opted-into, rather than implicitly. They're still
implementations that will
>> be provided automatically, just only when they are permitted to
do-so.
>>
>>
>> So to be clear, you are *against* them being the *default*: you
wish them to be
>> the *otherwise*.
>
> You seem to be insisting upon a narrow definition of default; what I
want is
> control over which types of default implementations are provided.
Just because
> they must be opted-into explicitly does not stop them being
"default", as they
> are still implementations that I myself do not need to implement.
The difference
> is that I want to actually *want* them rather than have provided
through
> potentially flimsy assumptions made by a protocol designer. Just
because there's
> an extra step doesn't make them any less automatic, otherwise having
to conform
> to a protocol in the first place would also prevent them from being
defaults.
>
> Asking *for* something is more like a middle-ground between the two;
the
> synthetic implementations are still possible defaults, they just
aren't provided
> unless you allow them, while omitting the necessary
keyword/attribute prevents
> them being used.
>
>>>> On 9 Sep 2017, at 23:17, Gwendal Roué < > gwendal.roue@gmail.com > >>>> <mailto:gwendal.roue@gmail.com>> wrote:
>>>>
>>>> All right, I'll be more positive: our science, IT, is a
>>>> *constructive* science, by *essence*. If there is a
problem, there
>>>> must be a way to show it.
>>>> If you can't, then there is no problem.
>>>
>>> You mean just as I have asked for examples that prove
>>> non-synthetic/reflective default implementations are as
dangerous as
>>> synthetic/reflective ones? Plenty have suggested this is
the case yet
>>> no reasonable examples of that have been given either.
>>>
>>> However, examples highlighting problems with the
synthesised behaviour
>>> are simple:
>>>
>>> struct Foo: Equatable { var data: String } // Currently an error; won't
>>> be in future
>>>
>>>
>>> Or something a bit more substantial:
>>>
>>> struct KeyPair: Equatable {
>>>     static var count: Int = 0
>>>
>>>     var count: Int
>>>     let key: String   // This is the only property that should be equatable
>>>     var value: String
>>>
>>>     init(key: String, value: String) {
>>>         let count = KeyPair.count &+ 1
>>>         KeyPair.count = count; self.count = count
>>>         self.key = key; self.value = value
>>>     }
>>> }
>>>
>>> Here the only important property in the key pair is the
key, the value
>>> isn't important (only the keys are to be considered
unique) and the
>>> count is just a throwaway value. The synthesised default
>>> implementation for this concrete type will therefore be
completely
>>> wrong, likewise for Hashable, which will likely produce
radically
>>> different results for instances that should be the same.
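A minimal sketch of the kind of manual conformance being described here — treating only `key` as significant for equality and hashing — might look like this (the extension form and the Hashable addition are just one possible way to write it):

extension KeyPair: Hashable {
    // Only `key` participates in equality and hashing; `count` and
    // `value` are deliberately ignored.
    static func == (lhs: KeyPair, rhs: KeyPair) -> Bool {
        return lhs.key == rhs.key
    }

    var hashValue: Int {
        return key.hashValue
    }
}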
>>
>> I notice that despite asking endlessly for examples, the ones
I've given
>> are being ignored. In future I shall remind people asking for
examples
>> where they can shove them.
>
> And once again, totally ignored. You seem to love asking for
"evidence" but why
> exactly should I bother giving anything if you ignore it when I try
to?
>

     > Maybe something like this as middle ground.
     >
     > protocol Equatable {
     > @synthetic static func ==(_ lhs: Self, _ rhs: Self) -> Bool
     > }
     >
     > The protocol itself contains a default implementation, but without a real body.
     > Instead, the function is marked to indicate that the real body is generated by
     > the compiler. There is an explicit mention of the default implementation (by
     > compiler magic), but it does not affect users, as they would still use the
     > protocol in the normal way:
     >
     > struct Foo: Equatable { .... }
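For a simple struct, the body hidden behind such a marker would presumably amount to a memberwise comparison; a rough hand-written equivalent (the struct and property names below are only illustrative):

struct Pair: Equatable {
    var a: Int
    var b: String

    // Approximately what the compiler-generated == would do:
    static func == (lhs: Pair, rhs: Pair) -> Bool {
        return lhs.a == rhs.a && lhs.b == rhs.b
    }
}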

    Yes, I also thought about this. Personally it is also a good solution for me, as long as
    `struct S: Equatable {/*nothing*/}` will *still* lead to a compiler error, or at least a
    warning about unimplemented requirements.
    That way I am explicit about my intention: whether I want the requirements to be
    auto-generated or want to implement them manually.

    But still. If you see

    struct S: Equatable, Codable {
        // a lot of lines
    }

    you can't say right now whether the requirements for Equatable and/or Codable were
    implemented manually or will be auto-generated, without checking all the code of the type.
    Knowing this can help resolve issues related to comparison/archiving faster.
    So for me the best solution is still a 'deriving'-like keyword, which adds clarity and
    shows intention without any boilerplate code:

The sentences above apply equally to non-synthesized default protocol implementations:

struct S: Foo {
   // a lot of lines
}

I can't say if the requirements for Foo were implemented manually by S or by a default implementation in Foo (which could be in a different module that I don't have source access to) without checking all the code for S. So this can't be used as a basis to rationalize special-casing synthesized implementations.
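A sketch of that situation, with an ordinary, non-synthesized default implementation living out of sight of whoever writes S (the members shown are only illustrative):

// Possibly defined in another module:
protocol Foo {
    var name: String { get }
    func describe() -> String
}

extension Foo {
    // Ordinary default implementation, built only from the protocol's
    // own requirements.
    func describe() -> String {
        return "Foo(\(name))"
    }
}

struct S: Foo {
    var name: String
    // Nothing at the declaration site reveals whether describe()
    // is implemented manually or taken from the default above.
}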

As was noted in this thread, some people believe that a protocol synthesizing its requirements by accessing a type's fields is of a different kind than a 'usual' protocol with default implementations.
I belong to that camp. So, from my point of view, it is important to have a 'deriving'-like marker for 'auto-synthesizable' protocols, as described above.

Also, some 'usual' protocol Foo can have no default implementations at the moment of *writing* the code, but gain them by the moment of *compilation* via a protocol extension in a separate file in the project. So it is not possible to require a similar marker for such a protocol.
But the Equatable/Hashable/Codable protocols already have the auto-generation feature at the moment of writing the code, so we can require that marker.

Vladimir.

···

On 13.09.2017 20:48, Tony Allevato wrote:

On Wed, Sep 13, 2017 at 10:21 AM Vladimir.S via swift-evolution <swift-evolution@swift.org> wrote:
    On 13.09.2017 19:08, Ondrej Barina via swift-evolution wrote:

    struct S: Equatable, deriving Codable {
        // all clear:
        // manually implemented Equatable
        // auto-generated Codable

        // a lot of lines
    }

    Vladimir.

     >
     > Ondrej B.
     >
     > On Wed, Sep 13, 2017 at 4:14 PM, Haravikk via swift-evolution <swift-evolution@swift.org> wrote:
     >
     >> On 13 Sep 2017, at 03:26, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
     >>
     >> On Tue, Sep 12, 2017 at 11:43 AM, Haravikk via swift-evolution <swift-evolution@swift.org> wrote:
     >>
     >>> On 12 Sep 2017, at 12:08, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
     >>>
     >>>> On Mon, Sep 11, 2017 at 06:03 Haravikk via swift-evolution <swift-evolution@swift.org> wrote:
     >>>>
     >>>> See, this is another flawed assumption; you are assuming that
     >>>> omitting a custom implementation of == is always intentional rather
     >>>> than an oversight, which is not guaranteed. This is one of my
    gripes
     >>>> with the retroactive change to Equatable, as it is
     >>>> currently*impossible* to omit an implementation.
     >>>
     >>> Again, this applies equally to the addition of _any_ default
     >>> implementation. And again, such changes don’t even require Swift
    Evolution
     >>> approval.
     >>
     >> So what? Because the Swift Evolution process is currently deficient we
     >> should just give up on discussing problems with features and the language
     >> altogether?
     >>
     >> I don't claim that it's a deficiency; I claim it's reflective of Swift's
     >> opinionated take on default implementations. Are you, after all, saying that
     >> you have a problem with the addition of _any_ default implementation to an
     >> existing protocol? If so, this conversation isn't about
    synthesis/reflection at
     >> all.
     >
     > No, and you should know that by now. I suggest actually reading some of what I
     > have written as I am sick of repeating myself.
     >
     >>>>>> And precisely what kind of "evidence" am I expected to give? This
     >>>>>> is a set of features that*do not exist yet*, I am trying to argue
     >>>>>> in favour of an explicit end-developer centric opt-in rather than
     >>>>>> an implicit protocol designer centric one. Yet no-one seems
     >>>>>> interested in the merits of allowing developers to choose
    what they
     >>>>>> want, rather than having implicit behaviours appear potentially
     >>>>>> unexpectedly.
     >>>>>
     >>>>> Both options were examined for Codable and for Equatable/Hashable.
     >>>>> The community and core team decided to prefer the current
    design. At
     >>>>> this point, new insights that arise which could not be anticipated
     >>>>> at the time of review could prompt revision. However, so far, you
     >>>>> have presented arguments already considered during review.
     >>>>
     >>>> And so far all I have heard about this is how it was "decided";
     >>>> no-one seems interested in showing how any of these concerns were
     >>>> addressed (if at all), so as far as I can tell they were not,
    or they
     >>>> were wilfully ignored.
     >>>
     >>> They were addressed by being considered.
     >>
     >> And yet no-one can apparently summarise what those "considerations" might
     >> be, suggesting that they were either *not* considered at all, or that the
     >> "consideration" was so weak that no-one is willing to step forward to
     >> defend it. Either way it is not sufficient by any reasonable measure.
     >>
     >> If I were to run over your foot in my car, would you be happy to accept
     >> that I "considered" it first?
     >>
     >> How do you mean? People wrote in with their opinions. Then, taking into
    account
     >> the community's response, the proposal was approved.
     >
     > I mean because not once have you summarised what these alleged
    "considerations"
     > were; if they exist then you should be able do so, yet all I am hearing is "it
     > was considered", which frankly is not an argument at all as it is entirely
     > without substance.
     >
     > If it was genuinely considered then someone should be able to say what points
     > were considered and what conclusions were reached and why. And even if there
     > *was* an earlier decision, that doesn't necessarily make it right. We are
     > discussing it now, and it is clear that any decision that has been made
    has been
     > made poorly at best.
     >
     > And if you're talking about the discussion on Equatable/Hashable specifically,
     > I'm afraid your memory of the "considerations" is radically different to
    mine; as
     > the concerns I raised were essentially ignored, as not a single person gave a
     > justification more substantial than "but, but Codable!" which frankly isn't a
     > justification at all.
     >
     >>>>>> Therefore, your argument reduces to one about which default
     >>>>>> implementations generally ought or ought not to be
     >>>>>> provided--that is, that they ought to be provided only when
     >>>>>> their correctness can be guaranteed for all (rather than
    almost
     >>>>>> all) possible conforming types. To which point I sketched a
     >>>>>> rebuttal above.
     >>>>>
     >>>>> If a protocol defines something, and creates a default
     >>>>> implementation based only upon those definitions then it
    must by
     >>>>> its very nature be correct. A concrete type may later decide to
     >>>>> go further, but that is a feature of the concrete type, not a
     >>>>> failure of the protocol itself which can function correctly
     >>>>> within the context it created. You want to talk evidence, yet
     >>>>> there has been no example given that proves otherwise;
    thus far
     >>>>> only Itai has attempted to do so, but I have already
    pointed out
     >>>>> the flaws with that example.
     >>>>>
     >>>>> The simple fact is that a default implementation may either be
     >>>>> flawed or not within the context of the protocol itself; but a
     >>>>> reflective or synthetic implementation by its very nature goes
     >>>>> beyond what the protocol defines and so is automatically
    flawed
     >>>>> because it does not rely on the end-developer to confirm
     >>>>> correctness, not when provided implicitly at least.
     >>>>>
     >>>>> Again, if it applies generally, it must apply specifically.
    What is
     >>>>> "automatically flawed" about the very reasonable synthesized
    default
     >>>>> implementation of ==?
     >>>>
     >>>> It makes the assumption that every equatable property of a type is
     >>>> necessarily relevant to its equality.
     >>>
     >>> Not necessarily, only provisionally and rebuttably. If it’s not the case,
     >>> override the default.
     >>
     >> So… entirely unlike standard default implementations
     >> which *cannot* "provisionally" assume something is relevant at all,
     >>
     >> Why not?
     >
     > Because they can only act upon properties/methods that they themselves (or a
     > parent protocol) define. FFS, what is so unclear about that? Or are you
    arguing
     > on this subject without ever having actually used a protocol before?
     >
     >> thereby making them entirely different from synthesised/reflective
     >> implementations!
     >>
     >> I'm sorry, but you keep trying to argue that they're the same, but then
     >> admitting that they're not. You can't have it both ways.
     >>
     >> Well, certainly, synthesized default implementations differ from
     >> non-synthesized ones in key respects. However, they do not differ in terms of
     >> the user experience of conforming to the protocol and having to override the
     >> default.
     >
     > Except that that's not true at all, is it?
     >
     > Synthesised default implementations go much further in how they attempt (and
     > potentially fail) to implement those defaults, and in the specific case of
     > Equatable/Hashable they are fully implementing a protocol without a single
     > property of method being raised as a requirement; they are utterly
    different at a
     > fundamental level, no amount of mental contortion changes that fact.
     >
     >>>> Consider for example if a type stores a collection index for
     >>>> performance reasons; this isn't an intrinsic part of the type, nor
     >>>> relevant to testing equality, yet this default implementation will
     >> treat it as such because it *knows nothing about the concrete type's
     >>>> properties*. If a protocol does not define a property then any
    action
     >>>> taken upon such a property is necessarily based upon an assumption;
     >>>> just because it might be fine some of the time, does not make
    it any
     >>>> less flawed.
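A sketch of the cached-index situation described above (the type and property names are only illustrative); the manual == excludes the cache, whereas a memberwise synthesized == would compare it as well:

struct Window: Equatable {
    var lines: [String]
    var lastVisibleIndex: Int?   // cache kept purely for performance

    static func == (lhs: Window, rhs: Window) -> Bool {
        // The cache is intentionally not part of equality.
        return lhs.lines == rhs.lines
    }
}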
     >>>>
     >>>> The big difference here between explicit and implicit synthetic
     >>>> implementations is where this assumption originates; if a method is
     >>>> synthesised implicitly then the assumption is made by the protocol
     >>>> designer alone, with no real involvement by the end developer. If I
     >>>> explicitly opt-in to that default however I am signalling to the
     >>>> protocol that it is okay to proceed. In the former case the
     >>>> assumption is unreasonable, in the latter it is explicitly
     >>>> authorised. It is a difference between "I want to make the decision
     >>>> on what's correct" and "I am happy for you (the protocol
    designer) to
     >>>> decide".
     >>>>
     >>>> Right now, when I conform to Equatable, it is a declaration of "I
     >>>> will implement this", but with this retroactive implicit change
    it is
     >>>> now a declaration of "implement this for me", these are two
    entirely
     >>>> different things. Consider; what if I'm working on a piece of code
     >>>> that requires types to be Equatable, but one of the types I'm using
     >>>> currently isn't, so I quickly throw Equatable conformance onto
    it and
     >>>> go back to what I was doing, with the intention of completing
     >>>> conformance later. With this change that type may now receive a
     >>>> default implementation that is wrong, and I've lost the safety net
     >>>> that currently exists.
     >>>
     >>> Right now, it still wouldn’t compile, so I don’t see why you would do
     >>> that. In the future, if you want to make it not compile, there is
    nothing
     >>> stopping you from conforming to a non-existent “NotYetEquatable”.
    This was
     >>> something that you asked about earlier and it was answered.
     >>
     >> So your solution is to intentionally write invalid code to work
    around the
     >> fact that a feature is being implemented badly?
     >>
     >> You stated a use case where you *want* the compiler to stop your code from
     >> compiling by stating a conformance to Equatable without implementing its
     >> requirements. You then stated that the major problem you have with
    synthesized
     >> `==` is that the compiler will now use a default implementation that you
    might
     >> forget about instead of stopping compilation. Therefore, I demonstrated
    how you
     >> could continue to have the compiler stop your code from compiling. It's
    not my
     >> solution that is intentionally writing invalid code; your stated aim was
    to be
     >> able to do so.
     >
     > My stated aim was nothing of the sort.
     >
     > I was pointing out that right now conforming to Equatable means something
     > entirely different from what it will mean in future if this idiotic change
    makes
     > it into release. Please actually read what I write before deciding for
    yourself
     > what my 'stated aim' is.
     >
     > I am *not* asking for workarounds to circumvent a ridiculously flawed
    change to
     > the language, I am arguing why it is flawed and must be changed. If I wanted a
     > workaround I'd do what I'm now seriously considering, which is ditching Swift
     > completely, as I will not use a language if I can no longer trust the team
     > developing it or the decisions that they make.
     >
     >>>> A non-synthesised/reflective implementation cannot strictly be
     >>>> incorrect, because as long as it is implemented properly it will
     >>>> always be correct within the context of the protocol itself. It may
     >>>> not go quite as far as an end developer might want, but that is
     >>>> because they want to add something onto the protocol, not
    because the
     >>>> protocol is wrong.
     >>>>
     >>>> A synthesised/reflective implementation differs because if it goes
     >>>> too far it is wrong not only within the context of the concrete
    type,
     >>>> but also the protocol itself, it is simply incorrect.
     >>>
     >>> Again, this is an assertion that misses the mark. If the default
     >>> implementation is unsuitable for a type, it’s unsuitable whether it
     >>> “doesn’t go quite as far” or “goes too far.”
     >>
     >> Because not going quite far enough is not a failure of the protocol, as
     >> protocols by their very nature can only go as far as what they
    define. If a
     >> protocol Foo defines two properties, a method which uses those two
     >> properties correctly, then the method is correct. A developer of a
    concrete
     >> type might want to add more information or tailor the behaviour, but that
     >> doesn't make the default implementation incorrect, it's just considering
     >> the type only within the context of being an instance of Foo.
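A sketch of the kind of default implementation meant here, built only from the protocol's own requirements (the protocol and its members are only illustrative):

protocol Sized {
    var width: Double { get }
    var height: Double { get }
}

extension Sized {
    // Uses nothing beyond what the protocol itself declares, so it is
    // well-defined for every conforming type considered purely as a Sized.
    var area: Double {
        return width * height
    }
}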
     >>
     >> Going too far is the opposite; it's the protocol designer messing around
     >> with stuff they do not define at all. It's only ever right by chance, as
     >> it's operating within the context of the concrete type, about which the
     >> protocol does not know anything with certainty.
     >>
     >> Yes, you have defined "not going far enough" and "going too far" based on
     >> whether an implementation uses only protocol requirements or not.
    However, you
     >> haven't at all demonstrated why this distinction is at all meaningful in
    terms
     >> of the issue you describe with a user conforming to a protocol. If there is a
     >> default implementation, either it returns the expected result for the
     >> conforming type or it does not--those are the only two choices. Are you
    arguing
     >> that, empirically, the default implementation for Equatable will more
    often be
     >> unsuitable for conforming types? If so, what's your evidence?
     >
     > What's yours? If this issue was as "considered" as you constantly claim then
     > where is the evidence that there is no meaningful distinction? Surely such
     > evidence exists, or else the issue hasn't been considered at all, has it?
     >
     > Frankly I am sick of being asked to provide evidence when you are seemingly
     > unwilling to do anything in return, especially when you have conveniently
    ignored
     > every single example that I have already given.
     >
     > It cuts both ways; you claim that "going too far" and "not going far
    enough" are
     > the same thing? Well prove it.
     >
     >>> You state but do not give any rationale for the claim that the former is
     >>> not wrong in some context while the latter is always wrong.
     >>>
     >>> By this line of argumentation, you’d be perfectly content if instead we
     >>> simply had the default implementation of == as “return true” because it
     >>> would be somehow not wrong.
     >>
     >> Only if return true were a reasonable default to give in the context
    of the
     >> protocol, which it clearly is not, as it's not performing any kind of
     >> comparison of equality.
     >>
     >> Sure it is; `return true` satisfies all the semantic requirements for
    equality:
     >> reflexivity, symmetry, transitivity; and, in the context of the protocol
    which
     >> only provides for this one facility (determination of equality or
    inequality),
     >> any two instances that compare equal _are_ completely interchangeable "within
     >> the context of the protocol itself," as you would say.
     >
     > The purpose of Equatable is to identify types that can be compared for
    equality;
     > returning true does not satisfy that aim because no such comparison is
    occurring,
     > so your example is intentionally ridiculous. Even a less contrived example
    such
     > as comparing memory addresses doesn't fulfil the purpose of Equatable,
    which is
     > all about comparing equality of different instances that might still be
    the same.
     >
     >>>>> Put another way, what the proposal about synthesizing
     >>>>> implementations for Equatable and Hashable was about can be
     >>>>> thought of in two parts: (a) should there be default
     >>>>> implementations; and (b) given that it is impossible to write
     >>>>> these in Swift, should we use magic? Now, as I said above,
     >>>>> adding default implementations isn't (afaik) even
    considered an
     >>>>> API change that requires review on this list. Really, what
     >>>>> people were debating was (b), whether it is worth it to
     >>>>> implement compiler-supported magic to make these possible.
    Your
     >>>>> disagreement has to do with (a) and not (b).
     >>>>
     >>>> Wrong. The use of magic in this case produces something else
     >>>> entirely; that's the whole point. It is *not the same*,
    otherwise
     >>>> it wouldn't be needed at all. It doesn't matter if it's
    compiler
     >>>> magic, some external script or a native macro, ultimately they
     >>>> are all doing something with a concrete type that is currently
     >>>> not possible.
     >>>>
     >>>> And once again; *I am not arguing against a default
    implementation
     >>>> that cuts boilerplate*, I am arguing against it being implicit.
     >>>> What I want is to be the one asking for it, because it is not
     >>>> reasonable to assume that just throwing it in there is always
     >>>> going to be fine, because it quite simply is not.
     >>>>
     >>>> If you have to ask for it, then it's not a default. You *are*
    against
     >>>> a default implementation.
     >>>
     >>> A default implementation is an implementation that I, as the
    concrete
     >>> type developer, do not have to provide myself. If you want
    default to
     >>> mean only "automatic" then your attempt to pigeon-hole what I am
     >>> arguing is incorrect, because what I am arguing is then neither
    about
     >>> default implementations nor the means of actually implementing
    it, but
     >>> something else entirely.
     >>>
     >>> But as far as I'm concerned it is still absolutely a default
     >>> implementation whether it is requested or not; the difference is
    I, as
     >>> the end developer, am able to refine what type of defaults that
    I want.
     >>>
     >>> The word “default” indicates something that arises in the absence of a
     >>> user indication otherwise.
     >>
     >> Then this proposal is just for a different mechanism for "indicating
     >> otherwise".
     >>
     >> You keep trying to argue that a synthesised/reflective default
     >> implementation is the same as a normal default implementation, yet
    you seem
     >> to be consistently forgetting that even if that is true without this
     >> proposal, that the very proposal itself is to change that, effectively
     >> causing a category of default implementation to become explicitly
     >> opted-into, rather than implicitly. They're still implementations
    that will
     >> be provided automatically, just only when they are permitted to do-so.
     >>
     >> So to be clear, you are *against* them being the *default*: you wish them
    to be
     >> the *otherwise*.
     >
     > You seem to be insisting upon a narrow definition of default; what I want is
     > control over which types of default implementations are provided. Just because
     > they must be opted-into explicitly does not stop them being "default", as they
     > are still implementations that I myself do not need to implement. The
    difference
     > is that I want to actually *want* them rather than have them provided through
     > potentially flimsy assumptions made by a protocol designer. Just because
    there's
     > an extra step doesn't make them any less automatic, otherwise having to
    conform
     > to a protocol in the first place would also prevent them from being defaults.
     >
     > Asking *for* something is more like a middle-ground between the two; the
     > synthetic implementations are still possible defaults, they just aren't
    provided
     > unless you allow them, while omitting the necessary keyword/attribute prevents
     > them being used.
     >
     >>>> On 9 Sep 2017, at 23:17, Gwendal Roué <gwendal.roue@gmail.com> wrote:
     >>>>
     >>>> All right, I'll be more positive: our science, IT, is a
     >>>> *constructive* science, by *essence*. If there is a problem, there
     >>>> must be a way to show it.
     >>>> If you can't, then there is no problem.
     >>>
     >>> You mean just as I have asked for examples that prove
     >>> non-synthetic/reflective default implementations are as dangerous as
     >>> synthetic/reflective ones? Plenty have suggested this is the
    case yet
     >>> no reasonable examples of that have been given either.
     >>>
     >>> However, examples highlighting problems with the synthesised
    behaviour
     >>> are simple:
     >>>
     >>> struct Foo: Equatable { var data: String } // Currently an error; won't
     >>> be in future
     >>>
     >>> Or something a bit more substantial:
     >>>
     >>> struct KeyPair: Equatable {
     >>>     static var count: Int = 0
     >>>
     >>>     var count: Int
     >>>     let key: String   // This is the only property that should be equatable
     >>>     var value: String
     >>>
     >>>     init(key: String, value: String) {
     >>>         let count = KeyPair.count &+ 1
     >>>         KeyPair.count = count; self.count = count
     >>>         self.key = key; self.value = value
     >>>     }
     >>> }
     >>>
     >>> Here the only important property in the key pair is the key, the
    value
     >>> isn't important (only the keys are to be considered unique) and the
     >>> count is just a throwaway value. The synthesised default
     >>> implementation for this concrete type will therefore be completely
     >>> wrong, likewise for Hashable, which will likely produce radically
     >>> different results for instances that should be the same.
     >>
     >> I notice that despite asking endlessly for examples, the ones I've given
     >> are being ignored. In future I shall remind people asking for examples
     >> where they can shove them.
     >
     > And once again, totally ignored. You seem to love asking for "evidence"
    but why
     > exactly should I bother giving anything if you ignore it when I try to?
     >

Chris Lattner already said that the core team discussed your concerns:

<https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20170814/038854.html>

<https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20170814/038883.html>

The original idea was for most types to be *implicitly* equatable and hashable:

<https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160307/012099.html>

The accepted proposal, with *explicit* declaration of conformance, is a good compromise.
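Under SE-0185, declaring the conformance is the explicit opt-in; roughly:

// Declaring the conformance requests synthesis (when all stored
// properties are themselves Hashable):
struct Point: Hashable {
    var x: Int
    var y: Int
}

// No conformance declared, so nothing is synthesized:
struct Opaque {
    var x: Int
}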

Instead of discussing hypothetical issues with SE-0185, we can wait for Swift 4.1 beta.

-- Ben

···

On 14 Sep 2017, at 15:31, Haravikk wrote:

On 14 Sep 2017, at 02:12, Xiaodi Wu wrote:

On Wed, Sep 13, 2017 at 09:13 Haravikk wrote:

I mean because not once have you summarised what these alleged "considerations" were; if they exist then you should be able do so, yet all I am hearing is "it was considered", which frankly is not an argument at all as it is entirely without substance.

Of course it is not an argument at all. It is a factual statement. The objections which you mentioned were also mentioned prior to a decision about SE-0185. The community and the core team had an opportunity to view those objections. After that time, a decision was made, having considered all the stated pros and cons which included the ones that you are now repeating. What "considerations" are you looking for?

Ones with proof that they were ever made! Once again you are stating that these issues were "considered", yet you show not a single shred of proof that that was the case. You're asking me to take you at your word but I have no reason to trust that the problem has been as carefully considered as you claim.
I was involved in one such discussion and the response from the core team was frankly pitiful; they did not provide any justification whatsoever.