–0.5.
This removes the clear footgun that I detailed in the previous review thread. Adding a member with a default value is now always a backward compatible change, whether you're working with a `struct`, a `class`, or an `enum`. That's a clear and important improvement over the previous iteration. I think the proposal has finally gotten to a place where it isn't actively harmful.
At the same time, removing the footgun also means that the design has ended up in a weird middle ground of the design space that I think will be unintuitive to everyone. It obviously doesn't align with the expectations of those who prefer the case-as-value approach, since the case isn't encoded as a value. But it doesn't align with the expectations of those who prefer the case-as-key approach either, because they expect single unlabelled values to be encoded without a nested container. (As @Morten_Bek_Ditlevsen mentions above.) I don't think anyone will find this encoding of `Either<L,R>` intuitive or elegant:
```json
{
  "left": {
    "_0": 1
  }
}
```
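For contrast, a case-as-value encoding of the same value would surface the case as a discriminator field rather than as a key. The exact field names below (`"kind"` and `"value"`) are my own illustrative choice, not anything the proposal specifies, but the overall shape is what OpenAPI-style discriminated unions commonly look like:

```json
{
  "kind": "left",
  "value": 1
}
```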
It seems like there are two design goals at play here: one is supporting backward compatibility, and the other is aligning with user expectations for what the encoded data is going to look like. (We clearly can’t align with everyone’s expectations, but we could at least align with a wide subset of users.)
What this iteration of the proposal clearly demonstrates is that there is no way to satisfy both design goals with the case-as-key approach. We either have to sacrifice backward compatibility, as earlier versions of the proposal did, or we have to sacrifice alignment with the expectations of those used to the case-as-key approach, as this version does. Compare this to the case-as-value approach: it doesn't have to make that trade-off. It naturally supports backward compatibility and aligns with the expectations of those used to that approach. (It also has a number of other desirable properties, mentioned previously, like avoiding duplicated `CodingKeys` enums, clearly encoding the fact that there can only be one case, matching how unions are commonly encoded in JSON APIs that use the OpenAPI spec, and potentially letting you evolve a `struct` into an `enum` if all of its properties have default values.)
So given those design goals, and the sacrifices needed to make the case-as-key approach backward compatible, what motivation is left for choosing it instead of the case-as-value approach? If the encoded data isn’t going to match the expectations of people used to the case-as-key approach, why not at least let it match the expectations of people used to the case-as-value approach and reap the other benefits of that approach?
What is gained by stubbornly holding on to the case-as-key approach, even though by supporting backward compatibility we’ve lost what made it elegant in the first place?