That's a neat idea.
As you say, that would give even more structure, but I think the same structure could be conveyed using protocols.
As a proof-of-concept I tried adding functionality to JSONEncoder and JSONDecoder for different enum coding strategies.
The code can be found here and explored by running the tests:
My example starts out with the synthesized `Codable` conformance for the `Command` enum in the example. I added an unlabelled single-value case and a case with no associated values, and applied the code that would be synthesized under this proposal.
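The full `Command` type isn't reproduced in this post, but based on the tests further down it presumably looks roughly like this (the `noop` case name and the shape of `Person` are my assumptions):

```swift
import Foundation

struct Person: Codable, Equatable {
    let name: String
}

enum Command: Codable, Equatable {
    // Labelled associated values, as in the original example.
    case store(key: String, value: Int)
    // Added: a single unlabelled associated value.
    case single(Person)
    // Added: no associated values (name assumed).
    case noop
}
```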
Then I added the following three (empty) marker protocols: `EnumCodingKey`, `SingleUnlabelledEnumCaseCodingKey` and `NoValueEnumCaseCodingKey`.
The (would-be synthesized) `CodingKeys` is made to conform to `EnumCodingKey`, the keys for unlabelled single-value cases are made to conform to `SingleUnlabelledEnumCaseCodingKey`, and the keys for cases without associated values to `NoValueEnumCaseCodingKey`.
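Since the marker protocols are empty, the coders can only use them as runtime type tags. A minimal sketch of that mechanism (the protocol names come from the proof of concept; the key enum here is illustrative):

```swift
import Foundation

// Empty marker protocols refining CodingKey, as in the proof of concept.
protocol EnumCodingKey: CodingKey {}
protocol NoValueEnumCaseCodingKey: CodingKey {}

// A would-be-synthesized top-level key type, tagged with the marker.
// (String raw values give the CodingKey conformance for free.)
enum CommandCodingKeys: String, EnumCodingKey {
    case store, single, noop
}

// An encoder can branch on the marker when a keyed container is requested.
func isEnumKeyType<K: CodingKey>(_ type: K.Type) -> Bool {
    return type is EnumCodingKey.Type
}
```

With this in place, `isEnumKeyType(CommandCodingKeys.self)` is `true`, while an untagged key type reports `false`, so the encoder knows when the container it is writing represents an enum.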
In the `JSONEncoder` and `JSONDecoder` I added the following encoding and decoding strategies.
For the encoder:

```swift
public enum EnumEncodingStrategy {
    case useDefaultEncoding
    case useDiscriminator(String)
    case flattenUnlabelledSingleValues
}

public enum NoValueEnumEncodingStrategy {
    case useDefaultEncoding
    case useBool(Bool)
    case useInt(Int)
    case useString(String)
}
```
and similarly for the decoder:

```swift
public enum EnumDecodingStrategy {
    case useDefaultDecoding
    case useDiscriminator(String)
    case flattenUnlabelledSingleValues
}

public enum NoValueEnumDecodingStrategy {
    case useDefaultDecoding
    case useBool(Bool)
    case useInt(Int)
    case useString(String)
}
```
The `.useDiscriminator(String)` strategy flattens the two levels of keyed containers into one, using the supplied `String` as the discriminator key - similar to what other people have requested in previous discussions. The result is:
```swift
func testDiscriminatorCoding() {
    let encoder = EnumCoding.JSONEncoder()
    encoder.enumEncodingStrategy = .useDiscriminator("_discrim")
    let data = try! encoder.encode(Command.store(key: "a", value: 42))
    XCTAssertEqual(String(bytes: data, encoding: .utf8), """
        {"_discrim":"store","key":"a","value":42}
        """)

    let decoder = EnumCoding.JSONDecoder()
    decoder.enumDecodingStrategy = .useDiscriminator("_discrim")
    let command = try! decoder.decode(Command.self, from: data)
    XCTAssertEqual(command, Command.store(key: "a", value: 42))
}
```
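For comparison, the encoding synthesized under the proposal keeps two nested keyed containers - an outer one keyed by the case name and an inner one keyed by the associated-value labels - which the standard Foundation coders produce as-is (the `Command` and `Person` definitions here are my reconstruction):

```swift
import Foundation

struct Person: Codable, Equatable { let name: String }

enum Command: Codable, Equatable {
    case store(key: String, value: Int)
    case single(Person)
}

// Default synthesized encoding: case name outside, labels inside.
let encoder = JSONEncoder()
encoder.outputFormatting = .sortedKeys
let data = try! encoder.encode(Command.store(key: "a", value: 42))
print(String(data: data, encoding: .utf8)!)
// {"store":{"key":"a","value":42}}
```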
Similarly, the `.flattenUnlabelledSingleValues` strategy does the following:
```swift
let encoder = EnumCoding.JSONEncoder()
encoder.enumEncodingStrategy = .flattenUnlabelledSingleValues
let data = try! encoder.encode(Command.single(Person(name: "Jane Doe")))
XCTAssertEqual(String(bytes: data, encoding: .utf8), """
    {"single":{"name":"Jane Doe"}}
    """)

let decoder = EnumCoding.JSONDecoder()
decoder.enumDecodingStrategy = .flattenUnlabelledSingleValues
let command = try! decoder.decode(Command.self, from: data)
XCTAssertEqual(command, Command.single(Person(name: "Jane Doe")))
```
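Again for comparison, the default synthesis wraps an unlabelled associated value under a positional `_0` key - that is the extra level this strategy flattens away (types reconstructed as above):

```swift
import Foundation

struct Person: Codable, Equatable { let name: String }

enum Command: Codable, Equatable {
    case single(Person)
}

// Default synthesized encoding: the unlabelled value gets a `_0` key.
let encoder = JSONEncoder()
let data = try! encoder.encode(Command.single(Person(name: "Jane Doe")))
print(String(data: data, encoding: .utf8)!)
// {"single":{"_0":{"name":"Jane Doe"}}}
```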
It required a small bit of hackery in the `JSONEncoder`, but it's really not too bad, and it was basically just to prove that, with a little bit of extra synthesized information, we could put these transformations in the coders themselves.
But I guess that both your suggestion and this protocol-based one could be add-ons to the current proposal.