Handling edge cases in JSONEncoder/JSONDecoder key conversion strategies (SR-6629)

@hartbit recently revived some discussion about this topic in a PR by @norio_nomura, and I figured that instead of restricting the discussion to the PR, this would be a perfect opportunity to solicit some community feedback. As such:

Hi all,

In Swift 4.1, we introduced key strategies to JSONEncoder and JSONDecoder to make it easier to conform to some common JSON practices. Specifically, we wanted to facilitate converting keys from Swift’s camelCaseGuidelinesForPropertyNames to the common_json_snake_casing_of_keys, so these key strategies allow for automatic case conversion of all CodingKeys to snake case on encode, and from snake case on decode.

However, there are some edge cases in round-tripping keys which the API does not currently address. Specifically, properties which contain words in all-caps (usually initialisms like “HTTP”/“URL”/“UUID”/etc.) don’t directly round-trip due to the lossy nature of these transformations; for instance, a key for myHTTPLink will appear in the JSON as my_http_link, but when converted back from JSON, it becomes myHttpLink (because the converter has no way of knowing that "http" is a special case which should become "HTTP" and not "Http").
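To make the lossiness concrete, here is a simplified sketch of the two conversions. This is not the actual Foundation implementation; `toSnakeCase` and `fromSnakeCase` are invented helper names, but they illustrate why the round trip can't recover "HTTP":

```swift
import Foundation

// Simplified sketch of the two conversions; NOT the Foundation implementation.
func toSnakeCase(_ key: String) -> String {
    var words: [String] = []
    var current = ""
    var previousWasUppercase = false
    for character in key {
        if character.isUppercase {
            // A lowercase-to-uppercase boundary starts a new word.
            if !previousWasUppercase && !current.isEmpty {
                words.append(current)
                current = ""
            }
            current.append(contentsOf: character.lowercased())
            previousWasUppercase = true
        } else {
            // An uppercase run followed by lowercase splits before its last
            // letter (e.g. "HTTPLink" -> "http" + "link").
            if previousWasUppercase && current.count > 1 {
                let last = current.removeLast()
                words.append(current)
                current = String(last)
            }
            current.append(character)
            previousWasUppercase = false
        }
    }
    words.append(current)
    return words.joined(separator: "_")
}

func fromSnakeCase(_ key: String) -> String {
    let parts = key.split(separator: "_").map(String.init)
    guard let first = parts.first else { return key }
    // Capitalizing each word loses any original all-caps spelling.
    return parts.dropFirst().reduce(first) { $0 + $1.capitalized }
}

print(toSnakeCase("myHTTPLink"))     // my_http_link
print(fromSnakeCase("my_http_link")) // myHttpLink — the "HTTP" spelling is lost
```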

More concretely, given the following definition:

struct Foo : Codable {
    var myHTTPLink: URL
}

the synthesized init(from:) will try to make the following decode(_:forKey:) call:

myHTTPLink = try container.decode(URL.self, forKey: .myHTTPLink)

When a .keyDecodingStrategy other than .useDefaultKeys is applied to JSONDecoder, requests for keyed containers cause it to first map the keys in the found container:

switch decoder.options.keyDecodingStrategy {
case .useDefaultKeys:
    self.container = container
case .convertFromSnakeCase:
    // Convert the snake case keys in the container to camel case.
    // If we hit a duplicate key after conversion, then we'll use the first one we saw. Effectively an undefined behavior with JSON dictionaries.
    self.container = Dictionary(container.map {
        key, value in (JSONDecoder.KeyDecodingStrategy._convertFromSnakeCase(key), value)
    }, uniquingKeysWith: { (first, _) in first })
case .custom(let converter):
    self.container = Dictionary(container.map {
        key, value in (converter(decoder.codingPath + [_JSONKey(stringValue: key, intValue: nil)]).stringValue, value)
    }, uniquingKeysWith: { (first, _) in first })
}

This means that a JSON object which looks like

{
    "my_http_link": ...,
    "my_other_thing": ...
}

is mapped to

{
    "myHttpLink": ...,
    "myOtherThing": ...
}

When the decode(_:forKey:) call above happens, an attempt is made to look up myHTTPLink's stringValue ("myHTTPLink") in the container; since no such key is present, the property can’t be found.

The current workaround for this is to give the CodingKey a string value which matches the round-tripped string:

private enum CodingKeys : String, CodingKey {
    case myHTTPLink = "myHttpLink"
}

but of course, this is suboptimal. Besides the fact that this behavior is undiscoverable until you get bitten by it, requiring the user to define a CodingKeys enum somewhat defeats the purpose of using the strategy in the first place.
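For completeness, here is a self-contained sketch of the workaround in action (the property is typed as `String` here just to keep the example minimal, and the URL string is an illustration):

```swift
import Foundation

struct Foo: Codable {
    var myHTTPLink: String
    private enum CodingKeys: String, CodingKey {
        case myHTTPLink = "myHttpLink" // matches the round-tripped spelling
    }
}

let decoder = JSONDecoder()
decoder.keyDecodingStrategy = .convertFromSnakeCase

// "my_http_link" is converted to "myHttpLink", which now matches the key above.
let json = "{\"my_http_link\": \"https://example.com\"}".data(using: .utf8)!
let foo = try! decoder.decode(Foo.self, from: json)
print(foo.myHTTPLink) // https://example.com
```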

There are a few approaches we could take to address this issue (each with tradeoffs), but we’re interested in getting some feedback on them, especially from folks who use these strategies:

  1. @norio_nomura’s PR offers a solution which addresses the issue of attempting to convert "my_http_link" into "myHTTPLink" by sidestepping the problem altogether. With a new key strategy (.useSnakeCasedKeys), instead of trying to convert from a String to a CodingKey, the solution is to only ever apply the transformation in one direction — from the CodingKey to the String. Containers are left alone, and when a decode(_:forKey:) call is made, the key is reconverted into a string to match what’s in the payload. This is promising, but has two main wrinkles:
    • This relies on us having access to the encode-side conversion whether we’re encoding or decoding, which means that it behaves differently from .custom conversions. With .custom conversions, we only ever have one side of the transformation, so writing your own .custom strategy would leave you no better off.
    • Slightly more problematic is access to KeyedDecodingContainer.allKeys, which is a view into a container’s keys based on the key type you’re accessing the container with. This is done via a mapping of the actual string keys in the container to the CodingKey type: public var allKeys: [Key] { return self.container.keys.compactMap { Key(stringValue: $0) } }. This operation inherently converts from strings to keys, which means that we cannot apply a transformation of key-to-string. This means that as-is, the value of allKeys would have keys missing from it, even though you would be able to get a result by decode(_:forKey:)'ing one of those missing keys. This is problematic for types which inspect allKeys in order to dynamically decode values from the container
      • One solution to this solution could be to handle CaseIterable CodingKey types specially — if a CodingKey type is CaseIterable, we can iterate over its cases and convert those to snake case instead of going the other way. Of course, no existing CodingKeys today are CaseIterable, which means that in order to opt into this implicit behavior, you have to know to make your CodingKeys CaseIterable (and we’d likely need to make synthesized CodingKeys CaseIterable as well).
  2. Another solution could be to expose the underlying functions that JSONEncoder and JSONDecoder use to perform these conversions (say, as JSONEncoder.KeyEncodingStrategy.toSnakeCase(_:) and JSONDecoder.KeyDecodingStrategy.fromSnakeCase(_:)); with access to these functions, it might be easier to write your own .custom conversion which watches out for these specific cases that you might care about and want to handle specifically. This still requires you to be aware of the issue, and write your own solution
  3. Offer, either through a property or another strategy, a customizable list of words that are special-cased, e.g. .convertFromSnakeCaseWithExceptions(["HTTP", "URL", "UUID", ...]); these words would be used in the conversion so that when we split words on "_", words that match the list can be handled specially
    • The wrinkle here is some amount of complexity: does it matter whether the exceptions are given in upper-case, lower case, or mixed case (e.g. given "hTTp" should "my_http_link" become "myhTTpLink")? What about keys in JSON which don’t exactly match the expected case (e.g. "my_hTTp_link")?
  4. Some combination of the above, or a different approach which we haven’t considered
  5. Leave things as-is; this is an acceptable risk for the convenience of applying the strategy to a whole payload possibly full of edge cases. [To be clear, I don’t actually believe that this is a reasonable long-term solution to the problem.]
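For a sense of what options 2 and 3 amount to, here is a sketch of how you can approximate them today with the existing .custom strategy. The `AnyKey` type, the `initialisms` exception list, and the splitting logic are all invented for the example, not existing API:

```swift
import Foundation

// A CodingKey that can take on any string value; the name `AnyKey` is invented.
struct AnyKey: CodingKey {
    var stringValue: String
    var intValue: Int? { return nil }
    init?(stringValue: String) { self.stringValue = stringValue }
    init?(intValue: Int) { return nil }
}

// Hypothetical exception list, as in option 3.
let initialisms: Set<String> = ["http", "url", "uuid"]

let decoder = JSONDecoder()
decoder.keyDecodingStrategy = .custom { codingPath in
    let key = codingPath.last!.stringValue
    let words = key.split(separator: "_").map(String.init)
    guard let first = words.first else { return codingPath.last! }
    let converted = words.dropFirst().reduce(first) { result, word in
        // Restore known initialisms to all-caps; capitalize everything else.
        result + (initialisms.contains(word) ? word.uppercased() : word.capitalized)
    }
    return AnyKey(stringValue: converted)!
}

struct Foo: Codable {
    var myHTTPLink: String
}

// "my_http_link" now converts to "myHTTPLink", so decoding succeeds.
let json = "{\"my_http_link\": \"https://example.com\"}".data(using: .utf8)!
let foo = try! decoder.decode(Foo.self, from: json)
print(foo.myHTTPLink) // https://example.com
```

As noted in option 2, this still requires you to know about the edge case up front and to reimplement the splitting yourself, since the stock conversion functions aren't exposed.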

All three main solutions have a bit of a discoverability problem as you won’t necessarily know to apply a different strategy until you hit the problem. There are two riffs on solution 3 which might help a bit with this:

  1. Deprecate .convertFromSnakeCase in favor of the option which explicitly requires exceptions; this way existing code is made aware of a potential solution to a problem that the author might not know they had
  2. Instead of offering a new strategy, give JSONEncoder and JSONDecoder a mutable property holding the list of exceptional words. The list can come prepopulated with some reasonable defaults (for some definition of reasonable), but the list can be added to/removed from as necessary. This doesn’t make the problem more discoverable, but might give a slightly better default answer. The risk here comes for folks who have already accounted for this deficiency and provided their own CodingKeys with values like myHttpLink because the current workaround will now be wrong

Thoughts on this issue? Have you run into this case and had to work around it somehow? Curious to see if folks have hit this in practice.

(To be clear, at this point in the release, we are not looking to introduce new API into Swift 4.2; we are interested in feedback on a solution that would be both ergonomic and more easily correct than what we have in place today.)


Something that’s helped me consider this issue is picking a word that’s both a normal word and an acronym, like “stun” and “STUN”:

struct Phaser: Codable {
    var useStun: Bool
}

struct Proxy: Codable {
    var useSTUN: Bool
}

These both snake-case encode to { "use_stun" : true }, but only Phasers can be successfully roundtripped:

print(try! decoder.decode(
    Phaser.self, from: encodedPhaser.data(using: .utf8)!
)) // OK: Phaser(useStun: true)

print(try? decoder.decode(
    Proxy.self, from: encodedProxy.data(using: .utf8)!
)) // Failure: nil

Neither decoding is more correct than the other without context: use_stun to useStun is right for Phasers, and use_stun to useSTUN is right for Proxies.

Part of what @itaiferber and @norio_nomura are considering is having Proxy’s snake-case decoder look at Proxy’s coding keys for encoding. This would solve the roundtripping issue (because it uses context), but it has the caveats mentioned in Itai’s solution 1.

Full Example
import Foundation

struct Phaser: Codable {
    var useStun: Bool
}

struct Proxy: Codable {
    var useSTUN: Bool
}

var encoder = JSONEncoder()
encoder.keyEncodingStrategy = .convertToSnakeCase

let encodedPhaser = String(data: try! encoder.encode(Phaser(useStun: true)), encoding: .utf8)!
print(encodedPhaser) // Prints {"use_stun":true}

let encodedProxy = String(data: try! encoder.encode(Proxy(useSTUN: true)), encoding: .utf8)!
print(encodedProxy) // Prints {"use_stun":true}

var decoder = JSONDecoder()
decoder.keyDecodingStrategy = .convertFromSnakeCase

print(try! decoder.decode(
    Phaser.self, from: encodedPhaser.data(using: .utf8)!
)) // OK: Phaser(useStun: true)

print(try? decoder.decode(
    Proxy.self, from: encodedProxy.data(using: .utf8)!
)) // Failure: nil

Without knowing the current implementation, that is how I had assumed it would already work. Even though it isn’t, I think this sounds like the cleanest solution, especially since I’m not sure types that use allKeys even want the converted form most of the time.

The issue described in the thread “keyDecodingStrategy and keys with custom raw values” would probably also go away with approach 1, right?

I actually like the second option. Having just a closure ([CodingKey]) -> CodingKey type and multiple strategies with that signature would:

  1. Allow the simple cases to be solved in one line as before:
    encoder.keyEncodingStrategy = KeyEncodingStrategy.toSnakeCase
  2. Allow flexibility in more complex cases and
  3. Allow delegation to the already existing strategies
encoder.keyEncodingStrategy = { keys in
    if ... {
        // do your custom transform
    } else {
        // delegate to an existing strategy, e.g. KeyEncodingStrategy.toSnakeCase
    }
}
This of course doesn’t exclude 1 since the upper case strategy can still be supported if we found it’s not too specific.

The concern here is that for types that rely on allKeys, there’s no way of knowing that the keys are missing for this reason, and there’s no recourse.

If you encode a type that doesn’t belong to you which relies on allKeys, it would be unexpected for it to stop working because you want to apply the key conversion strategy; you don’t necessarily control the type, nor would you be able to help it along.

Yes — because @ilyapuchka is already providing the key in snake_case, this would end up matching what’s in the JSON. If the key were intentionally not in snake_case, decoding would similarly fail (i.e. the edge case here would be flipped).

Thanks for this succinct example, by the way! This helps illustrate the issue much nicer than my abstract rambling. :grimacing:

I am in full support of @norio_nomura’s solution - I think it’s the perfect way to solve this - and I have a few comments about the ‘wrinkles’:

  • As I see it, all key conversions should be one-way:
    If custom conversions also only went in the Encoding direction, then the exact same strategy that @norio_nomura is suggesting can be applied here too.
    So in general I would say that revising the API so that there is just a single key-coding strategy for both encoding and decoding instead of having two ways would be the right way to go.

  • Regarding the allKeys view. I am not sure when this is used. It is a requirement from the protocol, but isn’t the protocol designed along with the actual Foundation implementations? Would it make sense to deprecate the allKeys requirement from the KeyedDecodingContainerProtocol? If the PR in question does not break anything it must mean that the allKeys property is not being used internally, right? And if it can’t always be generated correctly (when key conversion is involved) then perhaps it could be removed?

  • Random thought: Could allKeys conditionally be provided in case the associated type Key of KeyedDecodingContainerProtocol is indeed CaseIterable?

  • Another random thought: With regards to CodingKey being made CaseIterable in the synthesized CodingKeys I think that could make sense too. And for existing CodingKeys implementations, could the compiler show a warning in case a CodingKey-conforming type is an enum without associated values? Perhaps a warning that goes away after setting a compiler flag (similarly to the Swift 3 @objc inference, where you could switch the compiler flag off once you had considered all the cases)…

Above I voiced my support for option 1.

I should also comment on the other options:

With regards to option 2. It would be a huge amount of work to handle all edge cases in a custom conversion - and this custom conversion could be implemented ‘far away’ from the model types in question. I think that it would lead to code that is hard to reason about.

@krilnon’s example shows why option 3 is sub-optimal. The exceptions may not always be correct - even in the same ‘decoding session’.

One further comment about solution 1:
If you have already manually created snake case conversions inside of your CodingKeys and then enable snake case conversion, then currently this breaks decoding for the snake cased keys (since the camelCased keys will be looked up on decoding).
With solution 1, you would actually be free to leave these in your code base as long as you wish, since no matter whether your original key was useStun, useSTUN, or even use_stun, solution 1 will be able to match it when decoding. I think that this is a nice ‘robustness’ quality.

Thanks for the input! Some comments:

This might be possible to do in the long run, but we’d likely need to come up with new names for the properties/types; once ABI stability hits, we’ll need to keep these conversions working (though deprecated), even if they don’t necessarily do the right thing.

I don’t think removing allKeys is the right direction to go in (nor can we even do it). allKeys is used for types which want to dynamically iterate over the contents of a container, key by key, in order to decode some values. Dictionary itself does this, for example, in its init(from:) though you already know that in that particular case, we’re looking to opt Dictionary keys out of the conversion process because of the type of data they represent (user data as opposed to model data). Types other than Dictionary, however, won’t get this special exemption.

allKeys is most often helpful when you don’t know what’s in the payload but want to decode it anyway. Usually, the approach taken here is what Dictionary above does — define your own struct CustomKey : CodingKey type (or similar) which can take on any String or Int value; create a container keyed on that custom key type, and iterate allKeys, decoding values as you go. This allows you to get at all the values in a container, even if you don’t know what keys would be used.
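As a sketch of that pattern (the CustomKey name follows the description above; the payload shape and the Payload type are invented for the example):

```swift
import Foundation

// A key type that accepts any string or integer value, as described above.
struct CustomKey: CodingKey {
    var stringValue: String
    var intValue: Int?
    init?(stringValue: String) { self.stringValue = stringValue; self.intValue = nil }
    init?(intValue: Int) { self.stringValue = "\(intValue)"; self.intValue = intValue }
}

// Decodes a flat object of unknown string keys into a dictionary.
struct Payload: Decodable {
    var values = [String : Int]()
    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CustomKey.self)
        for key in container.allKeys {
            values[key.stringValue] = try container.decode(Int.self, forKey: key)
        }
    }
}

let json = "{\"a\": 1, \"my_http_link\": 2}".data(using: .utf8)!
let payload = try! JSONDecoder().decode(Payload.self, from: json)
print(payload.values) // ["a": 1, "my_http_link": 2] (dictionary order not guaranteed)
```

With .convertFromSnakeCase applied, the same loop would instead surface CustomKey("myHttpLink") from allKeys, which is the wrong-direction conversion discussed next.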

Because the conversion to allKeys is String-to-CodingKey, in this dynamic case, even though you’ll always get all of the keys in the container, they won’t be converted in the expected direction (e.g. you’d always successfully get back CustomKey("myHttpLink")).

  • On the one hand, this might be alright — chances are, you’d use the key to immediately decode a value, in which case, the conversion going from CodingKey-to-String will handle things (going from CustomKey("myHttpLink") back to "my_http_link")
  • On the other, if you want to pass on the stringValue of any of these keys to something else, i.e. you’re doing something approximately like what Dictionary does, the key values will be wrong. This is exactly what you’re currently fixing in PR-16238, but for all other types

It’s also perfectly reasonable to want to use allKeys to automate the decoding of a container with a lot of known possible key values:

private enum KnownAPIKeys : String, CodingKey {
    case someAPIKey1
    case someAPIKey2
    // ...
    case someAPIKey40
}

init(from decoder: Decoder) throws {
    var container = ...
    // ...

    let apiPayloadContainer = try container.nestedContainer(keyedBy: KnownAPIKeys.self, ...)
    self.apiPayload = [String : ...]()
    for key in apiPayloadContainer.allKeys {
        if let value = try apiPayloadContainer.decodeIfPresent(..., forKey: key) {
            self.apiPayload[key.stringValue] = value
        }
    }
}

In other words, it’s perfectly reasonable to use a container keyed on CodingKeys and allKeys to filter a container based on the data you actually care about, provided by another API.

In this case, any of the API keys may be missing, but preserving them if they’re there is important. If we break allKeys, data here would just be silently missing for any key which falls into this edge case.

Now, how common is this use case? I would guess that this is relatively rare. It would be difficult to find out, but it’s important to consider. Data loss is no joke, and there won’t be anything (within the language at least) that would help warn about this.

Figuring out what the PR breaks is the hard part. :slightly_smiling_face: In this case, allKeys is definitely used internally, but I care about external usages as well.

I don’t think we can refine the protocol right now in this way without it being a breaking change. However, as mentioned above, this is exactly the opposite of when allKeys is most often used — when you don’t know what the keys are (and thus, they cannot be CaseIterable).

We could likely do this, yes. That’s a separate issue that we should pitch, though.

A warning about what? If the CodingKeys type could be CaseIterable, but isn’t? I don’t think we have reason to force folks to have CaseIterable enums if they don’t care about them being so.

In all of this, I want to say that I am not against option 1, on principle or otherwise — I think it’s a simple, clean solution that can fix issues that folks already have, and it seems like the biggest surprise is that things don’t already work this way.

I do however want to make sure that given that we’d be shuffling one prominent edge case for another less common but more subtle edge case, we’re willing to accept the cost after understanding the tradeoff.

Thank you so much for the very detailed comments. Your patience in explaining the importance of the ABI stability and the purpose of the allKeys API are greatly appreciated! The examples you give for the allKeys give great inspiration for new ways to use the encoding and decoding APIs.

Would it be a solution to present a key conversion strategy like what is suggested in @norio_nomura’s PR, and then accept that allKeys may not actually contain all keys?
Would it make sense, even now, to check whether the CodingKey type is CaseIterable when generating the allKeys collection?

Always happy to help clarify!

Yes — from the few responses so far, most are in favor of this. This is possible if we believe that the cost of having allKeys behaving inconsistently is worth making other cases behave better.

This is something I believe we can add, but I’m not sure whether making a subtle edge case behave even more subtly inconsistently would help or hinder the situation. Sometimes it’s better to be consistently wrong than inconsistently wrong. :slightly_smiling_face: