CodingKeyPath: Add support for Encoding and Decoding nested objects with dot notation

I added a few more sections to Alternatives Considered:

Name the type CodingPath instead of CodingKeyPath

In the pitch thread for this proposal, it was brought up that the name CodingKeyPath could cause confusion with the existing KeyPath type. We could instead choose a different name for this type, such as CodingPath.

We would also need to rename the other types and methods added in this proposal:

  • encoder.keyPathContainer(keyedBy: CodingKeyPaths.self) would become encoder.pathContainer(keyedBy: CodingPaths.self)
  • KeyPathEncodingContainer would become PathEncodingContainer

Enable this behavior by setting a static flag on the CodingKeys type

We could potentially allow authors to opt-in to this behavior by configuring a static flag on their CodingKeys type:

// In the Standard Library:

public protocol CodingKey {
  // A new protocol requirement:
  static var options: CodingKeyOptions { get }
}

public struct CodingKeyOptions {
  var dotNotationRepresentsNestedPath: Bool
}

// Default configuration to preserve source compatibility and existing behavior:
public extension CodingKey {
  static var options: CodingKeyOptions {
    CodingKeyOptions(dotNotationRepresentsNestedPath: false)
  }
}
// EvolutionProposal.swift
struct EvolutionProposal: Codable {
  enum CodingKeys: String, CodingKey {
    case id
    case title
    case reviewStartDate = "metadata.review_start_date"
    case reviewEndDate = "metadata.review_end_date"

    static var options: CodingKeyOptions {
      CodingKeyOptions(dotNotationRepresentsNestedPath: true)
    }
  }
}
This approach seems appealing on the surface:

  • We would only need to introduce one new type to the Standard Library (CodingKeyOptions)
  • CodingKeyOptions could be extended in the future to provide other customization points.
    • For example, we could add a key-transformation option similar to Foundation.JSONEncoder.KeyEncodingStrategy.convertToSnakeCase.
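As a point of comparison, Foundation's existing key-conversion strategy is configured on the encoder rather than on the key type. A minimal example of that API (the `Proposal` type here is illustrative):

```swift
import Foundation

struct Proposal: Codable {
    var reviewStartDate: String
}

let encoder = JSONEncoder()
// Converts camelCase property names to snake_case keys during encoding.
encoder.keyEncodingStrategy = .convertToSnakeCase

let data = try encoder.encode(Proposal(reviewStartDate: "2020-06-01"))
print(String(data: data, encoding: .utf8)!)
// {"review_start_date":"2020-06-01"}
```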

The unfortunate downside is that it's not possible to introduce new behavior on the existing CodingKeys type without breaking backward compatibility with existing Encoder and Decoder implementations.

  • We could update Foundation's encoders and decoders (JSONEncoder, PlistEncoder, etc.) to respect these new options, but existing third-party implementations would also need to be updated.
  • We shouldn't introduce options that aren't guaranteed to be respected in the concrete Encoder or Decoder implementation being used.

The only way to add new behavior to all existing Encoder and Decoder implementations is to introduce a new, enhanced version of CodingKey, along with corresponding enhanced KeyedEncodingContainer and KeyedDecodingContainer wrappers:

/// Like a `CodingKey`, but with additional configuration options. ("CodingKey 2.0")
public protocol ConfigurableCodingKey {
  var stringValue: String { get }
  var intValue: Int? { get }
  static var options: CodingKeyOptions { get }
}

public struct CodingKeyOptions {
  var dotNotationRepresentsNestedPath: Bool
}

public extension Encoder {
  func container<ConfigurableKey: ConfigurableCodingKey>(keyedBy: ConfigurableKey.Type) -> ConfigurableKeyedEncodingContainer<ConfigurableKey>
}

/// This `ConfigurableKeyedEncodingContainer` would wrap existing `KeyedEncodingContainer` implementations,
/// which would allow the Standard Library to apply additional transformations.
/// All existing `Encoder` implementations would get this support "for free".
public struct ConfigurableKeyedEncodingContainer<ConfigurableKey: ConfigurableCodingKey> {

  private let underlyingKeyedEncodingContainer: KeyedEncodingContainer<_>

  public func encode<T: Encodable>(_ value: T, atKey key: ConfigurableKey) {
    // Apply transformations to the key as specified by the `CodingKeyOptions`.
    // The Standard Library could add arbitrarily complex key transformations here,
    // and they would apply to all existing `Encoder` implementations.
  }
}

// ... along with a corresponding `ConfigurableKeyedDecodingContainer` implementation.
  • The CodingKeyPath implementation in this proposal uses this exact approach to add additional behavior on top of the existing KeyedEncodingContainer and KeyedDecodingContainer APIs.

  • This would be an improvement over the existing CodingKeys type, but it has worse ergonomics than CodingKeys and the proposed CodingKeyPaths.

    • The author believes there aren't enough additional use cases for a static CodingKeyOptions customization point for it to pull its syntactic weight.

    • Static type-level configuration is less useful than per-property configuration, which cannot be done ergonomically using the existing CodingKeys design.

  • A "key" and a "path" have fundamentally different encoding and decoding semantics. It seems more appropriate to treat a CodingKeyPath as a distinct type rather than a flag or option on some CodingKey type.
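To illustrate that last point (an editorial sketch, not the proposal's implementation): a dot-notation path decomposes into components, each of which corresponds to one level of container nesting, whereas a plain key addresses a single level.

```swift
// Illustration only: splitting a dot-notation key path into components.
// A decoder would traverse one nested keyed container per intermediate
// component ("metadata") before decoding the final key.
let keyPath = "metadata.review_start_date"
let components = keyPath.split(separator: ".").map(String.init)
print(components)  // ["metadata", "review_start_date"]
```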

Introduce an annotation-based alternative to CodingKeys

Instead of building upon the design of CodingKeys, we could design an entirely new system using property-wrapper-like annotations.

struct EvolutionProposal: Codable {

  // @Key("id")  (compiler-synthesized)
  var id: String
  // @Key("title")  (compiler-synthesized)
  var title: String
  var reviewStartDate: Date
  var reviewEndDate: Date
}

The author believes it's more appropriate to extend and build upon the existing CodingKeys-based system:

  • CodingKeys cannot be removed or replaced, since that would be massively source-breaking.
  • The language should not include two separate / competing Codable systems.

I'm really finding the addition of a new container type to be a stumbling block. Right now we have a great story there:

  • 1 value -- single value container
  • N values
    • list -- unkeyed container
    • keyed -- keyed container

If we start down the path of adding a variety of container types, I am concerned that it becomes more difficult to understand the basic structure of what you're encoding or decoding.
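The taxonomy above maps directly onto the three existing container APIs. A minimal sketch of where each one appears in a hand-written encode(to:) (the `Example` type is illustrative):

```swift
import Foundation

struct Example: Encodable {
    enum CodingKeys: String, CodingKey { case name, tags }

    func encode(to encoder: Encoder) throws {
        // Keyed container: N values addressed by key.
        var keyed = encoder.container(keyedBy: CodingKeys.self)
        try keyed.encode("sample", forKey: .name)
        // Unkeyed container: N values as an ordered list.
        var unkeyed = keyed.nestedUnkeyedContainer(forKey: .tags)
        try unkeyed.encode("swift")
        // (A single value container would come from encoder.singleValueContainer().)
    }
}

let data = try JSONEncoder().encode(Example())
print(String(data: data, encoding: .utf8)!)
```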


I disagree. CodingKeys was a nice hack to enable users to tweak synthesis within the boundaries of Swift syntax at the time, but it is still a somewhat odd system that is not easy to explain and, as evidenced by this proposal, hard to extend or enhance in a way that "fits" nicely.

Removed? No. But eventually deprecated and users directed to a simpler, more expressive system? Absolutely possible with no source breakage required.

Since this just affects synthesis, even a new system to customize keys etc. would still use the same coding APIs in the end, so there would not be two systems, and types using the old form could happily work together with others using some new syntax.


I totally agree with this. Codable is amazing when you have full control of the situation, but it's still far from ideal when dealing with ugly real-world APIs, to the point that I still prefer most of the time because it gives you a consistent and extensible API that works in every case. I wish that Codable could keep its amazing convenience with a better solution for extensibility/configuration.

But if we need to move forward with this feature, I'd prefer baking the strategy specifically into JSON*Coder.

A big missing one I hit is default values: omitting a value from serialized results and setting a default when deserializing if the value is missing. It doesn't appear you can work around this with property wrappers; you have to write a full Codable implementation.
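A sketch of the manual workaround being described, assuming a typical shape (the type and field names here are illustrative): a full init(from:) has to be written just to supply one default.

```swift
import Foundation

struct Settings: Codable {
    var retryCount: Int

    enum CodingKeys: String, CodingKey { case retryCount }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        // Fall back to a default of 3 when the key is absent.
        retryCount = try container.decodeIfPresent(Int.self, forKey: .retryCount) ?? 3
    }
}

let empty = try JSONDecoder().decode(Settings.self, from: Data("{}".utf8))
print(empty.retryCount)  // 3
```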

I've hit cases where a top-scoped configuration for encoding is not possible, such as when one piece of Data should be encoded as base64, and another should be encoded as hexadecimal.
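Foundation's existing strategy illustrates the limitation: it is global to the encoder (the `Payload` type below is illustrative).

```swift
import Foundation

struct Payload: Encodable {
    var signature: Data
    var thumbnail: Data
}

let encoder = JSONEncoder()
// dataEncodingStrategy applies to every Data value in the object graph;
// there is no built-in way to encode one property as hexadecimal and
// another as base64 within the same encoder.
encoder.dataEncodingStrategy = .base64

let payload = Payload(signature: Data([0xDE, 0xAD]), thumbnail: Data([0xBE, 0xEF]))
let json = String(data: try encoder.encode(payload), encoding: .utf8)!
print(json)  // both fields come out base64-encoded
```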

You also have problems when serializing into more flexible formats, such as XML.

In my CBOR encoder, there is a challenge: the format has the ability to tag the semantics of a bit of data, for instance "this number should be interpreted as milliseconds since epoch" or "this object (dictionary) should be interpreted as a vCard". You could decide that a piece of data you are defining should be sent tagged, or you could want to control whether a child element sends that tag (since the semantics are already known, and parsers might choke if the tag was sent when they weren't expecting it). I can imagine a way to set the tag on a type (define a protocol which gives me the tag value and have the types implement that protocol), but defining that it should appear only in certain contexts is not feasible.
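The type-level protocol idea might look something like this; `CBORTaggable`, the type, and the tag value are all purely illustrative, not a real API.

```swift
// Purely illustrative sketch of type-level tagging; not a real API.
protocol CBORTaggable {
    /// The CBOR tag number to emit before values of this type (illustrative).
    static var cborTag: UInt64 { get }
}

struct EpochMilliseconds: CBORTaggable {
    static var cborTag: UInt64 { 1 }  // illustrative tag number
    var value: Int64
}

// A protocol like this can state *that* a type is tagged, but it cannot
// express "emit the tag only in certain contexts".
print(EpochMilliseconds.cborTag)
```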

There isn't guidance today on how to maintain an interoperable data format, for instance which changes are expected to round-trip through an Encoder/Decoder pair. For example: if I reorder the declaration of properties within my type, could that be a breaking change for certain encoders which expect keys to be encoded and decoded in the same order?

In addition, there isn't guidance on how to support backward/forward compatibility if you need to change the encoding of a type (such as to deal with new data, or if I find out I had breakage in the field through some change like the aforementioned reordering of properties).

Recall the stated aims for Codable:

  • It aims to provide a solution for the archival of Swift struct and enum types
  • It aims to provide a more type-safe solution for serializing to external formats, such as JSON and plist

Obviously, these facilities may also be useful for ingesting JSON that's not serialized from Swift models but from other sources. But a design that is capable of dealing with arbitrary third-party APIs requires a degree of extensibility and configuration significantly beyond what's necessary or most usable for the stated primary aims.

There's obviously no need to make it unnecessarily difficult to work with third-party data that happens to be well behaved. However, when it comes to formats that would require a whole laundry list of additional features to be added to Swift, I think it's worth thinking carefully about the intended scope of the problem that Codable is meant to address (i.e., what features we should add to Codable) versus something that's altogether different (i.e., a separate set of facilities, perhaps even an entire library).

AlamoFireObjectWrapper has a feature similar to this. In the normal case when decoding you tell the decoder: decode my object of this Type. In the alternate case you tell the decoder: decode my object of this Type at this Keypath. It's simple and works well. This is one pain point in the current implementation of Decodable. I was recently converting some code from AlamoFireObjectWrapper to Decodable (since the release of Alamofire 5 supports Decodable better) and was forced to write a generic wrapper class to access my nested Type.
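The generic wrapper workaround might look something like this; `DataWrapper`, `User`, and the JSON shape are illustrative, not the poster's actual code.

```swift
import Foundation

// Illustrative generic wrapper used to reach a value nested under a known key.
struct DataWrapper<T: Decodable>: Decodable {
    let data: T
}

struct User: Decodable {
    let name: String
}

let json = Data(#"{ "data": { "name": "cal" } }"#.utf8)
let user = try JSONDecoder().decode(DataWrapper<User>.self, from: json).data
print(user.name)  // cal
```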

(Part of) this functionality can therefore be implemented by adding a keypath to the decoder. The other part of the proposal aims to flatten types that are nested in the JSON. That's not so important to me. In a declarative world, one would express that with the new enum, or with dot-notation support in the existing CodingKeys enum, rather than forcing the developer to write code for it, as is required now.

One problem I see is that if the Type is modified to de-nest it, and the Type is found in different places in different JSON, there would be a conflict. I often work with APIs that return either a single instance of a Type, or a list of instances of the Type, where the list may be paged so that other properties exist at the top level. If the Type has to be changed to indicate that it's nested, then things won't work in the non-nested case.
