Add userInfo protocols to standard library

It is impossible for a decoding algorithm to require that certain keys in userInfo contain values of certain types. All of them already need to handle the possibility that a value isn’t there. I don’t see how this would be any different.

Just to give a little bit of background on userInfo and its inclusion in the protocol types. The Codable API surface is a meeting point for 3 (potentially different) actors:

  1. A Codable-conforming type being encoded/decoded
  2. An Encoder/Decoder actually encoding/decoding the type (1)
  3. A top-level actor triggering the encoding/decoding from a top-level view of the encoder/decoder (2)

Each of these actors has some amount of say in the encoding/decoding process, which you can think of as a relationship between two of the actors:

  • The actual Encodable/Decodable conformance forms the contract between (1) and (2), where (1) offers behavior through its conformance, and (2) can offer different behavior via specific overrides
  • Encoding/decoding strategies form the contract between (2) and (3), where (2) offers a default implementation of encoding and decoding, and (3) can request different behavior via specific overrides
  • The userInfo dictionary forms the contract between (1) and (3), where (1) offers behavior through its conformance to the protocols, and (3) can request different behavior via specific overrides, assuming there is shared knowledge between (1) and (3) about how to handle those cases

The point of requiring the userInfo dictionary is to ensure that the relationship between (1) and (3) is always possible, even if it isn't always necessary. You're right that this is less often used than the other two methods for asserting control, but short of global state, there's no other way for (1) and (3) to communicate with one another except through (2). We included it in the protocol to offer guarantees to (1) and (3), not to augment (2). The number of Codable-conforming types and actors triggering encoding/decoding far outweighs the number of Encoders and Decoders, so rather than exploding the protocol hierarchy even more, we wanted to offer that guarantee from the get-go.
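To make the (1)–(3) contract concrete, here's a minimal sketch (the key name and types are illustrative, not from this thread): the conforming type reads a value from userInfo that the top-level caller may have supplied, and tolerates its absence.

```swift
import Foundation

// Hypothetical key; the name is an assumption for illustration.
extension CodingUserInfoKey {
    static let eventLabel = CodingUserInfoKey(rawValue: "eventLabel")!
}

struct Event: Decodable {
    let name: String
    let label: String

    private enum CodingKeys: String, CodingKey { case name }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        name = try container.decode(String.self, forKey: .name)
        // (1) reads a value that (3) may have supplied, falling back
        // to a default when it's absent.
        label = decoder.userInfo[.eventLabel] as? String ?? "default"
    }
}

let decoder = JSONDecoder()                 // (2), the intermediary
decoder.userInfo[.eventLabel] = "launch"    // (3) requests behavior
let json = #"{"name": "WWDC"}"#.data(using: .utf8)!
let event = try! decoder.decode(Event.self, from: json)
print(event.label)  // "launch"
```

Note that the decoding logic never *requires* the key; the `?? "default"` fallback is what keeps the contract optional, per the point above.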

You're also correct that a Codable-conforming type shouldn't require a specific value inside the userInfo dictionary because it can be missing, but this is a debatable part of library design — I think it's just an unclear implicit contract, and it's better to provide a sensible default.

With this, I think it would be relatively reasonable to want to add userInfo dictionaries to TopLevelEncoder/TopLevelDecoder specifically, but I don't know if I would go so far as to suggest expanding this requirement into its own protocol — given source- and ABI-compatibility requirements, API has to clear a very high bar for inclusion, and I don't think I'd personally advocate for this.

I think this suggestion has merit as one of the more reasonable ways to expose userInfo generically (the alternatives including adding a requirement to TopLevelEncoder/TopLevelDecoder with dummy default implementations to maintain source- and ABI-compatibility), but I'm not sure I would personally consider that quite sufficient for introducing API which will need to be effectively supported forever.


As an aside, CodingUserInfoKey itself is useful as a place to hang predefined string constants without polluting String, like other String-RawRepresentable types found throughout Foundation and many other frameworks. The mistake is not CodingUserInfoKey but that its initializer is incorrectly failable.
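To make that complaint concrete: the failable initializer never actually returns nil for any String, yet every call site must force-unwrap or handle an impossible failure.

```swift
// CodingUserInfoKey.init?(rawValue:) is declared failable (to satisfy
// RawRepresentable) but never returns nil in practice, so the '!'
// below is unavoidable noise at every call site.
let key = CodingUserInfoKey(rawValue: "com.example.myKey")!
print(key.rawValue)
```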


In practice, to avoid a breaking change, Decoder and Encoder would have to inherit UserInfoProviding, and provide concrete values for the associated types. However, future protocols could benefit from keeping userInfo separate.

As for TopLevelDecoder and TopLevelEncoder, a separate protocol is the only way a generic method could access userInfo without adding requirements that aren’t in the Combine version.
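A minimal sketch of that separate protocol (UserInfoProviding is this pitch's hypothetical name, not shipping API): a retroactive conformance lets a generic method reach userInfo without adding any requirement to Combine's TopLevelDecoder.

```swift
import Foundation

// Hypothetical protocol from the pitch; name and shape are assumptions.
protocol UserInfoProviding {
    var userInfo: [CodingUserInfoKey: Any] { get }
}

// JSONDecoder already has a matching property, so conformance is free.
extension JSONDecoder: UserInfoProviding {}

// A generic method can now read userInfo from any provider, including
// a top-level decoder, without touching TopLevelDecoder itself.
func userInfoKeys<P: UserInfoProviding>(of provider: P) -> [CodingUserInfoKey] {
    Array(provider.userInfo.keys)
}

let decoder = JSONDecoder()
decoder.userInfo[CodingUserInfoKey(rawValue: "flag")!] = true
print(userInfoKeys(of: decoder).map(\.rawValue))  // ["flag"]
```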

That relationship is only possible with shared knowledge, as you said. That knowledge consists of which keys to use and which values to expect. If a concrete userInfo property isn’t required, (1) also needs to know the type of the key and whether a userInfo dictionary exists at all; but (1) already had to know that, since it has to have the keys in the first place.

The point is somewhat moot for Decoder and Encoder now, but I’d like to propose this ahead of any future standard library types that might need this.

Alternatively, someone could add it to Foundation, which could then extend Decoder and Encoder along with its many relevant types. They use it most, after all. I can’t write pitches for Foundation, though, so here we are.

I agree with @itaiferber.

Since there's a concrete use case involving TopLevelEncoder and TopLevelDecoder, it makes sense to consider adding these requirements to those protocols. As Itai has explained, it was already decided not to explode the protocol hierarchy when it came to Encoder and Decoder. There is no reason why we cannot add requirements to a new standard library protocol that we sink from Combine.

Meanwhile, there is no facility in Swift for changing Decoder and Encoder to refine another protocol anyway (we say protocols refine other protocols, not inherit, by the way), so that is not even a possibility to begin with.

I was going with the nomenclature used in the Language Guide. I’ve heard it both ways.

Is that a breaking change even if the requirements are unchanged? If so, then fair enough I suppose. I thought this would be non-breaking:

public protocol Decoder: UserInfoProviding {
  var codingPath: [CodingKey] { get }
  var userInfo: [CodingUserInfoKey: Any] { get }
  func container<Key>(keyedBy type: Key.Type) throws -> KeyedDecodingContainer<Key>
  func unkeyedContainer() throws -> UnkeyedDecodingContainer
  func singleValueContainer() throws -> SingleValueDecodingContainer
}

From the language guide:

Protocol Inheritance

A protocol can inherit one or more other protocols and can add further requirements on top of the requirements it inherits.

From the language reference:

protocol protocol name: inherited protocols {
    protocol member declarations
}


Protocol types can inherit from any number of other protocols. When a protocol type inherits from other protocols, the set of requirements from those other protocols are aggregated, and any type that inherits from the current protocol must conform to all those requirements. For an example of how to use protocol inheritance, see Protocol Inheritance.

Fair enough! I hadn't seen that usage in the language guide. Thanks for pointing that out.


I still think that the bar for pitching protocols should not be usefulness within the Standard Library. If that’s the criteria, then we’re going to keep needing to sink conspicuously-absent protocols like TopLevelDecoder and Identifiable from Apple frameworks.

Behavior that the Standard Library implicitly encourages or requires should be described by protocols for the sake of interoperability. People should be able to write generic methods without waiting for someone at Apple to need to. Those protocols do not necessarily need to exist in the Standard Library, but they need to be supported everywhere Swift is. That might mean the Core Libraries, it might mean some other project.

Decoder and Encoder use userInfo dictionaries. That’s a common design pattern across the community. Ergo, it is worth considering protocols that encapsulate that. I thought I would start the discussion here.

The bar isn't necessarily "usefulness within the Standard Library"—the question, as @xwu posted in the first reply to this thread, is:

The mere fact that a property is common amongst many different types does not inherently mean that it deserves to be a protocol. The goal of a pitch for a new protocol should be to demonstrate how that protocol would be used in a generic context.

@itaiferber wrote an excellent justification for the presence of a userInfo property on Encoder and Decoder, but it's heavily tied to the specific details of those protocols and the Codable API in general.

This proposal strikes me as highly similar to the discussion around the DefaultInitializable protocol, which would have had one requirement: init(). Ultimately, it fell short on the same grounds, as no one could present a compelling use for the protocol aside from noticing "hey, a lot of types offer init()".

Your posts at the beginning where you were writing functions with signatures like func example<T: UserInfoProviding>(_ input: T) were on the right track for justifying this protocol, but the bodies of those functions need to do more than just access an arbitrary value from the userInfo dictionary. The algorithm should strive to do something useful which couldn't be done with existing types.

My personal opinion aligns with many others here. It is not at all apparent that knowing the fact "this type provides some dictionary called 'userInfo'" is helpful to know when divorced from all other context about that type.


I was thinking this would mainly be used with protocol composition. Knowing that a type provides some dictionary called userInfo and conforms to other specified protocols is useful.

I'm still not convinced. If those "other" protocols expect conforming types to provide a userInfo dictionary with particular semantics, that's an argument to have a userInfo requirement on those protocols. If you always have to constrain your T: UserInfoProviding to additionally figure out "exactly what sort of userInfo do I have?", the utility is vastly reduced. If you believe otherwise, please provide an example of how you would actually use this protocol.


This would be substantially more useful with generalized existentials: they would let you check for a userInfo dictionary with if let unwrapping instead of a generic constraint on a method. That is, you could write a method that works whether or not a type has a userInfo dictionary, and, for types that do, adjust behavior based on whether the expected values are present.

Thanks for linking that example, I'd missed it. IMO, that speaks more strongly as motivation for a var userInfo: [String: Any]? requirement with a default implementation that returns nil.
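A rough sketch of that shape (protocol and type names are illustrative): the requirement is optional-typed, and a default implementation returns nil for types that have no dictionary, so conformance costs nothing.

```swift
// Hypothetical protocol; not shipping API.
protocol UserInfoProviding {
    var userInfo: [String: Any]? { get }
}

extension UserInfoProviding {
    // Sensible default: types without a dictionary report nil
    // instead of being forced to synthesize an empty one.
    var userInfo: [String: Any]? { nil }
}

struct Plain: UserInfoProviding {}                  // uses the default
struct Configured: UserInfoProviding {
    var userInfo: [String: Any]? { ["flag": true] }
}

print(Plain().userInfo == nil)                      // true
print(Configured().userInfo?.keys.first ?? "none")  // flag
```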

A further issue with breaking out every bit of configurability into its own protocol (especially when the functionality is only useful in the context of other protocols) is that it precludes a hypothetical type from conforming to several of those protocols in a helpful way. E.g., imagine a DecodingNotification type, which provides Decoder functionality as well as Notification functionality (for a non-existent protocol Notification). With the help of an @implements attribute, you could specify @implements(Notification.userInfo) var notificationUserInfo and @implements(Decoder.userInfo) var decoderUserInfo. If we package UserInfoProviding into a separate protocol, we can no longer differentiate between userInfo in the context of Notification and userInfo in the context of Decoder.

Does such an attribute officially exist? Are there any plans to add it eventually?

It exists privately as @_implements today, and AFAIK it is in the backlog to expose it publicly in some form.
