SE-0425: 128-bit Integer Types

IMO, when considering a type's encoded representation, how the type itself is used is not the most relevant consideration. If you're doing serious number crunching, you're not going to want to decode integers on the fly; you're going to want to perform that work in advance in a separate pipeline step, have your numbers all decoded, and then crunch them.

So the most relevant consideration is that decoding pipeline step. Set against all the other overheads that decoding through a Decoder typically involves, is there a significant performance difference between decoding a string and decoding a pair of ints? My intuition says there won't be, but no data has been presented either way.
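For concreteness, here's roughly what those two shapes look like at the container level. This is only a sketch: the helper names, the use of an unkeyed container, and the high-then-low ordering of the pair are my own assumptions, and it presumes a toolchain and platform where Int128 is available.

```swift
// Sketch only: the two candidate fallback shapes for a 128-bit value.

// Shape 1: the value carried as a decimal string.
func decodeInt128AsString(from container: inout any UnkeyedDecodingContainer) throws -> Int128 {
    let text = try container.decode(String.self)
    guard let value = Int128(text) else {
        throw DecodingError.dataCorruptedError(
            in: container,
            debugDescription: "Not a valid 128-bit integer: \(text)")
    }
    return value
}

// Shape 2: the value carried as two 64-bit halves (ordering assumed here).
func decodeInt128AsPair(from container: inout any UnkeyedDecodingContainer) throws -> Int128 {
    let high = try container.decode(Int64.self)
    let low = try container.decode(UInt64.self)
    // Reassemble the two's-complement value from its halves.
    return (Int128(high) << 64) | Int128(low)
}
```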

I'm not sure about calling it a "temporary hack", but yes. Let's say you use a package such as XMLCoder, CodableCSV, or Yams, and you want to encode a model type that contains a 128-bit integer.

Eventually those libraries may be updated with native support for 128-bit integers (or not), but until they are, your model data still needs to be encoded using some combination of existing primitives.
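As a rough sketch of what that workaround looks like in practice (the Account type and its field are made-up names, and I've arbitrarily picked the string spelling; a pair of 64-bit halves would do just as well):

```swift
// Sketch only: a model that spells its 128-bit field in terms of an existing
// primitive (String) until the encoder in use understands Int128 natively.
struct Account: Codable {
    var id: Int128

    private enum CodingKeys: String, CodingKey {
        case id
    }

    init(id: Int128) {
        self.id = id
    }

    init(from decoder: any Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        let text = try container.decode(String.self, forKey: .id)
        guard let value = Int128(text) else {
            throw DecodingError.dataCorruptedError(
                forKey: .id, in: container,
                debugDescription: "Not a valid 128-bit integer: \(text)")
        }
        self.id = value
    }

    func encode(to encoder: any Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(String(id), forKey: .id)
    }
}
```

The point is only that some spelling in terms of today's primitives has to be chosen, and everyone producing or consuming the data has to agree on it.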

Another related point: if/when they are updated, they will need to support decoding from both their native format and this fallback representation. I think we should probably work through what that is going to look like for them, and add a separate method to make it easier if necessary. They're going to have to provide methods such as UnkeyedDecodingContainer.decode(_ type: Int128.Type) for their native formats, but also call into the protocol's default implementation to support the fallback.

I know it can be awkward to call a protocol's default implementation when you also have a custom implementation. So we should think about that.
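To make both points concrete, here's a minimal toy model of the requirement-plus-default-implementation pattern. The real container protocols, the "0x"-prefixed native format, and the decimal-string fallback are all stand-ins I've invented for illustration:

```swift
// Toy model only; none of these types or formats are real.
protocol ToyContainer {
    var raw: String { get }
    func decodeInt128() -> Int128?
}

extension ToyContainer {
    // Stand-in for the default implementation the standard library would
    // presumably provide: decode the fallback representation (assumed here,
    // purely for illustration, to be a decimal string).
    func decodeInt128() -> Int128? {
        Int128(raw)
    }
}

struct NativeFormatContainer: ToyContainer {
    var raw: String

    func decodeInt128() -> Int128? {
        // Prefer this format's own "native" representation (faked here as a
        // hex string with a 0x prefix)...
        if raw.hasPrefix("0x") {
            return Int128(raw.dropFirst(2), radix: 16)
        }
        // ...but for data written before native support existed we still want
        // the fallback path. There is no super-style way to reach the protocol
        // extension's default body once this custom implementation exists, so
        // the fallback logic ends up duplicated here.
        return Int128(raw)
    }
}

let legacy = NativeFormatContainer(raw: "12345678901234567890123456789")
let native = NativeFormatContainer(raw: "0xffffffffffffffff")
print(legacy.decodeInt128() as Any, native.decodeInt128() as Any)
```

In the toy version the fallback is one line, so duplicating it is harmless; in a real container it's exactly the kind of logic you'd rather call than copy, which is why a separately named helper method might be worth adding.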

It is certainly more human-readable than a binary format! People can always argue about whether complex concepts are presented in an easily digestible manner, but if that's the metric, many of the world's most important literary works, from Plato's Republic to Das Kapital, would barely qualify as human-readable.

There is no generic concept of a native encoding for 128-bit integers (or anything else, for that matter). Native support is something individual encoders/decoders can provide by implementing the requirements @beccadax suggested.

It may be that Foundation's JSONEncoder chooses the approach you've described, but you'd need to ask the Foundation maintainers. They may also allow several approaches via configuration options, as they do for date encoding.
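For reference, the existing date precedent looks like this; the commented-out line is purely hypothetical, not an API Foundation actually offers:

```swift
import Foundation

let encoder = JSONEncoder()
// Real, existing API: callers can already choose how Date is represented.
encoder.dateEncodingStrategy = .iso8601
// A hypothetical analogue for 128-bit integers; no such option exists today,
// and whether one ever does would be up to the Foundation maintainers.
// encoder.int128EncodingStrategy = .string
```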
