I'm wondering if there are any plans to support concurrent JSON decoding/encoding in the JSONDecoder and JSONEncoder API, such as with a concurrent DispatchQueue. I'm currently using this approach with great success in vanilla JSON parsing code (grabbing values out of [String: Any]; ~4x faster parsing on an iPhone X), and think it would be a free performance win for most apps. Speed is the main reason I haven't completely switched over to the new Codable API.
I tried implementing my own ConcurrentJSONDecoder class that conforms to the Swift Decoder protocols, but some leaky abstractions gave me trouble, so I think this might need to be done at the standard-library level. For example, UnkeyedDecodingContainer has a currentIndex property, which implies sequential, in-order parsing. With modern phones, I don't see why this work can't be spread across the available cores.
edit:
To clarify what I mean since the above isn't very clear, I'm trying to write my own ConcurrentUnkeyedDecodingContainer that conforms to UnkeyedDecodingContainer, so eventually I can create my own ConcurrentJSONDecoder and have everything magically work with the Decodable protocol.
The idea is that ConcurrentUnkeyedDecodingContainer can parse JSON array values in parallel on a concurrent queue (sized to your CPU core count), and then join the results at the end. The array parsing is still synchronous; it just parses chunks in parallel for faster execution.
It doesn't look like this is possible with the current UnkeyedDecodingContainer protocol because it has a currentIndex property, meaning values can't be decoded out of order. I think this is very limiting to decoding implementations and am asking if we can re-think this API a bit.
Sorry, I don't currently have any public benchmarks to share; I can try to make one that mimics our JSON structure when I have some free time. Our payloads are quite big (although not unreasonable, IMO) at about 1 MB, with ~90% of that being a single array.
@rovertsnikle If you can make such a benchmark and add it to the Swift benchmark suite, it will ensure that the Swift team can monitor and improve it over time (vs. focusing only on other benchmarks). It will of course be appreciated as well ; ).
I think my main point is getting a little lost with the benchmark discussion, which is that the Decoder API may be too restrictive. Adding concurrent decoding/encoding to the existing JSONEncoder/JSONDecoder classes is probably unnecessary since it's a niche use case, but we should be able to implement our own if we want to, right? Currently I don't think that's possible.
I'm curious about the exact use-case here — since you mention pulling values out of [String : Any], it sounds like you decode using JSONSerialization first, then asynchronously pass over the decoded structure and pull values out. How do you use those values in practice?
JSON decoding (as a general process) can be done in a parallel manner (to some extent, depending on what you're willing to accept as the failure model), but it sounds like your use case parallelizes the consumption of the data. This runs counter to what Codable currently offers, which is an initialization model from the data: a level that sits above parsing but below consumption in some ways.
The Swift initialization model right now is inherently linear: you have to initialize all properties of a value before the initializer can return, and each of these properties is set in order. Without some language-level help (e.g. something like async init), there isn't a good way to perform initialization of values in parallel while ensuring that all properties of a value are set before returning from init. You can imagine something in the future which allows you to async assign to properties in an initializer and await automatically before returning from the initializer, or similar. With that, it might be possible to integrate with Codable, allowing you to decode multiple properties in parallel before fully initializing.
In any case, it sounds to me at the moment that your use-case doesn't map to Codable 1-to-1, so I'd be interested in seeing a sample of the gist of what your code does (a reproduction w/o a benchmark would be enough).
Thanks for the reply, I think we are not on the same page here. I am talking about the initialization step.
Say I have a User struct and a Post struct:

struct User: Codable {
    let posts: [Post]
}

struct Post: Codable {
    let title: String
    let body: String
}
Now say that I have an endpoint that sends me back a user, but that user (as an extreme example) has 1 million posts to their account. The current JSONDecoder behavior will sequentially parse each Post value in the posts array.
Ideally, this work could be spread across cores. I would like to create my own ConcurrentUnkeyedDecodingContainer that parses arrays in parallel, so that some rough pseudocode like this could work:

init(from decoder: Decoder) throws {
    // Hypothetical API: decode the "posts" array across cores,
    // blocking until every chunk has finished.
    self.posts = decoder.asyncUnkeyedDecoderAndWait(forKey: "posts") // Note the "Wait" part
}
Personally, I'd say consuming 1 MB JSON files directly is rather excessive for a mobile app, especially if you're downloading them first. You'd likely be better served by querying for the data you need, rather than parsing everything and then just extracting the bits you need. But like everyone has said, we'll need to see some actual code to evaluate whether this would be something useful in the general case.
You'd likely be better served by querying for the data you need, rather than parsing everything and then just extracting the bits you need.
I am using only the data I need; it just happens to be a lot. Aside from that, Swift is used for more than just mobile apps, so we shouldn't limit our discussion to that, IMO. I think this is beside the point, though.
But like everyone has said, we'll need to see some actual code to evaluate whether this would be something useful in the general case.
I posted some code above if you could kindly check it out, I don't think my original post clearly explained what I meant.
I meant your current code that gives you a 4x speedup, not what you'd like to see. Though I would think any sort of builtin async parsing should be more automatic than what you've outlined.
As a pre-Codable user of the Argo library, I saw considerable effort put into making its rather expensive parsing on top of JSONSerialization less costly. There was some work around async parsing, but the most promising effort was making parsing lazy, so the entire payload didn't need to be parsed at once, only when the values were accessed. Essentially, it would parse only the top-level object, store the intermediate dictionary, and parse the sub-values as they were accessed. This produced great speedups to initial parsing at the cost of slightly more expensive accesses, which were paid only once, IIRC. So it came out as a win if less than the whole object was accessed. Perhaps lazy evaluation is another option.
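The lazy strategy described above can be sketched roughly like this. This is a minimal illustration, not Argo's actual implementation; the LazyUser type and its property names are assumed for the example. JSONSerialization runs eagerly, but the expensive Any-to-model conversion is deferred until first access and cached:

```swift
import Foundation

struct Post {
    let title: String
    let body: String
}

final class LazyUser {
    private let raw: [String: Any]

    init(json: Data) throws {
        guard let object = try JSONSerialization.jsonObject(with: json) as? [String: Any] else {
            throw CocoaError(.coderReadCorrupt)
        }
        raw = object
    }

    // Deferred Any -> model conversion: the cost is paid on first access
    // and cached by `lazy`, so repeat accesses are cheap.
    lazy var posts: [Post] = (raw["posts"] as? [[String: Any]] ?? []).compactMap { dict in
        guard let title = dict["title"] as? String,
              let body = dict["body"] as? String else { return nil }
        return Post(title: title, body: body)
    }
}
```

As described, this comes out ahead whenever less than the whole object ends up being accessed, since unused sub-values are never converted.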
In any case, there's a fairly long list of enhancements to be made to JSON encoding and decoding; I think it comes down to direction and investment. Async or lazy evaluation could be added to the list.
Ah, gotcha. My current code is basically what I posted. I have a helper func for array decoding that breaks the array into chunks (based on the # of device cores), parses each chunk in parallel, and then joins the result at the end. The function is still synchronous since I wait for the queue to finish, but overall decoding time is much faster on some devices. Since my JSON structure is dominated by large arrays, this results in ~4x faster performance for my case. Apologies that my earlier claim looked like a general one.
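For concreteness, a minimal sketch of that kind of helper. This is not the poster's actual code; the Post shape and all names are assumed. It splits the raw array into one chunk per core, parses the chunks on concurrent threads, and joins the results in order:

```swift
import Foundation

struct Post {
    let title: String
    let body: String
}

// Sketch: split the raw array into one chunk per core, parse the chunks
// in parallel, and join the results in order. concurrentPerform blocks
// until every chunk is done, so the call is still synchronous overall.
func parsePostsConcurrently(_ rawPosts: [[String: Any]]) -> [Post] {
    guard !rawPosts.isEmpty else { return [] }
    let cores = ProcessInfo.processInfo.activeProcessorCount
    let chunkSize = max(1, (rawPosts.count + cores - 1) / cores)
    let ranges: [Range<Int>] = stride(from: 0, to: rawPosts.count, by: chunkSize).map {
        $0 ..< min($0 + chunkSize, rawPosts.count)
    }
    var chunkResults = [[Post]](repeating: [], count: ranges.count)
    // Each iteration writes to a distinct slot, so no locking is needed.
    chunkResults.withUnsafeMutableBufferPointer { buffer in
        DispatchQueue.concurrentPerform(iterations: ranges.count) { i in
            buffer[i] = rawPosts[ranges[i]].compactMap { dict in
                guard let title = dict["title"] as? String,
                      let body = dict["body"] as? String else { return nil }
                return Post(title: title, body: body)
            }
        }
    }
    return chunkResults.flatMap { $0 }
}
```

Because the chunks are joined in their original order, the output matches what a sequential pass would produce.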
Though I would think any sort of builtin async parsing should be more automatic than what you've outlined.
This is what I'm trying to achieve.
If it were possible to make a ConcurrentUnkeyedDecodingContainer then ideally I could do ConcurrentJSONDecoder().decode(User.self, from: data) and everything would automatically work with Decodable.
In any case, there's a fairly long list of enhancements to be made to JSON encoding and decoding; I think it comes down to direction and investment. Async or lazy evaluation could be added to the list.
Agreed, but I think if some of the protocols and core classes were tweaked, then many of these custom cases could be implemented by the great Swift community.
So you're currently parsing into a [String: Any] using JSONSerialization, and then parsing into some type in such a way that finds any Array values and concurrently decodes them into [DesiredType]?
I think the "right" solution for this kind of issue is pagination in your APIs, but a Swift solution is always better
Messing around with it, I think the only thing required is for UnkeyedDecodingContainer's API to allow more access outside of directly decoding the item at the current index. E.g. by making currentIndex settable, directly accessing items at an index, or adding a skip function that ignores the current item. That should allow an extension of KeyedDecodingContainer like:
extension KeyedDecodingContainer {
    public func concurrentlyDecode<T>(_ type: T.Type, forKey key: KeyedDecodingContainer<K>.Key) throws -> [T] where T: Decodable {
        var container: UnkeyedDecodingContainer = try nestedUnkeyedContainer(forKey: key)
        // container now holds the array, but the only option is to decode one by one.
        // More access is needed to either duplicate it and have multiple workers decoding with the ability to skip around,
        // or simply allow indexed access to the contents.
    }
}
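To make the missing capability concrete, here is a toy model. This is not the real Codable API, and all names are illustrative; it just shows how indexed access, the kind of access proposed above, lets elements be read out of order and therefore in parallel:

```swift
import Foundation

// Toy stand-in for a container with indexed access. The real
// UnkeyedDecodingContainer can only decode the element at currentIndex,
// strictly in order.
struct IndexedContainer {
    let elements: [[String: Any]]
    var count: Int { elements.count }
    // Hypothetical indexed decode: any element, any time.
    func decodeTitle(at index: Int) -> String? {
        elements[index]["title"] as? String
    }
}

let container = IndexedContainer(elements: (0..<8).map { ["title": "post \($0)"] })
var titles = [String?](repeating: nil, count: container.count)
titles.withUnsafeMutableBufferPointer { buffer in
    // Each iteration reads an arbitrary index: out of order, in parallel.
    DispatchQueue.concurrentPerform(iterations: container.count) { i in
        buffer[i] = container.decodeTitle(at: i)
    }
}
```

With only a currentIndex cursor, the concurrentPerform loop above would be impossible, which is the limitation being discussed.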
It would require customizing init(from decoder:), but at least then it would be possible with Decodable.
I find these kinds of replies very unconstructive. Being able to parallelize the decoding of a big JSON file is a perfectly valid problem on its own. Hell, there may be use cases for 100 MB JSON files. You don't have to mansplain how they are doing it wrong.
It's not unreasonable to point out that sometimes higher-level solutions are appropriate, and both of the replies you've quoted went on to earnestly engage with the question as posed.
Experience has taught me that when someone says "Let's make this parallel!", the first reply should be, "Can we make it faster some other way instead?" This is a combination of wariness over the inevitable complexity of any parallel implementation, the need to allow for user control over the degree of parallelization, and the fact that tuning is inevitably necessary to find the line between worthwhile speedups and parallel overhead slowing things down.
In addition to this general issue, JSONDecoder/JSONEncoder have a lot of other low-hanging performance fruit to pick before we get to parallelization. They're generally among the slowest ways to parse JSON at the moment, and it seems like there are a lot of easier performance wins to be had before parallel processing of massive arrays.
That said, concurrent containers, or simpler concurrent decoding methods, offer interesting avenues to explore. I just think they'll need to give both implementers and users ways to customize the degree of parallelization and details like which queue the work runs on.