Someday we’ll be in a brave new world of concurrency. I was wondering what CoreData would have looked like if actors had been around when it was released.
Reading from and writing to persistent storage (e.g. SQLite) could happen on a (background) actor. Long-running operations, such as import/export to JSON or other representations, could be handled by another (background) actor.
And then there is the main actor, for presenting data to the user and handling any user edits.
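A minimal sketch of that layout, under my assumptions (the names `PersistenceActor` and `ViewModel` are made up; the dictionary stands in for the real store):

```swift
import Foundation

// Hypothetical actor that serializes all access to the persistent store.
actor PersistenceActor {
    // Imagine this wrapping an SQLite connection or an NSPersistentContainer.
    private var storage: [UUID: String] = [:]

    func save(_ value: String, for id: UUID) {
        storage[id] = value
    }

    func load(_ id: UUID) -> String? {
        storage[id]
    }
}

// Presentation lives on the main actor and awaits the background actor.
@MainActor
final class ViewModel {
    private let store = PersistenceActor()
    var displayed: String = ""

    func refresh(_ id: UUID) async {
        displayed = await store.load(id) ?? ""
    }
}
```

Each `await` is a potential suspension point, so the main actor never blocks on storage work.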
Safely transferring data between these actors requires the Sendable marker protocol. The proposal largely excludes classes from participating (only final classes whose stored properties are immutable and themselves Sendable qualify) unless @unchecked is used.
However, CoreData represents an object graph, and so requires (I think) classes in order to model the relationships. Sending class instances between actors is not possible (unless using @unchecked). @unchecked is for those who know what they are doing (I’ll keep a healthy distance), and deep-copying an entire graph and sending it across as structs each time does not seem prudent either.
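To make the Sendable constraint concrete, here is a sketch (with hypothetical `Person` types): a value-type snapshot crosses actor boundaries freely, while a mutable class does not compile as Sendable unless you opt out of checking.

```swift
import Foundation

// A struct whose properties are all Sendable is itself Sendable:
struct PersonSnapshot: Sendable {
    let name: String
    let friendIDs: [UUID]   // relationships flattened to IDs, not object references
}

// A class with mutable state is not Sendable. Marking it @unchecked
// tells the compiler "trust me" and shifts all responsibility to you:
final class Person: @unchecked Sendable {
    var name: String = ""   // unsynchronized mutable state — easy to get wrong
}
```

This is exactly the trade-off in the post: the safe form forces copying (snapshots), and the reference form forces @unchecked.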
In CoreData every object has a unique identifier (an NSManagedObjectID) which can be used to retrieve the same object in a different managed object context (i.e. on a different queue), and thus in a different actor too.
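That is the same pattern CoreData uses between contexts today: pass only the ID, re-fetch locally. In actor terms it might look like this sketch (assuming NSManagedObjectID, which Apple documents as safe to pass between threads, can be treated as Sendable):

```swift
import CoreData

// Hypothetical importer actor owning its own background context.
actor BackgroundImporter {
    private let context: NSManagedObjectContext

    init(container: NSPersistentContainer) {
        context = container.newBackgroundContext()
    }

    // Returns only object IDs; callers re-materialize objects
    // in their own context from those IDs.
    func importPeople() throws -> [NSManagedObjectID] {
        // ... insert objects into `context` here ...
        try context.obtainPermanentIDs(for: Array(context.insertedObjects))
        let ids = context.insertedObjects.map(\.objectID)
        try context.save()
        return ids
    }
}

// On the main actor, resolve the IDs in the view context:
@MainActor
func show(_ id: NSManagedObjectID, in viewContext: NSManagedObjectContext) throws {
    let object = try viewContext.existingObject(with: id)
    // configure the UI from `object` ...
}
```

So nothing but small, immutable IDs crosses the actor boundary; each actor keeps its own materialized copy of the objects it needs.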
So it seems to me that every actor would need to maintain its own (partial) copy of the graph and send data and NSManagedObjectIDs across. That feels like a lot of unnecessary copying, though.
Alternatively, each object in the graph could be an actor of its own, accessed via async/await. Then there would be only a single copy, though the number of actors could become very large. However, in contrast with queues in GCD, having lots and lots of actors isn’t (or shouldn’t be?) an issue.
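A sketch of that actor-per-object idea (hypothetical `PersonNode` type; relationships become references to other actors, and every traversal becomes an await):

```swift
// One actor per graph node: the actor serializes access to its own state.
actor PersonNode {
    let name: String
    private var friends: [PersonNode] = []

    init(name: String) { self.name = name }

    func befriend(_ other: PersonNode) {
        friends.append(other)
    }

    // Graph traversal turns async: each hop awaits the next node.
    func friendNames() async -> [String] {
        var names: [String] = []
        for friend in friends {
            names.append(friend.name)   // `let` property, no await needed
        }
        return names
    }
}
```

Since actors, unlike dedicated GCD queues, don’t each own a thread, thousands of them should be cheap; the cost is that every relationship hop is a potential suspension point.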
Did I go wrong somewhere in my understanding of the upcoming concurrency features? Thank you for reading.