I don't think I have a good example in a vacuum (for now?), and I'll have to preface with a disclaimer that this all might be terribly wrong; the following is just a framework I'm currently thinking in, and it might easily get corrected by someone more experienced.
But I can start with a counterexample: it's typical for an imaginary todo app to have a class TodoManager that loads, parses, stores, retrieves, filters etc. some todo items. There's typically a lot of business logic inside, plus some mostly-constant handles to the local SQLite DB, observers, publishers and so on, and people attempt to just transform this into an actor.
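To make the counterexample concrete, here's a hypothetical sketch of the kind of do-everything class I mean (the members are made up for illustration):

```swift
// The kind of god-object manager people are tempted to slap `actor` on.
final class TodoManager {
    private let db: OpaquePointer? = nil  // SQLite handle, mostly constant
    private var todos: [String] = []

    func load() { /* read from db */ }
    func store() { /* write to db */ }
    func filtered(by query: String) -> [String] {
        todos.filter { $0.contains(query) }
    }
}
```

Note that most of what's in here is stateless transformation logic that merely happens to live next to a couple of handles; that's exactly what makes "just make it an actor" the wrong move.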
First, I'd strongly argue that the business logic part has to straight up be written as a pure procedural global (or static) function, like this:

```swift
func loadRemoteTodos(
    ids: [String],
    writeTo: some DatabaseHandle
) async throws { }
```
for two reasons:
- It becomes a non-isolated function, which is actually the correct semantics: the function only touches the explicitly passed DB at some point, but otherwise the operation has no business of being serialised w.r.t. some other logic.
- It is now a "mini concurrency domain" of its own: you will typically then only have to reason about the ordering of other calls to this function, but not any adjacent ones.
If one has correctly figured out the transactionality of this function and others like it, there's much less cognitive burden, because the function already sits in the global concurrency domain and thus makes no implicit guarantees and hides no stateful information.
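A minimal sketch of what that might look like, assuming a hypothetical DatabaseHandle protocol standing in for whatever your storage layer actually exposes (the fetch itself is stubbed out here):

```swift
struct Todo: Sendable {
    let id: String
    let title: String
}

// Hypothetical stand-in for your storage layer's API.
protocol DatabaseHandle: Sendable {
    func write(_ todos: [Todo]) async throws
}

// Non-isolated, pure procedural logic: every piece of state it touches is
// passed in explicitly, so there's nothing to serialise against except the
// handle itself.
func loadRemoteTodos(ids: [String], writeTo db: some DatabaseHandle) async throws {
    // Placeholder; in a real app this would be a network fetch.
    let todos = ids.map { Todo(id: $0, title: "stub") }
    try await db.write(todos)
}
```

Because the function is non-isolated, two calls with disjoint ids and disjoint handles are trivially independent; the only ordering question left is the one you actually care about, at the DB boundary.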
If I were to describe when to use actors in one sentence, it would be something like "only for compact, stateful data types that benefit from being highly concurrent", but I think it's easier to reason in terms of when not to use actors, similar to how this document does it:
- Is there even an underlying reason for it to vend an async API (i.e. non-blocking I/O, networking, custom scheduling, computation offload)? If not, you (very likely) don't need an actor because you don't have an innate source of asynchrony.
- Do you actually intend to allow your callers to suspend? If not, you don't need an actor, you need a leaf-level mutex.
- Are you relying on strict FIFO execution order? If not, you don't need an actor, you need a (threadsafe) queue.
- Does it matter if jobs get reordered due to the underlying task's priority? If it does, you again need a FIFO queue, not an actor.
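To illustrate the "leaf-level mutex" point: if callers never need to suspend, a plain mutex-guarded type is both simpler and more honest than an actor. A sketch using the Swift 6 standard library's Mutex (the Counter type itself is made up):

```swift
import Synchronization  // Swift 6 standard library

// A synchronous, threadsafe counter. No async API, no suspension points,
// no actor hops, no reordering: callers block briefly and move on.
final class Counter: Sendable {
    private let value = Mutex(0)

    func increment() -> Int {
        value.withLock { v in
            v += 1
            return v
        }
    }
}
```

As an actor, every `increment()` call would force an `await` on the caller and open the door to interleaving; with a mutex it stays a boring synchronous call.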
And so on. Benefitting from being highly concurrent is the key, because that's what actors are designed for, while most problems in the typical app programming space aren't, and so people lose the battle of trying to bend the tool to a problem it doesn't fit.
Good examples of such highly concurrent data types could be message queues, database handles, network ports, caches etc. It's easy to imagine them being generic; I also vaguely remember that actors were originally conceived precisely as entities managing network connections.
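A cache is perhaps the cleanest example of the "compact, stateful, highly concurrent" shape: many tasks hit it at once, and callers genuinely may suspend while a value loads. A sketch (the `value(for:orLoad:)` API is my own invention for illustration):

```swift
// Compact state, accessed concurrently, with a legitimate suspension point
// while a missing value is loaded: a natural actor.
actor Cache<Key: Hashable & Sendable, Value: Sendable> {
    private var storage: [Key: Value] = [:]

    func value(for key: Key,
               orLoad load: @Sendable (Key) async throws -> Value) async rethrows -> Value {
        if let cached = storage[key] { return cached }
        let value = try await load(key)
        // Caveat: actor reentrancy means two tasks can both miss and both
        // load; the second write wins. Fine for a cache, fatal for a ledger.
        storage[key] = value
        return value
    }
}
```

Note how the reentrancy caveat in the comment is tolerable here exactly because a cache's semantics are "best effort"; the same interleaving in a TodoManager would be a bug.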
Distancing oneself from the OOP paradigm also helps with the part where one tries to assign a bunch of functionality to a singular "object": the choices there are pretty much a struct, a class or an actor, and only the latter is threadsafe by default, which is where people immediately get misguided. When the majority of the functionality is implemented as standalone functions (which, again, is oftentimes even more semantically correct), that decision paralysis goes away.