This is my personal opinion, and could be wrong in some aspects, so please write if something is off. But I think that in order to understand actors, it's better to look a bit into history, and it's quite important for the summary in the end. It's not enough to think just from a client or single-machine view; better to have perspectives from different angles.
Overall, actors are sometimes hard to crack simply because of two things: we're
- used to the deterministic nature of imperative programming, where every statement guarantees an outcome;
- forgetting what actual computation is.
For the first point: in reality you can only get these guarantees on a single machine with one thread [1]. Adding even one more thread already gives you a headache with data races, locks, etc. Because, going to the second point: computation is simply state change. It's just hidden from you in imperative languages, as every statement can change some state without telling you explicitly [2]. So in order to run multiple statements in parallel with any guarantees, you need to be careful with those state changes.
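To make that concrete, here's a minimal sketch: the same "increment a counter" state change, first as an unprotected class (a data race under concurrent access), then as an actor, where the compiler forces every state change through isolation.

```swift
// Unprotected shared state: calling increment() from multiple threads
// is a data race, because `value += 1` is a hidden read-modify-write.
final class UnsafeCounter {
    var value = 0
    func increment() { value += 1 } // not safe to call concurrently
}

// Actor-isolated state: the same mutation, but callers must `await`,
// and the runtime serialises access so no change is ever lost.
actor SafeCounter {
    private var value = 0
    func increment() { value += 1 }
    func current() -> Int { value }
}

let counter = SafeCounter()
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<1_000 {
        group.addTask { await counter.increment() }
    }
}
print(await counter.current()) // 1000: every state change is accounted for
```

With the class, the final count is unpredictable under contention; with the actor, the isolation makes the state changes explicit and ordered.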
Why do you need multiple threads, though? The topic of multitasking, concurrency and parallelism is actually quite old (rooted back in the 50s), but it should be looked at from two perspectives:
- People tried to improve performance and economy by running multiple tasks on a single machine; e.g. Dijkstra was bothered by the problems described earlier, pushed for structured programming, and came up with the concept of the semaphore.
- There was a need for computer networks, and a way to communicate and execute work between machines. Message passing was one of the natural concurrency models in these systems.
I won't touch the first point, but will focus on the second, as you can see it's related to the topic. Computers still compute, but everything is distributed now, and that brings lots of pain points. What if one computer fails? A network disconnection? Meanwhile you still need consistency and reliability. There were different approaches and models at the time, including objects and the actor model, but IMHO we should discuss not Hewitt's definition of the actor model, since he was trying to create a model of computation overall, but rather Erlang and its processes.
Erlang started as a research project at Ericsson labs to come up with a reliable distributed system. The company had exactly such computer networks, in the form of telephone switches, and the system had to be reliable enough to handle a massive number of calls. And for this system, rather than focusing on guarantees, they focused on what exactly is not guaranteed:
- One node is not enough.
- Node can fail.
- Message can fail.
etc. [3] So they came up with processes: lightweight abstractions with isolated state, which can only be changed by messages. The team was influenced by things like Prolog and Smalltalk, and later realised they had basically rediscovered the actor model, as you can see, but rather than focusing on just state isolation, they also focused on errors. Fault tolerance helped them build a concurrency-first language, and they later realised that concurrency and fault tolerance go together.
I won't go into details, but for anyone interested I suggest checking Joe Armstrong's thesis [3] or other resources on Erlang and how they achieved reliable systems with this approach.
OK, these ideas work fine in the context of a distributed system, but one could ask: with Swift we're usually building single iOS/macOS apps, so shouldn't regular Dijkstra semaphores work? And the answer, as usual, is: it depends. Yes, mutexes and semaphores are helpful. Especially if your app is not fully async, I would actually suggest using them first. But as discussed, actors are not only about data races, but also about handling errors, especially in a concurrency context.
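As a minimal sketch of that "use locks first" suggestion (the type and its names are hypothetical): in a codebase that is not fully async, a plain `NSLock` around the state is often enough and doesn't force `await` onto every caller the way an actor does.

```swift
import Foundation

// A small lock-protected store: synchronous API, safe under concurrent
// access, no async/await required anywhere in the call chain.
final class LockedStore: @unchecked Sendable {
    private let lock = NSLock()
    private var values: [String: Int] = [:]

    func set(_ key: String, _ value: Int) {
        lock.lock(); defer { lock.unlock() }
        values[key] = value
    }

    func get(_ key: String) -> Int? {
        lock.lock(); defer { lock.unlock() }
        return values[key]
    }
}
```

The trade-off: the lock protects the data, but unlike an actor it gives you no help from the compiler, and no story for what happens when the logic inside can fail.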
I think it's not about what to start with first, or how to write functions; it's all about the right mindset. As soon as you have concurrency, several states and changes, and you're already feeling that something could fail (basically having a throw somewhere), it's probably a good idea to introduce actors to wrap this logic. The language gives you good isolation tools for that.
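A minimal sketch of that mindset (all names here are hypothetical): an actor that owns mutable state which a throwing operation touches. Because the state is actor-isolated, a failure in one task can never be observed mid-mutation by another, and the error path updates the same isolated state as the happy path.

```swift
enum SyncError: Error { case offline }

// An actor wrapping "state + something that can fail": items move from
// `pending` to either sent or `failed`, and no other task can ever see
// the queue in a half-updated state.
actor SyncQueue {
    private var pending: [String] = []
    private var failed: [String] = []

    func enqueue(_ item: String) { pending.append(item) }

    // If `send` throws, the item is recorded as failed instead of lost.
    func flush(send: @Sendable (String) async throws -> Void) async {
        while let item = pending.first {
            pending.removeFirst()
            do { try await send(item) }
            catch { failed.append(item) }
        }
    }

    func failedItems() -> [String] { failed }
}
```

The point is not the queue itself, but that the throw and the state change live behind one isolation boundary, which is exactly the Erlang-style framing of concurrency and fault handling coming together.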
This is not something new; the reliability topic is actually emphasised in the Swift Concurrency Manifesto [4]. Note that it discusses a reliable actor design, which never landed, but we now have the Distributed module [5], which I really suggest checking out (especially with the Cluster System [6]).
Now, talking about "all actors" vs "only when needed": I think @vns got the right insight. In the actor model everything should be an actor, as it was with objects in Smalltalk, for example [7], so the comparison with OOP is correct. On the other hand, Swift's implementation (and Erlang's) is more specific, and the language is general-purpose, so of course not everything will be an actor. But when you already have actors and need additional logic, the best solution in most cases is actually to add more actors. I remember struggling with something in distributed actors, and @ktoso just suggested adding more actors, which worked... well.
Especially when you can easily combine distributed with regular actors.
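A rough sketch of that combination, assuming `LocalTestingDistributedActorSystem` from the Distributed module (all other names are hypothetical): a distributed actor that is addressable from elsewhere, delegating its local bookkeeping to a plain actor it owns.

```swift
import Distributed

// A regular actor: purely local, isolated state.
actor Cache {
    private var storage: [String: String] = [:]
    func put(_ key: String, _ value: String) { storage[key] = value }
    func get(_ key: String) -> String? { storage[key] }
}

// A distributed actor: reachable through the actor system, but its
// "add more actors" internals are just a regular actor.
distributed actor Worker {
    typealias ActorSystem = LocalTestingDistributedActorSystem
    private let cache = Cache()

    distributed func store(key: String, value: String) async {
        await cache.put(key, value)
    }
    distributed func lookup(key: String) async -> String? {
        await cache.get(key)
    }
}
```

The split keeps the serialization requirements of `distributed func` at the boundary, while the inner actor stays free to hold whatever non-Codable state it likes.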
- This sequential machine/one-thread model is important though, as it's basically the only way we can model computation (essentially a Turing machine).
- In this regard, in contrast to the imperative approach, it's interesting to learn Haskell with its State# RealWorld and the I/O monad. It gives good insight into computation.
- Making reliable distributed systems in the presence of software errors
- Swift Concurrency Manifesto. Part 3: Reliability through fault isolation
- Distributed | Apple Developer Documentation
- GitHub - apple/swift-distributed-actors: Peer-to-peer cluster implementation for Swift Distributed Actors
- Hewitt and Kay actually co-influenced each other's ideas.