Please share in this space what you think is essential to know for employing actors wisely.

Here is one, transactionality, which I discovered today, posted by @nkbelov.

1 Like

Start with a non-actor interface and see if it works for you.

Sometimes eliminating state reduces a piece of functionality to a single entry point, meaning that you don't need an actor; you just need a standalone async function. nkbelov's comment is a great illustration of that, i.e. if

func loadImage(at: URL, addingTags: [String], saveToDisk: Bool) async throws

is all you have, then declare it as a static function and there will be no need for an actor at all. In fact, network calls backed by URLRequest are almost always purely functional and don't require any state or actors (unless you use local caching, but that's a different story).

In general, the functional approach is a lot more concurrency-friendly, i.e. you move your state to the stack.
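To make that concrete, here's a sketch of the fully stateless version. The signature follows the comment above; the body, the Data return type, and the helpers are my own assumptions:

import Foundation

func loadImage(at url: URL, addingTags tags: [String], saveToDisk: Bool) async throws -> Data {
    // All state lives on the stack of this one call; nothing is shared,
    // so there is nothing for an actor to protect. (Tag handling elided.)
    let (data, _) = try await URLSession.shared.data(from: url)
    if saveToDisk {
        let destination = FileManager.default.temporaryDirectory
            .appendingPathComponent(url.lastPathComponent)
        try data.write(to: destination)
    }
    return data
}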


I'm still not entirely sure I know exactly when to use actors in client apps. At first glance, you use them when you have state that can be accessed from more than one thread of execution.

Most of the time, though, you don't have additional threads of execution in your client app unless either (1) you create them because you want something to happen in parallel with your UI, or (2) the OS forces you to, usually via hardware-related APIs such as camera or audio.
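A minimal sketch of case (1), with hypothetical names: progress updated by a background download task and read from the UI, where the shared mutable state genuinely earns an actor:

actor DownloadTracker {
    private var completedBytes = 0

    func record(_ bytes: Int) {
        completedBytes += bytes
    }

    var total: Int { completedBytes }
}

// Any number of tasks can call `await tracker.record(...)` safely;
// the actor serializes all access to `completedBytes`.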

For example, I've been struggling with this: [Concurrency] Actors and Audio Units. I'm now trying to rewrite my audio engine for Swift 5.10/6, and it's still not clear whether I should use actors, or how. I'll probably come back to the forum with some new questions.

1 Like

Actors bring a new programming model/paradigm to Swift, and that's quite often missed. I made the same wrong take at first and mostly tried to avoid actors, while in fact you probably want to try the opposite. The actor model is really similar to OOP in the way that it treats everything as an actor, akin to "everything is an object". So with actors you actually model everything as an actor or as part of one; there are mostly no opt-outs from that.

If you think about how Swift Concurrency is designed, most of your code is part of an actor: in order to do asynchronous work, you have to be isolated (or sendable, but even then the code will most likely end up running inside an actor anyway). That makes the use of actors in concurrent code inevitable: even if you've made your types thread-safe, say, using a mutex, they will still be isolated to an actor.

Just as it makes little sense not to use objects in OO languages, not using actors in the actor model would be odd. Currently I try to make more use of actors and global actors: the latter let you design subsystems that are isolated to the same global actor but spread across a number of types that logically belong to the subsystem you are isolating.
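A minimal sketch of that last point, with made-up names: a custom global actor isolating one subsystem that is spread across several types:

@globalActor
actor AudioSubsystem {
    static let shared = AudioSubsystem()
}

@AudioSubsystem
final class MixerState {
    var volume: Double = 1.0
}

@AudioSubsystem
func applyMasterVolume(_ volume: Double, to mixer: MixerState) {
    mixer.volume = volume   // same isolation domain, so this is a plain synchronous call
}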

1 Like

That's exactly what I call an actor trap. You may easily end up with code that sends messages instead of directly calling functions, with no benefit whatsoever and a performance penalty too. Actors are only needed where there's true parallelism; you don't want to inject phony parallelism where there's none.
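A sketch of the trap (names made up): the type below has no shared mutable state worth protecting, yet making it an actor forces every caller to hop executors and await a pure computation:

import Foundation

actor PriceFormatter {   // phony: nothing here needs isolation
    func format(_ cents: Int) -> String {
        String(format: "$%d.%02d", cents / 100, cents % 100)
    }
}

// The caller pays a suspension point for a pure function:
//     let label = await PriceFormatter().format(1999)
// A plain struct with a synchronous method does the same work directly.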

4 Likes

Actors are a model for concurrency, not just parallelism. That's an extreme narrowing of the concept.

1 Like

What is concurrency if not a synonym of parallelism?

1 Like

Parallelism is a subset of concurrency. Concurrent execution isn't necessarily parallel.

1 Like

Some examples would be great because I don't understand what you are saying.

The simplest illustrative example is a single-core environment: it is possible to have concurrency there, yet no job ever executes in parallel, because there aren't enough resources to run all scheduled jobs simultaneously. More broadly, a system can have hundreds of tasks scheduled for execution and make progress on each of them, but not on all of them at the same time, which makes their execution concurrent but not parallel.
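A small Swift sketch of exactly that: both tasks below are confined to the main actor, i.e. a single thread, yet they interleave at suspension points, so their execution is concurrent but never parallel:

@MainActor
func runInterleaved() {
    Task { @MainActor in
        for i in 1...3 {
            print("task A, step \(i)")
            await Task.yield()   // suspension point: gives task B a turn
        }
    }
    Task { @MainActor in
        for i in 1...3 {
            print("task B, step \(i)")
            await Task.yield()
        }
    }
}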

I like this visual illustration; while it isn't 100% accurate on all the nuances of a real system, it gives a pretty good general notion of what's happening:

Concurrency vs Parallelism

2 Likes

I understand that, although I'd argue that from the high-level perspective you usually don't care how many cores you have; the underlying system will take care of using them efficiently (or not).

But we digress from your main thesis that everything can be (should be?) an actor, the same way everything can be an object in OOP. In my opinion, it's a trap: you may end up with inefficient implementations where simple calls become async ones for no good reason. In my original comment above I was arguing that true concurrency emerges only where you create tasks that can or should be executed in parallel with others, or where the OS enforces it when dealing with hardware.

1 Like

But that's what the actor model is all about: it states precisely that "everything is an actor". Yes, in Swift you still need to consider how it might affect the overall execution flow and whether there would be unnecessary hops back and forth between isolation domains, yet that is a design detail of how to use the model efficiently.

Asynchronous calls between actors are not necessarily inefficient. There are domains in which actors might prove inefficient, but those are specific cases, not the general rule. They don't necessarily add significant overhead, especially if you don't need nanosecond-level (or even millisecond-level) performance.

On the other hand, designing in terms of actors and their isolation boundaries is often effective, both in terms of structure and performance.

I'm not sure what to understand by "true concurrency", but unless you directly control the execution environment and schedule jobs with precise logic (meaning you write your own runtime that enforces such rules, which is not the case for most language concurrency systems), you don't control the manner in which a job is executed: it can run in parallel, it can run on the same thread, the options are limitless. You can't and shouldn't tell the difference, since that's an implementation detail.

You do have a certain amount of control via await. You may have a program that runs everything serially even though your code is sprinkled with async/await and actors, and looks like there's concurrency when there's none. Concurrency is always hard, and this is why I'm strongly against "actors are like objects in OOP, use them whenever you can". But I'm repeating myself.
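To make that concrete, a sketch (fetch is a hypothetical helper standing in for any real async work):

func fetch(_ name: String) async throws -> String {
    try await Task.sleep(nanoseconds: 100_000_000)
    return name
}

// Sprinkled with async/await, yet fully serialized: `b` doesn't even
// start until `a` has finished.
func loadSequentially() async throws {
    let a = try await fetch("a")
    let b = try await fetch("b")
    print(a, b)
}

// Concurrency has to be introduced deliberately, e.g. with async let:
func loadConcurrently() async throws {
    async let a = fetch("a")   // both requests are in flight at once
    async let b = fetch("b")
    print(try await a, try await b)
}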

This is my personal opinion and could be wrong in some aspects, so please write if something is off. But I think that in order to understand actors, it's better to look a bit into their history, and that's quite important for the summary at the end. It's not enough to think only from a client or single-machine view; it's better to have perspective from different angles.

Overall, actors are sometimes hard to crack simply because of two things. We're:

  1. used to the deterministic nature of imperative programming, where every statement guarantees its outcome;
  2. forgetting what actual computation is.

For the first point, in reality you can accomplish these guarantees only on a single machine with one thread [1]. Adding just one more thread already gives you a headache with data races, locks, etc. Because, moving to the second point: computation is simply state change. It's just hidden from you in imperative languages, as every statement can change some state without explicitly telling you [2]. So in order to run multiple statements in parallel with some guarantees, you need to be careful with those state changes.

Why do you need multiple threads, though? The topic of multitasking, concurrency, and parallelism is actually quite old (it goes back to the 1950s), but it should be looked at from two perspectives:

  • People tried to improve performance and economy by running multiple tasks on a single machine; e.g. Dijkstra was bothered by the problems described above, pushed for structured programming, and came up with the concept of the semaphore.
  • There was a need for computer networks, and for a way to communicate and execute work across them. Message passing was one of the natural concurrency models for such systems.

I won't touch on the first point, but will focus on the second, since as you can see it relates to the topic. Computers still compute, but everything is distributed now, which brings lots of pain points. What if one computer fails? What about network disconnections? Meanwhile you still need consistency and reliability. There were different approaches and models at that time, including objects and the actor model, but IMHO we should discuss not Hewitt's definition of the actor model, since he was trying to create a model of computation overall, but rather Erlang and its processes.

Erlang started as a research project at Ericsson's labs to come up with a reliable distributed system. The company had exactly these computer networks, in the form of telephone switches, and the system had to be reliable enough to handle a massive number of calls. And for this system, rather than focusing on guarantees, they focused on what exactly is not guaranteed:

  1. One node is not enough.
  2. Node can fail.
  3. Message can fail.

etc. [3] So they came up with processes: lightweight abstractions with isolated state that can only be changed by messages. The team was influenced by things like Prolog and Smalltalk, and later realised they had basically rediscovered the actor model, as you can see, but rather than focusing only on state isolation, they also focused on errors. Fault tolerance helped them build a concurrency-first language, and they later realised that concurrency and fault tolerance come together.

I won't go into details, but for anyone interested I suggest checking Joe Armstrong's thesis [3] or other resources on Erlang and how they achieved reliable systems with this approach.


OK, these ideas work fine in the context of a distributed system, but one can ask: with Swift we're usually building single iOS/macOS apps, so shouldn't regular Dijkstra semaphores work? And the answer, as usual: it depends. Yes, mutexes and semaphores are helpful. Especially if your app is not fully async, I would actually suggest using them first. But as discussed, actors are not only about data races, but also about handling errors, especially in a concurrency context.

I think it's not about what to start with first, or how to write functions; it's all about the right mindset. As soon as you have concurrency, several pieces of state and changes to them, and a feeling that something could fail (basically a throw somewhere), it's probably a good idea to introduce actors to wrap that logic. The language gives you good isolation tools for that.
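A minimal sketch of that mindset, with invented names: mutable state plus fallible changes to it, wrapped behind one actor's isolation:

import Foundation

enum TagError: Error { case emptyTags }

actor TagStore {
    private var storage: [URL: [String]] = [:]

    // State changes and failure handling live in one isolation domain.
    func addTags(_ new: [String], for url: URL) throws {
        guard !new.isEmpty else { throw TagError.emptyTags }
        storage[url, default: []].append(contentsOf: new)
    }

    func tags(for url: URL) -> [String] {
        storage[url] ?? []
    }
}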

This is not something new; the reliability topic is actually emphasised in the Swift Concurrency Manifesto [4]. Note that it discusses a "reliable actor" concept, which never landed, but we now have the Distributed module [5], which I really suggest checking out (especially together with the cluster system [6]).


Now, talking about "everything an actor" vs "only when needed": I think @vns got the right insight about actors. In the actor model everything should be an actor, as with objects in Smalltalk, for example [7], so the comparison with OOP is correct. On the other hand, Swift's implementation (like Erlang's) is more specific, and the language is general-purpose, so of course not everything will be an actor. But when you already have actors and you need additional logic, the best solution in most cases is actually to add more actors. I remember struggling with something in distributed actors, and @ktoso just suggested adding more actors, which worked... well. 🙂 Especially since you can easily combine distributed actors with regular ones.
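For anyone who hasn't seen it, here's a minimal sketch using the built-in LocalTestingDistributedActorSystem from the Distributed module (Greeter and its method are made up):

import Distributed

distributed actor Greeter {
    typealias ActorSystem = LocalTestingDistributedActorSystem

    distributed func greet(_ name: String) -> String {
        "Hello, \(name)!"
    }
}

// Usage: calls on a possibly-remote reference are implicitly async and throwing.
//     let system = LocalTestingDistributedActorSystem()
//     let greeter = Greeter(actorSystem: system)
//     let reply = try await greeter.greet("world")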


  1. This sequential one-machine/one-thread model is important though, as it's basically the only way we can model computation (essentially a Turing machine).
  2. In this regard, in contrast to the imperative approach, it's interesting to learn Haskell with its State# RealWorld and IO monad. It gives good insight into computation.
  3. Making reliable distributed systems in the presence of software errors
  4. Swift Concurrency Manifesto. Part 3: Reliability through fault isolation
  5. Distributed | Apple Developer Documentation
  6. GitHub - apple/swift-distributed-actors: Peer-to-peer cluster implementation for Swift Distributed Actors
  7. Hewitt and Kay actually co-influenced each other's ideas.
1 Like

I think it's not so much that everything should be an actor, but rather that everything should be in an actor. There's no problem in having only a few actors (and I'd argue it's even better to limit the number of awaits in a program). Also note that the relative runtime cost of actor hops is much higher for local actors than for distributed ones, where network latency dominates anyway, so you'd probably want to adopt different strategies between the two.

2 Likes