[Concurrency] Actors and Audio Units

At first glance the proposed actor model seems like a great fit in the audio space, potentially also in gaming and other highly concurrent applications. There seems to be one caveat though (unless I'm missing something of course).

In certain applications, ordered messaging is not the only way actors can meaningfully and safely communicate. In audio for example, there's typically a high-priority thread that periodically asks actors (audio units) to generate, analyse or mutate sound. In many cases messages sent to the units from other threads, e.g. the UI thread, will be ignored and only the last one that the actor picks up before the next rendering cycle will take effect. In other situations it is necessary to send out-of-order messages (MIDI "panic reset" comes to mind though there may be others).

Suppose you have an EQ filter as an actor in your audio processing chain. The filter has certain parameters that can be changed from the UI: e.g. bypass, gain, or a whole eqParameters structure. Audio units typically protect these properties with a sync primitive like a semaphore. There is no queue here, since any changes to e.g. the bypass flag that happen between rendering cycles don't matter to the unit; it will only pick up the last value before the start of a cycle.

What this means is that changing these parameters may look like sending messages to the actor, whereas in fact there is no queue there, just a single slot holding the latest message. I'm not sure if there is a proper term for this in multithreading (a single-slot mailbox?).
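For illustration, the single-slot idea could be sketched like this (ParameterSlot and its method names are made up; this is a sketch using a plain lock, not real-time-safe production code):

```swift
import Foundation

/// A single-slot "mailbox": writers overwrite the pending value,
/// and the reader takes only the latest one at the start of each
/// render cycle. Intermediate values are intentionally lost.
final class ParameterSlot<Value> {
    private let lock = NSLock()
    private var pending: Value?

    /// Called from the UI thread; replaces any value posted earlier.
    func post(_ value: Value) {
        lock.lock()
        pending = value
        lock.unlock()
    }

    /// Called once per render cycle from the audio thread.
    /// Returns nil if nothing changed since the last cycle.
    func takeLatest() -> Value? {
        lock.lock()
        defer { lock.unlock() }
        let value = pending
        pending = nil
        return value
    }
}

let bypass = ParameterSlot<Bool>()
bypass.post(true)
bypass.post(false)              // overwrites the earlier value
let latest = bypass.takeLatest() // Optional(false)
```

The key property is that posting is O(1) regardless of how many updates happened between cycles, which is exactly what a serialized actor mailbox does not give you.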

Similar scenarios I believe are possible in gaming, though I'm not experienced in this specific area.

So, I'm either not seeing how the proposed actor model can cover this use case, or the model in fact doesn't. Sending parameter changes through a serialized queue to this type of actor would be overkill. On the other hand, the actor isolation rules will probably prevent developers from implementing the "mailbox" properties manually.

Other than that, audio units are certainly actors. In some implementations the entire processing chain in your application may be wrapped in a single actor for efficiency, but that doesn't change the fact that the actor may need to bypass the serial queue, again for efficiency reasons.

What am I missing?

(In any case, I'm very much looking forward to the implementation of these proposals. It seems like a lot of things will be rewritten to look nicer and cleaner, which means the boundaries of maintainability of Swift projects will be pushed further to cover more complex applications. This is really exciting!)


Maybe executors are the component of the system where logic could be implemented to re-order the message queues and/or drop expired messages from the queues?

To the eye of an executor, all partial tasks look the same though. We need a way to distinguish them if we want to meaningfully reorder the partial tasks.

Actors needn't be FIFO, and handling a high-priority task before a lower-priority task that was enqueued earlier is completely reasonable behavior. However, as Lantua says, you can't really tell what a partial task is going to do. In this case, I think either you need a custom executor which delivers parameter updates as a special sort of (possibly non-blocking?) message or you need to store parameters in non-actor-guarded atomic/locked storage that updates simply change and then return, and which the actor has to treat as "volatile".


You're making me think that it'd be nice if there was a property-wrapper-like attribute you could attach to a function that'd give an executor some clues about how to schedule it.

Actors somewhat inherently introduce latency; at least all typical implementations do, because they imply using queues for synchronization, and queues can grow, and queue growth impacts service time (via Little's Law: mean response time = mean number of items in the queue / mean throughput).
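As a back-of-the-envelope illustration of that relationship (the numbers are invented for the example):

```swift
// Little's Law: mean response time W = L / X,
// where L is the mean number of items in the queue
// and X is the mean throughput.
let meanQueueLength = 100.0   // messages waiting in the mailbox
let throughput = 1_000.0      // messages processed per second
let meanResponseTime = meanQueueLength / throughput
print(meanResponseTime)       // 0.1 seconds of added latency
```

For an audio callback that must complete within a few milliseconds, even a modest backlog like this is already fatal.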

One could do bounded mailboxes, so you can put a bound on how long items have to wait before they are processed, but that ends up hard to program with and not really what you want for all messages.

The "jump the queue" tactic John mentions is doable:

is still doable efficiently, when a high priority task arrives and we know it is higher priority than any currently pending work - you could efficiently "jump the queue" if needed.

(Or "just" make the actor higher priority for scheduling now, so it gets more time slices to churn through its mailbox -- but this still takes the hit from the queue length of course).

Any other form of "reorder work (in the queue)" though is very likely to end up sacrificing a lot of throughput. Specifically, any form of "scan the queue and re-prioritize work", and maybe "scan the queue and remove work that no longer needs to be processed", sounds like a good idea in theory, but then reality strikes and you can't really implement it efficiently. These queues really want to be simple multi-producer/single-consumer queues, and in those, any "poke around in the middle" is tremendously hard to pull off.

What you seem to describe/want is really "avoid the queue" which is totally reasonable in the real world!

You could do this by punching a hole in the actor with an @actorIndependent(unsafe) property, which you'd atomically update. You would have to be super careful around access to it, but this way you could set the property without going through the mailbox/queue at all. It's the "danger zone" but that's fine and expected.
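In (hypothetical, proposal-era) code that could look roughly like the sketch below. Note the heavy caveats: @actorIndependent(unsafe) is the draft-proposal spelling and may change or not ship at all, and ManagedAtomic assumes the swift-atomics package; treat this as pseudocode for the pattern, not compilable code.

```swift
import Atomics  // swift-atomics package (assumed dependency)

actor EQFilter {
    // A "hole" in the actor: mutable from any thread without an
    // actor hop. (@actorIndependent(unsafe) per the draft proposals.)
    @actorIndependent(unsafe)
    let bypass = ManagedAtomic<Bool>(false)

    func render() {
        // The render path just reads the latest published value.
        guard !bypass.load(ordering: .relaxed) else { return }
        // ... process audio ...
    }
}

// From the UI thread: no enqueue, no await, just an atomic store.
// filter.bypass.store(true, ordering: .relaxed)
```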


A calendar queue does immediately come to mind, but then we have to synchronize accesses, making it tricky to scale with the number of actors :thinking:.

The more I think about it, the more it seems there's nothing in real-time rendering that could rely on message queues in a meaningful way. There are more complex scenarios where, e.g., you play a file: you keep a certain number of file regions in memory ahead of time for the audio rendering thread to have at its disposal. The goal is to survive moments of peak system load in the hope that they're temporary, i.e. the high-priority task will have something to render for a while.

Again, there is no queue here, just a structure (say, a thread-safe linked list) with the minimal possible locking time: you should never make the audio rendering thread wait longer than, say, an atomic pointer update.

So if unsafe access to an actor's properties is all the audio actors will have, then what's the point of having them as actors at all?

In fact, properties with compare-and-swap and plain atomic swap semantics would cover a lot of the use cases in audio (and potentially elsewhere). In my audio engine that's all I have: there's no locking, no queues, just a whole bunch of atomic primitives. And it is in Swift!
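A sketch of what such atomic parameter storage can look like with the swift-atomics package (GainParameter and its method names are invented for the example): storing the gain's bit pattern in an atomic integer gives you both plain swap and compare-and-swap.

```swift
import Atomics  // swift-atomics package (assumed dependency)

/// Lock-free parameter storage in the spirit described above:
/// a Double gain kept as raw bits inside an atomic UInt64.
final class GainParameter {
    private let bits = ManagedAtomic<UInt64>(1.0.bitPattern)

    /// Audio thread: read the latest published gain.
    var value: Double {
        Double(bitPattern: bits.load(ordering: .relaxed))
    }

    /// UI thread: unconditionally publish a new gain (atomic store).
    func set(_ gain: Double) {
        bits.store(gain.bitPattern, ordering: .releasing)
    }

    /// Change the gain only if nobody changed it first (CAS).
    /// Returns true if the exchange happened.
    func setIfUnchanged(from old: Double, to new: Double) -> Bool {
        bits.compareExchange(expected: old.bitPattern,
                             desired: new.bitPattern,
                             ordering: .acquiringAndReleasing).exchanged
    }
}
```

Neither path ever blocks, so the audio thread's worst case is a single atomic operation rather than a lock acquisition.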

So maybe the good old @atomic properties, extended with compareAndSwap(), swap() etc. methods, allowed, say, only in actors?


I don't know why those would be restricted to actors at all, but yes, it does seem like actors don't add much to your problem.
