Actor Races

The issue is that that's not always possible. Take the example of some camera hardware where start and stop calls must always alternate (start, stop, start), never (start, start, stop). Perhaps these calls are triggered by the user opening and closing an application, with notifications arriving from the system. Here, the call sites for the start and stop calls have to be split.

We're unable to group the calls in this case and therefore we need some way of dispatching jobs from one Actor to another, linearly.
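To make the situation concrete, here is a minimal sketch; the `Camera` actor and the call sites are hypothetical stand-ins for the hardware wrapper described above:

```swift
// A hypothetical camera wrapper. The actor serialises the calls,
// but nothing guarantees which Task runs first.
actor Camera {
    private(set) var isRunning = false
    func start() { isRunning = true }
    func stop()  { isRunning = false }
}

let camera = Camera()

// Fired from two separate call sites, e.g. appDidBecomeActive and
// appWillResignActive handlers:
Task { await camera.start() }
Task { await camera.stop() }
// The camera may end up started or stopped: the Tasks' relative
// order is decided by the scheduler, not by creation order.
```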

I assume there's a basic assumption that those calls would always originate from the same thread (or serial [dispatch] queue)?

I've run into similar issues in my own code to those discussed in this thread. I've blown off a few metaphorical feet before coming to grips with what guarantee actors actually provide.

Thank you for your patient explanation and elaboration on the problem. I find the whole discussion and the proposed solutions to be fascinating.

Yes, the same Actor. In our camera example the system would always send its .appDidBecomeActive and .appWillResignActive events on the @MainActor. I think new Tasks are spawned to run 'on' the actor that created them (unless they're detached), but, contrary to intuition, that doesn't guarantee their order. And of course when they suspend, there's no guarantee which will finish first.

This new concurrency stuff is a brave new world, I'm still getting my head around it, but I think I like it...

Unless I'm missing something, it doesn't do that. Nothing seems to prevent enqueue() (and therefore runNext(), which is spawned from enqueue()) from running while a previous task is suspended in await task().
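For readers following along, here is a sketch of the kind of queue under discussion (the names `NaiveQueue`, `enqueue` and `runNext` are assumed from the posts above), annotated to show where the interleaving happens:

```swift
// A naive job queue on an actor. It does NOT serialise the jobs,
// because the actor is reentrant at its suspension points.
actor NaiveQueue {
    private var pending: [@Sendable () async -> Void] = []

    func enqueue(_ job: @escaping @Sendable () async -> Void) {
        pending.append(job)
        Task { await self.runNext() }
    }

    private func runNext() async {
        guard !pending.isEmpty else { return }
        let job = pending.removeFirst()
        // While this await is suspended, the actor is free to accept
        // another enqueue() and start another runNext(), so two jobs
        // can be in flight at once.
        await job()
    }
}
```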

Ah, you are right. Shows how easy it is to fall into logic traps with these things. Think this just demonstrates the original point I was trying to make.

The concurrency field is a bit new to me, so correct me if I'm wrong: the actor model is all about concurrent messages, it doesn't mean concurrent access to some shared data, and in this regard the Swift implementation doesn't differ from Erlang or others?
If so, does it mean those queuing hacks are perfectly fine for now? I hope there will be a clearer solution in the future. And two more questions on top:
How are others solving this problem in the industry (Erlang, Akka, Orleans)?
Could concurrent data access be solved by transactional memory (like Haskell STM)? :thinking:

It's been a long time since I played with Erlang, but if I remember correctly, whereas in Swift Actor-to-Actor communication is facilitated exclusively through Tasks, each Actor in Erlang has a 'message box' that other Actors can message directly. They also don't have async/await (or didn't; perhaps they do now?). That makes it easier to reason about, as there's no mid-call suspension, but (I imagine) it won't give the opportunities for performance that the Swift concurrency runtime does/will.

I think it can, with the big caveat that, if the Actor itself contains any suspension points (await) within its method bodies, then the caller of the Actor may not have exclusive access during that operation. So if you're nesting some async resource (e.g. a database) within an Actor, you'll need to give it a message queue.
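That caveat can be shown in a few lines; the `Store` actor here is a made-up example, with `Task.yield()` standing in for any real await:

```swift
// Once an actor method contains an await, other calls into the actor
// can interleave at that suspension point.
actor Store {
    private(set) var log: [String] = []

    func operate(_ id: Int) async {
        log.append("begin \(id)")
        await Task.yield()   // suspension point: other calls may run here
        log.append("end \(id)")
    }
}
// Two concurrent calls can legally produce
// ["begin 1", "begin 2", "end 1", "end 2"]: the first caller did not
// have exclusive access across its own await.
```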

However, what I'm still wrestling with is that, while we can create a serial queue to replicate the equivalent of Erlang's cast, it's not possible (or not recommended) to replicate Erlang's call, i.e. we can't/shouldn't await on a queue.

All of the solutions in this thread violate these rules. I'm struggling to see a way around this, though; sometimes you really do need to wait on the result of an operation (e.g. a payment transaction). So while it is possible to create a stream of events back to the caller, that puts an end to try and catch, and even to callbacks.

I think it would end up more complex and harder to reason about than what we're trying to move away from. But maybe I haven't found it yet, I would love to see some examples.

What I'd really like is the ability to create a TaskSeries that is owned by an Actor, is guaranteed to run its jobs in FIFO order, and has the same non-escaping closures as a regular Task. This would make it much easier for the calling actor to maintain some state on itself and synchronise its access to an external actor, which might be all you need in a lot of cases. It doesn't solve the try/catch issue though.

No idea how possible that is though.
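For what it's worth, something close to this can be sketched today on top of AsyncStream: jobs are yielded into a stream and consumed by a single loop, so they run strictly in FIFO order. The `TaskSeries` name and its `add` API are my own invention here, not an existing type:

```swift
// Hypothetical TaskSeries sketched on AsyncStream: a single consumer
// loop awaits each job to completion before taking the next, so jobs
// run strictly in submission order.
final class TaskSeries {
    typealias Job = @Sendable () async -> Void
    private let continuation: AsyncStream<Job>.Continuation

    init() {
        var c: AsyncStream<Job>.Continuation!
        let stream = AsyncStream<Job> { c = $0 }
        continuation = c
        Task {
            for await job in stream {
                await job()   // next job starts only when this one finishes
            }
        }
    }

    func add(_ job: @escaping Job) {
        continuation.yield(job)
    }
}
```

As noted above, this still doesn't address the try/catch limitation: results and errors don't flow back to the caller.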

Thanks again Philippe. I think I have a robust (and tested) Semaphore implementation now.

EDIT: I added support for cancellation: wait() will now throw CancellationError if the task is cancelled before a needed signal occurs. Also, deallocating a Semaphore while there exist tasks suspended and waiting for a signal is a programmer error that triggers a precondition failure.

EDIT 2: For a nice introduction to semaphores, and how they can be useful, see The Beauty of Semaphores in Swift 🚩 | by Roy Kronenfeld | Medium. The article uses DispatchSemaphore, but the examples have been translated to Swift concurrency in this playground.
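To give a feel for the shape such a semaphore can take, here is a much-simplified sketch. This is not the implementation referred to above: the cancellation support and the deallocation precondition are omitted, and the `SimpleSemaphore` name is my own:

```swift
// A simplified async semaphore: wait() suspends when no permits are
// available, and signal() resumes the oldest suspended waiter.
actor SimpleSemaphore {
    private var count: Int
    private var waiters: [CheckedContinuation<Void, Never>] = []

    init(value: Int) {
        count = value
    }

    func wait() async {
        count -= 1
        if count >= 0 { return }
        // No permit available: park this caller until signal().
        await withCheckedContinuation { waiters.append($0) }
    }

    func signal() {
        count += 1
        if !waiters.isEmpty {
            waiters.removeFirst().resume()
        }
    }
}
```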

In terms of the SerialQueue and Reactor, for future reference here are my implementations. I'll try to keep them updated if anything changes, but they're pretty simple – especially without the blocking calls:

As mentioned, it doesn't have any blocking calls, as I think I'm going to try to play within the rules. But if you did want to, you could probably do something like:

func fetchItem() async throws -> Item {
    try await withUnsafeThrowingContinuation { continuation in
        reactor.send { database in
            do {
                let item = try await database.fetchItem()
                continuation.resume(returning: item)
            }
            catch {
                continuation.resume(throwing: error)
            }
        }
    }
}

My guess is that this would admit deadlocks, but I'm not entirely sure.

Yes, I guess this is what we want to achieve in this topic.
But the overall question was: is the actor model suitable for this, or is it mostly for actions and state management inside an actor, so that for something a bit more complex, like a concurrent database connection, you need another solution? :thinking:
The problem I see at the moment is that in that case we need to manage concurrency ourselves inside an actor, which adds complexity, rather than letting the system handle it. Also, with the queues you can still end up racing when several actors connect to some shared cache, especially in distributed systems. A solution could be something like a Database singleton actor, but you always need to keep this in mind.

I think that's what the initial topic's doubt is about, and also why I'm interested in how others are doing it, e.g. some abstraction like Haskell STM, where, as I understand it, nothing changes at the surface level when calling actions.

P.S. Note I'm just thinking out loud, wanting to understand the problem better. I'm also new to Erlang and know just the basics of Haskell, so I could be wrong here. :slightly_smiling_face:

My hunch is that it's perfectly possible, but as it's state of the art, the design patterns are still being figured out. Erlang has a long history, so the way people structure their supervision trees (every actor has a Supervisor in Erlang/OTP) and the various use cases are well documented.

Swift's Actor model is significantly different. There's not much literature on it. But in my limited experience so far, I'm not seeing anything that's actually stopping me from achieving what I want to achieve, in fact I like it a lot, I'm just having to exert a little bit more initial creative effort as I learn what is essentially a new tool – which is why I end up on here trying to work out how other people are doing things!

It should be perfectly possible to design something like a server which creates an actor for each client and operates concurrently. There are always going to be trade-offs. Is it worse to have a stale cache, or throttled performance? It will depend on the use case.

There's nothing to stop a design which allows concurrent reads and serial writes – but we'll still need to structure that design for ourselves. That flexibility is a good thing I think.

There is a bug in your implementation: you don't pass the buffering policy into your custom extraction function.

Thanks! Fixed. This should probably be an option for the serial queue, too.

I've also found the re-entrancy of actors to be a bit difficult to reason about.

In practice, most of my actor code does not await other tasks, but it can be tricky to remember that any actor method with an await behaves like this.

async actor methods have very different semantics from normal actor methods in this regard - they should almost be considered a different concept.

Something that might help is a @reentrant annotation on any async actor method (that particular spelling can be debated). The compiler could enforce that you add this annotation on any async method in an actor, so readers of the code remember that this particular actor method will exhibit re-entrant behavior.

Or perhaps an await in an actor method could have a different syntax, to remind the reader that the surrounding method is re-entrant.

I'm just thinking off the top of my head here, but is it possible for something like this to be handled with a wrapper at the property level?

@fifo var myDatabase: [Item] = []

Try using AsyncStream as the queue (with for await in combineLatest on the streams of different typed elements) and the continuations as the operations, passing them to all the different potential call-sites.

Doing it this way around means it doesn't matter that AsyncSequence can only have one client, or that async funcs with multiple awaits suffer a re-entrancy problem, or that streams aren't type-erased.

This inversion reminds me of how we had to change our thinking when using values vs objects. Here is an example:

import SwiftUI
import AsyncAlgorithms
import ConcurrencyPlus

struct AsyncChannelTest {
    struct ContentView: View {
        @State var subject1 = AsyncSubject<Int>()
        @State var subject2 = AsyncSubject<String>()
        
        var body: some View {
            CounterView(subject: subject1)
            AppendView(subject: subject2)
                .task {
                    for await (i, s) in combineLatest(subject1, subject2) {
                        print("\(i) \(s)")
                    }
                }
        }
    }
    
    struct CounterView: View {
        @State var counter = 0
        let subject: AsyncSubject<Int>
        
        var body: some View {
            VStack {
                Button("Increment \(counter)") {
                    counter += 1
                    subject.send(counter)
                }
            }
        }
    }
    
    struct AppendView: View {
        @State var text = ""
        let subject: AsyncSubject<String>
        
        var body: some View {
            VStack {
                Button("Append \(text)") {
                    text.append("a")
                    subject.send(text)
                }
            }
        }
    }
}

AsyncSubject is a convenience for getting a continuation out of a stream's init closure, a workaround for this problem: `AsyncStream` constructor which also returns its Continuation