PartialAsyncTask will have some kind of synchronous run() operation that should allow one to do this. For DispatchQueue, we will probably want to add some API to allow you to run an async operation on that particular queue. The details here will evolve as more of the pieces of the prototype implementation come together.
It doesn't sound that much different from converting it to a first-class function (including the call-once restriction), which seems to be contrary to what @John_McCall said earlier (quoted below). Or do you plan to have some fast path for the default execute implementation?
Regarding this (code) comment, from the Actor Isolation section (near the end):
// Safe: this operation is the only one that has access to the actor's local
// state right now, and there have not been any suspension points between
// the place where we checked for sufficient funds and here.
Does this mean that, if one actor method call suspends (because it calls an async function), then other method calls on that same actor could run while the original is suspended? That is, could separate method calls on an actor be interleaved?
From the rest of the proposal, I would have expected that a given actor method call would run completely before any others were allowed to run.
This design currently provides no way to prevent the current context from interleaving code while an asynchronous function is waiting for an operation in a different context. This omission is intentional: allowing for the prevention of interleaving is inherently prone to deadlock.
Yeah, I saw that for async functions. It seemed like Actors were meant to provide a level of serialization above plain async function calls.
For example, this bit:
If we wanted to make a deposit to a given bank account account, we could make a call to a method deposit(amount:), and that call would be placed on the queue. The executor would pull tasks from the queue one-by-one … and would eventually process the deposit.
It's not clear whether "task" above means the Task representing the entire method call, or the PartialAsyncTasks that make up its actual execution. To me, it seems to say that actor method calls would not be interleaved.
Edit: Seeing that the only method in the Actor protocol is enqueue(partialTask: PartialAsyncTask), it probably means the partial tasks.
That's not correct. Each async call is potentially a suspension point where other code could be interleaved on the actor. This prevents deadlocks. It's also why we consider it important to mark these in the code with await.
(I think we need to call this out specifically in the proposal)
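To make the interleaving concrete, here is a rough sketch in the proposal's (proposal-era, subject-to-change) syntax. The AuditLog actor and BankError enum are invented for illustration; only the BankAccount name comes from the proposal's example:

```swift
enum BankError: Error { case insufficientFunds }

// Hypothetical async logging service, invented for this sketch.
actor class AuditLog {
    func log(_ message: String) async { /* ... */ }
}

actor class BankAccount {
    private let audit = AuditLog()
    private var balance: Double = 100

    func withdraw(_ amount: Double) async throws {
        guard balance >= amount else { throw BankError.insufficientFunds }
        // Suspension point: while this call awaits, other partial tasks on
        // this actor may run -- including a second withdraw(_:) that also
        // passed the guard above against the old balance.
        await audit.log("withdrawing \(amount)")
        balance -= amount   // two interleaved calls can overdraw the account
    }
}
```

Each await marks exactly such a point where another partial task for the same actor may be dequeued and run before this one resumes.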
I’m a big fan of the actor model so I’m happy to see this direction. Thanks to everyone who has been working on it!
I have implemented a library that includes an actor-based concurrency model that encodes the serialization context in the type system. In this design, a class is able to abstract over the serialization context and generic code is able to constrain a type parameter based on serialization context. This has been very useful in some parts of the library. As one example, a UI layer of the library constrains type parameters to the main serialization context.
It doesn’t look like this kind of abstraction is possible in the current proposal. Did you consider a solution that would support it? You include an Actor protocol; if this protocol included an associated type representing the actor’s queue, this would become possible. By default it would be an anonymous compiler-synthesized type, and it would be the actor’s global actor when one is specified.
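A hedged sketch of what that might look like, extending the proposal's Actor protocol. The SerializationContext and MainContext names are invented for illustration and are not part of the proposal; PartialAsyncTask is the proposal's type:

```swift
// Hypothetical: a marker protocol for serialization contexts.
protocol SerializationContext {}

// The proposal's Actor protocol, extended with an associated type
// naming the context the actor is serialized on.
protocol Actor: AnyObject {
    associatedtype Context: SerializationContext
    func enqueue(partialTask: PartialAsyncTask)
}

// The main serialization context, e.g. corresponding to @UIActor.
struct MainContext: SerializationContext {}

// Generic code can then constrain a type parameter by context,
// e.g. a UI layer that only accepts main-context actors:
func present<A: Actor>(_ actor: A) where A.Context == MainContext {
    // Only actors bound to the main serialization context get here.
}
```

This would let generic code abstract over, and constrain on, the serialization context as described above.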
I came here to express the same intuition, but I see that @Karl has already expressed it better.
I agree that being a reference type and satisfying AnyObject intuitively come as a package. In this language, where we have multiple different sorts of value types, it's perfectly understandable to have multiple different sorts of reference types. Why might that be useful here? Well:
Adopting @Karl's viewpoint allows us to decouple user expectations of reference types from user expectations of classes. That means we can more critically evaluate whether actors need to, for example, support inheritance or if instead it'd be nearly or just as powerful if they didn't.
But even if we don't make any changes to the design after such re-evaluation, the restrictions that apply to actors but not to "non-actor classes" or vice versa would feel natural in a design where actors aren't treated as "restricted" classes. Take, for example, the rule that actor classes can only inherit from actor classes and non-actor classes from non-actor classes: this would require no explanation at all if actors aren't classes, only reference types.
The default actor executor will be more efficient than enqueuing something as a block on a DispatchQueue. If that's all you're doing, you should endeavor to switch it to an actor. But if you do have a DispatchQueue you can't just eliminate, it's not unlikely that we could provide an adapter that, with the right OS support, could also do better than enqueuing something as a block.
My worry is that actors seem to be mostly encouraging a number of design patterns that are known to be problematic:
Actors encourage going wide by default. Developers will create many actors, each backed by its own private queue. But we have learned that this is a mistake: applying concurrency without care leads to terrible performance. The better approach seems to be to go serial first, then apply concurrency as needed, with great care.
Actors encourage protecting shared state with queues instead of locks. Dispatching small tasks to queues is inefficient. It's unclear how actors will distinguish between, say, a function that merely inserts into a dictionary and a function that performs a long-running task. Moving the first kind of method onto a queue is very inefficient.
Actors encourage writing more async methods. While async methods are fine, they also make programs more complex and introduce subtle out-of-order bugs. For example, actor methods can be interleaved mid-execution while suspended, which causes hard-to-debug bugs. It's also not obvious how such bugs should be addressed once you only have async methods to call on other actors. This is usually a sign that too much asynchrony exists in the program, and that a lot of code should probably have been written synchronously in the first place and moved onto a background queue/thread at a higher level.
Async methods are also contaminating: awaiting them requires turning the caller into an async method itself, which can rapidly turn the whole program into an async mess. Rather, some methods should really just be synchronous and use locks to protect state.
Actors encourage developers not to think about threads. Whether we like it or not, we cannot ignore the reality of the underlying OS and hardware our programs run on. I have seen many developers throw a lot of queues at the OS/hardware (and I've done it myself) with terrible results.
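The lock-protected synchronous alternative argued for above might look like the following sketch (using Foundation's NSLock; the Cache type is invented for illustration):

```swift
import Foundation

// A small synchronous cache protected by a lock, as an alternative to
// hopping onto a queue or actor for a trivial dictionary insertion.
final class Cache {
    private var storage: [String: Int] = [:]
    private let lock = NSLock()

    // Callers stay synchronous: no await, no enqueue, no context switch.
    func set(_ value: Int, for key: String) {
        lock.lock()
        defer { lock.unlock() }
        storage[key] = value
    }

    func value(for key: String) -> Int? {
        lock.lock()
        defer { lock.unlock() }
        return storage[key]
    }
}
```

The critical section here is a few instructions long, which is exactly the case where taking an uncontended lock is far cheaper than enqueuing a partial task.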
I'm worried because this feels like a reenactment of the problems that appeared following the introduction of the libdispatch as we were told to not worry about threads and that it was ok to create hundreds, even thousands, of queues. Much later we were told that, in fact, we should use a very limited number of queues, consider them as "execution contexts" in the program (which all of a sudden sounds like we should care about threads) and apply concurrency very sparingly. 10 years later we are still seeing developers making these same mistakes in their libdispatch code, this is deeply entrenched, for the worse. We need to be very careful here because once something like this is out, it will be used widely and without limit.
I understand that there may be optimizations/tricks that could help alleviate these problems but I haven't seen them explained yet. I'd love to hear more.
I feel that using actor instead of actor class as a single keyword is more intuitive, as well as shorter and simpler to write.
An actor is itself class-based: a restricted, concurrency-oriented, isolated class that plays the Actor role. It's unnecessary to compose two consecutive keywords for one simple thing, especially since there is no actor struct or actor enum at all.
Furthermore, we could also simplify @UIActor class, an attribute-based annotation, to a uiactor keyword.
So overall: transform the complicated actor class and @UIActor class into actor and uiactor respectively (likewise, there's no @UIActor struct/enum either); actor for all background scenarios and uiactor for the UI main thread only.
Absolutely. As it stands the proposal is over-promising and under-delivering on data safety.
This proposal promises that actors protect their state against data races. It omits the disclaimer that "data race" is narrowly-defined: a single access to a single field is protected from concurrent access to that same field. These actors provide only weak protection against other data-corrupting race conditions because of the threat of re-entrancy at any await.
The proposal then uses a bank account example. It again fails to mention that the traditional bank account bugs go beyond data races narrowly-defined. It almost completely glosses over the other subtle things the code is doing to be safe from race conditions.
BankAccount.transfer(amount:to:) seems to be the only place where race conditions other than low-level data races are mentioned: a comment indirectly states the requirement that there be no suspension points between the statements that check and modify the balance. Even this comment is unclear: it doesn't mention await anywhere, and it doesn't explain why there can't be any.
The BankAccount.close(distributingTo:) example is also more fragile than it lets on. Why does it use Task.runDetached? Because if it didn't there would be a race due to the await; it would then be possible to double-spend by closing an account and simultaneously withdrawing from it.
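To make the fragility concrete, here is a hedged sketch in proposal-era syntax of how an innocent-looking change to a transfer method breaks the invariant. Only the BankAccount name and the transfer/deposit shape follow the proposal; the commented-out logging call and BankError are invented for illustration:

```swift
enum BankError: Error { case insufficientFunds }

actor class BankAccount {
    private var balance: Double = 100

    func transfer(amount: Double, to other: BankAccount) async throws {
        guard balance >= amount else { throw BankError.insufficientFunds }
        // DANGER: inserting any await between the check above and the
        // debit below opens a window in which another interleaved call
        // can also pass the check against the same, stale balance.
        // await logger.log("transferring \(amount)")  // would break safety
        balance -= amount        // safe only because no await intervened
        await other.deposit(amount: amount)
    }

    func deposit(amount: Double) {
        balance += amount
    }
}
```

Nothing in the type system enforces the "no await between check and debit" rule; it lives only in a comment, which is the heart of the criticism above.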
IMO actors should not be re-entrant by default. Fewer deadlocks but more data-corrupting race conditions is a bad trade-off. If you do allow actor re-entrancy (by default or not), you also need mechanisms to re-establish data safety other than "don't use await".
I'm not too sure whether being re-entrant by default is good or bad. It's good for read operations, and an easy pitfall for any write operation. Perhaps writing to the actor's memory (when mixed with await) should require some sort of explicit choice between "exclusive" and "concurrent" (or "interleaved"), but that would be quite a burden.
I also think that actors should not be re-entrant by default. That would be the safer behavior out of the box. Of course, then you have the problem of deadlocks. But I see it as somewhat analogous to the issue of retain cycles: devs have to think about the dependencies in their code, which is always a helpful thing.
I'm also a bit concerned about this - I think we'll need implementation and usage experience in practice to see if it works. However, there are good reasons why the proposed direction is theoretically better: not only does it define away deadlocks, it also composes better into large-scale designs. You won't run into problems where API evolution introduces deadlocks, and the communication pattern isn't implicitly part of the API.
This composability seems like a real win, but I agree that this is a big bet in the current proposal. In my opinion, this is the most researchy/unproven part of the actor model proposal, but could be a huge breakthrough if it works well.