[Second review] SE-0392: Custom Actor Executors

Hi everyone. The second review of SE-0392: Custom Actor Executors begins now and runs through April 17, 2023. Here are the previous review thread and language workgroup decision notes. Compared to the previous revision, this revision includes the following changes:

  • Job has been renamed to ExecutorJob, making it less likely to conflict with existing type names, and UnownedJob is typealiased as UnownedExecutorJob (however the old type remains for backwards compatibility).
  • assert/precondition/assume APIs are moved into extension methods on actor types, e.g. Actor/assertIsolated, DistributedActor/preconditionIsolated, MainActor/assumeIsolated { ... }
  • unownedExecutor invoked on a remote distributed actor returns an executor that fatalErrors only when an attempt is made to enqueue work onto it, rather than crashing immediately upon attempting to obtain the executor.
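
As a quick illustration of the renamed assertion APIs, usage reads like this (a sketch assuming this revision's API surface; `ViewModel` and `handleMainThreadEvent` are hypothetical names):

```swift
@MainActor
final class ViewModel {
    var count = 0
}

// A hypothetical synchronous callback that we know runs on the main
// thread (e.g. invoked by a UI framework). assumeIsolated lets us touch
// main-actor state without awaiting; it traps if the assumption is wrong.
func handleMainThreadEvent(_ model: ViewModel) {
    MainActor.assumeIsolated {
        model.count += 1
    }
}
```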

Reviews are an important part of the Swift evolution process. All review feedback should be either on this forum thread or, if you would like to keep your feedback private, directly to the review manager. When emailing the review manager directly, please keep the proposal link at the top of the message.

What goes into a review?

The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of Swift. When writing your review, here are some questions you might want to answer:

  • What is your evaluation of the proposal?
  • Is the problem being addressed significant enough to warrant a change to Swift?
  • Does this proposal fit well with the feel and direction of Swift?
  • If you have used other languages or libraries with a similar feature, how do you feel that this proposal compares to those?
  • How much effort did you put into your review? A glance, a quick reading, or an in-depth study?

More information about the Swift evolution process is available at


Thank you for helping advance Swift!

Joe Groff
Review Manager


I've missed the first review, but I'm really happy to see this unfolding! Once it gets implemented, I think I can finally drop other small hacks I've been using and fully embrace modern Swift concurrency.

There's one thing that I don't understand, though: will the new serial executors guarantee sequential execution regardless of nested suspension points?

In practice, given the example below:

actor Counter {
    func iteration(_ i: Int) async throws {
        let waitTime = Double.random(in: 0 ..< 0.5)
        print("Entering \(i) waiting \(waitTime)s")
        try await Task.sleep(for: .seconds(waitTime))
        print("Exiting \(i) after waiting \(waitTime)s")
    }
}

let counter = Counter()
for i in 0 ..< 100 {
    Task.detached {
        try await counter.iteration(i)
    }
}

  1. Will the custom serial executors handle nested suspension points out of the box (in other words, will the exiting print statements be ordered with respect to the iteration index)?
  2. If not, is the API expressive enough for us, as implementers, to build such behaviour into the custom executor ourselves? (As far as I've seen, it doesn't seem to be the case.)
1 Like

Thanks a lot for moving the question to the forums @valentinradu! It's always good to have all discussions of a proposal consolidated to those threads, for others to catch up on as well :slight_smile:

Sadly no, this proposal is not enough to prevent reentrancy in actors, nor does it give us "don't wait, but in a specific order" calls on actors. And that's for a number of reasons, including one that your example showcases well, so thanks for bringing it up!

Specifically, the execution semantics of unstructured (Task{} and Task.detached{}) tasks throw a wrench into the ordering question here:

for i in 0 ..< 100 {
    Task.detached {
        try await counter.iteration(i)
    }
}

What this actually executes as is:

  • schedule a task on global pool
  • hop to counter's executor

because of that... interleaving is possible at these points and we're not guaranteed to receive the calls in the "expected" (sequential) order...

One solution to this problem would be a "send" operation or keyword (prototype here), which enqueues directly on the target actor, without scheduling the task on the global pool at all:

// pseudo code; we could build this, or something similar, in the future 
for i in 0 ..< 100 {
  send counter.iteration(i)
} // guaranteed order 1,2,3,4; however cannot await for result of sent calls

ABI-wise, we also document this fact in the proposal's ABI resilience section:

The design of SerialExecutor currently does not support non-reentrant actors, and it does not support executors for which dispatch is always synchronous (e.g. that just acquire a traditional mutex).

We'd have to build additional API to support non-reentrant actors in custom executors, if we ever get them in Swift at all. More fundamentally though, Swift today has neither "non-reentrant actors" nor "send", rendering the concern about custom executors not supporting them somewhat moot -- nothing in Swift concurrency today supports these concepts.


The assumeIsolated-style names don’t clearly communicate that the function can crash at runtime. I’d prefer a name like unsafeAssumeIsolated or withIsolationPrecondition.

The proposal looks good otherwise and I’m excited for more libraries to better integrate with Concurrency!

1 Like

This is not an "unsafe" API. Unsafe would imply we don't check, like with UnsafeContinuation or UnsafeTask etc. The closest API to this I guess is withCheckedContinuation with its "checked" wording.

The current assumeIsolated names were adopted after the first review round, based on feedback from language workgroup members. Arguably though, we might still reconsider a with... shape for these; I'm not sure it trumps how nicely the assume reads. On the other hand, those methods should be called pretty rarely (one would hope) :thinking:

1 Like

To be used rarely, this API should better communicate that it can trap. It’s not like constructing a range, where an upper bound smaller than the lower bound (which will crash) is rare. For this API, the precondition of being on the given actor is central to the function’s operation (it’s in the name). That’s what I tried to allude to by including “precondition” in the name withIsolationPrecondition.

Yeah, the precondition word is accurate; we'll always check/precondition... the spelling does feel a bit clumsy, open to other ideas here still :thinking:

I think it would be good to have this. Actor reentrancy makes code much harder to understand, in my opinion. I’d like a simple way to define an async function on an actor that’s non-reentrant despite having suspension points.

I also find it too difficult currently to define a queue that processes async work not just serially, but also in FIFO order, so that both enqueuing new async work and processing it remain in order. It’s very easy with a serial DispatchQueue.

Processing events without order often makes things more complicated and testing more difficult due to permutations.

Of course, if something should be concurrent then there can’t be a deterministic order. But just because I have some async code I don’t want to be forced into concurrency or give up ordering.
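
For reference, the DispatchQueue behaviour being contrasted here can be shown in a few lines (a standalone sketch; nothing here is proposal API):

```swift
import Dispatch

// A serial DispatchQueue preserves the FIFO order of enqueued work,
// even though the enqueuing side never suspends.
let queue = DispatchQueue(label: "fifo.example")
var order: [Int] = []

for i in 0 ..< 5 {
    queue.async { order.append(i) }
}

// A sync "barrier": returns only after all previously enqueued work ran.
queue.sync {}
print(order) // [0, 1, 2, 3, 4] -- guaranteed by serial FIFO semantics
```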


because of that... interleaving is possible at these points and we're not guaranteed to receive the calls in the "expected" (sequential) order...

Fair, however, send would be just the icing on the cake. I'm thinking the bulk of the added value would be guaranteeing FIFO operations inside the serial actor, even if we can't control the hops up until queuing the jobs on its executor.

For example, an actor that uses a limited resource would like to guarantee all operations enqueued from detached contexts are FIFO, no matter the order, which is not currently possible.

Basically, non-reentrant support, which should perhaps be discussed separately since it seems a broader topic, touching things beyond this proposal.

The only question I'd still have related to the proposal would be, without guaranteed FIFO operations in the serial executor, what are some good use cases for it that are not already covered by regular actor/async/await semantics we already have?


Yeah, (non-)reentrancy is a concern worth discussing, but it won't be addressed in this proposal. We can probably take it to a separate thread. I'm personally convinced it is worth addressing, but we've not yet figured out the exact "how" of that. Some ideas were floated during the original actors proposal but we've not revisited those yet.

As the proposal discusses, getting work off the global shared threadpool. Some work must be performed on dedicated threads or just off the global pool because it'll do nasty blocking, so it is important to be able to dedicate a pool for those tasks.
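
As a concrete illustration, a minimal executor of that kind might look like this (a sketch against the API as proposed in this revision; the names DedicatedQueueExecutor and BlockingWorker are made up):

```swift
import Dispatch

// A serial executor backed by its own DispatchQueue, keeping this
// actor's (potentially blocking) work off the shared global pool.
final class DedicatedQueueExecutor: SerialExecutor {
    private let queue = DispatchQueue(label: "dedicated.executor")

    func enqueue(_ job: consuming ExecutorJob) {
        // ExecutorJob is noncopyable; hand its unowned representation
        // to the queue and run it there exactly once.
        let unownedJob = UnownedJob(job)
        queue.async {
            unownedJob.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}

actor BlockingWorker {
    private let executor = DedicatedQueueExecutor()

    // Route all of this actor's jobs onto the dedicated queue.
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        executor.asUnownedSerialExecutor()
    }

    func work() -> Int { 42 }
}
```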

1 Like

Apologies for the very late review.

I think the overall direction is uncontroversial, but I have some questions/concerns about the proposed design.


  • The call to ExecutorJob.runSynchronously(on:) must happen-after the call to enqueue(_:).

    I don't know what this means - I thought it meant the enqueued job must happen after the call to enqueue returns. But later in the proposal it says that you can execute jobs synchronously inside of enqueue (it's not recommended, but you can).

  • (Editing) The section about the Executor protocol (link) starts with:

    First, we introduce an Executor protocol, that serves as the parent protocol of all the specific kinds of executors we'll discuss next. It is the simplest kind of executor that does not provide any ordering guarantees about the submitted work.

    And ends with:

    Executors are required to follow certain ordering rules when executing their jobs

    It's difficult to follow and I would recommend that this gets tidied up for the benefit of future readers. The ordering rules for serial executors should be part of the serial executor section as it refines the looser Executor behaviour.


  • I'm not entirely happy with the design of the asUnownedSerialExecutor() requirement, and I wonder if it could be modelled better.

    It took a little while for me to understand what this is supposed to do; initially I thought this would basically exclude executors which needed to own (and clean up) their execution resources, but the critical piece of information is over in the UnownedSerialExecutor type:

    /// Generally there are extra constraints imposed on core operations
    /// in order to allow this. For example, keeping an actor alive must
    /// also keep the actor's associated executor alive; if they are
    /// different objects, the executor must be referenced strongly by the
    /// actor.

    IIUC this is essentially just Unmanaged.passUnretained on the executor (which any caller could easily do themselves), except that it stores an extra bit to tell the runtime how to perform same-context checks. That extra bit is all that this requirement really provides, isn't it?

    I don't think this is the most obvious design to express that. The word "as" makes it feel like I'm performing some kind of conversion, when I'm not, or that there's some kind of special process the executor needs to perform in order to create that unowned reference (like performing an unbalanced retain on itself), when there isn't - we just need a particular piece of data from it which isn't really related to the idea of an unowned reference.

    I think this could be improved by replacing it with a requirement which specifically returned the same-context checking bit, either as a Bool or enum. I think it would be more logical, and make it easier to ensure that it was correctly implemented. Consider that if I implement isSameExclusiveExecutionContext, I will also need to update my implementation of asUnownedSerialExecutor to use the complexEquality initialiser - that's a non-obvious change, but if that requirement was more clearly related to the context-checking function, it would be more obvious.

    Taking the initialiser call out of the SerialExecutor conformance also means we'd only need a single initialiser for UnownedSerialExecutor, which would read this bit (and any bits we introduce in the future), and set the appropriate internal flags:

    protocol SerialExecutor: Executor {
      var hasCustomSameExecutionContextCheck: Bool { get }
      func isSameExclusiveExecutionContext(as other: Self) -> Bool
    }

    // or:
    enum ExecutionContextEqualityKind {
      case trivial
      case custom
    }

    protocol SerialExecutor: Executor {
      /* static? */ var executionContextEquality: ExecutionContextEqualityKind { get }
      func isSameExclusiveExecutionContext(as other: Self) -> Bool
    }

    struct UnownedSerialExecutor {
      // Only one initialiser needed.
      init(_ executor: some SerialExecutor) {
        switch executor.executionContextEquality {
        case .trivial: break // ... implementation detail of UnownedSerialExecutor ...
        case .custom:  break // ... implementation detail of UnownedSerialExecutor ...
        }
      }
    }

    (It's a bit of a shame that we need people to manually say whether they have a custom implementation of the same-context check. The language should provide a better way to model these kinds of requirements. In Obj-C you could make it optional and check respondsToSelector...)

  • isSameExclusiveExecutionContext(other executor: Self)

    I think the argument label (other) doesn't read very well; it should be isSameExclusiveExecutionContext(as other: Self).

    Also, I think the "exclusive" part could be clearer. Given this text in the proposal:

    When performing the "is this the same (or a compatible) serial execution context" checks, the Swift runtime first compares the raw pointers to the executor objects. If those are not equal and the executors in question have complexEquality , following some additional type-checks, the following isSameExclusiveExecutionContext(other:) method will be invoked:

    I think the term used here ("serial execution context") would more clearly express that it is known the underlying execution resource will serialise jobs from each given SerialExecutor and prevent their jobs from executing in parallel.

    I also wonder if this is a function where there is no natural self, and whether it should instead be a static function taking 2 parameters (as the == operator does). But that's a minor thing.


  • Does this type have any fixed values? Or do we just get the rawValue (a UInt8) and assume bigger values have a higher priority than smaller values?


  • Is ExecutorJob also CustomStringConvertible? Or only UnownedExecutorJob?

  • The proposal says:

    A job's description includes its job or task ID, that can be used to correlate it with task dumps as well as task lists in Instruments and other debugging tools

    Will we have to parse that out of the string (like <Job ID=...>)? Can it be provided directly, maybe with a property named debugIdentifier (to mirror debugDescription from CustomDebugStringConvertible)?

  • (Future) Is it possible to inspect the task-locals associated with a job? I understand that it would have to account for non-task jobs, but I'd like to know if it's a possible future direction. I think it could be interesting to pass other metadata along to the executor besides the built-in priority levels.


  • Since the unowned version of runSynchronously does not validate its precondition (that the job has not already been executed), and violating that precondition can lead to undefined behaviour, I very strongly suggest that it include the word "unsafe" in its name.

    It is not enough to document that it can cause UB. The documentation should tell you how to use it correctly, but it must be abundantly clear at the call-site that extra consideration is required to audit the safety of code which uses it.


  • I'm really uncomfortable with the unownedExecutor requirement for actors. I know that it already exists, but I think it goes against everything we've tried to achieve in Swift with regards to safety. In the best case, it's a pointer and we just hope that keeping the actor alive is enough to keep the executor alive as well; and in the worst case, you can return different executors every time and defeat all of the data race safety that we hope to achieve with language-integrated concurrency.

    I think it is worth exploring safer designs here. For instance, distributed actors have a synthesised actorSystem stored property which must be set. I think a similar design could work for the executor of a regular actor:

    distributed actor Greeter {
      init(name: String, actorSystem: ActorSystem) {
        self.name = name
        self.actorSystem = actorSystem // required (!) initialization of implicit actorSystem property
      }
    }
    That way, we could guarantee the invariant that executors will live for as long as there are any actors which use them, that the executor cannot be changed after initialisation, and that all accesses to the synthesised property return the same executor instance. That's at least something.

    If an actor does not explicitly initialise its executor, it would be initialised to a default executor.

Actor assertions:

  • All fine, except that I think preconditionIsolated reads weirdly. assert and assume are verbs, so they read quite well, but precondition is not. Perhaps requireIsolated?

Actually, there is something else which occurred to me with regards to the "same serial execution context" check. The proposal says (link) it is only used for the assert/precondition APIs:

 /// This check is used when performing `preconditionTaskOnActorExecutor`,
 /// `assumeOnActorExecutor` and similar APIs which assert about the same "exclusive serial execution context".

And actually explicitly says that other APIs will be available to optimise task switching:

These checks are likely not enough to completely optimize task switching, and other mechanisms will be provided for optimized task switching in the future (see Future Directions).

Moreover, in both trivial and complex equality, pointers are checked first, with the "complex equality" function only being used as a fallback if that fails and the executor opted-in to it.

So... why not just have a default implementation which returns false?

If the executor only uses trivial (pointer) equality, calling this function only happens when we're on the road to an assertion/precondition failure. It's part of the slow path and isn't worth optimising.

If we make that change, I think we can just remove the whole asUnownedSerialExecutor requirement altogether, as well as the bit in UnownedSerialExecutor which stores whether complex equality is needed.

Or am I missing something?
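
Concretely, the suggested default would be a one-line extension (a sketch of the reviewer's suggestion, not proposal API):

```swift
// If an executor does not provide its own implementation, trivial
// (pointer) equality is all it gets; returning false simply falls
// through to the crash path of the assert/precondition APIs.
extension SerialExecutor {
    func isSameExclusiveExecutionContext(other: Self) -> Bool { false }
}
```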

The proposal somewhat highlights this, but let me put it here again: this is a low-level API, specialized for performance. Some tradeoffs were made to serve that purpose, as well as to preserve existing ABI compatibility.

This is a way to talk about memory visibility: happens-before / happens-after are ways to talk about the visibility of memory writes. Put into simpler terms, it means: synchronize your executor code correctly, such that the effects of an enqueue are properly visible to the run when it executes.

This would require making multiple calls from the Swift runtime across to user code which is expensive. Since this is a core building block of the concurrency runtime, we want to avoid making many calls like this. @John_McCall may have more to say on this, but in my understanding we'd really want to avoid making multiple calls into user-provided code. For this specific equality it might have been fine, since we only need to check this lazily, that's true but I'm not sure if it is worth the change.

You can convert it to a TaskPriority and use those well-defined priority values.

Generally it can contain arbitrary values and we do not define what they mean or how they must be interpreted. It's up to an executor to deal with this. We've implemented executors in other runtimes which have utilized wider priority space and this opaqueness / flexibility has been helpful in such runtimes.
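
For example, an executor might map a job's raw priority onto TaskPriority's well-known values like this (a sketch; the `label(forRawPriority:)` helper is hypothetical):

```swift
// Interpret a raw job priority through TaskPriority's documented
// values, treating anything else as runtime-specific.
func label(forRawPriority raw: UInt8) -> String {
    switch TaskPriority(rawValue: raw) {
    case .high:       return "high"
    case .medium:     return "medium"
    case .low:        return "low"
    case .background: return "background"
    default:          return "custom(\(raw))"
    }
}

// Inside enqueue(_:) this would be called as, e.g.:
//   label(forRawPriority: job.priority.rawValue)
```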

It cannot be, because of current limitations in @_moveOnly types: a move-only type cannot conform to protocols.

Once this limitation is lifted, this conformance could be added. It does have a description method you can call explicitly though.

Currently no, it could be something we could look into someday. I've not thought about it during this proposal so far though.

(No need for bolding text).

That may be a fair point worth discussing. It wasn't marked unsafe because we lifted existing API, and the method is already using an unsafe type: UnownedJob. It is true, though, that the actual method is what may produce the undefined behavior, so it may be worth calling that out again...

I'd like the language/core team to comment on this though. If we feel the SerialExecutor.runSynchronously(UnownedJob) should change to: SerialExecutor.runSynchronouslyUnsafe(UnownedJob), i.e. if doubling down on this unsafety is helpful, or we already consider the UnownedJob "signalling the unsafety".

Now, the move-only version would technically remain safe as: SerialExecutor.runSynchronously(consuming ExecutorJob) since it consumes the job.

Though note that it isn't actually safe, because one can (incorrectly) make two ExecutorJobs out of a single UnownedJob and attempt running both move-only jobs, which would result in the same issue. So I think it's unclear how many scary names we should throw at those APIs, given that the mere existence of UnownedJob makes everything that Job touches unsafe. Do we have to make every API unsafe, or acknowledge that jobs need the extra attention?

I'd kind of want @John_McCall @Joe_Groff or the core/language team to provide some guidance here to be honest, because it seems like a general question about such situations involving an unsafe type or, a thing obtainable from an unsafe type to then proceed abusing it.

That would be an interesting improvement and perhaps we could revisit this in the future.

I agree that distributed handles this rather nicely and the issue is quite similar to the one executors face, though their lifetimes (and the actor systems) is a bit inversed.

Having implemented this mechanism in distributed though, I'll mention that this is a very complex effort and likely not possible in the scope of this proposal in order to make it on time into Swift 5.9. Perhaps we could explore this in the future though -- it would have to store an ExecutorRef and we'd synthesize the returning of it.

I was quite torn on naming here as well... on one hand the parallel with assert/precondition is nice, but it doesn't read so well.

Summing up:

  • isSameExclusiveExecutionContext: adding the "as" label - we still could, but coordinating this change across adopters may be tricky;
    • Avoiding making it static is, AFAIR, something we did on purpose in order to allow a superclass/subclass to share the implementation of equality based on a supertype. I don't think we'd want to make it static for that reason.
  • ExecutorJob and CustomStringConvertible - cannot be fixed until move-only types can conform to protocols.
  • Flags rather than returning asUnownedSerialExecutor - likely problematic due to the runtime overhead of making multiple function calls; the previous API also already exists in the ABI, so it's not like we can remove it entirely.
  • distributed actor-like executor initialization - interesting, but I don't think we'll be able to make this work in a short period of time; it could be a follow-up in the future.
  • requireIsolated - maybe worth the rename; we can do this safely.

Yes, I understand what those terms mean - I just don't understand how they apply in this context; typically, they apply to concurrent operations on shared data, but in this case, there is only one operation, so it is unclear exactly which events should be synchronised with which other events.

The only thing I can think of is that it is telling me I cannot use the function parameter before the function is even called. But clearly that would be impossible anyway.

See the follow-up post: I now don't think this requirement is even needed at all.

The only information which the asUnownedSerialExecutor() requirement adds is setting the complex-equality bit. I've dug around a little bit (very little bit) in the implementation to see if there's anything not mentioned in the proposal, but AFAICT it is only read in one place (the runtime function _task_serialExecutor_isSameExclusiveExecutionContext), where it guards a call to the executor's implementation of isSameExclusiveExecutionContext.

But we don't even need to guard that call; we can just do it unconditionally. If the executor uses trivial equality, we'll make an extra function call, but because isSameExclusiveExecutionContext is documented to only be used for assert/precondition APIs, if the trivial equality check failed, we're going to terminate the process anyway. There is really no need to optimise this by storing the complex-equality bit, and it ends up making the API significantly more complex.

Can we assume that larger values have higher priority than smaller values? Otherwise, I'm not sure how the opacity helps - it means that a job with priority 200 might have a higher priority than a job with 199, but a lower priority than a job with 198.

I think we still need to be able to sort jobs by priority, even if the semantic meaning of each level is not specified.

UnownedJob does not even contain the word "unsafe", and "unowned" elsewhere in the language (e.g. unowned captures) is a perfectly safe construct. The name already exists in the ABI, so we can't change it, but I don't think it is clear enough about its lack of safety - especially because the unsafe operations aren't obviously tied to memory ownership.

I would recommend calling the unsafe version something like unsafeConsumeAndRunSynchronously (emphasising the fact that this operation consumes the UnownedJob). And I think that would tangibly improve clarity and guide developers towards correct code - with ExecutorJob, the compiler will tell them where the job is consumed; with UnownedJob, the code must be audited manually but it should still be obvious if we can make it so.

Even experts make mistakes. Why optimise for brevity here?

The proposal does not mention that it is possible to create an ExecutorJob from an UnownedJob, only the reverse. While I understand the need for the Safe -> Unsafe conversion given limitations on noncopyable types, I'm not sure there is an equally strong motivation for the reverse. Regardless, if it is to exist, I think it also needs to be labelled to make extremely clear that it is unsafe and consuming the unowned job.

Hopefully that will make things more obvious. If somebody does end up calling two "unsafeConsume..." APIs on the same UnownedJob, it should stick out in their code.

Hmm that is a shame. Is there such urgency to ship this in Swift 5.9? How can we make sure to leave room for this design?

I think there are some interesting things we could do with a synthesised stored property. For example, it could always be at a fixed offset within the actor. I don't know how that would work if we allowed users to write their own implementations now.

This mechanism leaves the door open for other flags that we may need to guard proactively. This API should not over-specialize for this specific case, where we can "call anyway, because we're going to crash anyway".

No, and it really depends on the runtime; I don't think Swift should guarantee anything here - especially not any implication about ordering.

It's not Swift's business to define those for arbitrary executors. We have specified the few precise numbers we interpret a specific way in TaskPriority, but it's none of our business to assume a specific ordering and impose it onto custom executors. Maybe you'd want to use some other system's priority numbers and interpret them as appropriate for that system; it may have "lower number = higher priority" semantics, or it may have the opposite ordering. It's not up to Swift to impose on that.

It is up to the executor to impose and interpret that, if it were specifically written for such runtime.


Renaming this is not really possible, so there's not much to discuss here. And typealiasing or pulling some nasty tricks is not worth the risk here either, to be honest; we're already doing a lot of trickery to enable the move to ExecutorJob, and the renaming of Job to ExecutorJob was already quite problematic to execute.


The methods I have already commented on; adding an "unsafe" somewhere in the run method name may be a good thing to do, but we should double-check and come up with a policy about such naming -- it already involves an unsafe type, technically speaking, after all.

1 Like

If we do need to add flags later, we can add a requirement with default implementation. The language has support for protocol evolution; we don't need to add hooks and complicate the API preemptively, in case we ever use them.

(Also, I didn't suggest renaming UnownedJob. I realise that it's too late for that)

The guideline we usually follow is whether the unsafety leads to undefined behavior. If an API requires manual discipline to use correctly, but we can robustly detect misuse, and that detection is cheap enough to be enabled in release builds, then it's not "unsafe" in the meaning we usually use in the standard library. But if misuse of the API leads to unchecked undefined behavior, then it would be good to label it as unsafe.


In general, in semantics like this, the call refers to the moment of entry to the called function. It is not referring to the entire execution of the function as a whole. The corresponding moment of exit is referred to as the return. It is not generally possible for a function to order events after the moment of its return.

The semantics are saying that the execution of the enqueued job must be properly ordered after anything that is properly ordered before the call to enqueue(_:).
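
This guarantee is what an enqueue built on a synchronizing primitive provides automatically. In miniature, with plain Dispatch standing in for an executor (a sketch, not proposal API):

```swift
import Dispatch

// Writes made before the "enqueue" (the queue submission) are visible
// inside the "job", because submission to a queue establishes a
// happens-before edge from the caller to the submitted closure.
final class Box { var value = 0 }
let box = Box()
let queue = DispatchQueue(label: "hb.example")

box.value = 42              // ordered before the call to the submission
queue.sync {                // sync used so the example is deterministic
    assert(box.value == 42) // the earlier write is visible to the job
}
```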

1 Like

Given that the executor receives the job as an argument to the enqueue function it implements, is it even possible for it to execute the job before anything that is ordered before the call to enqueue?

You are imagining that something not being properly ordered after the call means that it would somehow have to happen before the call. That is not correct.

If you want to understand concurrency at this level, you need to deeply internalize that happens-before is a partial order. That is, it is possible for events to not be ordered, and in fact that is the default state of the world. There are specific things — most importantly the normal execution order of a single thread of computation, but also any explicit inter-thread synchronizations that those threads might do — that create ordering relationships between events. But it is possible for data to flow between threads without creating such a relationship, and it is important to say that a relationship must ultimately be established.