Task executor preference and deterministic scheduling?

We maintain a package that closes some gaps in async/testing in Swift, and we're pretty hopeful we can deprecate many of its tools with this release!

  • nonisolated(unsafe) deprecates our UncheckedSendable type
  • Mutex will hopefully deprecate our LockIsolated type (a rough sketch of that migration is below), and
  • withTaskExecutorPreference should deprecate withMainSerialExecutor
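
Just to illustrate that second point, here's roughly what the migration looks like (a minimal sketch, assuming the Synchronization module's Mutex; the counter is just a stand-in for real state):

import Synchronization

// Before, with our LockIsolated wrapper:
//   let count = LockIsolated(0)
//   count.withValue { $0 += 1 }

// After, with the standard library's Mutex (SE-0433):
let count = Mutex(0)
count.withLock { $0 += 1 }
let total = count.withLock { $0 }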

On that final point, with the Xcode 16 beta I finally took task executor preference for a spin to see if we can stop depending on swift_task_enqueueGlobal_hook in withMainSerialExecutor, and things seem to work mostly as expected with this (quickly sketched-out) version:

public final class MainExecutor: SerialExecutor, TaskExecutor {
  public static let shared = MainExecutor()
  public func enqueue(_ job: consuming ExecutorJob) {
    // Run each enqueued job synchronously, isolated to the main actor's
    // serial executor, with this executor as the job's task executor.
    job.runSynchronously(
      isolatedTo: MainActor.sharedUnownedExecutor,
      taskExecutor: asUnownedTaskExecutor()
    )
  }
}

@MainActor
public func withMainSerialExecutor(
  operation: @MainActor @Sendable () async throws -> Void
) async rethrows {
  if #available(iOS 18, *) {
    return try await withTaskExecutorPreference(MainExecutor.shared, operation: operation)
  }
  // Fall back to `swift_task_enqueueGlobal_hook`...
}

The one exception in our test suite is a task group test, which uses Task.yield() to partition the odd values before the even ones:

func testSerializedExecution_YieldEveryOtherValue() async {
  let xs = LockIsolated<[Int]>([])  // Mutex<[Int]>
  await withMainSerialExecutor {
    await withTaskGroup(of: Void.self) { group in
      for x in 1...1000 {
        group.addTask {
          if x.isMultiple(of: 2) { await Task.yield() }
          xs.withValue { $0.append(x) }
        }
      }
    }
  }
  // With fully serialized execution, the tasks that never suspend (odd
  // values) should all finish first, in order, followed by the tasks
  // that yielded once (even values), in order.
  XCTAssertEqual(
    (0...499).map { $0 * 2 + 1 } + (1...500).map { $0 * 2 },
    xs.value
  )
}

While swift_task_enqueueGlobal_hook makes this fully deterministic, accumulating an array of all the odd values in order followed by all the even values in order, withTaskExecutorPreference seems to schedule the work less deterministically. Are we missing something crucial in our implementation? Or are the required tools not available yet?


Great to hear that task executors are useful for the things that @ktoso and I had in mind. The reason your example isn't deterministic is that Task.yield (and, similarly, Task.sleep) uses runtime built-ins to enqueue back on the global executor and doesn't respect the task executor preference right now. This is a known issue; we haven't gotten around to making yield and sleep respect task executor preferences yet.
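
Roughly speaking, the behavior today looks like this (a sketch using the MainExecutor from upthread, wrapped in a throwaway function just so the snippet is complete; not the actual runtime code):

@available(iOS 18.0, macOS 15.0, *)
func illustrateYieldBehavior() async {
  await withTaskExecutorPreference(MainExecutor.shared) {
    // Ordinary job enqueues from this context funnel through
    // MainExecutor.enqueue(_:), so work stays serialized...

    // ...but Task.yield() (and Task.sleep) currently resume via a runtime
    // built-in that enqueues directly onto the global concurrent executor,
    // bypassing the task executor preference. That's why the yielded tasks
    // in the test above interleave non-deterministically.
    await Task.yield()
  }
}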


Thanks for confirming! If there's an issue/PR I can follow, let me know, or I can file an issue myself if that helps.

I don't think we have a tracking GH issue yet so feel free to open one!

Done!


We talked about this earlier and I asked to move the chat to the public forums; thanks for doing that, @stephencelis.

I shared this privately but wanted to have this on the forums so other people are aware as well:

Yes, yield and sleep don't respect executor preferences, so that's probably what is causing the hiccup here.

Thanks for filing the issue; it seems we didn't have one yet, but it was indeed a known problem.

Glad task executors otherwise seem to be working out nicely for you. We still have a bunch of follow-ups planned for them.


I'd also nitpick the naming... it's not good to "hide" that you're using a task executor behind a name like withMainSerialExecutor. It should be clear that you've set a TASK executor, since a serial executor and a task executor have wildly different guarantees and impact on the rest of the code.
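
Something along these lines, just as a strawman for the name (same body as your sketch above):

// Strawman rename: makes it explicit that this sets a *task* executor
// preference rather than swapping out the main actor's serial executor.
@available(iOS 18.0, macOS 15.0, *)
@MainActor
public func withMainTaskExecutorPreference(
  operation: @MainActor @Sendable () async throws -> Void
) async rethrows {
  try await withTaskExecutorPreference(MainExecutor.shared, operation: operation)
}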

We've generally been a bit concerned about this kind of "just throw everything on the main actor" approach and how it can be over-used, but yeah, a trick like this with a shim executor will do the job.

It's definitely better than hooking unofficial hooks, and especially better than re-hooking that one at runtime while things are running, so I'll count this as an improvement over the previous hack I guess :wink:
