I'm trying to understand what I can rely on when it comes to the ordering of async job execution in Swift 5.10, particularly on the main actor. Of course I can and have experimented, but there's a difference between "I observed this test passing 50,000 times" and "I know this test will always pass," hence the question:
If I yield from a lower-priority task, am I guaranteed that a waiting higher-priority job will be run? If two jobs of differing priorities are both enqueued, is the higher-priority one always run first?
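To make that concrete, here's roughly the kind of experiment I've been running (the names and structure here are mine, and nothing about it is an official guarantee):

```swift
// Both tasks target the main actor, which probeOrdering currently occupies,
// so neither can start until we suspend.
@MainActor
func probeOrdering() async {
    Task(priority: .low) { print("low-priority job ran") }
    Task(priority: .high) { print("high-priority job ran") }
    // Does yielding here always let the waiting high-priority job run first?
    await Task.yield()
    await Task.yield()
}
```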
For instance, suppose I'm iterating through a series of "steps" in a test. I want to make sure that the code I'm testing is always given a chance to hit its next yield point before any test code is run. So if I'm testing code like this:
```swift
func whileVisible() async {
    // note: my test code operates under the assumption that this never
    // terminates, so we can't use purely structured concurrency to test it
    for await response in service.subscribe() {
        title = response.title // I want to assert that title is set
    }
}
```
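For context, this is roughly the shape I'm assuming around that snippet (these exact types are simplified stand-ins, not my real code): `title` lives on a main-actor view model, and the fake service vends an AsyncStream it can push responses into.

```swift
struct Response { let title: String }

@MainActor
final class FakeService {
    private var continuation: AsyncStream<Response>.Continuation?

    // The code under test iterates this stream.
    func subscribe() -> AsyncStream<Response> {
        let (stream, continuation) = AsyncStream.makeStream(of: Response.self)
        self.continuation = continuation
        return stream
    }

    // The test pushes a fake element. If subscribe() hasn't run yet, the
    // continuation is nil and the fake response is silently dropped, which
    // is exactly the ordering hazard in question.
    func sendFakeResponse() {
        continuation?.yield(Response(title: "fake"))
    }
}
```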
I want the `for await` loop to have begun before the test code that provides a fake response runs.
Simple code like this won't work:
```swift
await withCheckedContinuation { continuation in
    Task {
        continuation.resume()
        await whileVisible() // A
    }
}
await Task.yield()
service.sendFakeResponse()
```
Because while the continuation ensures that the task started, there's a yield point (A), and when Swift hits it, it defers back to the test code. And the `Task.yield()` at the end does not actually cause Swift to yield execution. This kind of makes sense based on the documentation; apparently it yields to lower-priority tasks.
After a bit of time trying to figure out why just giving the unstructured `Task` a `userInitiated` priority didn't work, I realized that this was probably capped by the parent task (which I don't think is documented, though perhaps it's implied by the distinction between base and current priorities).
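Here's the kind of probe I used to convince myself of that (again, nothing authoritative; `Task.basePriority` and `Task.currentPriority` are the documented static properties):

```swift
Task(priority: .low) {
    Task(priority: .userInitiated) {
        // If the parent context really caps this, currentPriority here
        // would come back lower than .userInitiated; comparing base and
        // current priorities against the enclosing task's is how I checked.
        print(Task.basePriority as Any, Task.currentPriority)
    }
}
```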
So this code seems to work:
```swift
await withCheckedContinuation { continuation in
    Task(priority: .userInitiated) {
        Task.detached(priority: .low) {
            continuation.resume()
        }
        await whileVisible()
    }
}
service.sendFakeResponse()
```
After 50,000 runs, it has always started listening for responses before we send the fake one.
More broadly, here is pseudocode for an example "test runner" that would attempt to handle this ordering internally:
```swift
// A strawman for a way to make ViewModels testable.
// This is a terrible design, but it's super simple; a better one would leverage
// features of `Observable` or `ObservedObject` to avoid individual objects
// needing to implement this.
@MainActor
protocol TestableViewModel: AnyObject { // class-bound so the test can set `assertions`
    // The implementor runs the first element of this array whenever something changes.
    var assertions: [(Id, (Any) -> ())] { get set }
    // Called by the test to wait for the next assertion before continuing.
    // Because we're isolated to main, no matter how many emissions
    // happen we would always handle them in sequence, so it should be
    // okay to force this ordering.
    func wait(for continuation: CheckedContinuation<(), Never>)
}

enum Step {
    case fn(() async -> ())
    case assertion(Id, (Any) -> ())
    case sendFakeData(() -> ())
}

struct Timeline {
    var steps: [Step]
    var assertions: [(Id, (Any) -> ())]
}

@MainActor
func test(vm: some TestableViewModel, steps: Timeline) async {
    vm.assertions = steps.assertions
    for step in steps.steps {
        switch step {
        case .fn(let fn):
            await withCheckedContinuation { continuation in
                Task {
                    Task.detached(priority: .low) {
                        continuation.resume()
                    }
                    await fn()
                } // In real code we'd use a task group so we could cancel at the end
            }
        case .assertion:
            await withCheckedContinuation { continuation in
                vm.wait(for: continuation)
            }
        case .sendFakeData(let fn):
            // In my real code, these yields were necessary, so I'm including
            // them as a curiosity, b/c I cannot figure out why.
            await Task.yield()
            fn()
            await Task.yield()
        }
    }
    // We'd cancel all the tasks here in real code, and collect the results;
    // b/c we can't reach here till all assertions have run, it's safe to do so.
}
```
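And roughly how I'd picture driving it (hypothetical names throughout; `MyViewModel` and `titleId` stand in for a conforming type and whatever `Id` value it keys its changes on, and the XCTest call is just illustration). Note that the runner above ignores the closure attached to an `.assertion` step, so the real check lives in the `Timeline`'s `assertions` list:

```swift
let service = FakeService()
let vm = MyViewModel(service: service) // some TestableViewModel conformer

let timeline = Timeline(
    steps: [
        .fn { await vm.whileVisible() },
        .sendFakeData { service.sendFakeResponse() },
        .assertion(titleId, { _ in }), // synchronization point only
    ],
    assertions: [
        (titleId, { XCTAssertEqual($0 as? String, "fake") }),
    ]
)
await test(vm: vm, steps: timeline)
```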
I'm curious if folks think this is a good solution, or if there are problems I'm not seeing. One that I am aware of is that if the number of times `vm` publishes changes is less than the number of assertions, the test would hang till it timed out, and I don't think there is any way to detect that.
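The closest I've come to softening that is bolting a per-step deadline onto the assertion wait, so the hang at least becomes a failure on the right step; it still can't distinguish "no emission is coming" from "the emission is slow." A sketch, with every name here invented for illustration (it also assumes a closure-taking variant of `wait` that the protocol above doesn't have):

```swift
@MainActor
final class OneShot {
    private var resumed = false
    private let continuation: CheckedContinuation<Bool, Never>
    init(_ continuation: CheckedContinuation<Bool, Never>) {
        self.continuation = continuation
    }
    // Checked continuations trap on double-resume, so whichever of the
    // deadline or the view model gets here second must be a no-op. Both
    // paths run on the main actor, so the flag can't race.
    func resume(timedOut: Bool) {
        guard !resumed else { return }
        resumed = true
        continuation.resume(returning: timedOut)
    }
}

// In the runner, in place of handing the raw continuation to the view model:
let timedOut = await withCheckedContinuation { (continuation: CheckedContinuation<Bool, Never>) in
    let shot = OneShot(continuation)
    vm.wait { shot.resume(timedOut: false) }
    Task {
        try? await Task.sleep(nanoseconds: 1_000_000_000) // 1s deadline
        shot.resume(timedOut: true)
    }
}
if timedOut { /* fail the step instead of hanging forever */ }
```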
As an aside, I'm aware of some existing discussions and this potential solution, but I'm hoping to avoid needing it; the only code whose ordering I need to ensure is the test code that sends fake data and asserts on the results, not my systems under test.