I'm coming to Swift from Rust. I've enjoyed using it at a surface level for a couple years, but I want to understand how it actually works.
Both languages have a concept of "async functions," but I couldn't find much info online on how Swift's are actually executed. The official documentation felt pretty lacking too (in fact, the only documentation for ExecutorJob is "you don’t generally interact with jobs directly"). So, how do Swift's async functions work under the hood in comparison? (I'm about to make lots of references to Rust's async system, so I've briefly explained it at the end of this post.)
Specifically, do tasks need to be preempted to switch contexts, or can they work cooperatively like Rust's state machine model?
If it's the former, does that mean there's a thread for every task? That seems a bit unlikely based on what I've read online.
If it's the latter, is there some sort of equivalent to Rust's task.poll()? Does Swift's type system even have a representation of a partially completed async function like this?
Also, is it possible to use Tasks in an embedded context at all? Is it possible to do so without needing to pull in an RTOS dependency (similarly to how I describe Rust's futures in the last couple sentences of this post)?
Thanks for humoring me.
Explanation of Rust's async functions for context
In Rust, async functions are syntactic sugar for compiler-generated state machines that implement the Future trait and are driven by a poll() method; the closest Swift analogue is an enum with associated values.
The same idea can be sketched in Swift. For example, under Rust's model, a simple async function would desugar roughly like this:
func hello() async {
    print("Hello")
    await doSomething()
    print("Done")
}

// equivalent to:
enum hello: Future, Sendable {
    case initial
    case waitingOnDoSomethingCall(doSomething)
    case completed

    init() {
        self = .initial
    }

    mutating func poll() -> Poll<()> {
        switch self {
        case .initial:
            print("Hello")
            // Start the awaited call by constructing its state machine.
            self = .waitingOnDoSomethingCall(doSomething())
            return .pending
        case .waitingOnDoSomethingCall(var future):
            if case .finished(_) = future.poll() {
                print("Done")
                self = .completed
                return .finished(())
            }
            // Still pending: store the mutated inner future back into self.
            self = .waitingOnDoSomethingCall(future)
            return .pending
        case .completed:
            fatalError("This async function has already finished")
        }
    }
}
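For reference, here are the supporting definitions the sketch above assumes. The Poll enum and Future protocol are placeholders I made up to mirror Rust's std::task::Poll and std::future::Future (they aren't part of the Swift standard library), and doSomething is a stand-in for the awaited function's own desugared state machine:

enum Poll<Output> {
    case pending
    case finished(Output)
}

protocol Future {
    associatedtype Output
    mutating func poll() -> Poll<Output>
}

// Stand-in for the desugared state machine of the awaited function.
enum doSomething: Future, Sendable {
    case initial
    case completed

    init() {
        self = .initial
    }

    mutating func poll() -> Poll<()> {
        switch self {
        case .initial:
            // Pretend the underlying work needs one more poll to finish.
            self = .completed
            return .pending
        case .completed:
            return .finished(())
        }
    }
}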
On a multi-core machine, an executor might send this state machine to another thread for concurrent execution. Meanwhile, on single-core systems, an executor would probably just keep a global queue of every task and loop over it, calling task.poll() on each item and re-appending it if it's still pending. Spawning a task would just mean appending it to this queue. A minimal sketch of such an executor is below.
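Here's a rough, single-threaded version of that executor, built on the same made-up Poll/Future types from above. The AnyTask wrapper is my own addition so that state machines of different concrete types can share one queue, and a real Rust executor would use wakers rather than busy-polling like this:

// Type-erased box so tasks of different concrete types can share one queue.
final class AnyTask {
    private let pollFn: () -> Poll<()>

    init<F: Future>(_ future: F) where F.Output == () {
        var future = future
        // The closure captures `future` by reference, so its state persists between polls.
        pollFn = { future.poll() }
    }

    func poll() -> Poll<()> {
        pollFn()
    }
}

struct SingleThreadedExecutor {
    private var queue: [AnyTask] = []

    // Spawning a task just appends its state machine to the queue.
    mutating func spawn<F: Future>(_ future: F) where F.Output == () {
        queue.append(AnyTask(future))
    }

    // Repeatedly poll every queued task until all of them have finished.
    mutating func runUntilIdle() {
        while !queue.isEmpty {
            let task = queue.removeFirst()
            if case .pending = task.poll() {
                queue.append(task) // not done yet, put it back
            }
        }
    }
}

// Drive the desugared hello state machine from above.
var executor = SingleThreadedExecutor()
executor.spawn(hello())
executor.runUntilIdle() // prints "Hello", then "Done"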