Good to know, thanks.
Sure, everyone should use correct terminology, even those of us mere mortals who are not concurrency experts. Thanks for pointing that out.
There is no doubt that Swift concurrency is great and all, but to be honest I'm still not confident about using it. When I create a thread, I know what I'm doing. When I use async/await, I... well, not quite. This thing is cooperative, but it creates an illusion of parallelism. I even initially (and mistakenly) thought that it was purely thread based.
Maybe I have been living under a rock, as Swift concurrency has been around for several years already, and only now have I started to evaluate it. Maybe that's because the majority of my programming work is not Swift related and I had no time to learn new things in Swift.
And, speaking of new things: I am waiting for Swift 6, not without a bit of fear.
It is not entirely cooperative; it's mixed. There are real threads, normally as many as there are CPU cores, and there's cooperative switching too. This is similar to many modern server architectures, like that of nginx (and I think nginx was one of the pioneers of this architecture in the mainstream).
Basically, Swift's structured concurrency frees you from thinking about when to use cooperative vs. real switching; it does that for you automatically. But, as with any software abstraction, you won't be able to fully benefit from it without knowing how it works under the hood, and it's also too easy to abuse without knowing the inner mechanisms. The general principle is that Swift knows when things should be serialized and executes them cooperatively (e.g. within the same actor/isolation), whereas in all other cases it will try to use true parallelism. That's it in a few words.
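To make that principle concrete, here is a minimal sketch of my own (Ledger and demo are hypothetical names): calls isolated to the same actor are serialized cooperatively, while the child tasks themselves may run in parallel on the shared pool.

actor Ledger {
    private var total = 0
    func add (_ n: Int) { total += n }    // actor-isolated: calls are serialized
    var current: Int { total }
}

func demo () async {
    let ledger = Ledger ()
    await withTaskGroup (of: Void.self) { group in
        for i in 1...100 {
            group.addTask {               // these child tasks may run in parallel...
                await ledger.add (i)      // ...but each add() runs one at a time
            }
        }
    }
    print (await ledger.current)          // always 5050: no data race
}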
Sure. I oversimplified things for brevity.
Here are some interesting posts: 1 by @aetherealtech and 2 by @David_Smith.
Here is another one, by @Karl, which talks about microcontrollers.
Microcontrollers are concurrent systems, too. That's the point I was making.
Pre-emptive multitasking is all about code being interrupted so other code can run. Even if you don't have an operating system, you'll still have hardware interrupts (e.g. a button was pressed, a hardware timer triggered, or some buffer is empty and needs to be filled).
You might find it enlightening, especially if you are not familiar with how code runs deep down on the hardware.
Here is another one, by @layoutSubviews.
I am not a Darwin kernel engineer, so take my answer with a grain of salt.
Thread switching involves the kernel, which requires costly context switches and a bunch of busywork that may or may not be needed but has to be done.
Async/await tasks are scheduled and switched without having to hop into the kernel, and are therefore much faster and more efficient in terms of CPU cycles and memory usage.
Here is more stuff from Swift concurrency: Behind the scenes.
With Swift, we want to change the execution model of apps from the following model, which has lots of threads and context switches, to this. Here you see that we have just two threads executing on our two-core system and there are no thread context switches. All of our blocked threads go away and instead we have a lightweight object known as a continuation to track resumption of work. When threads execute work under Swift concurrency they switch between continuations instead of performing a full thread context switch. This means that we now only pay the cost of a function call instead. So the runtime behavior that we want for Swift concurrency is to create only as many threads as there are CPU cores, and for threads to be able to cheaply and efficiently switch between work items when they are blocked. We want you to be able to write straight-line code that is easy to reason about and also gives you safe, controlled concurrency. In order to achieve this behavior that we are after, the operating system needs a runtime contract that threads will not block, and that is only possible if the language is able to provide us with that. Swift's concurrency model and the semantics around it have therefore been designed with this goal in mind.
This is quite interesting if you are curious about the world under the hood.
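To make the "continuation" idea concrete, here is a minimal sketch of my own (fetchValue is a hypothetical API, not from the session): wrapping a callback-based function with withCheckedContinuation suspends the awaiting task without blocking a thread, and resuming it later is cheap.

import Dispatch

// A hypothetical callback-based API standing in for any completion-handler code.
func fetchValue (completion: @escaping (Int) -> Void) {
    DispatchQueue.global().asyncAfter (deadline: .now() + 1) {
        completion (42)
    }
}

// Async wrapper: the awaiting task suspends here; no thread is blocked while
// waiting, and resume(returning:) is roughly the cost of a function call.
func fetchValue () async -> Int {
    await withCheckedContinuation { continuation in
        fetchValue { value in
            continuation.resume (returning: value)
        }
    }
}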
I have started to compile a partial list of evolution proposals related to concurrency.
SE-0176 - Enforce Exclusive Access to Memory (Law of Exclusivity)
SE-0414 - Region based Isolation
SE-0430 - Sending parameter and result values
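For a taste of what the newer proposals enable, here is a minimal sketch of my own (Counter, process, and makeAndSend are hypothetical names; requires the Swift 6 language mode) of SE-0430's sending parameters, building on SE-0414's region-based isolation:

// Deliberately not Sendable: an ordinary mutable class.
final class Counter {
    var value = 0
}

// `sending` (SE-0430): the value may cross into another isolation domain
// because the caller must give up all access to it.
func process (_ counter: sending Counter) async {
    counter.value += 1
}

func makeAndSend () async {
    let c = Counter ()   // `c` starts in a disconnected region (SE-0414)
    await process (c)    // OK: the region is transferred along with `c`
    // Touching `c` after this point would be rejected by the region checker.
}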
When threads execute work under Swift concurrency they switch between continuations instead of performing a full thread context switch. This means that we now only pay the cost of a function call instead.
Is this accurate, though? How can it be just the "cost of a function call" if you at least need to dequeue the task information from a thread-safe queue before calling the function (and pass the closure context too)?
How can it be just a "cost of a function call"
I think they mean it for async functions running in a given task when those functions suspend and resume.
I hope that someone from the Darwin Runtime team clarifies this.
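For what it's worth, here is a sketch of the case being described, reusing the hypothetical fetchValue wrapper from the earlier sketch: at each await, the function may suspend and later resume as a continuation, without a full kernel context switch.

func work () async {
    let a = await fetchValue ()   // possible suspension point
    let b = await fetchValue ()   // another one; the thread is free in between
    print (a + b)
}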
Unfortunately, the official documentation for concurrency comes in the form of videos and their transcripts.
Quoting from How to avoid cascading async functions?
Would something like this work?
func sync_meets_async () {
    ...
    let s = DispatchSemaphore (value: 0)
    Task {
        await async_world ()
        s.signal()
    }
    ...
    print ("waiting for async_world to finish...")
    s.wait()
    print ("finished")
}

func async_world () async {
    try! await Task.sleep(until: .now + .seconds(7))
}
@j-f1 explains nicely why the above attempt is an anti-pattern.
That's a risky antipattern. If your sync_meets_async function is ever called from Swift's global concurrent pool (for example, from an actor or async function, even indirectly), you will be blocking one of the finite number of threads in that pool on the expectation that another one of the threads will perform some computation later. While you might not notice any negative effects of this when using the pattern only occasionally, it can deadlock your app or server. Unlike GCD, which expects that people may do this kind of thing and allows a concurrent queue to have more threads than there are logical cores on the system, Swift concurrency requires that you never block one task on the expectation of future work occurring on another task (but offers await as a way for you to do so safely without blocking a thread). So if you fill all of the threads in the global concurrent pool with tasks that are blocking on future async work using a semaphore, none of that async work can ever be scheduled.
But is there really no practical solution for this that can be used when the sync world meets the async world in real life?
Update: There is a safe solution
But is there really no practical solution for this that can be used when the sync world meets the async world in real life?
Yes, there is no correct way to synchronously wait for the result of an async computation. It turns out that this was also true before Swift Concurrency, but the symptoms of the incorrectness were somewhat mitigated, so instead of "it very likely fails" it was "occasionally, in weird edge cases, it will fail, and usually it will just be less efficient".
Which is preferable is a topic of considerable debate: is it best to make it clear something is wrong early, allowing the developer to notice and fix it, or is it best to try to work around it, risking the failure shipping to customers but perhaps never being an issue in practice? I could probably make a half dozen reasonable arguments for either side, although my personal preference is the stricter approach.
Yes, there is no correct way to synchronously wait for the result of an async computation. It turns out that this was also true before Swift Concurrency, but the symptoms of the incorrectness were somewhat mitigated, so instead of "it very likely fails" it was "occasionally, in weird edge cases, it will fail, and usually it will just be less efficient".
I think this statement is overly broad. I am guessing the "weird edge case" refers to a higher-priority thread blocking on a lower-priority thread, and that thread being of such low priority that it never gets a chance to run.
Some programs control all the threads in their process and can therefore ensure the blocking thread runs by being the only runnable thread.
But even in more realistic cases, an app can opt some threads into a scheduler that guarantees all threads will run by calling pthread_setschedparam(SCHED_RR). Lots of games do this for their render and work threads, and we even recommended it at WWDC a few years ago.
And there's no technical reason the normal Darwin scheduler could not implement priority inversion avoidance for dynamically blocked threads. It just hasn't yet.
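For illustration, here is a minimal sketch of my own (adoptRoundRobinScheduling is a hypothetical name) of a thread opting itself into SCHED_RR on Darwin:

import Darwin

// Opt the current thread into the round-robin scheduler, which (as described
// above) guarantees it will keep getting CPU time.
func adoptRoundRobinScheduling () {
    var param = sched_param ()
    // Pick a priority within the valid range for the SCHED_RR policy.
    param.sched_priority = sched_get_priority_max (SCHED_RR)
    let err = pthread_setschedparam (pthread_self (), SCHED_RR, &param)
    assert (err == 0, "pthread_setschedparam failed: \(err)")
}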
But is there really no practical solution for this that can be used when the sync world meets the async world in real life?
The more I think about the ability to make async code sync, the more it feels wrong for the whole concurrency model (at least in Swift): while the story is about non-blocking solutions, it would allow blocking behavior to be introduced into systems, including through APIs that aren't meant to be blocking. I think a solution may be possible in technical terms, but should it exist in the first place?
I am guessing the "weird edge case" refers to a higher-priority thread blocking on a lower-priority thread, and that thread being of such low priority that it never gets a chance to run.
That, but also hitting the thread limit. Whether that's the default dispatch pool (64ish), the overcommit pool (512), or xnu's "something is wrong, kill -9 the process" limit (I think it was 1024 about 20 years ago, haven't looked lately).
Connecting the sync world with the async world.
@Gero posted an example in this thread, which does away with the need to use semaphores when connecting the sync world with the async world.
I have adapted the following example from it to demonstrate its usefulness.
import Foundation

@main
enum sync_async {
    static func main () throws {
        let sa = SyncAsyncAdaptor ()
        sa.enqueue (f)
        sa.enqueue (g)
        sa.enqueue (h)
        #if false
        Thread.sleep (until: .now + 19)
        #endif
        // Park the main thread and service the main dispatch queue forever.
        dispatchMain()
    }
}

func f () async {
    print (#function, "begin...")
    try! await Task.sleep (until: .now + .seconds(7))
    DispatchQueue.main.async {
        print (#function, 1)
    }
    print (#function, "end.")
}

func g () async {
    print (#function, "begin...")
    try! await Task.sleep (until: .now + .seconds(10))
    DispatchQueue.main.async {
        print (#function, 2)
    }
    print (#function, "end.")
}

func h () async {
    DispatchQueue.main.async {
        // cause the program to exit
        print (#function, "exit...")
        exit (0)
    }
}

final class SyncAsyncAdaptor {
    private let continuation: AsyncStream<() async -> Void>.Continuation

    init() {
        let (stream, cont) = AsyncStream<() async -> Void>.makeStream()
        continuation = cont
        // A single detached task drains the stream, so enqueued work items
        // run one after another, in the order they were enqueued.
        Task.detached {
            for await workItem in stream {
                await workItem ()
            }
        }
    }

    deinit {
        continuation.finish()
    }

    // Synchronous entry point: hand an async work item to the drain task
    // without blocking the caller.
    func enqueue (_ workItem: @escaping () async -> Void) {
        continuation.yield (workItem)
    }
}
Produces the output:
f() begin...
f() end.
g() begin...
f() 1
g() end.
g() 2
h() exit...
Program ended with exit code: 0
That, but also hitting the thread limit. Whether that's the default dispatch pool (64ish), the overcommit pool (512), or xnu's "something is wrong, kill -9 the process" limit (I think it was 1024 about 20 years ago, haven't looked lately).
Yes, blocking on anything from a concurrent dispatch queue can cause thread explosion. I don't think you can hit the hard thread limit that way, though. You'd have to be writing code that effectively reimplements concurrent queues with overcommit, at which point you really ought to know what you're doing :)
You'd have to be writing code that effectively reimplements concurrent queues with overcommit, at which point you really ought to know what you're doing :)
You'd think, but serial queues default to overcommit, so it's easy-ish to do this if you do the "each object has a serial queue" model that was popular for a while (and don't retarget them to a non-overcommit queue).
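As a sketch of the retargeting mentioned above (the queue labels are hypothetical), a serial queue created with a global-queue target participates in the non-overcommit pool instead of bringing up its own overcommit thread:

import Dispatch

// Default: a plain serial queue targets an overcommit root queue, so blocking
// many of them can contribute to thread explosion.
let perObjectQueue = DispatchQueue (label: "com.example.object")

// Retargeted: the queue funnels its work into the non-overcommit global pool.
let retargetedQueue = DispatchQueue (label: "com.example.object.retargeted",
                                     target: DispatchQueue.global ())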
After my naive attempt at defining an array of async let bindings:
let fv: [() async -> Int] = [f, g, h]
var uv: [async Int] = [] // <--- not possible
for f in fv {
    async let u = f ()
    uv.append (u)
}
for u in uv {
    await print (u)
}
I asked the question "Will array of async values be possible?" here.
The answer was not a definite no.
However, @crontab has a utility (Zip) in his personal library, which provides a good workable solution.
Example use:
let fv: [() async -> Int] = [f, g, h]
let results = await Zip(actions: fv).result
results.forEach { print($0) }
It uses a task group underneath, which simplifies the nesting of task groups and results in clean-looking code.
@main
enum AsyncZip {
    static func main () async throws {
        @Sendable func n () -> Int {
            let v = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
            return v [Int.random (in: 0..<v.count)]
        }
        @Sendable func f () async throws -> Int {n () + 1}

        let u = Zip (actions: [f, f, f])
        try await print (u.result)

        @Sendable func p () async throws -> [Int] {
            let u = Zip (actions: [f, f, f, f, f])
            return try await u.result
        }

        var v = Zip <[Int]> ()
        v.add (p)
        v.add (p)
        v.add (p)
        try await print (v.result)
    }
}
Possible output:
[18, 30, 30]
[[3, 32, 38, 24, 8], [8, 3, 30, 12, 8], [38, 6, 24, 3, 38]]
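To close, here is a minimal sketch of my own (not @crontab's actual implementation; zipped is a hypothetical name) of how such a helper can be built on a throwing task group while preserving the order of the input actions:

func zipped<T: Sendable> (_ actions: [@Sendable () async throws -> T]) async throws -> [T] {
    try await withThrowingTaskGroup (of: (Int, T).self) { group in
        // Tag each action with its index so results can be reordered later.
        for (i, action) in actions.enumerated () {
            group.addTask { (i, try await action ()) }
        }
        // Children finish in any order; slot each result back into place.
        var results = [T?] (repeating: nil, count: actions.count)
        for try await (i, value) in group {
            results [i] = value
        }
        return results.map { $0! }
    }
}

Something like try await zipped ([f, f, f]) would then behave like the Zip (actions: [f, f, f]) example above.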