Incremental migration to Structured Concurrency

I was referring to the lock APIs we have today - pthread_mutex, os_unfair_lock, NSLock, etc. - and in general, when I said "using a lock", I meant a lock implementation that clients typically use in synchronous code in the following manner:

Thread 1:
lock()
<critical section>
unlock()

With a semaphore, on the other hand, I was referring to the likes of DispatchSemaphore or DispatchGroup, which are typically used in client code in the following manner:

Thread 1:
<do some work>
if (!condition) 
   semaphore.wait() 

Thread 2:
<do other work to satisfy condition>
condition = true; 
semaphore.signal()

A lock can be implemented with a semaphore, but that's the internal implementation of the lock and not of interest to clients who are using the lock in async code. While such an internal implementation allows a thread other than the one that called lock() to call unlock(), I consider that to be (a) undefined behavior and (b) not representative of the 99.9% of use cases in which people use locks or mutexes as clients of these APIs.

OK, this is good, thanks. Presumably this doesn't apply to code known to be in the @MainActor, though, since it is locked to a single thread?

The main actor is tied to the main dispatch queue. The main queue is tied to the main thread, but that tie can be broken if your application calls dispatch_main(). dispatch_main() does a bit of bookkeeping and exits the main thread, at which point the main queue is no longer bound to the main thread. It will instead be serviced on demand by a thread from dispatch's worker thread pool whenever there is work on the main queue.

So you could try to make the case that you have some freedom to hold locks across await if your code executes on the @MainActor, but I think that is fragile and requires additional knowledge about whether or not the application has called dispatch_main(). Relying on auxiliary knowledge like this to use locks in async code on the MainActor is not how I'd recommend someone write code with async.

OK, do all the special rules for interop with Swift async have to do with thread-hopping, or are there others?

It's about thread-hopping, and also about using primitives that assume a minimum number of threads. A semaphore assumes at least 2 threads are vended to you - the thread which will wait and another one which will signal. A lock doesn't have this requirement - it is perfectly possible, albeit redundant, to use a lock in code that is entirely single-threaded. This ties back into thinking about the guarantee of forward progress as being able to finish the workload on a single thread, if that's all the runtime decides it can vend to you.

  • Is this true even if the task that will unblock the primitive is itself going to block?

How is that possible? You have a thread running a task; if the task uses a primitive that causes it to block, you are now blocking the thread as well. How can you guarantee that the task will unblock itself if the thread that is executing it is blocked?

  • When you say "has already run" I suppose you mean that the async function that will unblock the primitive has started, is suspended, and is guaranteed to unblock the primitive before exiting?

I meant that it has already unblocked the primitive and so your thread doesn't have to block on the primitive at all when it is trying to acquire it.

If the Task that will unblock the primitive is suspended and hasn't yet unblocked it, then once that Task becomes runnable, there is no guarantee that you will get an additional thread to execute it - the cooperative pool may be at its limit and may not give you another thread.

This is a very fragile guarantee for a developer to uphold, because you are now relying on the scheduling order between tasks, and that order can change.

  • Aside from this rule, and the caveats about thread-hopping, are there any others?

The main thing I'd advise is to make sure your workload can complete with a single thread, using the environment variable that restricts the size of the cooperative pool. If you are able to run to completion reliably in that environment, you are safe and will also be able to handle multiple threads running your workload.

The Swift concurrency runtime reserves the right to make different scheduling decisions, including optimizing the size of the thread pool based on global information on what is happening in the system. Therefore relying on specific scheduling order between tasks and threads is discouraged.
