SE-0433: Synchronous Mutual Exclusion Lock

Hi Swift community,

The review of SE-0433: Synchronous Mutual Exclusion Lock begins now and runs through the 24th of April, 2024.

Reviews are an important part of the Swift evolution process. All review feedback should be either on this forum thread or, if you would like to keep your feedback private, directly to the review manager. When emailing the review manager directly, please keep the proposal link at the top of the message.

What goes into a review?

The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of Swift. When writing your review, here are some questions you might want to answer:

  • What is your evaluation of the proposal?
  • Is the problem being addressed significant enough to warrant a change to Swift?
  • Does this proposal fit well with the feel and direction of Swift?
  • If you have used other languages or libraries with a similar feature, how do you feel that this proposal compares to those?
  • How much effort did you put into your review? A glance, a quick reading, or an in-depth study?

More information about the Swift evolution process is available at

Thank you,

Steve Canon
Review Manager


This is something I have been wanting Swift to have for a good while. The current proposal replaces almost every use case I have had for writing my own locks: in Observation, AsyncStream, swift-async-algorithms (except perhaps one usage...), swift-corelibs-foundation, and many other projects I've worked on in Swift. For the places where it doesn't fully work, it is perfectly reasonable to leave those to a specialized solution, since this hits the fundamental requirements so well.

10/10 no notes.


Is a property wrapper in the cards as a future direction if we ever get support for lets? @Mutex let globalCache: [MyKey: MyValue] = [:] would read so much nicer.

With Embedded Swift and other ports under active development, it might be helpful if reviews addressed the portability implications.

e.g., is this something that will be required by the runtime, the standard library, or other Synchronization features? If a platform had no mutex (with these semantics), would that mean:

  • The API could have a platform specific availability annotation
  • The Synchronization package could not be ported
  • The stdlib could not be ported

It might be possible, but I wouldn't recommend doing this, for the same reason that using similar mechanisms like synchronized properties in Java or @synchronized in Objective-C is not generally a good idea. A property wrapper would at best hold the lock during the get and again during the set of the property, but in practice, it's often necessary to hold a lock over a longer transaction. Wrapping the accessors gives the illusion of thread safety while still allowing logical data races to occur, and you lose the ability to detect these races with tools like TSan because, technically, you're getting exactly what you asked for with the too-narrow lock spans.
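To make the concern concrete, here's a minimal hypothetical @Locked property wrapper (not part of this proposal) built on Foundation's NSLock. Each accessor takes the lock correctly, yet a compound operation is still racy:

```swift
import Foundation

// Hypothetical @Locked wrapper (not part of the proposal): each get and
// each set takes the lock separately.
@propertyWrapper
final class Locked<Value> {
    private let lock = NSLock()
    private var value: Value
    init(wrappedValue: Value) { self.value = wrappedValue }
    var wrappedValue: Value {
        get { lock.lock(); defer { lock.unlock() }; return value }
        set { lock.lock(); defer { lock.unlock() }; value = newValue }
    }
}

final class Counter {
    @Locked var count = 0
}

let counter = Counter()
// `counter.count += 1` is a locked get followed by a *separate* locked
// set. Two threads can both read 0 and both write 1, losing an
// increment: the lock is never held across the whole read-modify-write,
// which is exactly the "too-narrow lock spans" problem.
counter.count += 1
print(counter.count)
```

Every individual access is data-race-free, so TSan sees nothing, but the transaction as a whole is not atomic.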


Given the ambiguity around the behavior of lock reentrancy (trying to get a lock the thread already has), it might help to discuss the techniques required to avoid reentrant calls in complex code (for us poor souls exiled from Java), or the possibility of reliably throwing an error instead.

Similarly, Java expressly supports lock coarsening (merging critical sections without changing happens-before). That doesn't change the API, but it affects how people use locks. I assume Swift makes no guarantees and might coarsen, but if it will never coarsen, that might change how people build APIs on the mutex. (But perhaps that's a separate topic.)


I am so happy to see this moving forward. SwiftNIO has been providing its own lock since the very beginning and we have seen this lock being copied into numerous other packages. I can't wait until we can all move towards this new type!

I particularly like how we solved the region isolation problem. Mutexes forming their own region makes total sense and is, in my opinion, easy to understand conceptually.

One thing that surprised me in the proposed API is the requirement for the Result of both with methods to be Sendable.

  public borrowing func withLock<Result: ~Copyable & Sendable, E: Error>(
    _ body: (transferring inout State) throws(E) -> Result
  ) throws(E) -> Result

Isn't it enough if we mark the return value as transferring?

  public borrowing func withLock<Result: ~Copyable, E: Error>(
    _ body: (transferring inout State) throws(E) -> transferring Result
  ) throws(E) -> transferring Result
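As a sketch of what the extra Sendable constraint would forbid (names here are illustrative, not from the proposal): a non-Sendable value could not be returned out of the critical section even when it is being moved out of the mutex's region rather than shared:

```swift
import Synchronization

final class Canvas {}  // deliberately not Sendable

let canvases = Mutex<[Canvas]>([Canvas()])

// Under a `Result: Sendable` signature, this would be rejected even
// though the returned Canvas is handed off out of the protected region,
// not shared:
//
//     let canvas = canvases.withLock { $0.removeLast() }
//
// Marking only the result as `transferring` would express exactly that
// one-way hand-off without requiring the type to be Sendable.
print(canvases.withLock { $0.count })
```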

Great news!

The proposal says: Mutex as proposed will be a new @frozen struct, which means we cannot change its layout in the future on ABI-stable platforms.

What are the reasons for declaring it @frozen? Is it possible not to declare it @frozen and leave room for possible future improvements?


+1, great addition

If an embedded platform does not have support for mutexes, then we can simply say this type is not available on those platforms while keeping the rest of the stdlib and synchronization module available (atomics are widely available on embedded platforms for example).


Hmm this might make sense. @hborla does this make sense to you?

It doesn't look like the proposal mentions reentrancy at all (though I may have just missed it), but given the interactions with the ownership model, I don't think there can be any ambiguity: the API as designed can't allow a thread to acquire a lock it already holds (at least for any non-Void State), since that would violate exclusivity of access to the guarded value. So the implementation should explicitly either trap or throw an error if a thread tries to acquire a lock it already has.


At least on Darwin platforms and on Linux, the implementation does trap at runtime if you recursively attempt to acquire the lock. Windows's documentation isn't too specific, but does state that:

Exclusive mode SRW locks cannot be acquired recursively. If a thread tries to acquire a lock that it already holds, that attempt will fail (for TryAcquireSRWLockExclusive) or deadlock (for AcquireSRWLockExclusive).

Enforcing that all platforms trap in the implementation would require extra storage overhead on top of the platform mutex + value as well as runtime overhead. Currently the API documentation (in the proposal) states that:

/// - Warning: Recursive calls to withLock within the
/// closure parameter has behavior that is platform dependent.
/// Some platforms may choose to panic the process, deadlock,
/// or leave this behavior unspecified.

"Panic the process" or "deadlock" sounds reasonable to me, since that at least ensures that no forward progress is made in face of programmer error, and even if it were free, throwing a catchable error seems like a burden for the vast majority of code that should have no reason to attempt a recursive acquisition of the lock to begin with. What would the "unspecified" behavior be otherwise, though? Can we at least guarantee that the lock will never be successfully acquired?
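For reference, a minimal sketch of the error case under the proposed API (the inner call is left commented out, since its effect is exactly the platform-dependent behavior quoted above):

```swift
import Synchronization

let counter = Mutex(0)

counter.withLock { value in
    value += 1
    // Re-acquiring the same lock inside the critical section is the
    // platform-dependent case under discussion: a trap on Darwin and
    // Linux, a deadlock (or failed try) with Windows SRW locks.
    //
    //     counter.withLock { $0 += 1 }  // programmer error; never do this
}
print(counter.withLock { $0 })
```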




There are a lot of really nice layout and performance benefits we get from declaring the type @frozen, like static initialization. Clients don't know the size of the mutex, for example, which requires extra overhead on initialization to, potentially, allocate the metadata for the type, get its size, and allocate stack/heap space for the lock (I'm pretty sure we can stack allocate resilient types, cc @Joe_Groff).

With regard to static initialization, if the type is @frozen we can statically allocate space in the binary to store the mutex and initialize it at compile time (assuming the value it holds is also known to be statically initializable). This avoids the need to go through swift_once on every access to the mutex, which on some platforms may take a lock; taking a lock in order to take a lock would be unfortunate. If the type were resilient, we would have to go through swift_once to initialize the value because, as I mentioned, clients don't know how big the type is until runtime. (Note that right now the optimizer doesn't know how to statically initialize Mutex or Atomic, but this is something that can be fixed.)
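For example (a sketch using the proposed API; the global name is mine), this is the pattern that benefits from static initialization:

```swift
import Synchronization

// Because Mutex is @frozen and Int is trivially initializable, a global
// like this can in principle be laid out as constant data in the binary,
// with no swift_once guard executed on each access.
let hitCount = Mutex(0)

func recordHit() -> Int {
    hitCount.withLock { value in
        value += 1
        return value
    }
}

print(recordHit())
```

If Mutex were resilient, every access to `hitCount` would instead have to funnel through a runtime one-time-initialization check.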

It does seem possible to me, however, to retroactively change the underlying lock implementation on ABI-stable platforms iff the new system primitive shares the same layout as the existing primitive. Let's say a new os_unfair_lock_2 comes out which is super fast and way better than os_unfair_lock. If its MemoryLayout is exactly the same, we may be able to get away with moving folks over to this new thing. Again, we don't guarantee fairness or any other behavior beyond the fact that, once acquired, the lock _must_ have exclusive access to the underlying data.


If it threw an identifiable exception on attempted recursion, a pseudo-recursive lock could be implemented atop it (by catching that exception and adjusting behaviour appropriately).

I expect folks to disagree on whether that's a feature or a misfeature. It would of course be better, in any case, to have an actual recursive lock type.

I think a recursive or coalescible lock would have to be a different type from what's proposed here, with a different interface, since it would no longer be able to present an exclusive view (as Swift generally interprets the term) to the guarded state.


Just clarifying … is this a drop-in stdlib replacement for OSAllocatedUnfairLock? It lacks the explicit lock()/unlock() but otherwise seems so … in which case two thumbs up from me!

It should mostly be drop-in, yeah.

Yep, this is intentional. These are not safe as described here: swift-evolution/proposals/ at main · apple/swift-evolution · GitHub