Thread-Safe Pool management with callers from synchronous contexts

I'm new to Swift Concurrency (but have read a lot) and would greatly appreciate any help. I'm finding myself in a sticky situation I'm not sure how to handle.

Background Context:
I'm building a SwiftUI app using Metal, so I have an MTKViewDelegate whose draw(in:) method is isolated to the MainActor. This method is called every frame, so it's important to do any heavier work asynchronously in a background task (in fact, the Metal debugger raises warnings if you allocate Metal objects from within the draw loop). Within my render loop I need access to some preallocated Metal textures. Textures are expensive to create, so I want to implement a pool to manage them as an optimization.

The problem is that the number of textures my app needs can change dynamically depending on the app's state, so I need to dynamically allocate/purge the textures in the pool. This allocation/purging should happen asynchronously in the background to avoid interfering with rendering. Textures should also be able to be placed back into the pool to recycle them. So it seems all of the texture state needs to be managed in a thread-safe manner.

Attempted Solution #1:
My initial idea was to use an actor to implement my TexturePool. But if I do that, I can't access the actor from my draw/render loop, because there I'm in a synchronous context. I can't just wrap the access in a Task {} either, because the requested texture needs to be available immediately for the render pipeline to work.
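To illustrate the conflict (simplified, with Int as a stand-in for MTLTexture since the real types need a Metal device):

```swift
// Simplified sketch of the problem: an actor-based pool forces every
// access through `await`, which a synchronous draw loop cannot do.
actor TexturePoolActor {
    private var items: [Int] = []

    func dequeue() -> Int? { items.popLast() }
    func reuse(_ item: Int) { items.append(item) }
}

@MainActor
final class Renderer {
    let pool = TexturePoolActor()

    func draw() { // synchronous, called every frame
        // error: 'await' in a function that does not support concurrency
        // let texture = await pool.dequeue()

        // Wrapping it in a Task compiles, but the texture arrives too
        // late -- the current frame needs it right now:
        Task { _ = await self.pool.dequeue() }
    }
}
```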

Attempted Solution #2:
I tried making the TexturePool a normal class isolated to the MainActor, but then the expensive texture allocation all runs on the main UI thread, which is exactly what I'm trying to avoid. I tried putting the allocations in a detached Task, but I can't create my textures and update the pool that way either, because I get errors saying MTLTexture is non-Sendable (which I cannot control).

Another approach that seems to work is using the older system of locks, but as I understand it, this is the "old way" and should be avoided.

I am thoroughly confused.

How would you go about solving this problem?


Take a look at this post, where you might find something useful. :slight_smile:


Thanks for the pointer! :pray:
Looks like a good place to start. I'll return after I read thru everything and run more tests.

In the meantime, any additional tips/advice is appreciated.

For such tasks, I would highly advise implementing a Sendable class (or noncopyable struct if exclusive ownership suits you better) data structure protected by a mutex instead.

The only major operations you have are insertions/deletions/swapping of textures in the pool, which are not long-running and have no innate asynchrony (such as network I/O) that would warrant suspending the caller — and, as you mentioned, using an actor would require you to await on them.

Technically, the loading and allocation of the texture objects are part of their initialization, so this can be done on the generic executor ("in the background").

The fact that MTLTexture is not Sendable can be worked around in Swift 6.0 via region-based isolation and the sending keyword.
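Roughly, the idea behind sending (hypothetical Buffer type here, standing in for MTLTexture) is that a value in a "disconnected region" may cross an isolation boundary even though its type is not Sendable:

```swift
// Non-Sendable reference type, like MTLTexture.
final class Buffer {
    var bytes: [UInt8] = []
}

// A `sending` result promises the value is in its own disconnected
// region -- freshly created and not referenced from anywhere else --
// so the caller may move it to another isolation domain.
func makeBuffer() -> sending Buffer {
    Buffer()
}

// A `sending` parameter means the caller provably gives up its access,
// so the callee may use the value from any isolation domain.
func store(_ buffer: sending Buffer) {
    _ = buffer
}
```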

Consider this sketch design:

import Foundation
import os

final class Pool<Item>: @unchecked Sendable {
    private let lock = OSAllocatedUnfairLock()
    private var items = [Item]()
    
    func reuse(_ item: sending Item) {
        lock.lock()
        defer { lock.unlock() }
        
        items.append(item)
    }
    
    func dequeueOrCreate(_ create: () -> sending Item) -> sending Item {
        
        let item: Item?
        
        do {
            lock.lock()
            defer { lock.unlock() }
            
            item = items.popLast()
        }
        
        if let item {
            return item
        }
        
        let newItem = create()
        
        // Don't append newItem to the pool here: it's being handed to
        // the caller, and storing it as well would let another caller
        // dequeue the same item while it's still in use. The caller
        // hands it back via reuse(_:) when it's done.
        return newItem
    }
}

You should now be able to call this from any non-isolated function in the following way:

let pool = Pool<MTLTexture>()

func test() async {
    let texture = pool.dequeueOrCreate {
        let tex = device.makeTexture(...)
        // further setup...
        return tex 
    }
}


Some notes:

  1. I wasn't able to use the new Mutex from the standard library because some checks fail when passing the sending Item value into the withLock closure, but I'm pretty sure this is safe and should raise no errors.
  2. The dequeueOrCreate function doesn't hold the lock during the create call, so you could end up creating too many textures due to a TOCTOU (time-of-check to time-of-use) race. That doesn't strike me as particularly dangerous as is. However, it can break fairly soon: if you need to use a dictionary as the storage instead, creating two textures for the same key can overwrite or leak one of them.
  3. The create closure itself is synchronous. It's possible to have it async, but this would require a couple more workarounds, so let me know if you need these complications.
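Regarding note 2: if duplicate creation ever does become a problem (e.g. with a keyed pool), one option is to hold the lock across the create call. This is only a sketch of the trade-off, not necessarily what you want — other callers block while a texture is being made:

```swift
import os

// Variant that avoids duplicate creation by keeping the lock held
// across `create`. Simpler to reason about, but creation now blocks
// every other caller of the pool.
final class ExclusivePool<Item>: @unchecked Sendable {
    private let lock = OSAllocatedUnfairLock()
    private var items = [Item]()

    func reuse(_ item: sending Item) {
        lock.lock()
        defer { lock.unlock() }
        items.append(item)
    }

    func dequeueOrCreate(_ create: () -> sending Item) -> sending Item {
        lock.lock()
        defer { lock.unlock() }
        if let item = items.popLast() {
            return item
        }
        return create() // runs under the lock: no duplicates possible
    }
}
```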

This is absolutely not the case, by the way: the standard library has just recently gained a Mutex implementation as well as native atomics support. There are very legitimate cases where good old locking is the preferred way to go, specifically when you're implementing plain data structures that have to be callable from synchronous functions.
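For completeness, here's what the new primitive looks like — Mutex from the Synchronization module (Swift 6 toolchain), callable from synchronous code with no actor hop:

```swift
import Synchronization

// Minimal sketch: a lock-protected counter, usable from any
// synchronous context. Mutex stores its protected state inline and
// hands it to the closure as `inout`.
let counter = Mutex(0)

func increment() {
    counter.withLock { $0 += 1 }
}

increment()
counter.withLock { assert($0 == 1) }
```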


Apologies for the late reply, I've been away for the last few weeks.

@nkbelov Thank you very much for the detailed information. So kind of you. This is tremendously helpful.