here’s a basic width-limited TaskGroup i am using to accept incoming HTTP connections via the new NIO async APIs.
// In an extension to `TaskGroup` ...
var iterator:Inbound.AsyncIterator = inbound.makeAsyncIterator()
//  seed the group with up to `width` concurrently-running child tasks
for _:Int in 0 ..< width
{
    guard
    let element:Inbound.Element = try await iterator.next()
    else
    {
        return
    }
    self.addTask { await body(element) }
}
//  each time a child task finishes, pull the next element and replace it
for try await _:Void in self
{
    guard
    let element:Inbound.Element = try await iterator.next()
    else
    {
        return
    }
    self.addTask { await body(element) }
}
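for context, the enclosing declaration is shaped roughly like this (the `iterate(_:width:with:)` name, the `ThrowingTaskGroup` specialization, and the exact constraints are illustrative rather than exact, and `inbound` / `handle` in the driver below are stand-ins for the real NIO stream and connection handler):

extension ThrowingTaskGroup where ChildTaskResult == Void
{
    /// runs `body` for every element of `inbound`, keeping at most `width`
    /// child tasks in flight at a time
    mutating
    func iterate<Inbound>(_ inbound:Inbound,
        width:Int,
        with body:@escaping @Sendable (Inbound.Element) async -> ()) async throws
        where Inbound:AsyncSequence, Inbound.Element:Sendable
    {
        //  ... the two loops shown above ...
    }
}

//  and it gets driven from the accept loop as, roughly:
//
//  try await withThrowingTaskGroup(of: Void.self)
//  {
//      (tasks:inout ThrowingTaskGroup<Void, any Error>) in
//      try await tasks.iterate(inbound, width: 128)
//      {
//          await handle($0)
//      }
//  }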
now, i need to bolt on an additional component - a whitelist periodically refreshed from googlebot.json.
if i go all-out and protect it with an actor, then every incoming request needs to await on the actor to become available, which feels suboptimal: this is data that is rarely written to, and only from one concurrent context, whereas it is read frequently from many concurrent contexts.
what’s the best way to add some shared state to a TaskGroup without forcing readers to suspend? there is no requirement that the readers access ‘the’ most-recent version of the data, as long as it turns over eventually.
You basically have two options: actors or locks. That’s about it. Reader-writer locks are sometimes a thing, but I’m not sure they’re worth it here; as always with performance: measure, then change, then measure again.
Remember that mutating a shared Array variable from multiple threads/tasks is a data race, but simply depending on CoW across threads/tasks is fine (and the same goes for the other CoW collections). So maybe every task can just snapshot the latest state, and it’ll get updates regularly? Or if they’re long-running tasks, they can ask for updates when it’s convenient to do so.
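For instance, a toy version of the snapshot idea, with `[String]` standing in for the real whitelist type: the loop that owns the array hands each child task its own snapshot, and CoW keeps that copy cheap.

// Toy illustration only: [String] stands in for the real whitelist type.
func demo() async {
    await withTaskGroup(of: Void.self) { group in
        // The accept loop owns the only mutable copy of the whitelist.
        var whitelist: [String] = ["66.249.64.0/19"]

        for request in 0 ..< 4 {
            if request == 2 {
                // The loop is free to mutate its own copy at any point.
                whitelist.append("64.233.160.0/19")
            }

            // Cheap CoW copy: each child task captures a consistent
            // (possibly slightly stale) view, with no locking or awaiting.
            let snapshot = whitelist
            group.addTask {
                print("request \(request) sees \(snapshot.count) whitelist entries")
            }
        }
    }
}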
if it’s not already obvious, all i’m really trying to achieve here is an immutable, concretely-typed [(UInt8, [IP.V6: KnownPeer])] array that i can publish through AtomicReference, which Array itself cannot conform to, since it is a struct. the whitelist type itself has yet to grow additional stored arrays that would justify a third layer of indirection.
To be clear, my original idea was that you don’t need AtomicReference at all; but you could also make this work with AtomicReference by putting an immutable on-heap box (a final class) around your Array type, and that would be fine too.
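Something in this direction, using the swift-atomics package (`WhitelistBox` and the `[String]` payload are placeholders; the shared handle would be created once up front as `ManagedAtomic(WhitelistBox([]))`):

import Atomics

// An immutable, heap-allocated box around the array; publishing a new
// whitelist means swapping in a whole new box.
final class WhitelistBox: AtomicReference {
    let ranges: [String]

    init(_ ranges: [String]) { self.ranges = ranges }
}

// The single writer (the refresh task) publishes an entirely new box.
func publish(_ fresh: [String], to shared: ManagedAtomic<WhitelistBox>) {
    shared.store(WhitelistBox(fresh), ordering: .releasing)
}

// Readers grab whatever box is current, without ever suspending.
func currentRanges(in shared: ManagedAtomic<WhitelistBox>) -> [String] {
    shared.load(ordering: .acquiring).ranges
}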
This is what I meant, though you might not consider it good enough. (I am not very experienced with Swift-on-Server!)
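Roughly this shape (every name here is a placeholder, the interval is arbitrary, and the actual download of googlebot.json is stubbed out):

actor WhitelistCache {
    private var ranges: [String] = []

    // Readers copy the current array out; CoW makes the copy cheap, and
    // because writes are throttled, the actor is almost always uncontended.
    func snapshot() -> [String] {
        self.ranges
    }

    func replace(with ranges: [String]) {
        self.ranges = ranges
    }
}

// A single long-lived child task owns the refresh schedule.
func refreshLoop(updating cache: WhitelistCache) async throws {
    while true {
        let fresh: [String] = []   // stand-in for fetching and decoding googlebot.json
        await cache.replace(with: fresh)
        try await Task.sleep(for: .seconds(15 * 60))
    }
}

// The accept loop (or each subtask, at whatever granularity is convenient)
// asks for a snapshot when it wants one, rather than on every request:
//
//     let whitelist: [String] = await cache.snapshot()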
You do have an actor, like you mentioned in the original post, but you don't have to wait on that actor for every subtask, and because the updates to the actor are throttled the access will usually be uncontended anyway. And if you have to share the actor on the read side, it's still okay because the accesses are still spread out. (Though it's too bad there aren't "reader-writer actors"…but reader-writer locks don't play nicely with priority donation.)
actually, that’s a really good idea, it had not occurred to me that the task group could read the value lazily. it’s a little weird that there are now two levels of propagation, but that’s not really important for slow-changing data like this.