Thanks @ktoso, that is also great news. Agree it doesn't make much sense on (current) Apple devices, but typical deployment for us would be a minimum 32 core (non HT), usually quite a bit more - so plenty of room for getting creative (with no real requirements of caring about power efficiency).
This should also play rather well with Swift's "actor hop avoidance", where we can avoid hops if we know both actors are on the exact same serial executor that is willing to switch, etc.
That also sounds great if it works; fundamentally there are many use cases of logically partitioned code (actors) that want to run on the same executor, be willing to switch, and so on. It would just be super nice to be able to do that (while pinned and avoiding "hops") with a high-level mental model, basically out of the box. It seems then that implementing a serialExecutor for actors that returns a pinned executor, and doing that carefully for the various actor instances, would fundamentally give the flexibility needed (instead of relying on the default serial executor).
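To make the idea concrete, here is a rough sketch of what I have in mind, based on my reading of the custom-executors draft; the `PinnedThreadExecutor` type and its internals are my own invention, not a proposed API, and the real protocol surface may well differ:

```swift
import Foundation

// Sketch only: a SerialExecutor that drains all its jobs on one dedicated
// thread, so every actor that returns it from `unownedExecutor` stays
// pinned to that thread. Not production code (no shutdown, naive queue).
final class PinnedThreadExecutor: SerialExecutor, @unchecked Sendable {
    private let condition = NSCondition()
    private var jobs: [UnownedJob] = []

    init(name: String) {
        let thread = Thread { [self] in
            while true {
                condition.lock()
                while jobs.isEmpty { condition.wait() }  // blocking wait
                let job = jobs.removeFirst()
                condition.unlock()
                job.runSynchronously(on: asUnownedSerialExecutor())
            }
        }
        thread.name = name
        thread.start()
    }

    func enqueue(_ job: UnownedJob) {
        condition.lock()
        jobs.append(job)
        condition.signal()
        condition.unlock()
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}

// Two actor types sharing the same pinned executor: calls between their
// instances could then stay on that thread and avoid hops entirely.
let sharedPinned = PinnedThreadExecutor(name: "pinned-worker")

actor Ingest {
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        sharedPinned.asUnownedSerialExecutor()
    }
}

actor Publish {
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        sharedPinned.asUnownedSerialExecutor()
    }
}
```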
So, fundamentally, this addresses many of the usability concerns I've had; there is just one final thing I'm curious about, which is related:
What are the possibilities of tweaking the wait strategy an executor uses when waiting for work, and how would that work? Say I have a custom serial executor implemented that returns a pinned executor: presumably, if a caller finds it busy (so a "switch" isn't possible), the job would be enqueued on the executor and run once the executor is no longer busy.
But how would it look when calling "across executors", where two actor instances have serialExecutors that return two different pinned ones?
Here comes the question: presumably a default implementation would block waiting on a queue in such a case (or along those lines, at least). But for a PinnedExecutor in a latency-sensitive environment without power considerations, just (properly) busy-looping on our own CPU would be of interest, to cut down thread-wakeup latency for the first event. From reading @John_McCall's custom-executors draft, it seems this would also be possible if that comes into play?
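For illustration, the kind of drain loop I'd want to be able to write, purely as a sketch: busy-poll on our own (isolated) core first, and only park the thread after a spin budget is exhausted. `tryDequeue`, `spinLimit`, and `parkUntilSignaled` are placeholder names, not anything from the draft:

```swift
// Hypothetical wait strategy for a pinned executor's thread: spin, then park.
func drainLoop() {
    while true {
        if let job = tryDequeue() {
            // Run the actor job inline on this pinned thread.
            job.runSynchronously(on: asUnownedSerialExecutor())
            continue
        }
        // Busy-poll instead of immediately blocking, so the first job after
        // an idle period is picked up without paying a thread wakeup.
        var spins = 0
        while spins < spinLimit, tryDequeue() == nil {
            spins += 1  // ideally a CPU pause/yield hint in here
        }
        if tryDequeue() == nil {
            parkUntilSignaled()  // power-friendly fallback once spinning gives up
        }
    }
}
```

(As written this drops the job dequeued inside the spin loop; a real implementation would hand each dequeued job to `runSynchronously` — kept minimal here to show only the spin-then-park shape.)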
Fundamentally, we've seen a handful of issues that are critical to us, and they all seem to be sorted (either built in, or with hooks provided for addressing them):
- Excessive thread count due to blocking
- Context switching (and locking) overhead
- Thread wakeup latency
Overall, very happy with where all of this seems to be going (including structured concurrency, actors, async/await, custom executors, ...), really awesome.