First off, I would like to congratulate everyone involved in bringing all the concurrency features to the table - it looks like an awesome first step. I think the model where we trade context switches and potential thread explosions (and blocking) for heap allocator pressure is great - it will be very interesting to see how it performs.
I did watch the behind-the-scenes talk and have tried to keep up with the discussions in various threads, but I still have a few questions:
- It is still unclear to me how the default executor keeps its pool of threads around, and how threads are created and woken up to start working. Is this fundamentally just a pool of pthreads which are woken up with the usual mechanisms, or something else? I understand the non-blocking (possibly) out-of-order continuation execution, but how are things bootstrapped at a lower level? Where can I find out more about that (source pointers are fine :-) )? I am curious because there was a comment about the performance characteristics being unknown on e.g. Linux, and I'd like to understand more and see if there is some work that needs to be done there. (It is possible that this would require support for custom executors.)
- The same goes for thread shutdown when no more work is around (so, basically: how are threads managed in the default executor?).
I care quite a bit about the initial latency of getting more threads running, not only efficient execution under load, which is why I want to understand those characteristics better. (I believe the new model has the potential to work very nicely under load.)
Overall, thread-per-core and non-blocking, non-context-switching by default is just awesome.