Loom is indeed quite a hot topic in JVM land, and has been for quite some time... It's shaping up quite well, but the "weight" of (virtual) threads remains a bit unclear, as John alludes to.
There's an interesting circle the JVM has walked here. Way back when, it had green threads (eons ago... in 1.1), however those were M:1 mapped, so all java.lang.Thread instances would share the same underlying OS thread. Obviously, this was limiting for real parallelism (and multi-core CPUs), so Java switched to mapping its Thread 1:1 to OS threads. That has the nice benefit of mapping "directly" onto calls into native code etc. It's quite heavy though: 500KB to 1MB per thread used to be the rough estimate, though AFAIR things have improved in JDK 11, which I've not used in anger. In any case, Loom's fibers/virtual threads are definitely going to be "light", at least compared to present-day j.l.Thread.
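To make the 1:1 weight concrete, here's a tiny sketch (class and method names are mine, purely illustrative) of a plain platform thread; each one is backed by an OS thread, and the per-thread stack, tunable via the Thread constructor or -Xss, is where most of that weight lives:

```java
public class PlatformThreadDemo {
    // Runs a task on a 1:1 OS-backed thread with an explicitly
    // requested stack size; returns the message the task produced.
    static String runOnPlatformThread() throws InterruptedException {
        StringBuilder result = new StringBuilder();
        // The 4th constructor arg is the requested stack size in bytes;
        // the default (often ~512KB-1MB) is what makes j.l.Thread "heavy".
        Thread t = new Thread(null,
                () -> result.append("ran on ").append(Thread.currentThread().getName()),
                "os-backed-worker", 256 * 1024);
        t.start();
        t.join(); // join() establishes happens-before, so reading result is safe
        return result.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnPlatformThread());
    }
}
```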
Needless to say, relying on today's Thread directly is too heavy for reactive frameworks, so runtimes like Netty, Akka, Reactor, and Reactive Streams implementations (anything, really) end up scheduling many fine-grained tasks onto those heavy threads in user land, i.e. scheduling M:N (M entities onto N real threads). All reactive or async libraries effectively do this today.
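A minimal sketch of that user-land M:N shape (names are mine, not from any particular framework): many small tasks multiplexed onto a small fixed pool of heavy platform threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class UserLandScheduling {
    // Schedules m fine-grained tasks onto n heavy platform threads
    // (the M:N mapping reactive runtimes implement themselves today)
    // and returns how many tasks completed.
    static int schedule(int m, int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(n); // N real threads
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < m; i++) {          // M lightweight tasks
            pool.submit(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(schedule(10_000, 4)); // 10k tasks, 4 threads
    }
}
```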
Loom is interesting since it flips the mapping around again: what libraries used to have to do because Thread is too heavy, Loom does itself (basically it will do exactly the same thing in terms of scheduling as those reactive libs do today), and maps M "virtual" threads onto N real threads. So... it's going back to green threading, but with M:N (and not M:1 like it historically had).
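With Loom that mapping moves into the JVM itself. A sketch against the JDK 21 API (class and method names mine): one virtual thread per task, with the runtime multiplexing them onto a small set of carrier threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsSketch {
    // Spawns m virtual threads (requires JDK 21+); the JVM schedules
    // them M:N onto carrier platform threads, much like reactive libs
    // do in user land today. Returns the number of completed tasks.
    static int runOnVirtualThreads(int m) {
        AtomicInteger completed = new AtomicInteger();
        // try-with-resources: ExecutorService.close() waits for all tasks
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < m; i++) {
                exec.submit(completed::incrementAndGet);
            }
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runOnVirtualThreads(10_000));
    }
}
```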
I remain a bit iffy about the "weight" question for Loom... perhaps they'll figure it out somehow with VM trickery. The nice thing about stream or actor runtimes on the JVM is that they simply "give up the thread", and when they're scheduled again they start afresh; there's no need to keep any stack around for those models to work well. So I wonder how well stackful (virtual) threads will lend themselves to such lighter execution models. Yet another library scheduler on top of virtual threads sounds a bit silly -- two layers of user-land scheduling seems a bit weird -- yet keeping it as a plain "lib concept : virtual thread" direct mapping... it will be interesting to see if that really is light enough. (One could argue the shape of such APIs will change dramatically though.)
// Thanks for the paper @Joe_Groff, that's a topic I'd love to learn more about, will dig into it!