This is really great to see, thank you for your hard work (and the hard work of others!) on this roadmap.
I'm not sure if this is answerable or even known, but is there an estimate of the language version the first phase is targeting, and if so, the calendar timeline for that Swift version? Again - understandable if that's not public or answerable :)
Has there been any thought about actors as an interface for heterogeneous computing? The idea of having actor-isolated functions running on actor-isolated data seems like it should scale to modelling a GPU kernel running on data that must be copied to and from the GPU's own memory.
I hope people don't mind too much, but I've moved (and will continue to move) some discussions that seemed topic-specific over to the dedicated thread for those topics. Please feel free to post general questions and feedback here, and if we think they should be moved, we'll move 'em.
I think it would be very natural to express that as an async API. Whether it's a good fit for an actor is an interesting question. To me, something can logically be an "actor" — that is, it fills a similar role to a language-supported actor — without necessarily being a good fit for the language-supported actor features. The difference between a well-encapsulated actor class and an ordinary class with primarily async methods is very small, basically just the semantics of extension methods (i.e. whether they logically execute on the actor's executor or not).
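To make that concrete, here's a rough sketch (using the actor syntax proposed in the roadmap; `Counter` and `AsyncCounter` are hypothetical names for illustration) of how similar the two shapes look from the caller's side — the actor gets serialization from its executor, while the plain class has to arrange it manually:

```swift
import Dispatch

// A language-supported actor: the compiler routes calls through
// the actor's executor, so access to `value` is serialized for us.
actor Counter {
    private var value = 0
    func increment() -> Int {
        value += 1
        return value
    }
}

// An ordinary class with a primarily async API: the same logical
// "actor" role, but serialization is done by hand with a queue.
final class AsyncCounter {
    private var value = 0
    private let queue = DispatchQueue(label: "counter")

    func increment() async -> Int {
        // The private queue stands in for the actor's executor.
        await withCheckedContinuation { continuation in
            queue.async {
                self.value += 1
                continuation.resume(returning: self.value)
            }
        }
    }
}
```

From outside, both are called the same way (`let n = await counter.increment()`); the visible difference only shows up in things like how extension methods execute, as noted above.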
The main problem I see with this is that it seems to encourage anti-patterns that have the potential to do much more harm than good. This document already seems to recommend anti-patterns like using queues instead of locks to protect shared state, and using private queues to protect access to properties (which is what actors are). The maintainer of libdispatch at Apple has repeatedly explained that developers should not do this. I have compiled a page that sums all of this up, including many references at the bottom of the document:
Multithreading is complex; developers need to think very hard about threads and how to use them wisely in their programs. The libdispatch maintainer explained that going wide by default is a mistake. Developers need to start serially and only apply concurrency when needed, and with great care. My worry is that developers will misuse this just as they are misusing libdispatch: they will make actors out of everything, have far too many queues, dispatch execution blocks that are far too small, and get terrible performance out of it.
Will the inner implementation use a thread pool? If yes, how many threads will it start with?
If not, will it spawn a new thread each time?
What about thread affinity, and resources that must be guaranteed to run on the same thread they ran on before?
The actor pattern is cool and all, but in my experience at least it is not the best pattern for every multithreading scenario. Forcing this directly into the language will of course make it easy for most people, but as I've said before, if actors were built from the current language constructs it would be even better.
Edit: to make it clear: just as `?` is sugar for `Optional<>`, what about an `Async<>` type? The sugar could be used of course, but `Async<>` could then be optimized for different cases. IDK, just an idea.
Right. I think libdispatch has done a fair amount of harm to the platform because it made multithreading feel easy; developers started using it without understanding what was going on under the hood, and terrible performance ensued. This is the same thing, but on steroids.
Having personally gone down the libdispatch road since it was introduced in Snow Leopard, having written a lot of heavily async code, and having eventually learned the hard way that it could not work well without careful thinking about threads, I am quite afraid of the long-term consequences of something like this.
My first advice to developers about multithreading today is: don't do it unless you know you need to introduce concurrency, and if you do, apply it with care. Most programs don't need a great deal of concurrency, and most are not good candidates for it (because they lack long enough non-contended tasks to execute concurrently). This proposal will make everyone use concurrency and multithreading with no understanding of the implications.