[Concurrency] async/await + actors


(Pierre Habouzit) #41

I would argue that given:

    foo()
    await bar()
    baz()

That foo and baz should run on the same queue (using queue in the GCD sense) but bar should determine which queue it runs on. I say this because:

- foo and baz are running synchronously with respect to each other (though they could be running asynchronously with respect to some other process if all the lines shown are inside an async function).
- bar is running asynchronously relative to foo and baz, potentially on a different queue.

This isn't true: in the code above, foo(), bar(), and baz() all execute serially (synchronously is a weird word to use here, IMO).
Serial code that you write, with or without async/await in the middle, will run serially, independently of where it physically executes.

And for the record I do agree with Chris that by default foo() and baz() should execute on the same context. This is quite tricky if the caller is a pthread though, and the three possibilities I see for this on a manually made thread are:
- we assert at runtime
- await synchronously blocks in that case
- baz() doesn't execute on the thread

I think the 3rd one is a non-starter, and that (1) would be nice but may prove impractical. A (4) would be to require people making manual threads and using async/await to drain pending work themselves from that thread through an event loop of theirs. But the danger of (4) is that if the client doesn't do it, then the failure mode is silent.
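Option (4) can be sketched concretely. The types and names below are invented purely for illustration (nothing like this was part of the proposal): a manually created thread has to drain a queue of resumed continuations itself, and if it never calls drain(), resumptions silently never run, which is exactly the failure mode described above.

```swift
// Illustrative sketch only; ContinuationQueue and its API are hypothetical.
import Foundation

final class ContinuationQueue {
    private let lock = NSCondition()
    private var work: [() -> Void] = []

    // The async runtime would call this to resume code after an await.
    func enqueue(_ job: @escaping () -> Void) {
        lock.lock()
        work.append(job)
        lock.signal()
        lock.unlock()
    }

    // The manual thread must call this in a loop; if it never does,
    // enqueued continuations are silently stranded.
    func drain() {
        lock.lock()
        while work.isEmpty { lock.wait() }
        let jobs = work
        work.removeAll()
        lock.unlock()
        jobs.forEach { $0() }
    }
}

let queue = ContinuationQueue()
let thread = Thread {
    while true { queue.drain() }   // the explicit event loop of option (4)
}
thread.start()
```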

···

On Sep 7, 2017, at 1:04 AM, Howard Lovatt via swift-evolution <swift-evolution@swift.org> wrote:

I say bar is potentially on a different queue because the user of bar, the person who wrote the 3 lines above, cannot be presumed to be the writer of foo, baz, and particularly not bar, and therefore has no detailed knowledge about which queue is appropriate.

Therefore I would suggest either using a Future or expanding async so that you can say:

    func bar() async(qos: .userInitiated) { ... }

You also probably need the ability to specify a timeout and queue type, e.g.:

    func bar() async(type: .serial, qos: .utility, timeout: .seconds(10)) throws { ... }

If a timeout is specified then await would have to throw to enable the timeout, i.e. call would become:

    try await bar()

Defaults could be provided for qos (.default works well), timeout (1 second works well), and type (.concurrent works well).

However a Future does all this already :).
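For reference, the "a Future does all this already" point can be sketched with plain GCD. This is a minimal, illustrative Future (the type and its API are invented here, not taken from any shipping library), showing qos selection at creation and a throwing get with a timeout, mirroring the try await bar() call above:

```swift
import Foundation

final class Future<T> {
    private let semaphore = DispatchSemaphore(value: 0)
    private var value: T?

    // Runs body on a global queue at the requested quality of service.
    init(qos: DispatchQoS.QoSClass = .default, _ body: @escaping () -> T) {
        DispatchQueue.global(qos: qos).async {
            self.value = body()
            self.semaphore.signal()
        }
    }

    enum FutureError: Error { case timedOut }

    // Blocking get with a timeout; throws if the result isn't ready in time.
    func get(timeout: DispatchTime = .now() + 1) throws -> T {
        guard semaphore.wait(timeout: timeout) == .success else {
            throw FutureError.timedOut
        }
        return value!
    }
}

let future = Future(qos: .userInitiated) { 6 * 7 }
let answer = try future.get()   // 42, or throws .timedOut on timeout
```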

  -- Howard.

On 7 September 2017 at 15:13, David Hart via swift-evolution <swift-evolution@swift.org> wrote:

> On 7 Sep 2017, at 07:05, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:
>
>
>> On Sep 5, 2017, at 7:31 PM, Eagle Offshore via swift-evolution <swift-evolution@swift.org> wrote:
>>
>> OK, I've been watching this thing for a couple weeks.
>>
>> I've done a lot of GCD network code. Invariably my completion method starts with
>>
>> dispatch_async(queue_want_to_handle_this_on,....)
>>
>> Replying on the same queue would be nice I guess, only often all I need to do is update the UI in the completion code.
>>
>> OTOH, I have situations where the reply is complicated and I need to persist a lot of data, then update the UI.
>>
>> So honestly, any assumption you make about how this is supposed to work is going to be wrong about half the time unless....
>>
>> you let me specify the reply queue directly.
>>
>> That is the only thing that works all the time. Even then, I'm very apt to make the choice to do some of the work off the main thread and then queue up the minimal amount of work onto the main thread.
>
> I (think that I) understand what you’re saying here, but I don’t think that we’re talking about the same thing.
>
> You seem to be making an argument about what is most *useful* (being able to vector a completion handler to a specific queue), but I’m personally concerned about what is most *surprising* and therefore unnatural and prone to introduce bugs and misunderstandings by people who haven’t written the code. To make this more concrete, shift from the “person who writes the code” to the “person who has to maintain someone else's code”:
>
> Imagine you are maintaining a large codebase, and you come across this (intentionally abstract) code:
>
> foo()
> await bar()
> baz()
>
> Regardless of what is the most useful, I’d argue that it is only natural to expect baz() to run on the same queue/thread/execution-context as foo and bar. If, in the same model, you see something like:
>
> foo()
> await bar()
> anotherQueue.async {
>     baz()
> }

Couldn’t it end up being:

foo()
await bar()
await anotherQueue.async()
// on another queue

> Then it is super clear what is going on: an intentional queue hop from whatever foo/bar are run on to anotherQueue.
>
> I interpret your email as arguing for something like this:
>
> foo()
> await(anotherQueue) bar()
> baz()
>
> I’m not sure if that’s exactly the syntax you’re arguing for, but anything like this presents a number of challenges:
>
> 1) it is “just sugar” over the basic model, so we could argue to add it at any time (and would argue strongly to defer it out of this round of discussions).
>
> 2) We’d have to find a syntax that implies that baz() runs on anotherQueue, but bar() runs on the existing queue. The syntax I sketched above does NOT provide this indication.
>
> -Chris
>
>
> _______________________________________________
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution



#42

What about using do {} to hop back to the initial queue?

Say I have a download() async throws method which returns on some background queue (like URLSession does today). Then I could do additional processing on that background queue directly, without first jumping back to the original queue and then from there onto yet another background queue.

func downloadAndProcess() async throws -> Thing {
  // on caller's queue
  do {
    let data = try await download()
    // on background queue
    return try await process(data)
    // returns on caller's queue
  } catch {
    // on caller's queue again
    throw error
  }
}

To allow do to work its magic when the async methods don’t throw, the catch clause would then have to be optional.

Cheers Marc


(James Froggatt) #43

Wouldn’t it be reasonable to want to catch errors without switching queue? To me, this would seem unexpected.

On the plus side, do {} can already be standalone, in which case it just creates a scope, so it’s not unreasonable to extend the syntax to ‘do’ something more. Perhaps it could be parameterised, to explicitly work on a queue?

I vaguely remember the idea of a parameterised async keyword coming up. This could perhaps fit in with a more general async do syntax, as something like async(.main) do.
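A hypothetical shape for that, purely illustrative (this syntax does not exist and would not compile):

```swift
func refresh() async {
    let data = await download()    // resumes wherever download() decides

    async(.main) do {
        updateUI(with: data)       // body pinned to the main queue
    }

    // execution continues on the original context here
}
```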


#44

If the catch body didn’t run on the original queue, how could we otherwise switch back to it if needed for handling the error? OK, you could rethrow and catch it in another do outside, but that’s not straightforward IMHO.


(James Froggatt) #45

Sorry, my post may have been unclear. When I said ‘Wouldn’t it be reasonable to want to catch errors without switching queue?’, I meant that in the case of using a do block to catch errors, it may be desirable to have the do's contents run on the same queue. For example, when writing a function intended to run on a specific worker queue.


(Max Desiatov) #46

Is there any chance this proposal gets reviewed any time soon for Swift 5? If not, are there any specific roadblocks?


(Erik Little) #47

Introducing first-class concurrency to Swift is most definitely not going to happen in Swift 5, nor in Swift 6. It’ll most likely be a few years before we see this.


(Max Desiatov) #48

Thanks for the info. Did I miss any specific statement from the dev team on this? What’s the main motivation for delaying this after Swift 6?


(Chris Lattner) #49

As far as I know, there is about a zero percent chance of any of this making progress in Swift 5. There is no blocker, but there is also no one working on it. Swift 6 isn’t scoped though, so I’m optimistic that at least something will make progress in this space next year.

-Chris


(Max Desiatov) #50

If someone were able to start working on it, what would be the requirements? Anything other than rebasing the initial PR, resolving conflicts and refining the proposal based on feedback so that it’s ready for review?


(Erik Little) #51

Okay, I’m being a little pessimistic on Swift 6. It’s not out of the realm of possibility that some work on this might make it into Swift 6. But a fully mature Actor model is pretty ambitious.


(Max Desiatov) #52

I’m mainly interested in concurrency, coroutines and async/await syntax. Not having this available is a huge problem in server-side code, especially given that most of the major languages have supported it in some form for several years already (Python, C#, TypeScript, Go etc).


(Erik Little) #53

IIRC the initial PR was just getting in the syntax for this, which is a good starting point, but a lot of careful thought will be needed before all the internal stuff is figured out. This will most likely involve multiple proposals, discussions on the compiler infrastructure/stdlib forums, etc. So much so that I think it should probably be a focused goal of a major release.

I agree! GCD is great for some small things. But trying to use it in large code bases on the server side is cumbersome, doesn’t scale well, and does not reflect the current state-of-the-art.


#54

Probably just use a Promise lib in the meantime - there are several around - even this new one from Google: https://github.com/google/promises/blob/master/README.md

Cheers


(Xiaodi Wu) #55

Yup, likely this will require several releases. Probably will have to mesh well with additions to the language about ownership too, and the user-facing parts of that haven’t been nailed down yet. I think optimistically we’re looking at Swift 7 for any initial implementation results on concurrency, probably closer to Swift 8/9.


(Georgios Moschovitis) #56

think optimistically we’re looking at Swift 7 for any initial implementation results on concurrency, probably closer to Swift 8/9.

Wow, that’s considerably more pessimistic than I hoped.


(Xiaodi Wu) #57

I’m still contributing implementations and bugfixes to bits and pieces that were part of proposals approved for Swift 3. Meanwhile, it’s a stretch for any part of this major tentpole feature even to be proposed for Swift 6. I’d think that a realistic estimate would be for a new concurrency model to be in place by Swift 10.


(Georgios Moschovitis) #58

I’d think that a realistic estimate would be for a new concurrency model to be in place by Swift 10.

Swift is really the new C++ 😉

Seriously now, if a prolonged gestation period leads to a better design, I am all for it. Still, I believe (hope) that Swift 10 is overly pessimistic.


(Hooman Mehr) #59

I am surprised that Apple is not committing more resources to advancing Swift. Considering the amount of work needed, the Swift development team needs to be much larger than it is right now. I am talking about full-time developers, not volunteer contributors.


(Goffredo Marocchi) #60

On one side it is a setback, but on the other side the Swift developers are probably seeing how upending the tea table on the fundamental way people tackle concurrency is better than rushing solutions out, as is happening with Node and JS in general.

My hope is that more and more people are taking a look at what Rust is doing, have stopped considering it a joke language (which it is not), and see how thinking about ownership and multi-threaded safety by default allows it to tackle the concurrency problem in a novel way and still get good performance out of it.

It is a goal to want client-side and server-side code written in the same language (see JS in the browser and Node.js backends), but it is not the only approach.