Question about async await

My earlier response to this thread both linked to a previous thread about this and explained how C# does it. It will require some library support, but it can be done, and IMO should be done. As I’ve stressed repeatedly, async/await without this behavior will be very difficult to use correctly. I really hope we don’t settle for that.

···

On Sep 25, 2017, at 3:04 PM, Jean-Daniel via swift-evolution <swift-evolution@swift.org> wrote:

On Sep 25, 2017, at 21:42, John McCall via swift-evolution <swift-evolution@swift.org> wrote:

This doesn't have to be the case, actually. The intrinsics as Chris described them wouldn't be sufficient, but you could require a "current queue" to be provided when kicking off an async function from scratch, as well as any other "async-local" context information you wanted (e.g. QoS and the other things that Dispatch tracks with attributes/flags that are generally supposed to persist across an entire async operation).

My response was about the ‘implicitly’ part. I hope we will get a rich API that lets us specify the return queue, QoS, and more, but how do you plan to fulfill the « current queue » requirement implicitly?

Based on that thread and others, it appears that Chris and Joe are both alert to the danger that comes with queue-hopping inside a single scope, so hopefully we won't end up in that world.

···

On Mon, Sep 25, 2017 at 3:13 PM, Adam Kemp via swift-evolution <swift-evolution@swift.org> wrote:

--
Functional Programmer, iOS Developer, Surfs Poorly
http://twitter.com/n8gray

In C#, the model is far simpler, as there is no concept of a single dispatch queue that can execute work on any thread. You can easily use TLS to store a default context. Each UI thread can have a context that dispatches completions on its message queue, but AFAIK there is no DispatchQueue local storage yet. Even something as simple as getting the current queue is not reliable (see the dispatch_get_current_queue man page for details).

That’s why I’m saying it will be difficult to define a reasonable default context that can be used implicitly.

···

On Sep 26, 2017, at 00:13, Adam Kemp <adam_kemp@apple.com> wrote:

In C#, the model is far simpler, as there is no concept of a single dispatch queue that can execute work on any thread. You can easily use TLS to store a default context. Each UI thread can have a context that dispatches completions on its message queue, but AFAIK,

there is no DispatchQueue local storage yet.

There is; see the dispatch_queue_set_specific()/dispatch_get_specific() family.

Even something as simple as getting the current queue is not reliable (see dispatch_get_current_queue man page for details).

This is a sharp construct for clients, but not for the runtime/compiler, which can be taught not to fall into the traps of this API.

Just to debunk myths, dispatch_get_current_queue() is VERY WELL defined, but has two major issues: nesting & refcounting.

Nesting

Nesting refers to the fact that when you call code that takes a queue and a callback, you may observe *another* queue:

run_something_and_call_me_back(arg1, arg2, on_queue, ^{
    assert(dispatch_get_current_queue() == on_queue); // may crash
    ... my stuff ...
});

The reason is that run_something_and_call_me_back() may create a private queue that targets `on_queue` and run the callback on it; dispatch_get_current_queue() then returns that private queue, which is both unexpected and exposes internals of run_something_and_call_me_back()'s implementation, which is all wrong.

A corollary is that people attempting to implement recursive locking (which is a bad idea in general anyway) with dispatch_get_current_queue() will fail miserably.

Refcounting

Because dispatch has a notion of internal refcounts, in the ARC world this will crash most of the time:

dispatch_async(dispatch_queue_create_with_target("foo", NULL, NULL), ^{
    __strong dispatch_queue_t cq = dispatch_get_current_queue(); // will usually crash with a resurrection error
});

These two edge cases are why we deprecated this interface for humans.

1) A compiler, though, is not affected by the first issue, because the context it would capture would not be programmatically accessible to clients.
2) The Swift runtime can know to take "internal" refcounts when capturing this hidden pointer, so it is not affected by the second problem either.

tl;dr: what is badly defined is allowing clients to get a pointer to the current queue with a real +1, but that is WAY stronger than what the language runtime needs.

That’s why I’m saying it will be difficult to define a reasonable default context that can be used implicitly.

This is just not true. This is both easy and reasonable.

-Pierre

···


Pierre responded to the rest of your comments, but I wanted to briefly touch on this:

···

On Sep 26, 2017, at 11:22 AM, Jean-Daniel via swift-evolution <swift-evolution@swift.org> wrote:

In C#, the model is far simpler, as there is no concept of a single dispatch queue that can execute work on any thread.

I don’t think this is true in general. The purpose of the SynchronizationContext abstraction is that it allows for different kinds of threading models, and I think the GCD model could work as well. It may be true that in a typical C# application using, say, WPF you don’t have that situation: in most C# frameworks you basically have the UI thread’s context and the generic “thread pool” context. But the way SynchronizationContext works should allow you to create a GCD-like system. When you enter a queue you would push a SynchronizationContext for that queue, and when you exit the queue you would pop it (restore the previous context). The SynchronizationContext for the queue would implement the Post and Send methods using dispatch_async and dispatch_sync, respectively.

Again, it may be true that a typical C# application doesn’t need this, but I don’t think there’s anything blocking a GCD-like implementation on C# using their system. It’s pretty flexible. I believe a similar system could work for Swift.

I’m glad to be wrong about that point ;-)
One issue I still see is what the default should be when running on a bare pthread, outside of any queue context. Or is there a queue associated with every thread?

···

On Sep 26, 2017, at 1:57 PM, Jean-Daniel <mailing@xenonium.com> wrote:

I’m glad to be wrong about that point ;-)
One issue I still see is what the default should be when running on a bare pthread, outside of any queue context. Or is there a queue associated with every thread?

My thinking is that we need to have a notion of "current place to run swift bullshi^Wclosures".
It could be the current queue if there's one, or the current CFRunLoop (if there's one already made)
It could be the current "libfoobar event loop", ...

And if there's neither, then I think it would be completely appropriate to crash at runtime, because it means you made a thread without the Swift runtime's knowledge, without any setup to allow running Swift actors/asyncs/... on it, and then called into Swift code that needs them.

In less joking tones, what I was thinking about is that the runtime should have a way to get to the "current actor/async/... context" for a thread, which is an object that implements a given protocol with the necessary methods to receive asyncs/actors/...

I hope I'm making sense.

-Pierre

···


Awesome. That sounds like what I’ve been describing. :)

···

On Sep 26, 2017, at 10:17 PM, Pierre Habouzit <phabouzit@apple.com> wrote:

In less joking tones, what I was thinking about is that the runtime should have a way to get to the "current actor/async/... context" for a thread, which is an object that implements a given protocol with the necessary methods to receive asyncs/actors/…