[Concurrency] async/await + actors

async/await is a primitive you can build these high-level features on top of.

If you have async/await, you can temporarily handle timeout, cancel, and thread control manually until we have time to design features to address those. You can also ignore our features if you don't like them and use your own designs instead. Or you can substitute features more appropriate to your platform—imagine if Swift were a Linux language and you were writing the Mac port, and pthreads were so deeply baked into futures that our entire concurrency system couldn't be used with GCD.

You cannot design the entire world at once, or you'll end up with a huge, complicated, inflexible mess.
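To make that concrete, here is a rough sketch of a library-level future built on the beginAsync/suspendAsync primitives from the proposal. Everything other than those two functions is hypothetical, and timeout or cancellation hooks would be layered on in the same way:

    import Foundation

    final class Future<T> {
        private let lock = NSLock()
        private var result: T?
        private var waiters: [(T) -> Void] = []

        // Kicks off the async work immediately on creation.
        init(_ work: @escaping () async -> T) {
            beginAsync {
                let value = await work()
                self.complete(with: value)
            }
        }

        private func complete(with value: T) {
            lock.lock()
            result = value
            let pending = waiters
            waiters = []
            lock.unlock()
            pending.forEach { $0(value) }
        }

        // Suspends the awaiting coroutine until the value is available.
        func get() async -> T {
            return await suspendAsync { continuation in
                self.lock.lock()
                if let value = self.result {
                    self.lock.unlock()
                    continuation(value)
                } else {
                    self.waiters.append(continuation)
                    self.lock.unlock()
                }
            }
        }
    }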

···

On Aug 25, 2017, at 10:12 PM, Howard Lovatt via swift-evolution <swift-evolution@swift.org> wrote:

I think we would be better off with a future type rather than async/await, since it can offer timeout, cancellation, and control over which thread execution occurs on.

--
Brent Royal-Gordon
Architechies

Then isn’t the example functionally equivalent to:

    func doit() {
        DispatchQueue.global().async {
            let dataResource = loadWebResource("dataprofile.txt")
            let imageResource = loadWebResource("imagedata.dat")
            let imageTmp = decodeImage(dataResource, imageResource)
            let imageResult = dewarpAndCleanupImage(imageTmp)
            DispatchQueue.main.async {
                self.imageResult = imageResult
            }
        }
    }

if all of the API were synchronous? Why wouldn’t we just exhort people to write synchronous API code and continue using libdispatch? What am I missing?

-Kenny

···

On Sep 8, 2017, at 2:33 PM, David Hart <david@hartbit.com> wrote:

On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution <swift-evolution@swift.org> wrote:

Hi All.

A point of clarification in this example:

func loadWebResource(_ path: String) async -> Resource
func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
func dewarpAndCleanupImage(_ i : Image) async -> Image

func processImageData1() async -> Image {
    let dataResource = await loadWebResource("dataprofile.txt")
    let imageResource = await loadWebResource("imagedata.dat")
    let imageTmp = await decodeImage(dataResource, imageResource)
    let imageResult = await dewarpAndCleanupImage(imageTmp)
    return imageResult
}

Do these:

await loadWebResource("dataprofile.txt")
await loadWebResource("imagedata.dat")

happen in parallel?

They don’t happen in parallel.

If so, how can I make the second one wait on the first one? If not, how can I make them go in parallel?

Thanks!

-Kenny


First off, I’m still catching up with all those (very welcome :-) threads about concurrency, so bear with me if I’m commenting on topics that have been settled in the meantime.

Hi Chris & swift-evo,

(Given the already lengthy thread I tried to separate my points and keep them reasonably short to allow people to skip points they don't care about. I'm very happy to expand on the points.)

Thanks very much for writing up your thoughts/proposal, I've been waiting to see the official kick-off for the concurrency discussions :).

I) Let's start with the async/await proposal. Personally I think this is the right direction for Swift given the reality that we need to interface with incredibly large existing code-bases and APIs. Further thoughts:

- :question: GCD: dispatching onto calling queue, how?
GCD doesn't actually allow you to dispatch back to the original queue, so I find it unclear how you'd achieve that. IMHO the main reason is that conceptually at a given time you can be on more than one queue (nested q.sync{}/target queues). So which is 'the' current queue?
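A small illustration of the nesting in question (sketch only):

    import Dispatch

    let outer = DispatchQueue(label: "com.example.outer")
    let inner = DispatchQueue(label: "com.example.inner")

    outer.sync {
        inner.sync {
            // Conceptually this code is "on" outer, inner, and whatever queue or
            // thread called outer.sync, all at once, so there is no single
            // "current queue" to capture and later dispatch back to.
            // (dispatch_get_current_queue() was deprecated for this reason.)
            print("which queue am I on?")
        }
    }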

- ⊥ first class coroutine model => async & throws should be orthogonal
given that the proposal pitches to be the beginning of a first class coroutine model (which I think is great), I think `async` and `throws` do need to be two orthogonal concepts. I wouldn't want automatically throwing generators in the future ;). Also I think we shouldn't throw a spanner in the works of people who do like to use Result<E, T> types to hold the errors or values. I'd be fine with async(nothrow) or something though.

I, too, would like to keep async & throws orthogonal for these reasons. Even in the case of async meaning asynchronous or parallel execution I would expect that throwing is not implied as long as we are not using distributed execution or making things cancellable. As long as I am on the same machine just executing something in parallel on another CPU (but within the same runtime) does not make it failable, does it?
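For concreteness, keeping the two axes independent would give declarations like these (a sketch only; Stats, FetchError, and a library Result<E, T> are placeholder types):

    // Asynchronous but non-throwing: any failure travels in the returned Result<E, T>.
    func fetchStats(for host: String) async -> Result<FetchError, Stats>

    // Synchronous but throwing.
    func parseStats(_ bytes: [UInt8]) throws -> Stats

    // Both effects, each marked explicitly.
    func loadStats(for host: String) async throws -> Stats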

- what do we do with functions that invoke their closure multiple times? Like DispatchIO.read/write.
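To make the DispatchIO case concrete: the handler below may run many times (done == false) before its final invocation (done == true), so it doesn't map onto a single await, which resumes exactly once; some stream/generator-style abstraction would be needed on top. Sketch only:

    import Foundation

    let ioQueue = DispatchQueue(label: "com.example.io")
    let fd = open("/etc/hosts", O_RDONLY)
    let channel = DispatchIO(type: .stream, fileDescriptor: fd, queue: ioQueue) { _ in close(fd) }

    var received = DispatchData.empty
    channel.read(offset: 0, length: Int.max, queue: ioQueue) { done, data, errorCode in
        if let data = data {
            received.append(data)        // may be called repeatedly with partial chunks
        }
        if done {
            print("finished with \(received.count) bytes, error code \(errorCode)")
        }
    }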

II) the actor model part

- :dancing_women: Erlang runtime and the actor model go hand in hand
I really like the Erlang actor model but I don't think it can be separated from Erlang's runtime. The runtime offers green threads (which allow an actor to block without blocking an OS thread) and prevents you from sharing memory (which makes it possible to kill any actor at any point and still have a reliable system). I don't see these two things happening in Swift. To a lesser extent these issues are also present in Scala/Akka; they mitigate some of the problems by having Akka Streams. Akka Streams are important to establish back-pressure if you have faster producers than consumers. Note that we often can't control the producer; they might be on the other side of a network connection. So it's often very important to not read the available bytes, to communicate to the kernel that we can't consume bytes that fast. If we're networking with TCP, the kernel can then use TCP flow control to signal to the other side that they had better slow down (or else packets will be dropped and then need to be resent later).

- :boom: regarding fatal failure in actors
in the server world we need to be able to accept hundreds of thousands (millions) of connections at the same time. There are quite a few cases where these connections are long-lived and paused for most of the time. So I don't really see the value in introducing a 'reliable' actor model where the system stops accepting new connections if one actor fatalError'd and then 'just' finishes up serving the existing connections. So I believe there are only two possible routes: 1) treat it like C/C++ and make sure your code doesn't fatalError or the whole process blows up (what we have right now) 2) treat it like Erlang and let things die. IMHO Erlang wouldn't be successful if actors couldn't just die or couldn't be linked. Linking propagates failures to all linked processes. A common thing to do is to 1) spawn a new actor 2) link yourself to the newly spawned actor 3) send a message to that actor and at some point eventually await a reply message sent by the actor spawned earlier. As you mentioned in the writeup it is a problem if the actor doesn't actually reply, which is why in Erlang you'd link them. The effect is that if the actor we spawned dies, any linked actor will die too, which will then propagate the error to an appropriate place. That allows the programmer to control where an error should propagate to. I realise I'm doing a poor job in explaining what is best explained by the documentation around Erlang: supervision [1] and the relationship between what Erlang calls a process (read 'actor') and errors [2].

I also think that being able to link processes (i.e. actors) and get reliably notified of their death is a crucial aspect of Erlang’s success! This is something that should definitely be part of Swift’s actor model (and extended to distributed actors when they are tackled in the future).

-Thorsten

···

On 18 Aug 2017, at 17:13, Johannes Weiß via swift-evolution <swift-evolution@swift.org> wrote:

- :hotsprings: OS threads and actors
as you mention, the actor model only really works if you can spawn lots of them, so it's very important to be able to run hundreds of thousands of them on a number of OS threads pretty much equal to your number of cores. That's only straightforward if there are no (OS thread) blocking operations or at least it's really obvious what blocks and what doesn't. And that's not the case in Swift today and with GCD you really feel that pain. GCD does spawn threads for you and has a rather arbitrary limit of 64 OS threads (by default on macOS). That is too many for a very scalable server application but too few to just tolerate blocking APIs.

[1]: http://erlang.org/documentation/doc-4.9.1/doc/design_principles/sup_princ.html
[2]: Errors and Processes | Learn You Some Erlang for Great Good!

-- Johannes

On 17 Aug 2017, at 11:25 pm, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

On Aug 17, 2017, at 3:24 PM, Chris Lattner <clattner@nondot.org> wrote:

Hi all,

As Ted mentioned in his email, it is great to finally kick off discussions for what concurrency should look like in Swift. This will surely be an epic multi-year journey, but it is more important to find the right design than to get there fast.

I’ve been advocating for a specific model involving async/await and actors for many years now. Handwaving only goes so far, so some folks asked me to write them down to make the discussion more helpful and concrete. While I hope these ideas help push the discussion on concurrency forward, this isn’t in any way meant to cut off other directions: in fact I hope it helps give proponents of other designs a model to follow: a discussion giving extensive rationale, combined with the long term story arc to show that the features fit together.

Anyway, here is the document, I hope it is useful, and I’d love to hear comments and suggestions for improvement:
Swift Concurrency Manifesto · GitHub

Oh, also, one relatively short term piece of this model is a proposal for adding an async/await model to Swift (in the form of general coroutine support). Joe Groff and I wrote up a proposal for this, here:
Concrete proposal for async semantics in Swift · GitHub

and I have a PR with the first half of the implementation here:
Async await prototype by lattner · Pull Request #11501 · apple/swift · GitHub

The piece that is missing is code generation support.

-Chris


This is fantastic! Thanks for taking the time to write down your thoughts. It’s exciting to get a glimpse at the (possible) road ahead.

Happy to.

In the manifesto you talk about restrictions on passing functions across an actor message. You didn’t discuss pure functions, presumably because Swift doesn’t have them yet. I imagine that if (hopefully when) Swift has compiler support for verifying pure functions these would also be safe to pass across an actor message. Is that correct?

Correct. The proposal is specifically/intentionally designed to be light on type system additions, but there are many that could make it better in various ways. The logic for this approach is that I expect *a lot* of people will be writing mostly straight-forward concurrent code, and that goal is harmed by presenting significant type system hurdles for them to jump over, because that implies a higher learning curve.

This is why the proposal doesn’t focus on a provably memory safe system: If someone slaps “ValueSemantical” on a type that doesn’t obey, they will break the invariants of the system. There are lots of ways to solve that problem (e.g. the capabilities system in Pony) but it introduces a steep learning curve.

I haven’t thought a lot about practically getting pure functions into Swift, because it wasn’t clear what problems it would solve (which couldn’t be solved another way). You’re right though that this could be an interesting motivator.

I can provide a concrete example of why this is definitely an important motivator.

My current project uses pure functions, value semantics and declarative effects at the application level and moves as much of the imperative code as possible (including effect handling) into library level code. This is working out really well and I plan to continue with this approach. The library level code needs the ability to schedule user code in the appropriate context. There will likely be some declarative ability for application level code to influence the context, priority, etc, but it is the library that will be moving the functions to the final context. They are obviously not closure literals from the perspective of the library.

Pure functions are obviously important to the semantics of this approach. We can get by without compiler verification, using documentation just as we do for protocol requirements that can't be verified. That said, it would be pretty disappointing to have to avoid using actors in the implementation simply because we can't move pure functions from one actor to another as necessary.

To be clear, I am talking in the context of "the fullness of time". It would be perfectly acceptable to ship actors before pure functions. That said, I do think it's crucial that we eventually have the ability to verify pure functions and move them around at will.

The async / await proposal looks very nice. One minor syntax question - did you consider `async func` instead of placing `async` in the same syntactic location as `throws`? I can see arguments for both locations and am curious if you and Joe had any discussion about this.

I don’t think that Joe and I discussed that option. We discussed several other designs (including a more C#-like model where async functions implicitly return a Future), but he convinced me that it is better to focus language support on the coroutine transformation (leaving futures and other APIs to the library). Once you take that approach, you pretty quickly want async to work the same way as throws (including marking etc.). Once it works the same way, it follows that the syntax should be similar - particularly if async ends up implying throws.

That said, if you have a strong argument for why this is perhaps the wrong choice, lets talk about it!

I don't necessarily have a *strong* argument and don't want to see us get too sidetracked by a bikeshedding exercise at this point either. :)

Briefly, here is my perspective.

In favor of the proposed syntax:
* From the point of view of the caller, it is the result that is async.
* As you noted, if `async` implies `throws` it is the only reasonable choice.
* It provides consistency with `throws` and perhaps establishes a standard syntactic location for additional effect specifiers down the road.

In favor of `async func`:
* `async` is arguably a much more significant modifier to the behavior of the function than `throws`, influencing the entire execution model, not just an alternate return path.
* Given the above, a case can be made for moving it to the front of the declaration to highlight this significant difference and ensure it isn't missed when reading code (especially when reading quickly).
* It aligns with the vocabulary we naturally use: "an async function". (The same can be said for "throwing function" of course, so this really hinges on the importance you place on the first point)
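For reference, the two spellings being compared above (illustrative declarations only; Image is a placeholder type):

    func processImage(_ data: Data) async throws -> Image    // proposed: effect keyword beside `throws`
    async func processImage(_ data: Data) throws -> Image    // alternative: modifier up front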

···

Sent from my iPad

On Aug 17, 2017, at 11:53 PM, Chris Lattner <clattner@nondot.org> wrote:

On Aug 17, 2017, at 7:39 PM, Matthew Johnson <matthew@anandabits.com> wrote:

-Chris

Thanks for the quick response!

For instance, say you’re handling a button click, and you need to do a network request and then update the UI. In C# (using Xamarin.iOS as an example) you might write some code like this:

private async void HandleButtonClick(object sender, EventArgs e) {
    var results = await GetStuffFromNetwork();
    UpdateUI(results);
}

This event handler is called on the UI thread, and the UpdateUI call must be done on the UI thread. The way async/await works in C# (by default) is that when your continuation is called it will be on the same synchronization context you started with. That means if you started on the UI thread you will resume on the UI thread. If you started on some thread pool then you will resume on that same thread pool.

I completely agree, I would love to see this because it is the most easy to reason about, and is implied by the syntax. I consider this to be a follow-on to the basic async/await proposal - part of the Objective-C importer work, as described here:
Concrete proposal for async semantics in Swift · GitHub

Maybe I’m still missing something, but how does this help when you are interacting only with Swift code? If I were to write an asynchronous method in Swift then how could I do the same thing that you propose that the Objective-C importer do? That is, how do I write my function such that it calls back on the same queue?

In my mind, if that requires any extra effort then it is already more error prone than what C# does.

Another difference between the C# implementation and this proposal is the lack of futures. While I think it’s fair to be cautious about tying this proposal to any specific futures implementation or design, I feel like the value of tying it to some concept of futures was somewhat overlooked. For instance, in C# you could write a method with this signature:

...

The benefit of connecting the async/await feature to the concept of futures is that you can mix and match this code freely. The current proposal doesn’t seem to allow this.

The current proposal provides an underlying mechanism that you can build futures on, and gives an example. As shown, the experience using that futures API would work quite naturally and fit into Swift IMO.

I feel like this is trading conceptual complexity in order to gain compiler simplicity. What I mean by that is that the feature feels harder to understand, and the benefit seems to be that this feature can be used more generally for other things. I’m not sure that’s a good tradeoff.

The other approach, which is to build a specific async/await feature using compiler transformations, may be less generic (yield return would have to work differently), but it seems (to me) easier to understand how to use.

For instance, this code (modified from the proposal):

@IBAction func buttonDidClick(sender:AnyObject) {
    doSomethingOnMainThread();
    beginAsync {
        let image = await processImage()
        imageView.image = image
    }
    doSomethingElseOnMainThread();
}

Is less straightforward than this:

@IBAction async func buttonDidClick(sender:AnyObject) {
    doSomethingOnMainThread();
    let imageTask = processImage()
    doSomethingElseOnMainThread();
    imageView.image = await imageTask
}

It’s clearer from reading of the second function what order things will run in. The code from the proposal has a block of code (the callback from beginAsync) that will run in part before the code that follows, but some of it will run after buttonDidClick returns. That’s confusing in the same way that callbacks in general are confusing. The way that async/await makes code clearer is by making it more WYSIWYG: the order you see the code written in is the order in which that code is run. The awaits just mark breaks.

···

On Aug 18, 2017, at 1:15 PM, Chris Lattner <clattner@nondot.org> wrote:
On Aug 18, 2017, at 12:34 PM, Adam Kemp <adam.kemp@apple.com> wrote:

(Also, I notice that a fire-and-forget message can be thought of as an `async` method returning `Never`, even though the computation *does* terminate eventually. I'm not sure how to handle that, though)

Yeah, I think that actor methods deserve a bit of magic:

- Their bodies should be implicitly async, so they can call async methods without blocking their current queue or have to use beginAsync.
- However, if they are void “fire and forget” messages, I think the caller side should *not* have to use await on them, since enqueuing the message will not block.

I think we need to be a little careful here—the mere fact that a message returns `Void` doesn't mean the caller shouldn't wait until it's done to continue. For instance:

   listActor.delete(at: index) // Void, so it doesn't wait
   let count = await listActor.getCount() // But we want the count *after* the deletion!

Perhaps we should depend on the caller to use a future (or a `beginAsync(_:)` call) when they want to fire-and-forget? And yet sometimes a message truly *can't* tell you when it's finished, and we don't want APIs to over-promise on when they tell you they're done. I don't know.

I agree. That is one reason that I think it is important for it to have a (non-defaulted) protocol requirement. Requiring someone to implement some code is a good way to get them to think about the operation… at least a little bit.

I wondered if that might have been your reasoning.

That said, the design does not try to *guarantee* memory safety, so there will always be an opportunity for error.

True, but I think we could mitigate that by giving this protocol a relatively narrow purpose. If we eventually build three different features on `ValueSemantical`, we don't want all three of those features to break when someone abuses the protocol to gain access to actors.

I also worry that the type behavior of a protocol is a bad fit for `ValueSemantical`. Retroactive conformance to `ValueSemantical` is almost certain to be an unprincipled hack; subclasses can very easily lose the value-semantic behavior of their superclasses, but almost certainly can't have value semantics unless their superclasses do. And yet having `ValueSemantical` conformance somehow be uninherited would destroy Liskov substitutability.

Indeed. See NSArray vs NSMutableArray.

OTOH, I tend to think that retroactive conformance is really a good thing, particularly in the transition period where you’d be dealing with other people’s packages who haven’t adopted the model. You may be adopting it for their structs after all.

An alternate approach would be to just say “no, you can’t do that. If you want to work around someone else’s problem, define a wrapper struct and mark it as ValueSemantical”. That design could also work.

Yeah, I think wrapper structs are a workable alternative to retroactive conformance.

What I basically envision (if we want to go with a general `ValueSemantical`-type solution) is that, rather than being a protocol, we would have a `value` keyword that went before the `enum`, `struct`, `class`, or `protocol` keyword. (This is somewhat similar to the proposed `moveonly` keyword.) It would not be valid before `extension`, except perhaps on a conditional extension that only applied when a generic or associated type was `value`, so retroactive conformance wouldn't really be possible. You could also use `value` in a generic constraint list just as you can use `class` there.

A modifier on the type feels like the right approach to specifying value semantics to me. Regardless of which approach we take, it feels like something that needs to be implicit for structs and enums where value semantics is trivially provable by way of transitivity. When that is not the case we could require an explicit `value` or `nonvalue` annotation (specific keywords subject to bikeshedding of course).

···

Sent from my iPad
On Aug 19, 2017, at 12:29 AM, Brent Royal-Gordon via swift-evolution <swift-evolution@swift.org> wrote:

On Aug 18, 2017, at 12:35 PM, Chris Lattner <clattner@nondot.org> wrote:

I'm not totally sure how to reconcile this with mutable subclasses, but I have a very vague sense it might be possible if `value` required some kind of *non*-inheritable initializer, and passing to a `value`-constrained parameter implicitly passed the value through that initializer. That is, if you had:

   // As imported--in reality this would be an NS_SWIFT_VALUE_TYPE annotation on the Objective-C definition
   value class NSArray: NSObject {
       init(_ array: NSArray) { self = array.copy() as! NSArray }
   }

Then Swift would implicitly add some code to an actor method like this:

   actor Foo {
       actor func bar(_ array: NSArray) {
           let array = NSArray(array) // Note that this is always `NSArray`, not the dynamic subclass of it
       }
   }

Since Swift would always rely on the static (compile-time) type to decide which initializer to use, I *think* having `value` be non-inheritable wouldn't be a problem here.

It would be a perfectly valid design approach to implement actors as a framework or design pattern instead of as a first class language feature. You’d end up with something very close to Akka, which provides a lot of the high level abstractions, but doesn’t nudge coders to do the right thing w.r.t. shared mutable state enough (IMO).

I agree that the language should nudge people into doing the right thing; I'm just not sure it shouldn't do the same for *all* async calls. But that's the next topic.

However, this would move the design of the magic protocol forward in the schedule, and might delay the deployment of async/await. If we *want* these restrictions on all async calls, that might be worth it, but if not, that's a problem.

I’m not sure it makes sense either given the extensive completion-handler-based APIs, which take lots of non-value-type parameters.

Ah, interesting. For some reason I wasn't thinking that return values would be restricted like parameters, but I guess a return value is just a parameter to the continuation.

I guess what I'd say to that is:

1. I suspect that most completion handlers *do* take types with value semantics, even if they're classes.

2. I suspect that most completion handlers which *do* take non-value types are transferred, not shared, between the actors. If the ownership system allowed us to express that, we could carve out an exception for it.

3. As I've said, I also think there should be a way to disable the safety rules in other situations. This could be used in exceptional cases.

But are these three escape valves enough to make safe-types-only the default on all `async` calls? Maybe not.

To that end, I think failure handlers are the right approach. I also think we should make it clear that, once a failure handler is called, there is no saving the process—it is *going* to crash eventually. Maybe failure handlers are `Never`-returning functions, or maybe we simply make it clear that we're going to call `fatalError` after the failure handler runs, but in either case, a failure handler is a point of no return.

(In theory, a failure handler could keep things going by pulling some ridiculous shenanigans, like re-entering the runloop. We could try to prevent that with a time limit on failure handlers, but that seems like overengineering.)

I have a few points of confusion about failure handlers, though:

1. Who sets up a failure handler? The actor that might fail, or the actor which owns that actor?

I imagine it being something set up by the actor’s init method. That way the actor failure behavior is part of the contract the actor provides. Parameters to the init can be used by clients to customize that behavior.

Okay, so you imagine something vaguely like this (using a strawman syntax):

   actor WebSupervisor {
       var workers: [WebWorker] = []
       
       func addWorker() -> WebWorker {
           let worker = WebWorker(supervisor: self)
           workers.append(worker)
           return worker
       }
       
       actor func restart(afterFailureIn failedWorker: WebWorker) {
           stopListening()
           launchNewProcess()
           
           for worker in workers where worker !== failedWorker {
               await worker.stop()
           }
       }
       
       …
   }
   
   actor WebWorker {
       actor init(supervisor: WebSupervisor) {
           …
           
           beforeFatalError { _self in
               await _self.supervisor.restart(afterFailureIn: self)
           }
       }
       
       …
   }

I was thinking about something where `WebSupervisor.addWorker()` would register itself to be notified if the `WebResponder` crashed, but this way might be better.

--
Brent Royal-Gordon
Architechies


For instance, has Array<UIView> value semantics?

By the commonly accepted definition, Array<UIView> does not provide value semantics.

You might be tempted to say that it does not because it contains class references, but in reality that depends on what you do with those UIViews.

An aspect of the type (“does it have value semantics or not”) should not depend on the clients. By your definition, every type has value semantics if none of the mutating operations are called :-)

No, not mutating operations. Access to mutable memory shared by multiple "values" is what breaks value semantics. You can get into this situation using pointers, object references, or global variables. It's all the same thing in the end: shared memory that can mutate.

For demonstration's sake, here's a silly example of how you can give Array<Int> literally the same semantics as Array<UIView>:

  // shared UIView instances in global memory
  var instances: [UIView] = []

  extension Array where Element == Int {

    // append a new integer to the array pointing to our UIView instance
    mutating func append(view: UIView) {
      self.append(instances.count)
      instances.append(view)
    }

    // access views pointed to by the integers in the array
    subscript(viewAt index: Int) -> UIView {
      get {
        return instances[self[index]]
      }
      set {
        self[index] = instances.count
        instances.append(newValue)
      }
    }
  }

And now you need to worry about passing Array<Int> to another thread. ;-)

It does not really matter whether the array contains pointers or whether it contains indices into a global table: in both cases the same mutable memory is accessible through multiple copies of an array, and this is what breaks value semantics.

Types cannot enforce value semantics. It's the functions you choose to call that matter. This is especially important to realize in a language with extensions, where you can't restrict what functions get attached to a type.

This gets deeper into the territory of the conversation Dave A and I had a while ago. I think this conflates value semantics with pure functions, which I think is a mistake.

I agree that if you assume away reference counting a function that takes Array<UIView> but never dereferences the pointers can still be a pure function. However, I disagree that Array<UIView> has value semantics.

The relationship of value semantics to purity is that value semantics can be defined in terms of the purity of the "salient operations" of the type - those which represent the meaning of the value represented by the type. The purity of these operations is what gives the value independence from copies in terms of its meaning. If somebody chooses to add a new impure operation in an extension of a type with value semantics it does not mean that the type itself no longer has value semantics. The operation in the extension is not "salient".

This still begs the question: what operations are "salient"? I think everyone can agree that those used in the definition of equality absolutely must be included. If two values don't compare equal they clearly do not have the same meaning. Thread safety is also usually implied for practical reasons as is the case in Chris's manifesto. These properties are generally considered necessary for value semantics.

While these conditions are *necessary* for value semantics I do not believe they are *sufficient* for value semantics. Independence of the value is also required. When a reference type defines equality in terms of object identity copies of the reference are not truly independent.

This is especially true in a language like Swift where dereference is implicit. I argue that when equality is defined in terms of object identity copies of the reference are *not* independent. The meaning of the reference is inherently tied up with the resource it references. The resource has to be considered "salient" for the independence to be a useful property. On the other hand, if all you really care about is the identity and not the resource, ObjectIdentifier is available and does have value semantics. There is a very good reason this type exists.
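A small illustration of that last point (sketch):

    import UIKit

    let view = UIView()

    // The reference's meaning is tied to the view's shared mutable state:
    let ref = view
    ref.alpha = 0.5                       // visible through every copy of the reference

    // The identifier captures only the identity, and that meaning is copy-independent:
    let id = ObjectIdentifier(view)
    let idCopy = id
    assert(id == idCopy)                  // equal, and neither copy reaches the view's mutable state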

I'm happy to see this topic emerging again and looking forward to seeing value semantics and pure functions eventually receive language support. There are a lot of subtleties involved. Working through them and clearly defining what they mean in Swift is really important.

···

Sent from my iPad

On Aug 19, 2017, at 8:16 AM, Michel Fortin via swift-evolution <swift-evolution@swift.org> wrote:

If you treat the class references as opaque pointers (never dereferencing them), you preserve value semantics. You can count the elements, shuffle them, all without dereferencing the UIViews it contains. Value semantics only end when you dereference the class references. And even then, there are some exceptions.

I agree with you that the model could permit all values to be sent in actor messages, but doing so would give up the principal advantages of mutable state separation. You’d have to do synchronization, you’d have the same bugs that have always existed, etc.

What the compiler should aim at is enforcing useful rules when it comes to accessing shared mutable state.

--
Michel Fortin
https://michelf.ca


In fact this will just work. Because both messages happen on the actor's internal serial queue, the "get count" message will only happen after the deletion. Therefore the "delete" message can return immediately to the caller (you just need the dispatch call on the queue to be made).
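A hand-rolled sketch of that serial-mailbox behaviour (plain GCD, not the proposed actor syntax; all names here are hypothetical):

    import Dispatch

    final class ListActor {
        private let queue = DispatchQueue(label: "ListActor.mailbox")   // serial
        private var items = ["a", "b", "c"]

        // "Fire and forget": returns as soon as the message is enqueued.
        func delete(at index: Int) {
            queue.async { self.items.remove(at: index) }
        }

        // Enqueued on the same serial queue, so it necessarily runs after any
        // previously enqueued delete(at:) message.
        func getCount(_ reply: @escaping (Int) -> Void) {
            queue.async { reply(self.items.count) }
        }
    }

    let listActor = ListActor()
    listActor.delete(at: 0)               // returns immediately
    listActor.getCount { count in
        print(count)                      // 2: observes the state after the deletion
    }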

Thomas

···

On 19 Aug 2017, at 07:30, Brent Royal-Gordon via swift-evolution <swift-evolution@swift.org> wrote:

On Aug 18, 2017, at 12:35 PM, Chris Lattner <clattner@nondot.org> wrote:

(Also, I notice that a fire-and-forget message can be thought of as an `async` method returning `Never`, even though the computation *does* terminate eventually. I'm not sure how to handle that, though)

Yeah, I think that actor methods deserve a bit of magic:

- Their bodies should be implicitly async, so they can call async methods without blocking their current queue or have to use beginAsync.
- However, if they are void “fire and forget” messages, I think the caller side should *not* have to use await on them, since enqueuing the message will not block.

I think we need to be a little careful here—the mere fact that a message returns `Void` doesn't mean the caller shouldn't wait until it's done to continue. For instance:

  listActor.delete(at: index) // Void, so it doesn't wait
  let count = await listActor.getCount() // But we want the count *after* the deletion!

`beginAsync(_:)` is a sort of poor man's `Future`—it guarantees that the
async function will start, but throws away the return value, and *might*
throw away the error unless it happens to get thrown early. Given that its
ability to return information from the body is so limited, I frankly don't
think it's worth making this function rethrow only some errors. I would
instead make it accept only a non-throwing `async` function, and if you
need to call something that throws, you can pass an async closure with a
`do`/`catch` block.

I agree. I think `rethrows` for `beginAsync` is problematic.

For example, what happens when the `foo` in the following code throws an
`Error` asynchronously?

func foo() async throws { ... }
beginAsync(foo)

`foo` is acceptable as `beginAsync`'s `body` by its type. However its error
might be thrown asynchronously and it is impossible to rethrow it. So the
error must be ignored or treated as a universal error by untyped
propagation. It breaks type safety about error handling.

So I think the signature of `beginAsync` should be the following one.

func beginAsync(_ body: () async -> Void) -> Void
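Under that signature a throwing async call is handled inside the body, so no error can escape after beginAsync has returned (sketch; loadRemoteConfig and apply are hypothetical):

    beginAsync {
        do {
            let config = try await loadRemoteConfig()   // hypothetical `async throws` function
            apply(config)
        } catch {
            // Handled here, inside the async context, rather than escaping to a
            // caller that may already have moved on.
            print("config load failed: \(error)")
        }
    }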

···

--
Yuta

Then isn’t the example functionally equivalent to:

    func doit() {
        DispatchQueue.global().async {
            let dataResource = loadWebResource("dataprofile.txt")
            let imageResource = loadWebResource("imagedata.dat")
            let imageTmp = decodeImage(dataResource, imageResource)
            let imageResult = dewarpAndCleanupImage(imageTmp)
            DispatchQueue.main.async {
                self.imageResult = imageResult
            }
        }
    }

if all of the API were synchronous? Why wouldn’t we just exhort people to write synchronous API code and continue using libdispatch? What am I missing?

There are probably very good optimisations for going asynchronous, but I’m not the right person for that part of the answer.

But I can give another answer: once we have an async/await pattern, we can build Futures/Promises on top of it and then we can await on multiple asynchronous calls in parallel. But it won’t be a feature of async/await in itself:

func doit() async {
    let dataResource = Future({ loadWebResource("dataprofile.txt") })
    let imageResource = Future({ loadWebResource("imagedata.dat") })
    let imageTmp = await decodeImage(dataResource.get, imageResource.get)
    self.imageResult = await dewarpAndCleanupImage(imageTmp)
}

···

On 10 Sep 2017, at 00:40, Kenny Leung via swift-evolution <swift-evolution@swift.org> wrote:

-Kenny

On Sep 8, 2017, at 2:33 PM, David Hart <david@hartbit.com> wrote:

On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution <swift-evolution@swift.org> wrote:

Hi All.

A point of clarification in this example:

func loadWebResource(_ path: String) async -> Resource
func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
func dewarpAndCleanupImage(_ i : Image) async -> Image

func processImageData1() async -> Image {
    let dataResource = await loadWebResource("dataprofile.txt")
    let imageResource = await loadWebResource("imagedata.dat")
    let imageTmp = await decodeImage(dataResource, imageResource)
    let imageResult = await dewarpAndCleanupImage(imageTmp)
    return imageResult
}

Do these:

await loadWebResource("dataprofile.txt")
await loadWebResource("imagedata.dat")

happen in parallel?

They don’t happen in parallel.

If so, how can I make the second one wait on the first one? If not, how can I make them go in parallel?

Thanks!

-Kenny


I have been writing a lot of fully async code over the recent years (in objc) and this all seems to fit well with what we're doing and looks like it would alleviate a lot of the pain we have writing async code.

# Extending the model through await

I'm a bit worried about the mention of dispatch_sync() here (although it may just be there to illustrate the deadlock possibility). I know the actor runtime implementation is not yet defined, but just wanted to mention that dispatch_sync() will lead to problems such as this annoying thing called thread explosion. This is why we currently cannot use properties in our code (getters would require us to call dispatch_sync() and we want to avoid that), instead we are writing custom async getters/setters with callback blocks. Having async property getters would be pretty awesome.
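The workaround described above looks roughly like this (sketch; names hypothetical):

    import Dispatch

    final class Connection {
        private let queue = DispatchQueue(label: "Connection.state")
        private var _bytesSent = 0

        // What we'd like to avoid, per the dispatch_sync concerns above:
        // var bytesSent: Int { return queue.sync { _bytesSent } }

        // What we write today instead: an async getter with a callback block.
        func getBytesSent(_ completion: @escaping (Int) -> Void) {
            queue.async { completion(self._bytesSent) }
        }
    }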

Another thing: it is not clearly mentioned here that we're getting back on the caller actor's queue after awaiting on another actor's async method.

# Scalable Runtime

About the problem of creating too many queues. This is something that has annoyed me at this year's WWDC. Back when libdispatch was introduced in 10.6, we were told that queues are very cheap: we could create thousands of them and not worry about threads, because libdispatch would do the right thing internally and adjust to the available hardware (the number of threads would more or less match the number of cores in your machine). Somehow this has changed; now we're being told we need to worry about the threads behind the queues and not have too many of them. I'm not sure if this is something inevitable due to the underlying reality of the system, but the way things were presented back then (think in terms of queues, don't worry about threads) was very compelling.

# Entering and leaving async code

Certainly seems like the beginAsync(), suspendAsync() primitives would be useful outside of the stdlib. The Future<T> example makes use of suspendAsync() to store the continuation block and call it later; other code would do just as well.

Shouldn't this:

let imageTmp = await decodeImage(dataResource.get(), imageResource.get())

rather be:

let imageTmp = await decodeImage(await dataResource.get(), await imageResource.get())

Thomas

I can’t speak to the more low-level implications, but to the extent this is
essentially “syntactic sugar for completion handlers,” I can base my
opinion on making iOS apps in different contexts for about 5 years. (Long
enough to remember when Objective-C Blocks as completion handlers were the
great new thing that was going to perfectly solve all our problems). I
especially appreciated the clear goals section of the proposal, which I
thought was on-target.

I have some uncertainty about how async/await will work in
practice—literally, how to use it. Some of this is just needing to see many
more examples and examples in common scenarios at the site of use and in
the creation of async functions.

Most of the questions I had I answered myself in the course of writing this
up, but I include them for confirmation and to call attention to aspects
that may not be clear. (I also tend to use block and closure
interchangeably when it reads better, especially where I learned them as
objective-C completion blocks, please do correct if necessary.) I hope the
write up will provide some clarity for others (assuming it's basically
correct or correctly labeled where it may not be) and provide feedback on
where it could be more clear.

*Starting at beginAsync*

@IBAction func buttonDidClick(sender:AnyObject) {
    // 1
    beginAsync {
        // 2
        let image = await processImage()
        imageView.image = image
    }
    // 3
}

After entering a beginAsync block, code will continue to execute inside the block (like do blocks now) until it encounters an await-marked function. In this sense it’s more "beginAsyncContext" than "begin being asynchronous", which is how it could be read, although the end of this block is where synchronous execution will resume once execution goes async (suspends?) inside it.

At the point of an await function, two things can happen: 1) If the
function needs time to execute -- if "it suspends" is how to describe it?
-- code execution will jump to outside the beginAsync block. 2) If the
function has its result already, code execution will continue inside the
block, or jump to catch error if it exists? (This would not be different
from now in that a receiver of a completion block can invoke it
immediately, before returning).

This is important to clarify because after the async block (or any
functions with completion blocks now) code afterwards can’t assume what
might have happened. But this proposal doesn’t solve the problem seen in
the @IBAction example, of order 1, 3, 2, which is actually worse because if
you can have immediate completion you aren’t sure if it is 1, 3, 2, or 1,
2, 3. This is actually an easy enough situation to handle if you’re aware
of it.

*Use of suspendAsync*

suspendAsync is the point at which the waiting is actually triggered? This threw me at first (no pun intended). Since it looks like the more common transitive verb form of “suspend”, I read this as “suspending-the-async”, therefore resuming. The primitives looked like beginAsync started doing something asynchronous and suspendAsync resumed synchronous execution? Maybe the reversed order, asyncSuspend (paired with asyncBegin, or even better, asyncContext), would be less likely to be confused this way? (More on the primitives just below)

However, even inside the block passed to suspendAsync the code is not
asynchronous yet. The code in the block passed to suspendAsync is still
executed at the time the block is passed in. That code is responsible for
taking the continuation block and error block, escaping with them and
storing them somewhere, and invoking either when the value or error is
ready. Is it correct that those blocks will be called on the thread that
suspendAsync was called on?

It was also somewhat unclear what happens when the block passed to
suspendAsync reaches the end. Since suspendAsync is itself an async function
called with await, it looks like control now passes back to the end of the
original beginAsync block, wherever that is. That the getStuff() async wrapper
example returns the result of the call to suspendAsync in one line obscured
what was going on. That was

func getStuff() async -> Stuff {
    return await suspendAsync { continuation in
        getStuff(completion: continuation)
    }
}

What's going on would be more clear over two lines. For example if we
wanted to do further processing after getting our async result before
returning:

func getStuff() async -> Stuff {
    let rawStuff = await suspendAsync { continuation in
        getStuff(completion: continuation)
    }
    return processed(rawStuff)
}

Where exactly execution is paused and will resume is more clear.

In fact, to show the full async/await life cycle, it’s possible to
demonstrate in the same scope before introducing the semantics of async
functions:

beginAsync {
    do {
        let stuff = try await suspendAsync { continuation, error in
            // perform long-running task on another queue, then call
            // continuation or error as appropriate
        }
        // Continuation block resumes here with `stuff`
        doSomething(with: stuff)
    } catch {
        // error block resumes here
        handleGettingStuffError(error)
    }
}

This is correct? While, as the comments state, it may be true that eventually many users won’t need to interact with suspendAsync (though I think beginAsync will remain common, such as the @IBAction example), it’s the key method that breaks familiar procedural execution and creates the blocks that will allow it to resume. During the transition it will be especially important for those adapting their own or others’ code. It should be prominent.

One opinion point that I do want to mention though about the last example: There should probably be just one suspendAsync method, the one with an error continuation. First, if there's a no-error version then it will probably proliferate in examples (as seen already), and coders will learn how to use await/async while ignoring errors, a habit we’ll have to break later. But if there's only one suspend method, then await would always include try, forcing programmers who want to ignore errors to have an empty catch block (or use the space for a comment with your legitimate reason!). An empty/comment-only catch block would also be the case for legacy APIs with no completion error parameter, though maybe these could be imported as throwing Error.nil or Error.false, etc. So all uses would look like beginAsync { … } catch { … }.

beginAsync {
    let stuff = await suspendAsync { continuation, error in
        // perform long-running task on another queue, then call continuation or error as appropriate
    }
    // Continuation block resumes here with `stuff`
    doSomething(with: stuff)
} catch {
    // error block resumes here
    handleGettingStuffError(error)
}

I don’t lightly suggest trying to use syntax to force good habits, but in
this case it would be the cleaner API. The alternative is forcing code that
does handle errors to look like the last example above, obviously more
ungainly than code that ignores errors. Handling errors is already too easy
to ignore.

There are more substantive points that I want to touch on later, from my use of a promise/future framework as part of a production app, that largely went really well, even when working with UIKit, AppDelegate, NSNotification. (I also worked on a project where the last guy had rolled his own promises/futures system; you can guess how that went.)

But I wanted to clarify the basics of use. Thanks for this proposal and
everyone’s comments.

Mike Sanderson

···


On Fri, Aug 18, 2017 at 5:09 PM, Adam Kemp via swift-evolution <swift-evolution@swift.org> wrote:

Thanks for the quick response!

On Aug 18, 2017, at 1:15 PM, Chris Lattner <clattner@nondot.org> wrote:

On Aug 18, 2017, at 12:34 PM, Adam Kemp <adam.kemp@apple.com> wrote:

For instance, say you’re handling a button click, and you need to do a
network request and then update the UI. In C# (using Xamarin.iOS as an
example) you might write some code like this:

private async void HandleButtonClick(object sender, EventArgs e) {
    var results = await GetStuffFromNetwork();
    UpdateUI(results);
}

This event handler is called on the UI thread, and the UpdateUI call must
be done on the UI thread. The way async/await works in C# (by default) is
that when your continuation is called it will be on the same
synchronization context you started with. That means if you started on the
UI thread you will resume on the UI thread. If you started on some thread
pool then you will resume on that same thread pool.

I completely agree, I would love to see this because it is the most easy
to reason about, and is implied by the syntax. I consider this to be a
follow-on to the basic async/await proposal - part of the Objective-C
importer work, as described here:
https://gist.github.com/lattner/429b9070918248274f25b714dcfc7619#fix-queue-hopping-objective-c-completion-handlers

Maybe I’m still missing something, but how does this help when you are
interacting only with Swift code? If I were to write an asynchronous method
in Swift then how could I do the same thing that you propose that the
Objective-C importer do? That is, how do I write my function such that it
calls back on the same queue?

In my mind, if that requires any extra effort then it is already more
error prone than what C# does.

Another difference between the C# implementation and this proposal is the
lack of futures. While I think it’s fair to be cautious about tying this
proposal to any specific futures implementation or design, I feel like the
value of tying it to some concept of futures was somewhat overlooked. For
instance, in C# you could write a method with this signature:

...

The benefit of connecting the async/await feature to the concept of
futures is that you can mix and match this code freely. The current
proposal doesn’t seem to allow this.

The current proposal provides an underlying mechanism that you can build
futures on, and gives an example. As shown, the experience using that
futures API would work quite naturally and fit into Swift IMO.

I feel like this is trading conceptual complexity in order to gain
compiler simplicity. What I mean by that is that the feature feels harder
to understand, and the benefit seems to be that this feature can be used
more generally for other things. I’m not sure that’s a good tradeoff.

The other approach, which is to build a specific async/await feature using
compiler transformations, may be less generic (yield return would have to
work differently), but it seems (to me) easier to understand how to use.

For instance, this code (modified from the proposal):

@IBAction func buttonDidClick(sender:AnyObject) {
    doSomethingOnMainThread();
    beginAsync {
        let image = await processImage()
        imageView.image = image
    }
    doSomethingElseOnMainThread();
}

Is less straightforward than this:

@IBAction async func buttonDidClick(sender:AnyObject) {
    doSomethingOnMainThread();
    let imageTask = processImage()
    doSomethingElseOnMainThread();
    imageView.image = await imageTask
}

It’s clearer from reading of the second function what order things will
run in. The code from the proposal has a block of code (the callback from
beginAsync) that will run in part before the code that follows, but some of
it will run after buttonDidClick returns. That’s confusing in the same way
that callbacks in general are confusing. The way that async/await makes
code clearer is by making it more WYSIWYG: the order you see the code
written in is the order in which that code is run. The awaits just mark
breaks.


For instance, say you’re handling a button click, and you need to do a network request and then update the UI. In C# (using Xamarin.iOS as an example) you might write some code like this:

private async void HandleButtonClick(object sender, EventArgs e) {
    var results = await GetStuffFromNetwork();
    UpdateUI(results);
}

This event handler is called on the UI thread, and the UpdateUI call must be done on the UI thread. The way async/await works in C# (by default) is that when your continuation is called it will be on the same synchronization context you started with. That means if you started on the UI thread you will resume on the UI thread. If you started on some thread pool then you will resume on that same thread pool.

I completely agree, I would love to see this because it is the most easy to reason about, and is implied by the syntax. I consider this to be a follow-on to the basic async/await proposal - part of the Objective-C importer work, as described here:
Concrete proposal for async semantics in Swift · GitHub

Maybe I’m still missing something, but how does this help when you are interacting only with Swift code? If I were to write an asynchronous method in Swift then how could I do the same thing that you propose that the Objective-C importer do? That is, how do I write my function such that it calls back on the same queue?

You’re right: if you’re calling something written in Swift, the ObjC importer isn’t going to help you.

However, if you’re writing an async function in Swift, then it is reasonable for us to say what the convention is and expect you to follow it. Async/await doesn’t itself help you implement an async operation: it would be turtles all the way down… until you get to GCD, which is where you do the async thing.

As such, as part of rolling out async/await in Swift, I’d expect that GCD would introduce new API or design patterns to support doing the right thing here. That is TBD as far as the proposal goes, because it doesn’t go into runtime issues.
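One conceivable convention in the meantime would be for the Swift async function to take the queue to resume on as an explicit parameter and hop back before resuming its caller. Purely a sketch (suspendAsync is the proposal's primitive; everything else here is hypothetical), and of course the explicit parameter is exactly the extra effort being objected to:

    import UIKit

    func fetchThumbnail(from url: URL, resumeOn callerQueue: DispatchQueue = .main) async -> UIImage? {
        return await suspendAsync { continuation in
            URLSession.shared.dataTask(with: url) { data, _, _ in
                let image = data.flatMap(UIImage.init(data:))
                callerQueue.async { continuation(image) }   // resume where the caller expects
            }.resume()
        }
    }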

Another difference between the C# implementation and this proposal is the lack of futures. While I think it’s fair to be cautious about tying this proposal to any specific futures implementation or design, I feel like the value of tying it to some concept of futures was somewhat overlooked. For instance, in C# you could write a method with this signature:

...

The benefit of connecting the async/await feature to the concept of futures is that you can mix and match this code freely. The current proposal doesn’t seem to allow this.

The current proposal provides an underlying mechanism that you can build futures on, and gives an example. As shown, the experience using that futures API would work quite naturally and fit into Swift IMO.

I feel like this is trading conceptual complexity in order to gain compiler simplicity. What I mean by that is that the feature feels harder to understand, and the benefit seems to be that this feature can be used more generally for other things. I’m not sure that’s a good tradeoff.

The other approach, which is to build a specific async/await feature using compiler transformations, may be less generic (yield return would have to work differently), but it seems (to me) easier to understand how to use.

For instance, this code (modified from the proposal):

@IBAction func buttonDidClick(sender:AnyObject) {
    doSomethingOnMainThread();
    beginAsync {
        let image = await processImage()
        imageView.image = image
    }
    doSomethingElseOnMainThread();
}

Is less straightforward than this:

@IBAction async func buttonDidClick(sender:AnyObject) {
    doSomethingOnMainThread();
    let imageTask = processImage()
    doSomethingElseOnMainThread();
    imageView.image = await imageTask
}

This isn’t a fair transformation though, and it isn’t related to whether futures are part of the library or the language. The simplification you got here comes from making IBActions implicitly async. I don’t see that that is possible, since they have a very specific calling convention (which returns void) and are invoked by objc_msgSend. OTOH, if it were possible to do this, it would be possible to do it with the proposal as outlined.
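
That said, with a library Future along the lines sketched earlier, a plain (non-async) IBAction can recover the same ordering. A sketch, assuming beginAsync runs its body synchronously until the first suspension, as the proposal describes:

@IBAction func buttonDidClick(sender: AnyObject) {
    doSomethingOnMainThread()
    beginAsync {
        // The Future starts its work immediately...
        let imageTask = Future { await processImage() }
        // ...so this still runs before the image is assigned.
        doSomethingElseOnMainThread()
        imageView.image = await imageTask.get()
    }
}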

-Chris

···

On Aug 18, 2017, at 2:09 PM, Adam Kemp <adam.kemp@apple.com> wrote:

On Aug 18, 2017, at 12:34 PM, Adam Kemp <adam.kemp@apple.com <mailto:adam.kemp@apple.com>> wrote:

The reason we're discussing value semantics here is that they are useful for making concurrency safer. If we define the meaning of value semantics as "a type where a subset of the member functions (the important ones) can be used with concurrency", then that definition of value semantics loses quite a bit of its value for solving the problem at hand. It's too vague.

I'm not actually that interested in the meaning of value semantics here. I'm debating the appropriateness of determining whether something can be done in another thread based on the type a function is attached to. Because that's what the ValueSemantical protocol wants to do. ValueSemantical, as a protocol, is whitelisting the whole type while in reality it should only vouch for a specific set of safe functions on that type.

···

Le 19 août 2017 à 11:38, Matthew Johnson <matthew@anandabits.com> a écrit :

Sent from my iPad

On Aug 19, 2017, at 8:16 AM, Michel Fortin via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

For instance, has Array<UIView> value semantics?

By the commonly accepted definition, Array<UIView> does not provide value semantics.

You might be tempted to say that it does not because it contains class references, but in reality that depends on what you do with those UIViews.

An aspect of the type (“does it have value semantics or not”) should not depend on the clients. By your definition, every type has value semantics if none of the mutating operations are called :-)

No, not mutating operations. Access to mutable memory shared by multiple "values" is what breaks value semantics. You can get into this situation using pointers, object references, or global variables. It's all the same thing in the end: shared memory that can mutate.

For demonstration's sake, here's a silly example of how you can give Array<Int> literally the same semantics as Array<UIView>:

  // shared UIView instances in global memory
  var instances: [UIView] = []

  extension Array where Element == Int {

    // append a new integer to the array pointing to the given UIView instance
    mutating func append(view: UIView) {
      self.append(instances.count)
      instances.append(view)
    }

    // access views pointed to by the integers in the array
    subscript(viewAt index: Int) -> UIView {
      get {
        return instances[self[index]]
      }
      set {
        self[index] = instances.count
        instances.append(newValue)
      }
    }
  }

And now you need to worry about passing Array<Int> to another thread. ;-)

It does not really matter whether the array contains pointers or whether it contains indices into a global table: in both cases the same mutable memory is accessible through multiple copies of the array, and this is what breaks value semantics.
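
To make the aliasing concrete, here is what using the extension above would look like (UIKit assumed; the views are the shared state even though the arrays themselves are copied):

import UIKit

var a: [Int] = []
a.append(view: UIView())

let b = a                     // a genuine copy of the array of Ints...
let view = b[viewAt: 0]       // ...but both copies resolve to the same shared UIView
view.alpha = 0.5
print(a[viewAt: 0].alpha)     // 0.5: mutation through `b` is visible through `a`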

Types cannot enforce value semantics. It's the functions you choose to call that matter. This is especially important to realize in a language with extensions, where you can't restrict what functions get attached to a type.

This gets deeper into the territory of the conversation Dave A and I had a while ago. I think this conflates value semantics with pure functions, which I think is a mistake.

I agree that, if you assume away reference counting, a function that takes Array<UIView> but never dereferences the pointers can still be a pure function. However, I disagree that Array<UIView> has value semantics.

The relationship of value semantics to purity is that value semantics can be defined in terms of the purity of the "salient operations" of the type - those which represent the meaning of the value represented by the type. The purity of these operations is what gives the value independence from copies in terms of its meaning. If somebody chooses to add a new impure operation in an extension of a type with value semantics it does not mean that the type itself no longer has value semantics. The operation in the extension is not "salient".

This still begs the question: what operations are "salient"? I think everyone can agree that those used in the definition of equality absolutely must be included. If two values don't compare equal they clearly do not have the same meaning. Thread safety is also usually implied for practical reasons as is the case in Chris's manifesto. These properties are generally considered necessary for value semantics.

While these conditions are *necessary* for value semantics I do not believe they are *sufficient* for value semantics. Independence of the value is also required. When a reference type defines equality in terms of object identity copies of the reference are not truly independent.

This is especially true in a language like Swift where dereference is implicit. I argue that when equality is defined in terms of object identity copies of the reference are *not* independent. The meaning of the reference is inherently tied up with the resource it references. The resource has to be considered "salient" for the independence to be a useful property. On the other hand, if all you really care about is the identity and not the resource, ObjectIdentifier is available and does have value semantics. There is a very good reason this type exists.
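
A small example of that last point, contrasting an identity-based Equatable class with ObjectIdentifier (the class is invented for illustration):

// Token's == is object identity, so a Token "value" is really a handle to a
// shared, mutable object. ObjectIdentifier captures only the identity, which
// is why it can honestly claim value semantics.
final class Token: Equatable {
    var label = ""
    static func == (lhs: Token, rhs: Token) -> Bool { lhs === rhs }
}

let token = Token()
let idA = ObjectIdentifier(token)
let idB = idA                 // an independent copy of the identity value
print(idA == idB)             // true; neither copy can reach or mutate token.label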

--
Michel Fortin
https://michelf.ca

Suppose `delete(at:)` needs to do something asynchronous, like ask a server to do the deletion. Is processing of other messages to the actor suspended until it finishes? (Maybe the answer is "yes"—I don't have experience with proper actors.)

···

On Aug 19, 2017, at 2:25 AM, Thomas <tclementdev@free.fr> wrote:

I think we need to be a little careful here—the mere fact that a message returns `Void` doesn't mean the caller shouldn't wait until it's done to continue. For instance:

  listActor.delete(at: index) // Void, so it doesn't wait
  let count = await listActor.getCount() // But we want the count *after* the deletion!

In fact this will just work. Because both messages happen on the actor's internal serial queue, the "get count" message will only happen after the deletion. Therefore the "delete" message can return immediately to the caller (you just need the dispatch call on the queue to be made).
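
A hand-rolled illustration of why the ordering falls out of a serial queue (this is plain GCD, not the proposed actor runtime; the names are invented):

import Dispatch

final class ListActor {
    private let queue = DispatchQueue(label: "ListActor")   // serial by default
    private var items = ["a", "b", "c"]

    // "Void message": returns to the caller as soon as the work is enqueued.
    func delete(at index: Int) {
        queue.async { self.items.remove(at: index) }
    }

    // The reply arrives later, but it is enqueued after the deletion above.
    func getCount(_ completion: @escaping (Int) -> Void) {
        queue.async { completion(self.items.count) }
    }
}

let list = ListActor()
list.delete(at: 0)              // does not wait
list.getCount { print($0) }     // prints 2: serialized behind the delete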

--
Brent Royal-Gordon
Architechies

There is no such thing as "trivially provable by way of transitivity". This type is comprised of only value types, and yet it has reference semantics:

  // assumes a shared global table, e.g. var entries: [Entry] = [...]
  struct EntryRef {
    private var index: Int

    var entry: Entry {
      get { return entries[index] }
      set { entries[index] = newValue }
    }
  }

This type is comprised of only reference types, and yet it has value semantics:

  struct OpaqueToken: Equatable {
    class Token {}
    private let token: Token
    
    static func == (lhs: OpaqueToken, rhs: OpaqueToken) -> Bool {
      return lhs.token === rhs.token
    }
  }

I think it's better to have types explicitly declare that they have value semantics if they want to make that promise, and otherwise not have the compiler make any assumptions either way. Safety features should not be *guessing* that your code is safe. If you can somehow *prove* it safe, go ahead—but I don't see how that can work without a lot of manual annotations on bridged code.
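
A minimal sketch of what that explicit declaration could look like, borrowing the ValueSemantical name from earlier in the thread purely as a placeholder (none of this is proposed API):

// Hypothetical opt-in marker: conforming is a hand-audited promise that the
// type's salient operations behave like values.
protocol ValueSemantical {}

struct OpaqueToken: ValueSemantical, Equatable {
    final class Token {}
    private let token = Token()

    static func == (lhs: OpaqueToken, rhs: OpaqueToken) -> Bool {
        return lhs.token === rhs.token
    }
}

// An actor API (or a checker) could then require the promise explicitly
// instead of guessing from the stored properties:
func send<Message: ValueSemantical>(_ message: Message) {
    // safe to hand `message` to another execution context, by declaration
}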

···

On Aug 19, 2017, at 7:41 AM, Matthew Johnson <matthew@anandabits.com> wrote:

Regardless of which approach we take, it feels like something that needs to be implicit for structs and enums where value semantics is trivially provable by way of transitivity. When that is not the case we could require an explicit `value` or `nonvalue` annotation (specific keywords subject to bikeshedding of course).

--
Brent Royal-Gordon
Architechies

This is the only part of the proposal that I can't concur with!

`async` at the call site solves this nicely! And Pierre also showed how commonly people get this wrong, and they will get it wrong with Futures too.

func doit() async {
    let dataResource = async loadWebResource("dataprofile.txt")
    let imageResource = async loadWebResource("imagedata.dat")
    let imageTmp = await decodeImage(dataResource, imageResource)
    self.imageResult = await dewarpAndCleanupImage(imageTmp)
}

Anyway, we have time to think about it.

···

On Sat, Sep 9, 2017 at 20:30, David Hart via swift-evolution <swift-evolution@swift.org> wrote:

On 10 Sep 2017, at 00:40, Kenny Leung via swift-evolution <swift-evolution@swift.org> wrote:

Then isn’t the example functionally equivalent to:

    func doit() {
        DispatchQueue.global().async {
            let dataResource = loadWebResource("dataprofile.txt")
            let imageResource = loadWebResource("imagedata.dat")
            let imageTmp = decodeImage(dataResource, imageResource)
            let imageResult = dewarpAndCleanupImage(imageTmp)
            DispatchQueue.main.async {
                self.imageResult = imageResult
            }
        }
    }

if all of the API were synchronous? Why wouldn’t we just exhort people to
write synchronous API code and continue using libdispatch? What am I
missing?

There are probably very good optimisations for going asynchronous, but I’m
not the right person for that part of the answer.

But I can give another answer: once we have an async/await pattern, we can
build Futures/Promises on top of them and then we can await on multiple
asynchronous calls in parallel. But it won’t be a feature of async/await in
itself:

func doit() async {
    let dataResource = Future({ loadWebResource("dataprofile.txt") })
    let imageResource = Future({ loadWebResource("imagedata.dat") })
    let imageTmp = await decodeImage(dataResource.get, imageResource.get)
    self.imageResult = await dewarpAndCleanupImage(imageTmp)
}

-Kenny

On Sep 8, 2017, at 2:33 PM, David Hart <david@hartbit.com> wrote:

On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution <swift-evolution@swift.org> wrote:

Hi All.

A point of clarification in this example:

func loadWebResource(_ path: String) async -> Resource
func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
func dewarpAndCleanupImage(_ i : Image) async -> Image
func processImageData1() async -> Image {
    let dataResource = await loadWebResource("dataprofile.txt")
    let imageResource = await loadWebResource("imagedata.dat")
    let imageTmp = await decodeImage(dataResource, imageResource)
    let imageResult = await dewarpAndCleanupImage(imageTmp)
    return imageResult
}

Do these:

await loadWebResource("dataprofile.txt")

await loadWebResource("imagedata.dat")

happen in parallel?

They don’t happen in parallel.

If so, how can I make the second one wait on the first one? If not, how
can I make them go in parallel?

Thanks!

-Kenny

_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Not really certain what async/await adds, using this library (note: self-promotion), which is built on top of GCD:

You can write:

    func doit() {
        AsynchronousFuture { // Executes in background and therefore does not block main
            let dataResource = loadWebResource("dataprofile.txt") // Returns a future and therefore runs concurrently in background.
            let imageResource = loadWebResource("imagedata.dat") // Future therefore concurrent.
            let imageTmp = decodeImage(dataResource.get ?? defaultText, imageResource.get ?? defaultData) // Handles errors with defaults easily, including timeout.
            let imageResult = dewarpAndCleanupImage(imageTmp)

            Thread.executeOnMain {
                self.imageResult = imageResult
            }
        }
    }

So why bother with async/await?

PS I also agree with the comments that there is no point writing the first two lines of the example with async and then calling them with await - you might as well write serial code.

  -- Howard.

···

On 10 September 2017 at 10:33, Wallacy via swift-evolution <swift-evolution@swift.org> wrote:

This is the only part of the proposal that I can't concur with!

`async` at the call site solves this nicely! And Pierre also showed how commonly people get this wrong, and they will get it wrong with Futures too.

func doit() async {
    let dataResource = async loadWebResource("dataprofile.txt")
    let imageResource = async loadWebResource("imagedata.dat")
    let imageTmp = await decodeImage(dataResource, imageResource)
    self.imageResult = await dewarpAndCleanupImage(imageTmp)
}

Anyway, we have time to think about it.

On Sat, Sep 9, 2017 at 20:30, David Hart via swift-evolution <swift-evolution@swift.org> wrote:

On 10 Sep 2017, at 00:40, Kenny Leung via swift-evolution <swift-evolution@swift.org> wrote:

Then isn’t the example functionally equivalent to:

    func doit() {
        DispatchQueue.global().async {
            let dataResource = loadWebResource("dataprofile.txt")
            let imageResource = loadWebResource("imagedata.dat")
            let imageTmp = decodeImage(dataResource, imageResource)
            let imageResult = dewarpAndCleanupImage(imageTmp)
            DispatchQueue.main.async {
                self.imageResult = imageResult
            }
        }
    }

if all of the API were synchronous? Why wouldn’t we just exhort people to
write synchronous API code and continue using libdispatch? What am I
missing?

There are probably very good optimisations for going asynchronous, but
I’m not the right person for that part of the answer.

But I can give another answer: once we have an async/await pattern, we
can build Futures/Promises on top of them and then we can await on multiple
asynchronous calls in parallel. But it won’t be a feature of async/await in
itself:

func doit() async {
    let dataResource = Future({ loadWebResource("dataprofile.txt") })
    let imageResource = Future({ loadWebResource("imagedata.dat") })
    let imageTmp = await decodeImage(dataResource.get, imageResource.get)
    self.imageResult = await dewarpAndCleanupImage(imageTmp)
}

-Kenny

On Sep 8, 2017, at 2:33 PM, David Hart <david@hartbit.com> wrote:

On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution <swift-evolution@swift.org> wrote:

Hi All.

A point of clarification in this example:

func loadWebResource(_ path: String) async -> Resource
func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
func dewarpAndCleanupImage(_ i : Image) async -> Image
func processImageData1() async -> Image {
    let dataResource = await loadWebResource("dataprofile.txt")
    let imageResource = await loadWebResource("imagedata.dat")
    let imageTmp = await decodeImage(dataResource, imageResource)
    let imageResult = await dewarpAndCleanupImage(imageTmp)
    return imageResult
}

Do these:

await loadWebResource("dataprofile.txt")

await loadWebResource("imagedata.dat")

happen in parallel?

They don’t happen in parallel.

If so, how can I make the second one wait on the first one? If not, how
can I make them go in parallel?

Thanks!

-Kenny

_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


I have been writing a lot of fully async code over the recent years (in objc) and this all seems to fit well with what we're doing and looks like it would alleviate a lot of the pain we have writing async code.

Great,

# Extending the model through await

I'm a bit worried about the mention of dispatch_sync() here (although it may just be there to illustrate the deadlock possibility). I know the actor runtime implementation is not yet defined, but just wanted to mention that dispatch_sync() will lead to problems such as this annoying thing called thread explosion. This is why we currently cannot use properties in our code (getters would require us to call dispatch_sync() and we want to avoid that), instead we are writing custom async getters/setters with callback blocks. Having async property getters would be pretty awesome.
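
For concreteness, a sketch of the two getter styles being contrasted here (the type and names are invented; the second shape is the callback pattern described above):

import Dispatch

final class Cache {
    private let queue = DispatchQueue(label: "Cache")   // serial
    private var storage: [String: Int] = [:]

    // dispatch_sync-backed property: blocks the calling thread while waiting
    // for the queue, which is what invites deadlocks and thread explosion.
    var count: Int {
        return queue.sync { storage.count }
    }

    // Callback-based getter: never blocks the caller; the value arrives later.
    func count(_ completion: @escaping (Int) -> Void) {
        queue.async { completion(self.storage.count) }
    }
}

An async property getter under this proposal would read like the first form at the use site while behaving like the second underneath.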

I think that awaiting on the result of an actor method ends up being pretty similar (in terms of implementation and design tradeoffs) to dispatch_sync. That said, my understanding is that thread explosion in GCD happens whenever something blocks a GCD thread, not when it politely yields control back to GCD. Am I misunderstanding what you mean?

Another thing: it is not clearly mentioned here that we're getting back on the caller actor's queue after awaiting on another actor's async method.

I omitted it simply because that is related to the runtime model, which I’m trying to leave unspecified. I agree with you that that is the most likely answer.

# Scalable Runtime

About the problem of creating too many queues: this is something that annoyed me at this year's WWDC. Back when libdispatch was introduced in 10.6, we were told that queues are very cheap, that we could create thousands of them and not worry about threads, because libdispatch would do the right thing internally and adjust to the available hardware (the number of threads would more or less match the number of cores in the machine). Somehow this has changed; now we're being told we need to worry about the threads behind the queues and not have too many of them. I'm not sure whether this is inevitable given the underlying reality of the system, but the way things were presented back then (think in terms of queues, don't worry about threads) was very compelling.

I don’t know why the messaging changed, but I agree with you: the ideal is to have a simple and predictable model.

# Entering and leaving async code

It certainly seems like the beginAsync() and suspendAsync() primitives would be useful outside of the stdlib. The Future<T> example makes use of suspendAsync() to store the continuation block and call it later; other code could do the same.
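
The other obvious use outside the stdlib is wrapping an existing completion-handler API. A sketch, assuming the proposed suspendAsync signature and a hypothetical legacyLoadWebResource callback API:

// Illustration only: suspendAsync is the proposal's primitive, and
// legacyLoadWebResource is a made-up callback-based API being wrapped.
func loadWebResource(_ path: String) async -> Resource {
    return await suspendAsync { continuation in
        legacyLoadWebResource(path) { resource in
            continuation(resource)        // resumes the awaiting caller
        }
    }
}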

Shouldn't this:

let imageTmp = await decodeImage(dataResource.get(), imageResource.get())

rather be:

let imageTmp = await decodeImage(await dataResource.get(), await imageResource.get())

As designed (and as implemented in the PR), “await” distributes across all of the calls in a subexpression, so you only need it at the top level. This is one of the differences from the C# design.

-Chris

···

On Aug 18, 2017, at 6:17 AM, Thomas via swift-evolution <swift-evolution@swift.org> wrote:

dispatch_sync isn't quite the ideal way of thinking about it, since it will block the calling context, and as you note this would potentially deadlock the current thread if an actor invokes one of its own methods. This isn't a desirable or fundamentally necessary pitfall, since really, the caller actor is suspending in wait simultaneously with the callee actor taking control. This is more like a "tail-dispatch_async" kind of operation, where you don't really want the calling context to block, but you still want to be able to reuse the current thread since it'll be immediately freed up by blocking on the async operation. That's something we could conceivably build into runtime support for this model.

-Joe

···

On Aug 18, 2017, at 11:57 AM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

I think that awaiting on the result of an actor method ends up being pretty similar (in terms of implementation and design tradeoffs) to dispatch_sync. That said, my understanding is that thread explosion in GCD happens whenever something blocks a GCD thread, not when it politely yields control back to GCD. Am I misunderstanding what you mean?