SE-0297: Concurrency Interoperability with Objective-C

Hi Swift Evolution!

The review of Concurrency Interoperability with Objective-C begins now and runs through December 22, 2020.

Reviews are an important part of the Swift evolution process. All review feedback should be either on this forum thread or, if you would like to keep your feedback private, directly to the review manager (via direct message in the Swift forums).

What goes into a review of a proposal?

The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of Swift.

When reviewing a proposal, here are some questions to consider:

  • What is your evaluation of the proposal?
  • Is the problem being addressed significant enough to warrant a change to Swift?
  • Does this proposal fit well with the feel and direction of Swift?
  • If you have used other languages or libraries with a similar feature, how do you feel that this proposal compares to those?
  • How much effort did you put into your review? A glance, a quick reading, or an in-depth study?

Thanks,
Chris Lattner
Review Manager

To see the effect of the heuristics in this proposal on Apple Objective-C APIs across macOS / iOS / watchOS / tvOS, please see the API diffs in this pull request.

Doug

I'm curious how this should interact with APIs that take both a completion handler and a dispatch queue on which to run that handler. My understanding is that current best practice for asynchronous APIs is to provide an argument specifying which queue the completion should run on, to avoid the common pattern of hopping from queue to queue. For instance, if you want the completion to run on the main queue, it's wasteful to dispatch async onto the main queue inside your completion block instead of just having that completion block put on the main queue to start with.

That said, I couldn't find many examples in the API diff linked above where the method actually takes both a completion block and a queue. One example is this method from the (deprecated) GLKTextureLoader class:

  func texture(withContentsOfFile path: String, options: [String : NSNumber]? = nil, queue: DispatchQueue?, completionHandler block: @escaping GLKTextureLoaderCallback)
  func texture(withContentsOfFile path: String, options: [String : NSNumber]? = nil, queue: DispatchQueue?) async throws -> GLKTextureInfo

The new async version still takes a queue. So how does code like this behave?

let texture = try await loader.texture(withContentsOfFile: path, options: nil, queue: someQueue)
// Which queue am I on here?

Consider the answers for cases like when you start on the main queue and when you start on some other GCD queue.

Part of the answer might depend on the structured concurrency spec, but I'm not sure if this ObjC interop also affects the answer.

Also, should the importer try to handle APIs like this more cleverly, perhaps using the task-scheduling features from the structured concurrency proposal to provide a suitable queue automatically? For instance, maybe instead we reflect this method like so:

  func texture(withContentsOfFile path: String, options: [String : NSNumber]? = nil, queue: DispatchQueue?, completionHandler block: @escaping GLKTextureLoaderCallback)
  func texture(withContentsOfFile path: String, options: [String : NSNumber]? = nil) async throws -> GLKTextureInfo

And then a call like this:

// Start on the main queue
let texture = try await loader.texture(withContentsOfFile: path, options: nil) // main queue is implicitly passed in?
// Needs to be back on the main queue (as per structured concurrency spec)

Ideally you don't want this to hop queues unnecessarily. If the reflected API exposes the queue parameter, then the developer has to provide one, and they have to remember to pass in the main queue. Could that instead be inferred?

The fact that I didn't find more examples like this might imply it's not something that has to be handled, but is that because we're not providing good APIs? Is this change going to make that worse, or are we hoping that as more people adopt Swift-native APIs built on structured concurrency, things like this will just work out?

Or am I just completely off in my understanding of best practices for async APIs with GCD?

If your function cares about the context it executes on (say, because it's a method on an actor and we need to be in the actor's context), then after the await you'll come back to that context. So you'll end up in the right place, although it might cost you an extra hop.

I think this was floated as best practice early on in GCD, but I did the same API crawl as you did, and the number of APIs that follow this practice is very, very small (I think it was a dozen or so). Most even take optional dispatch queues, so you can pass nil.

Because of the small number, and the fact that functions that care about their context will automatically return to it after the await, I don't consider it worth pursuing special import rules for APIs that have completion handlers and a dispatch queue.
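The guarantee for actor methods can be sketched like this (a hypothetical TextureCache; the imported GLKTextureLoader signature is taken from the diff quoted above):

```swift
import GLKit

actor TextureCache {
    var textures: [String: GLKTextureInfo] = [:]

    func load(path: String, with loader: GLKTextureLoader) async throws {
        // The completion handler fires on a global queue internally, but
        // after the await we come back to this actor's executor, possibly
        // at the cost of one extra hop.
        let texture = try await loader.texture(withContentsOfFile: path,
                                               options: nil,
                                               queue: .global())
        textures[path] = texture  // safe: back in the actor's context
    }
}
```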

Doug

That may be true of this API surface, but other code people write doesn't look like that.
Asking your client where to run code is very good practice.

let texture = try await loader.texture(withContentsOfFile: path, options: nil, queue: someQueue)
// more code

@Douglas_Gregor are you saying that if you await that way, the "more code" continuation is running on someQueue? wow, that's more than a little confusing: code that looks imperative/sequential can hop contexts this way. you CANNOT hold something like a mutex across such code; with a pthread_mutex you'd have undefined behavior, and with an os_unfair_lock you'd crash.

I don't think we really need a special importer rule, but having some magical way to say "the current queue I'm on" would be interesting, I think (though it has some API/behavior concerns at the dispatch level)

I think under the hood some internal trampoline would end up on someQueue and would then dispatch back to some other queue depending on the context your async function started in. This is relevant to the next question:

My understanding is that this could be done using a custom Executor (from the Structured Concurrency pitch).

Presumably that's how code running on the main queue would end up back on the main queue (the continuation is scheduled using the executor that was active when the function was suspended). But for code not running on the main queue, I think the Executor would either be associated with an Actor (in which case it uses some private queue) or... what? A global queue?

If this code starts on someQueue and passes in someQueue then presumably it prefers to end up back on someQueue. Does that require a custom Executor? Will there be a built-in (public) Executor implementation that lets you specify a queue? I guess I'm building up a list of questions for the Structured Concurrency spec...

I think I'm fine with saying this isn't the job of the importer to handle (especially if it's not common in API), but if it's a pattern that is common enough generally (especially if it's good practice) then we need to know the right way to write async code using functions that follow that pattern. So what is the right way? Do we know yet?

Ah, I was taking the example from the other poster less literally, where "lock" was a proxy for "some thing you have to do before and then undo after".

You are absolutely right that one must not hold a mutex across a potential suspension point, because if you do get suspended you will resume on the same task but not on the same thread. This is indeed part of the reason we want potential suspension points marked with await.
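The hazard can be sketched as follows (hypothetical functions; `Task.sleep` stands in for any async call that might suspend):

```swift
import Foundation

let lock = NSLock()

// WRONG (sketch): the task may resume on a different thread after the
// await, and unlocking an NSLock from a thread other than the one that
// locked it is undefined behavior (os_unfair_lock would crash outright).
func unsafe() async throws {
    lock.lock()
    try await Task.sleep(nanoseconds: 1_000_000)  // potential suspension point
    lock.unlock()
}

// OK (sketch): do all lock-protected work synchronously, and release the
// lock before any potential suspension point.
func safe() async throws {
    lock.lock()
    // ...touch protected state synchronously...
    lock.unlock()
    try await Task.sleep(nanoseconds: 1_000_000)
}
```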

Doug

The imported APIs look beautiful!
+1 for the proposal.

I skimmed through the imported APIs and specifically looked for URL loading handling since I remembered a discussion on pointfree.co referring to this article by Ole Begemann:
Making illegal states unrepresentable – Ole Begemann

It discusses how the URLResponse passed to the completion handler of URLSession.dataTask(with:...) may be set even though the Error is also present.

Might the case be similar for sendAsynchronousRequest on NSURLConnection? The documentation only states that one of the Data or Error parameters is non-nil, but doesn't mention the URLResponse.
In Translation into Swift Concurrency model by DougGregor · Pull Request #1 · DougGregor/swift-concurrency-objc · GitHub, this method is imported as async throws -> (URLResponse, Data).
If the response is always present in the Objective-C completion handler, then that import may be losing information.

I was wondering when we would come across such an API. This is the reason we have the swift_async(none) attribute on the Objective-C side: one can disable the translation to async for this API, then implement another one that preserves the extra information using withUnsafeContinuation and the original API.
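A sketch of what that could look like (hypothetical wrapper; assumes the Objective-C declaration has been annotated with the swift_async(none) attribute, e.g. NS_SWIFT_ASYNC(none), so the automatic async translation is suppressed):

```swift
import Foundation

// Hand-written async wrapper that preserves the URLResponse even when
// an error is also present, instead of collapsing to `throws`.
extension NSURLConnection {
    static func send(_ request: URLRequest,
                     queue: OperationQueue) async -> (URLResponse?, Data?, Error?) {
        await withUnsafeContinuation { continuation in
            NSURLConnection.sendAsynchronousRequest(request, queue: queue) {
                response, data, error in
                continuation.resume(returning: (response, data, error))
            }
        }
    }
}
```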

Doug

Read the proposal and about a quarter of the discussion around it.

When the completion handler is optional (and defaults to nil), should the async method have the @discardableResult attribute? (And vice versa.)

 class NSExtensionContext: NSObject {
 
   func open(_: URL, completionHandler: ((Bool) -> Void)? = nil)
 
+  @discardableResult
+  func open(_: URL) async -> Bool
 }
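Call sites under that proposed import would then look like this (sketch; `context` and `url` are assumed to be in scope):

```swift
// With @discardableResult, ignoring the Bool success value produces no
// warning, mirroring the optional completion handler defaulting to nil:
await context.open(url)

// The result is still available when you want it:
let succeeded = await context.open(url)
```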

When the result is discarded, can overload resolution prefer the non-async method?

This is a great idea, thank you!

That would mean that _ = open(url) would implicitly run concurrently with the body of the function, which IMO we shouldn't make so subtle. I don't think we should tweak the async overload-resolution rule on a per-expression basis like this. If we need to sugar

Task.runDetached {
  await context.open(url)
}

then we can do that separately.

Doug

This proposal has been accepted with revision, thank you all.

-Chris Lattner
Review Manager

This is such a beautiful thing, I can't wait to be able to use this!
