It should. It's a known issue. You can refer to Revert "stdlib: Add reasync variants of '&&', '||' and '??'" by DougGregor · Pull Request #36762 · apple/swift · GitHub and [DNM] stdlib: Add reasync variants of '&&', '||' and '??' by slavapestov · Pull Request #36830 · apple/swift · GitHub.
I may have missed this, but the original post states:
This includes Swift 5.5, with support for language-native concurrency.
I take that to mean async/await and Actor types are part of Swift 5.5 language.
If that is the case, is it possible to use what is being referred to as "Structured Concurrency" for apps that support older OS versions, < iOS 15?
Or, for the concurrency features of the language to be used, would I have to target iOS 15 and macOS Monterey?
Currently, it only supports iOS 15, but back-deploy options are being explored (not a guarantee, though).
Hmm… thank you for your reply.
I guess I figured that since it was promoted as "language-native support" and not an OS feature, it would be usable right now.
That said, I'm not completely following Apple's strategy here for Swift if a core language feature is tied to a closed-source platform like iOS. Also, does that mean "Structured Concurrency" is not available on Linux or Windows?
I take it that it will be 3 to 5 years before this "language-native support" for async/await can be adopted in apps, because the reality of software development on iOS, for us at least, is that we have to support more than just the latest version of iOS.
AFAICT, it is available on those platforms, provided you install the Swift runtime (included in the toolchain) or embed the Swift runtime in the binary.
Ah. I see. Thank you for the clarification.
I didn't understand how to specify the thread or queue on which async/await code executes. It's also not clear to me whether it's safe to use self without weak: is that correctly handled in the implementation, or was it simply ignored in the samples?
Can anyone explain this, or share a link to an existing answer?
I've started adding Swift Concurrency support to Alamofire and it's going pretty well so far. There are a few open issues.
As discussed in my WWDC lab with @John_McCall and @Douglas_Gregor (IIRC), existing APIs which allow users to control the DispatchQueue on which completion handlers are called have suboptimal choices right now when it comes to what kind of queue to use when wrapping these APIs.
- A serial queue is unnecessarily blocking, creating a bottleneck when completing continuations.
- A parallel queue is unnecessarily parallel, possibly forcing the system to create additional threads, in addition to the threads which already exist for Swift's concurrency features.
Providing access to the underlying DispatchQueue pool through some sort of facade could provide a better answer here. It would provide only the level of parallelism the system has already decided is suitable for the app while not unnecessarily limiting it. However, one obvious concern is preventing the running of arbitrary work on this queue. Some way of limiting its visibility to only continuation contexts, or limiting the access to the concurrency pool to only when it's called from a continuation context, could help alleviate that concern. For now I've used a concurrent queue, as the extra threads should be inherently limited by various underlying properties, like the number of simultaneous requests processed by URLSession.
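Roughly, the shape of wrapper involved looks like this (a simplified sketch with hypothetical names, not Alamofire's actual API):

import Dispatch

// Hypothetical completion-handler API that lets the caller pick the queue.
func fetchValue(queue: DispatchQueue, completion: @escaping (Result<Int, Error>) -> Void) {
    queue.async { completion(.success(42)) }
}

// A concurrent queue used only for resuming continuations; the work done on it
// is tiny, but a serial queue here would still serialize all completions.
let continuationQueue = DispatchQueue(label: "continuations", attributes: .concurrent)

func fetchValue() async throws -> Int {
    try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Int, Error>) in
        fetchValue(queue: continuationQueue) { result in
            continuation.resume(with: result)
        }
    }
}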
One other, somewhat major, issue is the unfortunate intersection of autoclosures, await, and XCTest. Pretty much all of XCTest's various assertion functions use autoclosures to evaluate their predicates and comments. This creates an unfortunate limitation where we can't await within those assertions, complicating test setup by requiring a separate declaration for every awaited value. This is particularly annoying when testing APIs with the use of async let. I know the autoclosure issue has been discussed in other contexts, but this issue seems particularly impactful.
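A small standalone illustration of the limitation (makeValue is a made-up async function):

import XCTest

func makeValue() async -> Int { 42 }

final class AwaitAssertionTests: XCTestCase {
    func testValue() async {
        // Won't compile: the assertion's arguments are synchronous autoclosures,
        // so you can't 'await' inside them.
        // XCTAssertEqual(await makeValue(), 42)

        // Workaround: evaluate the async expression into a local first.
        let value = await makeValue()
        XCTAssertEqual(value, 42)
    }
}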
Some feedback about overloads that differ only in async, with Core Data & GRDB as examples, and a request to allow such overloads given a (not beautiful) workaround exists.
Another issue I've run into is that, while resultBuilders appear to allow async builder functions and they compile, there's no way to await the builder, causing the use site to fail.
@resultBuilder
struct AsyncInts {
    static func buildBlock(_ components: Int...) async -> [Int] {
        components
    }
}

func ints(@AsyncInts _ ints: () -> [Int]) async {
    print(ints())
}

async {
    await ints { // Error: 'async' call in a function that does not support concurrency
        1
        2
        3
    }
}
How should (global) actors and Notifications interact? For example, a notification may be sent from any thread/actor context. Is there a good way to ensure that the notification handler runs on the right actor? See the class below:
let TestNotification = Notification.Name("Test")

class Test: NSObject {
    @MainActor
    @objc dynamic func handleNotification(_ notification: Notification) {
        assert(Thread.isMainThread) // Asserts when notification is received
    }

    func sendNotification() {
        DispatchQueue.global().async {
            NotificationCenter.default.post(name: TestNotification, object: self)
        }
    }

    func start_selector() {
        NotificationCenter.default.addObserver(self, selector: #selector(handleNotification(_:)), name: TestNotification, object: nil)
        sendNotification()
    }

    func start_block() {
        // FIXME - remove observer
        NotificationCenter.default.addObserver(forName: TestNotification, object: nil, queue: nil, using: {
            self.handleNotification($0) // error: Call to main actor-isolated instance method 'handleNotification' in a synchronous nonisolated context
        })
        sendNotification()
    }
}
The handleNotification() function is declared @MainActor, but it's possible to register it with a notification center and then have it called outside the main actor. Should there be a warning about using #selector with actor-isolated functions?
In start_block(), I'm instead using the block API. The error can be resolved by using async { await handleNotification(notification) }. This can be made into an extension like this:
extension NotificationCenter {
    func addObserver(forName name: Notification.Name, object: Any?, using block: @escaping @MainActor (Notification) -> ()) {
        addObserver(forName: name, object: object, queue: nil) { notification in
            async { await block(notification) }
        }
    }
}
But this then only works for @MainActor. It would be good if there was some way of abstracting over the isolation context so that this could be done generically for all (global) actors.
Overall, I like the concurrency features very much. There are only a few problems I have encountered. One of them is the following:
You can do this to execute code on the Main Actor:
async { @MainActor in
    await asyncFunc()
    syncFunc()
}
This works fine. However, if I remove the call to asyncFunc(), I get the following error:
async { @MainActor in // Error: Converting function value of type '@MainActor @Sendable () -> ()' to '@Sendable () async -> ()' loses global actor 'MainActor'
    syncFunc()
}
It has taken me some time to find the syntax needed to fix the error (even though I have invested many hours into the concurrency proposals over time):
async { @MainActor () async in
    syncFunc()
}
I find this error unintuitive and hope that changes can be made so that my second example would simply work. Or is there a deeper reason why making this work would present problems?
The async/await proposal says:
Asynchronous function types are distinct from their synchronous counterparts. However, there is an implicit conversion from a synchronous function type to its corresponding asynchronous function type.
To me, this reads like my second example should already work.
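If I'm reading it right, that conversion does work for plain function values; a minimal standalone example (illustrative, not taken from the proposal):

func plainSyncFunc() { print("hello") }

// A synchronous function value converts implicitly to an async function type...
let asyncVersion: () async -> Void = plainSyncFunc

func caller() async {
    // ...and can then be awaited like any other async function.
    await asyncVersion()
}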
Yeah, I agree, that should work.
I'm wondering how I can accomplish a serial queue of events on a Thread subclass. For instance, in the sketch below, how would I do this without semaphores?
class DBClass: Thread {
    // queue property here

    override func main() {
        while true {
            // serially execute queued requests
            // sleep thread
        }
    }

    public func execute(_ theCommand: String) async throws {
        // queue request
        // wake thread
        // wait for main to get to it and return after request is executed
    }
}
Would this be an actor? E.g.:
actor DBActor {
    public func execute(_ theCommand: String) async throws {
        // execute request
    }
}
execute() might not need to be async if it isn't itself awaiting anything. If there are awaits, you would need to watch out for reentrancy, where other requests could start running while the actor is suspended.
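To illustrate the reentrancy point, a minimal sketch (the body is hypothetical; Task.sleep just stands in for real async work):

actor DBActor {
    private var isBusy = false

    func execute(_ command: String) async throws {
        // Actor methods run serially between suspension points, but the actor
        // is reentrant: while this method is suspended at an 'await', another
        // call to execute(_:) may start running.
        isBusy = true
        try await Task.sleep(nanoseconds: 1_000_000) // stands in for real async work
        // By the time execution resumes here, another in-flight call may have
        // touched 'isBusy', so invariants must hold at every 'await'.
        isBusy = false
    }
}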
I've only loosely been following the concurrency work, just because it's been split between a lot of proposals that have been continuously reworked, and it's hard to find time to try it all out thoroughly. The WWDC talks have been really good at giving an overview of where things stand and what the thinking is behind it all, so thanks to everyone involved.
I really like the overall approach. I think it does a great job of cleaning up the syntax required to exploit concurrency today.
With one exception: task groups. Take a look at the example from @kavon's section of Explore Structured Concurrency in Swift:
The explanation given in the talk is that, while a function using async let has a statically knowable number of child tasks, you don't necessarily know the number of child tasks for examples like this which loop over a collection (or make use of branching control flow). The problem I have is that, while async let does a really good job of removing the ceremony and making concurrency feel like a natural part of the language, this code... well, doesn't.
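For reference, here's a rough sketch of the general shape of code being discussed; it isn't the exact example from the talk, and the names (Thumbnail, fetchOneThumbnail) are made up:

struct Thumbnail { /* stands in for an image type */ }

func fetchOneThumbnail(id: String) async throws -> Thumbnail {
    // Placeholder for a real network fetch.
    Thumbnail()
}

func fetchAllThumbnails(ids: [String]) async throws -> [String: Thumbnail] {
    try await withThrowingTaskGroup(of: (String, Thumbnail).self) { group in
        // One loop to add the child tasks...
        for id in ids {
            group.async {
                let thumbnail = try await fetchOneThumbnail(id: id)
                return (id, thumbnail)
            }
        }
        // ...and a second loop directly after it to collect the results.
        var thumbnails: [String: Thumbnail] = [:]
        for try await (id, thumbnail) in group {
            thumbnails[id] = thumbnail
        }
        return thumbnails
    }
}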
There are a number of things that stick out:
- Lots of nested scopes. One to begin the task group, another to actually perform the loop, and another inside of the loop containing the child task contents. Then another loop later, within the first scope. And all of that within a function scope.
- Lack of type inference. I need to declare the type of the child task's results in advance, which doesn't feel very swifty. Functions in Swift can have horrid return types, involving lots of generic wrappers which are a pain to type; some of them can't even be typed. What do I do if my child tasks return an opaque type? I'm assuming I won't be able to write withThrowingTaskGroup(of: some MyProtocol), since the compiler wouldn't know the underlying type.
- Double loops. I need one loop to add all of the child tasks to the group, and another directly after it to process the results. It just looks really awkward.
- It's a fairly major expansion of with... functions. They are used today, of course, but they tend to be reserved for relatively advanced features, typically involving unsafe constructs. I wouldn't be surprised if most Swift developers have never been exposed to these kinds of functions before, and aren't aware of the rules like not escaping the group object. By the way, what happens if you do escape the group object? Is it undefined behaviour?
Overall, I'm left feeling that task groups lack some of the elegance found in other parts of the concurrency design. They still have a lot of the issues that you have with concurrency APIs today, and they don't feel better integrated into the language compared to library-level constructs such as DispatchGroup.
I don't know if the answer is to include convenience functions for common map/reduce-style operations, some new kind of statement syntax, or what. But that's my feedback; mostly very impressive, but this one part feels a bit lacking.
Yes, it should. @Joe_Groff recently fixed this in the compiler.
Doug
Thanks for the feedback! I agree that task groups in their raw form are clunky (and that's part of the reason we want to also have async let for the very common single-task, single-value case), but it's difficult for me to imagine an improvement that doesn't sacrifice some amount of generality. The scope stacking is ugly, but on the other hand, it's necessary to have a distinct scope for the group, in order to allow the child tasks' scope to be independent of any individual statement's scope. In the example from the talk, there is one loop to create tasks, and another loop to consume their results, but that's just one pattern you can implement using task groups. As you suggested, I think it would be useful to have higher-level utility functions that encapsulate common patterns, like the map-reduce pattern in the example.
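For example, a hypothetical concurrentMap helper (a sketch, not an existing standard library API) could hide the group, the child-task result type, and the two loops:

// Hypothetical helper: runs 'transform' on each element as a child task
// and returns the results in the original order.
func concurrentMap<Element, T>(
    _ elements: [Element],
    _ transform: @escaping @Sendable (Element) async throws -> T
) async throws -> [T] {
    try await withThrowingTaskGroup(of: (Int, T).self) { group in
        for (index, element) in elements.enumerated() {
            group.async {
                let value = try await transform(element)
                return (index, value)
            }
        }
        var results = [T?](repeating: nil, count: elements.count)
        for try await (index, value) in group {
            results[index] = value
        }
        return results.compactMap { $0 }
    }
}

A call site would then collapse to a single expression, e.g. try await concurrentMap(ids) { try await fetchOneThumbnail(id: $0) }, reusing the hypothetical fetchOneThumbnail from the sketch above.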
Another thing that I think would help make task groups feel more expressive and integrated is to make the compiler's data race analysis for @Sendable closures aware of task groups. Right now, we treat the closure argument to group.async like any other @Sendable closure, but within a task group, we know that the closure will execute, and that it must finish executing before the group goes out of scope. With those invariants, it should be allowed for child task closures to capture local mutable variables as long as those variables aren't captured by another child task, and the parent task doesn't concurrently access the same variables while the child tasks are running. The compiler could also allow definite initialization of variables by child tasks. Both of these improvements should reduce the number of times you need to express a return type and use a for loop to communicate data from child to parent tasks.
I had the same thought while watching the videos: "async/await is done so elegantly," followed by Tasks and the feeling that Swift was about to get a lot more difficult to comprehend.