SE-0304 (2nd review): Structured Concurrency

Very big +1 to this. It is reasonable to think that detach is too short / not descriptive enough, but prepending spawn to it creates more semantic confusion than it solves. I have an alternative proposal over here, because we've found that we really need an alternative to detach. It both adds an API that subsumes the majority of uses of detach, and gives detach a more appropriately-descriptive name. Since it's an additional proposal, I've gone off and started a separate pitch thread rather than put more alternatives here.

Doug

5 Likes

Sure, I'm not specifically arguing for one name; I'm arguing that detach by itself as a global function is very ambiguous. How about detachTask or something like that?

(I haven't read the linked post from Doug, but will later)

2 Likes

Maybe we can rename detach to withDetachedTask, similar to withUnsafeBytes and other methods in the standard library.

1 Like

The with* family of functions passes an argument to a closure (whatever the asterisk stands for: a pointer, a continuation, you name it). In the case of detach there is nothing to pass.
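
For illustration (not from the thread), the standard library's withUnsafeBytes(of:) shows that pattern: the function's whole job is to hand the closure a short-lived argument.

let value = 42
let sum = withUnsafeBytes(of: value) { rawBuffer in
  // `rawBuffer` is the argument the with* function exists to provide.
  rawBuffer.reduce(0) { $0 + Int($1) }
}

A detached task has no such argument to hand over; the closure is the whole payload, which is arguably why the with* naming doesn't carry across.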

3 Likes

I think detachTask would be more appropriate:

let backgroundTask = detachTask {
  ...
}
2 Likes

Instead of using two separate functions, could we use a control parameter?

func spawn(asDetached detached: Bool, doing body: () -> Void) -> Handle

(I think there was a post above suggesting a third state. Then we would use a three-way enumeration type for the first parameter.)
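
Purely for illustration, call sites under that shape might read like this (spawn(asDetached:doing:), Handle, doChildWork, and doUnrelatedWork are placeholders from the sketch above, not anything in the proposal):

// Hypothetical: a child task vs. a detached one through a single entry point.
let child = spawn(asDetached: false) {
  doChildWork()
}
let background = spawn(asDetached: true) {
  doUnrelatedWork()
}

One thing a Bool control parameter gives up is a self-describing name at the call site, which may be part of why the thread keeps returning to naming.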

1 Like

I very much agree with this. Combining it with

maybe we could have a hierarchy of priorities and the resulting global priority would be the product of all the relative priorities.
This would probably introduce the problem of being able to say, from a local environment, "I need to make this relative priority end up higher than that other priority", but I suppose there are solutions to it (e.g. if the priority keeps track of the hierarchy).

But I haven't really used priorities so I cannot foresee if this would introduce other kinds of problems :disappointed:
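
To make the quoted idea concrete with made-up numbers (this is not how the proposal's priorities actually work; it uses a small fixed set of levels, not multiplied fractions):

let appRelative     = 0.8   // chosen by the application
let libraryRelative = 0.5   // chosen by a library the app calls into
let childRelative   = 0.5   // chosen by a task the library spawns
let effective = appRelative * libraryRelative * childRelative   // 0.2

Under such a scheme a component could only position its work relative to its parent, which seems to be both the appeal and the limitation described above.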

I would be hesitant to assume that you can build a global priority graph. We tried this with HTTP/2 and it didn't work super well. In general my heuristic is that complex priority schemes are rarely worthwhile: systems with small levels of granularity usually get you 99%+ of the benefit with vastly less of the complexity.

Edit: to clarify, I mean systems with a small number of levels. The new HTTP prioritisation design uses 7 levels, and it probably could have gotten away with fewer.

15 Likes

I would be hesitant to assume that you can build a global priority graph. We tried this with HTTP/2 and it didn't work super well. In general my heuristic is that complex priority schemes are rarely worthwhile: systems with small levels of granularity usually get you 99%+ of the benefit with vastly less of the complexity.

+1

Just a handful of priority levels are truly needed; I think KISS really applies here (but you do need a few :smiley:).

2 Likes

I guess it depends on the use-case; from what I've been able to glean of modern server development, the hardware is flexible and the focus is on scaling to make use of it all. It's easy to just throw more resources at the problem, and that tends to be what server developers reach for to quickly fix their performance problems.

But there are other use-cases, where the hardware is fixed and your focus is on squeezing every last ounce of performance out of it, and especially reducing latency (e.g. gaming, where even if you're loading a bunch of textures, you want something like a sound effect to take priority because stuttering sound is more obvious to the player).

Also, it seems to me that it is the current design which attempts to build a global priority graph. When some asynchronous code in a library declares its task to be "high priority", it is making an overly-broad assertion of its own importance.

Perhaps the solution is to use custom executors to arbitrate the library's priority levels into application-wide priorities, but that's still a nebulous area of the concurrency design.

Most libraries should not be initiating unstructured tasks; they should be inheriting the priority and other context of the tasks they're invoked with.
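
As a sketch of what that inheritance looks like with the structured primitives being proposed (downloadImage, downloadMetadata, and Thumbnail are made-up helpers):

func makeThumbnail(id: String) async throws -> Thumbnail {
  // async let creates child tasks: they inherit the parent's priority and
  // task-local context, and are cancelled together with the parent.
  async let image = downloadImage(id: id)
  async let metadata = downloadMetadata(id: id)
  return try await Thumbnail(image: image, metadata: metadata)
}

Only a detached task opts out of that inheritance, which is why the post above argues it should be the rare case for libraries rather than the default tool.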

2 Likes

Maybe withUncheckedCancellation(operation:onCancel:) is more descriptive than withTaskCancellationHandler(operation:onCancel:)?

withUncheckedCancellation {
  // operation.
} onCancel: {
  // handler.
}

Since the behavior is explicitly documented:

/// Does not check for cancellation, and always executes the passed `operation`.
///
/// This function returns instantly and will never suspend.
func withTaskCancellationHandler<T>(
  operation: () async throws -> T,
  onCancel handler: @Sendable () -> Void
) async rethrows -> T

Thanks for the suggestion but that’s a bit off the mark.

That it does not "check" is pretty much just restating what the entire Swift cancellation model is: it is not implicitly checking anywhere (except in "spawnUnlessCancelled", but that's pretty explicit actually).

It's not as if everything else is checking and this specific function is the only one that does not.
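
As a sketch of that cooperative model (process, Item, Output, and transform are placeholders, and the Task.checkCancellation() spelling is as in the proposal), cancellation only takes effect where the code explicitly asks:

func process(_ items: [Item]) async throws -> [Output] {
  var outputs: [Output] = []
  for item in items {
    // Explicit opt-in: throws CancellationError if this task was cancelled.
    try Task.checkCancellation()
    outputs.append(await transform(item))
  }
  return outputs
}

withTaskCancellationHandler fits the same model: it gives the operation a hook to react to cancellation, but it never interrupts the operation itself.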

1 Like

As @John_McCall says, the error here is having a library make assertions about the importance of any of its own work. All priority schemes will have this risk: either anyone can express a priority, in which case it is always possible for a misbehaving component to assert the highest priority level for itself, or you attempt to weight individual components against each other, which requires the programmer to be able to correctly adjudicate, statically, ahead of time, how important their relative components are.

The answer IMO is to not bother. Library developers should inherit priority from their users and ask when they don’t have good answers, or default to lower priorities. I think a more complex priority scheme will buy extremely little performance improvement over a simpler one when held by an expert user with complete system understanding, and risks being slower when used by non-expert users. That’s a bad trade-off.

6 Likes

... and your focus is on squeezing every last ounce of performance out of it, and especially reducing latency ...

I have spent a fair amount of time working on systems like that with fairly significant latency requirements in wire-to-wire response times, and still a handful of priorities was definitely enough.

We also hosted customers' custom code as plugins and allowed them to specify a priority from 0-100, and I can honestly say that was a bad design choice and mostly useless. It is better to guide developers with a smaller, useful set of well-defined priorities and fairly clear definitions of expected usage; that serves the purpose much better, IMHO.

At the end of the day, you end up wanting to pin threads to cores and minimise hopping/switching between them (and in our case using a userland networking stack, but that is a different discussion), something the new concurrency model seems nicely prepared for with custom executors and actors.

For the use cases where latency was just important rather than critical, you still want to be able to do rough bucketing of priorities to ensure decent behaviour. This is the world where I would expect most people to live; those of us who need full control will need to work with custom executors etc., which is fine as long as the hooks are designed in (which they seem to be!).

7 Likes

Voluntary Suspension

The proposal describes a Task.yield() public API.
The implementation also has a Task.sleep(_:) public API.
Could these be combined by using an argument label and a default argument?
(And possibly renamed to suspend, because yield can also mean "generate a value".)

extension Task {
  public static func suspend(nanoseconds: UInt64? = nil) async
}
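
Call sites under that suggestion would presumably read like this (purely illustrative; the APIs in the proposal and implementation are Task.yield() and Task.sleep(_:)):

await Task.suspend()                             // today: await Task.yield()
await Task.suspend(nanoseconds: 1_000_000_000)   // today: await Task.sleep(1_000_000_000)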
8 Likes

What's the status of this review now? It's almost a month after the stated deadline, and I don't see any official updates. Did I miss anything?

The proposal authors are making some adjustments as a result of feedback; the intention is to put it into a third review shortly.

11 Likes

Review Conclusion

The period for the second review has ended. A third review with an amended proposal is now underway.

1 Like