Generally this looks good, but I do have two specific concerns to do with Clocks. The first relates to the part where we say
These absolute deadlines are composable and nestable to any set scope of a deadline. This means that when more than one withDeadline is nested the minimum of the expiration is taken. If any nested cases are differing clocks the deadline is adjusted to the minimum by aproximating the current deadline with the offset of the proposed expiration.
Leaving aside for a moment the typo (should be "approximating", not "aproximating"), the issue here is that there is no particular reason that one Clock should be readily convertible to another.
This came up in my SchedulingExecutor proposal, the first version of which had convert() methods on Clock that let you perform approximate conversions between clocks. That turned out to be tricky to get right, and it encodes a number of assumptions about Clocks (for instance, it precludes Clocks that are not directly time-based, such as vector clocks, or testing clocks that you explicitly tick()). We changed that proposal so that the Clock itself handles scheduling if we don't explicitly recognise it as one the executor supports.
I'm not sure what the right solution is here, but likely it isn't converting instants between clocks. Maybe not merging the deadlines, but just acting when the first one expires would be sufficient here?
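To make that alternative concrete: one way to act when the first deadline expires, without ever converting instants between clocks, is to race a watchdog child task per (clock, deadline) pair. This is only a rough sketch of the idea, not the proposal's API, and every name here is invented:

```swift
// Rough sketch: instead of converting instants between clocks, run one
// watchdog per (clock, deadline) pair and treat whichever fires first
// as "the" deadline.
func racingDeadlines<C1: Clock, C2: Clock, R: Sendable>(
    _ d1: C1.Instant, on c1: C1,
    _ d2: C2.Instant, on c2: C2,
    operation: @escaping @Sendable () async throws -> R
) async throws -> R {
    try await withThrowingTaskGroup(of: R?.self) { group in
        group.addTask { try await operation() }
        group.addTask { try await c1.sleep(until: d1, tolerance: nil); return nil }
        group.addTask { try await c2.sleep(until: d2, tolerance: nil); return nil }
        // Whichever child finishes first decides the outcome.
        defer { group.cancelAll() }
        guard let result = try await group.next()! else {
            throw CancellationError() // a deadline expired before the work finished
        }
        return result
    }
}
```

This avoids ever comparing instants across clocks, at the cost of one suspended child task per distinct clock.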
My other concern is this:
extension Task where Success == Never, Failure == Never {
    public static var currentDeadline: (any InstantProtocol)? { get }
}

extension UnsafeCurrentTask {
    public var deadline: (any InstantProtocol)? { get }
}
Since InstantProtocol is generic (and since, additionally, you don't know the Clock), it's not clear to me how useful this will be in practice?
I guess if the deadlines aren't merged, then this method would go away anyway?
Would a possible alternative here be to ask for the deadline by passing in a clock, so that we query only that kind of deadline? We would then not compose deadlines from different clocks, though, and querying them would require knowing what clock was used...
For propagation it seems like we have to know the clock, or at least the instant type, anyway, so a client making a remote call needs to know whether it's a clock it can/must propagate or not... Perhaps this would become currentDeadline(using: clock) // -> C.Instant and then we could decide to default it to currentDeadline(using: .continuous) // -> ContinuousClock.Instant?
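Concretely, the shape being floated here might look something like the following. This is purely a sketch of the idea above, not proposed wording:

```swift
extension Task where Success == Never, Failure == Never {
    // Hypothetical: return only the deadline that was recorded against
    // this specific clock type, statically typed to its Instant.
    public static func currentDeadline<C: Clock>(using clock: C) -> C.Instant? { ... }
}

// And the defaulted variant discussed above:
// Task.currentDeadline(using: .continuous) // -> ContinuousClock.Instant?
```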
I'll admit I am very torn about the generic clocks here, they introduce a lot of complexity and it would be good to explore this some more...
Could a generic Deadline protocol help here? In the simple case the concrete type would probably be some sort of ContinuousClockDeadline type, but it could also be something else that is more custom, perhaps not even related to a Clock. I'm not sure what the protocol requirements would need to include for that.
But the point is that there would be a single deadline and you can inspect its type rather than having to pass the correct clock type to get a deadline. Merging deadlines of types other than the standard ContinuousClockDeadline would be the responsibility of the type implementing Deadline; same for serialization for propagation over the network.
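To make that a little more concrete, here is a speculative sketch of what such a protocol's requirements might include. Every name here is invented for illustration; nothing like this exists in the proposal:

```swift
// Speculative sketch of the Deadline protocol idea discussed above.
protocol Deadline: Sendable {
    /// Has the deadline passed, by whatever notion of time this type uses?
    var hasPassed: Bool { get }

    /// Keep the earlier of two deadlines of the same concrete type.
    /// Merging across *different* concrete types stays the responsibility
    /// of the conforming type, as suggested above.
    func merging(_ other: Self) -> Self

    /// A serialized form suitable for propagation over the network,
    /// or nil if this deadline is process-local and cannot travel.
    var wireRepresentation: [UInt8]? { get }
}
```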
Perhaps it'd also be a good idea to be able to pass this information without automatically triggering a cancellation after the deadline. For instance, if a task is rendering a frame and misses its deadline, that results in a skipped frame, which is undesirable, but you still want the task to complete. Kind of a soft deadline.
(And thus perhaps a Deadline protocol could have an affordance for doing something else than cancelling the task when the deadline is reached.)
We just added cancellation shields to the language which allow for this “regardless of cancellation” execution, so I don’t think we should do another mechanism to not have cancellation.
I think the semantic of cancelling after exceeding the deadline is right, it’s just that they can be used for other things programmatically as well.
I wonder if it might be possible to have a Deadline type that holds both a reference to the Clock and the Clock.Instant? Something like
struct Deadline {
    var clock: any Clock
    var instant: any InstantProtocol
}
then you could have
extension Task where Success == Never, Failure == Never {
    public static var currentDeadline: Deadline? { get }
}

extension UnsafeCurrentTask {
    public var deadline: Deadline? { get }
}
I don't know how easy this would be to use in practice — I think you'd want to experiment a bit to try it and see.
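One wrinkle with the any-typed struct above is that nothing ties the instant's type to the clock's Instant. A generic box erased behind a small protocol could keep the pair statically matched; a sketch with invented names, assuming only the standard Clock API:

```swift
// Sketch: keep Clock and Instant statically paired inside a generic box,
// and erase the pair behind a minimal protocol. Illustrative names only.
protocol AnyDeadline: Sendable {
    var hasPassed: Bool { get }
    func sleepUntilDeadline() async throws
}

struct ClockDeadline<C: Clock>: AnyDeadline {
    var clock: C
    var instant: C.Instant

    var hasPassed: Bool { instant <= clock.now }

    func sleepUntilDeadline() async throws {
        try await clock.sleep(until: instant, tolerance: nil)
    }
}
```

The cost is that operations spanning two deadlines of different underlying clocks can only go through whatever the erased protocol exposes, which loops back to the merging question above.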
In reality you will be adding jitter to deadlines anyway
Yes, but the big issue is that with timer coalescing, your jitter won't matter.
Even if you have two deadlines, say at
12:33:44.5678
12:33:45.8875
i.e. they're jittered to be just over a second apart from each other. But let's say that they're a minute or so out from right now; then they'll likely be coalesced together, which makes them fire at the same time --> very bad.
But your request would be specifically to default the tolerance to 0, right?
Correct, everything but .zero presents real issues.
Coalescing is an idea to save on timer events, but I totally see how in your example it achieves the inverse effect: the timers fire at the same time and all need to be serviced at once.
I think the idea is to let the main processor remain asleep for longer. Maybe this is an okay-ish default for watchOS apps, and maybe, just maybe (I don't think so, but I don't have data), even for iOS apps, but it's certainly not a good default for most things. If you really want timer coalescing, you can opt into it. But the default should be safe and "what it says on the tin" IMHO.
I wonder though if just adding jitter with the existing API is sufficient or not, in your experience with sleep?
No, it's not because the crucial bit is to set tolerance: .zero.
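For reference, with today's Clock API the tolerance is passed explicitly per sleep, so "jitter plus zero tolerance" looks roughly like this (the jitter amount here is made up; nil tolerance leaves the slop up to the clock implementation):

```swift
let clock = ContinuousClock()

// Jitter the deadline a little so independent timers spread out...
let jitter = Duration.milliseconds(Int.random(in: 0..<500))

// ...and pass tolerance: .zero so the runtime doesn't coalesce the timer
// back onto the same firing as everything else.
try await clock.sleep(for: .seconds(1) + jitter, tolerance: .zero)
```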
A Deadline type would be a useful place to hang methods for finding the earlier of two deadlines, or for checking if it’s possible to get the deadline in a format suitable for sending to a different machine.
Not specifying a tolerance signals to the implementor of the clock that the tolerance is an implementation detail of that clock, which is free to choose an appropriate value.
So I think it would be reasonable for some platforms / process configurations to make this mean zero. I remember when timer coalescing was introduced, it was advertised as making a difference to phone and laptop battery life. I can see why it would cause issues in a process serving a large number of clients where it’s more important that load is spread out than concentrated.
Yeah that makes sense, thank you for confirming @johannesweiss.
Personally, this sounds to me like a really compelling argument to default to zero tolerance, though it'll be good to hear @Philippe_Hausler and @al45tair's thoughts as well.
It seems like there are two populations that want different defaults for this - app developers with a single client want timer coalescing for load concentration, and server developers don’t want it for load spreading. Is it something that could be configured, so that both can get what they want?
I have some concerns about defaulting to zero, particularly with respect to Windows.
The native Windows executor doesn't do timer coalescing, but it does have to cope with the fact that timer behaviour on Windows is somewhat problematic — specifically if you want to wait for less than 15.625ms, or with a greater accuracy than ±15.625ms, you need to spin, which is expensive. Since that's quite a large quantum, it's very likely that there are legitimate use-cases where spinning will be required, but at the same time we don't want to do it unless we really have to. Most programs likely don't care that much and can tolerate quite a lot of slop in their delays, and being able to specify nil to indicate that fact is important as sensible defaults will differ from platform to platform. The native Windows executor, for instance, treats nil as a 10% tolerance with a minimum of 15.625ms and a maximum of 100ms. If, on the other hand, you specify a zero tolerance, that may cause the Windows executor to spin for up to 15ms or so, as you've asked for an accurate timeout.
If the argument is that we should use a default of zero because of coalescing, I think that's wrong. We need to have a value that means "pick a sensible default", and that should be the default here — nil makes sense for that. If coalescing is a problem, we should have a separate way to disable it, and maybe it should be off by default on non-battery-constrained platforms.
specifically if you want to wait for less than 15.625ms, or with a greater accuracy than ±15.625ms, you need to spin, which is expensive.
If the argument is that we should use a default of zero because of coalescing, I think that's wrong. We need to have a value that means "pick a sensible default", and that should be the default here — nil makes sense for that. If coalescing is a problem, we should have a separate way to disable it, and maybe it should be off by default on non-battery-constrained platforms.
This is all very fair. Let me reformulate my actual requirements:
I think it is very important that the simplest, plainest invocation does not surprise anybody. As in, it's key that
which should mean the same but additionally randomise the actual deadline by up to 5 seconds. But if the plain withDeadline behaves sensibly, then this can also be syntactical sugar on top which just adds a bit of randomness around the deadline.
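If the plain form behaves sensibly, the sugared variant could be as simple as shifting the deadline by a random offset before delegating. A sketch, assuming an underlying withDeadline(until:clock:operation:) entry point along the proposal's lines; the jitter parameter and function shape are invented:

```swift
// Hypothetical sugar: same semantics as the plain withDeadline, plus a
// uniformly random extension of the deadline by up to `jitter`.
func withDeadline<C: Clock, R: Sendable>(
    in duration: C.Duration,
    jitter: C.Duration,
    clock: C,
    operation: @Sendable () async throws -> R
) async throws -> R {
    // DurationProtocol only offers * and / by Int, so pick the random
    // fraction of `jitter` at 1% granularity.
    let extra = jitter * Int.random(in: 0...100) / 100
    let deadline = clock.now.advanced(by: duration + extra)
    return try await withDeadline(until: deadline, clock: clock, operation: operation)
}
```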