SE-0329 (Second Review): Clock, Instant, and Duration

As I’d commented during the pitch phase, this seems appropriate for a setup analogous to URL remaining in Foundation and FilePath living in Swift System.

To be clear: this approach is not possible, because the default tolerances are not knowable values; they are determined at runtime (potentially at layers that processes don't even have access to). On Darwin, for example, the kernel decides what the unspecified tolerance is for timers based on QoS and many other factors. To actually extract that number, the calling process needs root-level privileges, so I don't think we can feasibly expose it as an actual value.

Right, of course. Sorry about that; I find it hard to remember all the various protocol requirements without Xcode running nearby.

Sorry if this has been brought up already, but couldn't Tolerance be an enum with cases default and specified(Duration)?
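Something like this, say (a sketch only; the Tolerance name and its cases are mine, with Duration being the proposal's type):

  enum Tolerance {
      case `default`            // the clock picks an implementation-defined tolerance
      case specified(Duration)  // a caller-specified upper bound on lateness
  }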


You could allocate a certain Duration constant value (e.g. all bits set) to represent the default value, no? And if the notion of an infinite duration is needed, it could be modelled as, say, the next smaller value (or vice versa).
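For illustration, a sketch of that sentinel idea using the proposal's component-wise initializer (the name and the particular reserved value are hypothetical):

  extension Duration {
      // An extreme reserved value standing in for "use the default tolerance".
      static var defaultToleranceSentinel: Duration {
          Duration(secondsComponent: .max, attosecondsComponent: 0)
      }
  }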

Ah, got it. That's unfortunate from the perspective of the API client, but an awesome implementation detail. Would something along the lines of a maxDefaultTolerance be feasible? I have a hard time envisioning a situation where I'd be ok with asking for the default tolerance without having an idea of what that tolerance is. I'm more likely to end up requiring explicit tolerances via code review or linter rules.

That aside, I'd much rather see a dedicated tolerance enum, as suggested upthread, rather than reusing Optional, although with a variation:
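Something along these lines, where the case names are my best guess at the intent:

  enum Tolerance {
      case `default`           // implementation-defined
      case absolute(Duration)  // an explicit upper bound on lateness
      case scaled(Double)      // a fraction of the requested sleep duration
  }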

The .scaled case would allow expressing things like a tolerance of 5% of the requested sleep duration.

EDIT: Clarified that I find it unfortunate that the default tolerance can't be known precisely, rather than that it's dynamic.


They are really nothing alike.

A URL type does not belong in the standard library, because there is (ironically) no universal interpretation of URLs. It may, to an outsider, seem like a solved problem, but in truth, it is anything but.

FilePath could potentially be moved to the standard library, although it would require a filesystem (which compilation targets such as Wasm do not support) and it would be unique in that it, by design, behaves entirely differently on every platform. I don't know of any other standard library interface which does that. For that reason, I personally wouldn't support moving it, were it to be proposed.

A type which represents a duration from an epoch is not like either of those things. It is a pure, abstract concept, and one which we very much need in the standard library. Leaving it to the package ecosystem is a much, much worse alternative and a severe compromise.

And those packages will exist on the very day this proposal is accepted. By not including it, we're only creating needless division and awkwardness for developers.

Do folks think that a reasonable approach would be to have Clock define its requirements as two methods instead, so that we avoid the .none case?

  func sleep(until deadline: Instant) async throws
  func sleep(until deadline: Instant, tolerance: Instant.Duration) async throws

That way, developers using it would either pass a tolerance or not; no passing of nil or .none allowed.
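At the call site that would look something like this (a sketch inside an async context, using the proposal's ContinuousClock):

  let clock = ContinuousClock()
  let deadline = clock.now.advanced(by: .seconds(10))

  // Implementation-defined default tolerance:
  try await clock.sleep(until: deadline)

  // Explicit tolerance: wake up no later than one second past the deadline.
  try await clock.sleep(until: deadline, tolerance: .seconds(1))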


I'd be happy with that, so long as the documentation made it clear that there's a difference between not specifying a tolerance and specifying a tolerance of .zero. If there is a difference at all, that is; presumably it would vary from implementation to implementation.

I'd still like to see a maxDefaultTolerance or similar static property on Clock, though.


I don't think it's terrible, but instinctively I think it's a bit of a footgun. Without looking up the documentation, I'd assume the sleep(until: date) version is equivalent to no tolerance, as in a 'tolerance of zero', rather than a 'default' pliable tolerance. It also might engender the reverse confusion, where people call sleep(until: date, tolerance: .zero) because that feels most natural when typing it out, even though they don't actually need the exactness and would benefit from the energy-efficient default tolerance.

(IMHO the 'infinite'/'indefinite' static var would be the preferable design then, assuming the enum design is out of the question for memory space reasons as stated upthread.)


I will have to dig into that a bit, but from what I understand of the kernel-scheduling side of things on Darwin, that value is highly context-dependent, and likely even the maximum may not be determinable as a static value.

Fair enough, but I have to imagine there's at least a reasonable confidence interval. If the tolerance can regularly be anywhere from milliseconds to days, then asking for the default feels more like a footgun than an API to me.

(Of course, coming up with a succinct property name for "the maximum default tolerance, 19 times out of 20" would be a challenge on its own... :smiley:)

If it's not easily expressible via an API, then I think adding a note to the sleep method's documentation with the intended tolerance range should be mandatory, similar to how most collection API methods give big-O estimates of their performance.

Unfortunately—and this was the crux of the problem leading up to this version of the proposal—duration from an epoch is reckoned differently on the different platforms, and there is no universal interpretation of that duration. It may, to an outsider, seem like a solved problem, but in truth, it is anything but. A SystemClock would, by design, also behave differently on every platform. It is for those reasons that I drew the analogy to URL and FilePath.


Not on a system under immense load.

Allowing the tolerance API to express a fraction of the duration seems like a useful addition. For example, the NSTimer API docs say:

As the user of the timer, you can determine the appropriate tolerance for a timer. A general rule, set the tolerance to at least 10% of the interval, for a repeating timer.
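Translated to the proposed API, that guidance might look something like this (a sketch; the integer division comes from DurationProtocol's / (Self, Int) requirement):

  let clock = ContinuousClock()
  let interval: Duration = .seconds(30)
  let deadline = clock.now.advanced(by: interval)

  // NSTimer-style guidance: a tolerance of at least 10% of the interval.
  try await clock.sleep(until: deadline, tolerance: interval / 10)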

Oh, I absolutely agree.

On the one hand, that's why I referred to the default tolerance as having a confidence interval. On the other, that's an issue even if you request a specific tolerance instead of the default, so I've been thinking of that as covered by sleep's "best effort" contract rather than specific to tolerances (default or otherwise).

And something that slipped my mind when I suggested a fractional tolerance is that there are two sleep methods in the proposal, Task.sleep(for: Duration) and Clock.sleep(until: Instant). I was only thinking of the former when I suggested specifying a tolerance as a ratio of the duration and completely forgot to consider the latter.

In any case, I think @Philippe_Hausler's idea about using separate overloads for default vs explicit tolerances is a simpler, better approach than an enum, whether that enum is Optional or a new DurationTolerance.

That's fine, but you would still want the ability to call the first version via the second (I believe). Tolerance values won't be huge in practice (e.g. why would anyone want to write, in pseudocode, sleep(until: now + 10 sec, tolerance: 10e9 sec)?), so you could use certain bit patterns to represent special values like default or critical, etc.

The problem with bit patterns is that they need either saturation or overflow behavior. For example, when a custom clock receives a tolerance value, the immediate expectation is that it adds said tolerance to the deadline to find the maximum acceptable deadline.

The advantage of either nil or a second required method is that it guides the implementor of the clock to instead search for the nearest next scheduled deadline and coalesce those events together, without doing math on the tolerance values. That nearest-neighbor search happens within some default tolerance derived from the workload, which is why an actual reported value may be totally non-deterministic, and not observable except in privileged processes, for anything that uses libdispatch to schedule work.
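As a rough sketch of that distinction in a custom clock, with every name here invented for illustration:

  struct ManualClock {
      struct Instant {
          var offset: Duration
          func advanced(by duration: Duration) -> Instant {
              Instant(offset: offset + duration)
          }
      }

      // Explicit tolerance: the arithmetic is well-defined; the latest
      // acceptable wake-up is deadline + tolerance.
      func sleep(until deadline: Instant, tolerance: Duration) async throws {
          let latestAcceptable = deadline.advanced(by: tolerance)
          // ... arrange a wake-up anywhere in [deadline, latestAcceptable] ...
          _ = latestAcceptable
      }

      // Default tolerance: no sentinel arithmetic at all; instead, search
      // for the nearest already-scheduled deadline and coalesce with it.
      func sleep(until deadline: Instant) async throws {
          // ... nearest-neighbor search over pending wake-ups near `deadline` ...
      }
  }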

I think the better documentation to reference is the leeway parameter on Dispatch's schedule(deadline:repeating:leeway:) in the Apple Developer Documentation, which covers how the system may defer timer delivery within the specified leeway.

But for cooperative dispatch queues (the thing powering concurrency), this leeway isn't really applied the same way per scheduling decision. Darwin targets are able to convey more detailed information to the kernel with these specialized queues, so the leeway calculations are a bit different. Particular to Darwin's kernel optimizations, Apple Silicon devices take the tolerances more into account and are able to schedule work more efficiently; Intel chips, on the other hand, place somewhat more importance on battery life, making sure those leeway values are suitable for coalescing work, because of the power requirements of their p-state bring-up.

Linux x86_64 machines (if I understand the scheduling behavior of libdispatch correctly) will behave closer to the Intel Mac machines.


A duration from an epoch is a universal, abstract concept. At any one instant, different systems may consider the current time to be different for any number of reasons - but it is not necessarily a question of the platform; the user may have just set the clock manually. That is inherent to the notion of a system clock.

But it is also irrelevant: the system clock does not need to define those details in order to act as the arbiter of which instants should be considered in the past, present, or future.

It is nothing at all like URL or FilePath. Again, URL does not behave differently on each platform; it lacks a single accepted specification on any platform. If it did, it would be a good candidate for the standard library. The problem is with the standards, not the concept.

And again, FilePath is (a little) better in that regard, but it has very little utility if you don't know how the platform behaves. That is very different from SystemClock, whose core behaviour is still very much usable without needing to care about intricacies such as leap seconds. It may, to an outsider, seem like an appropriate analogy, but in truth, it is anything but.

For the ultimate proof, look no further than the fact that Foundation is promising to add that API anyway. So there is literally zero gain here: the concept will still ship in the toolchain and be widely available to Swift developers, just with an enormous cost (depending on Foundation) that doesn't need to be there. If it really were a terrible or flawed concept, or impossible to implement, Foundation would not be promising to implement it and make it widely available.

The standard libraries of plenty of other languages have these features built in, and we need them in Swift too. But as it stands, Swift is going to be alone in lacking them.