SE-0329 (Second Review): Clock, Instant, and Duration

That's fine, but you would still want the ability to call the first version via the second (I believe). Tolerance values won't be huge in practice (e.g. why would anyone want to write (pseudocode) sleep(until: now + 10 sec, tolerance: 10e9 sec)?). You could use some bit patterns to represent special values like default or critical, etc.

The problem with bit patterns is that they need either saturation or overflow behavior. For example, if one gets a tolerance value, the immediate expectation is that you add said tolerance (in custom clocks) to the deadline to find the maximum deadline that is acceptable.
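To illustrate that hazard, here is a minimal sketch using a toy Int64-nanoseconds representation (not the proposal's actual types): if a tolerance could be a sentinel bit pattern, the deadline math would have to saturate rather than overflow.

```swift
// Toy model: deadlines and tolerances as Int64 nanosecond counts.
// A hypothetical sentinel near the top of the range, as in the
// "special values" version discussed below.
let sentinelDefault = Int64.max

func maximumAcceptableDeadline(deadline: Int64, tolerance: Int64) -> Int64 {
    // Naive `deadline + tolerance` traps (or wraps) when the tolerance
    // happens to be a huge sentinel bit pattern, so the addition must
    // saturate to stay well-defined.
    let (sum, overflow) = deadline.addingReportingOverflow(tolerance)
    return overflow ? Int64.max : sum
}
```

This is exactly the extra care a bit-pattern encoding forces on every custom clock implementor, which a nil tolerance (or a second requirement) avoids.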

The advantage of either a nil or a second required method is that it guides the implementor of the clock to instead search for the nearest next scheduled deadline and coalesce those events together, without performing math on the tolerance values. That nearest-neighbor search is done within some default tolerance derived from the workload. That is why an actual reported value may be totally non-deterministic and unobservable except in privileged processes, for anything that uses libdispatch to schedule work.

I think the better documentation to reference is the leeway parameter on dispatch timers in the Apple Developer Documentation, which states that:

But for cooperative dispatch queues (the thing powering concurrency) this leeway isn't really the same per scheduling: Darwin targets are able to convey more detailed information to the kernel with these specialized queues, so the leeway calculations are a bit different. Owing to Darwin's kernel optimizations, Apple Silicon devices take the tolerances into account a bit more and are able to schedule work more efficiently; Intel chips, on the other hand, place slightly more importance on battery life, making sure those leeway values are suitable for coalescing work due to the power requirements of their p-state bring-up.

Linux x86_64 machines (if I understand the scheduling behavior of libdispatch correctly) will behave closer to the Intel Mac machines.

A duration from an epoch is a universal, abstract concept. At any one instant, different systems may consider the current time to be different for any number of reasons - but it is not necessarily a question of the platform; the user may have just set the clock manually. That is inherent to the notion of a system clock.

But it is also irrelevant - the system clock does not need to define those details in order to be the governor of determining which instants should be considered as being in the past, present, or future.

It is nothing at all like URL or FilePath. Again, URL does not behave differently on each platform; rather, it lacks a single accepted specification on any platform. If it had one, it would be a good candidate for the standard library. The problem is with the standards, not the concept.

And again, FilePath is (a little) better in that regard, but it has very little utility if you don't know how the platform behaves. That is very different from SystemClock, whose core behaviour is still very much usable without needing to care about intricacies such as leap seconds. It may, to an outsider, seem like an appropriate analogy, but in truth it is anything but.

For the ultimate proof, look no further than the fact that Foundation is promising to add that API anyway. So there is literally zero gain here - the concept will still be shipped in the toolchain and widely available to Swift developers, just with an enormous cost (depending on Foundation) that doesn't need to be there. If it really was a terrible or flawed concept, or impossible to implement, Foundation would not be promising to implement it and make it widely available.

The standard libraries of plenty of other languages have these features built in. We also need them in Swift. But as it stands, Swift is going to be alone in lacking this feature.

My main takeaway from the first pitch and review threads is that a duration from an epoch is far from a universal concept once leap seconds are taken into consideration, and that different platforms have taken different stances on how they're handled.

Moving Foundation's wall clock to the standard library without also moving the related calendar and locale functionality would require the standard library to make a choice about how to handle leap seconds rather than delegating that choice to a calendar, and every potential option raised reasonable objections.

Personally, I'd be happy to have a standard library clock implementation that had some relationship to an external reference clock, but I'd be even happier if I can ignore leap seconds for the duration of this review.

Besides, if importing Foundation to get that kind of clock implementation is too onerous, one can always be added later under its own proposal.


I mean these three ways are equivalent (aside from the obvious differences that the Optional version can't have more than one default value, and the "special values" version prohibits using certain bit patterns as an "explicit" tolerance).

Version1 (Optional):

func sleep(until deadline: Instant, tolerance: Instant.Duration? = nil) async throws {
    let realTolerance = tolerance ?? getDefaultTolerance()
    realSleep(... realTolerance)
}

Version2 (enum):

enum SleepTolerance {
    case explicit(Instant.Duration)
    case `default`
    case critical
}

func sleep(until deadline: Instant, tolerance: SleepTolerance = .default) async throws {
    let realTolerance: Instant.Duration
    switch tolerance {
        case .explicit(let duration): realTolerance = duration
        case .default: realTolerance = getDefaultTolerance()
        case .critical: realTolerance = getCriticalTolerance()
    }
    realSleep(... realTolerance)
}

Version3 (special values):

extension Instant.Duration {
    static let `default` = 0xFFFFFFFF // or some equivalent
    static let critical = 0xFFFFFFFE
}

func sleep(until deadline: Instant, tolerance: Instant.Duration) async throws {
    let realTolerance: Instant.Duration
    switch tolerance {
        case .default: realTolerance = getDefaultTolerance()
        case .critical: realTolerance = getCriticalTolerance()
        default: realTolerance = tolerance
    }
    realSleep(... realTolerance)
}

In the "special values" version it would be a mistake to do, say, .default + 2.
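A sketch of that mistake with the hypothetical sentinel values from "Version3" above: arithmetic on a sentinel silently produces an ordinary-looking tolerance, so the special meaning is lost without any diagnostic.

```swift
// Hypothetical sentinel bit patterns, as in "Version3" above.
let defaultTolerance: UInt64 = 0xFFFFFFFF
let criticalTolerance: UInt64 = 0xFFFFFFFE

// `.default + 2` no longer matches either sentinel; a clock would
// silently treat it as an explicit (and enormous) tolerance value.
let mistaken = defaultTolerance + 2
assert(mistaken != defaultTolerance && mistaken != criticalTolerance)
```

Nothing in the type system prevents this, which is why the Optional and enum versions are safer encodings of the same idea.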

Ah! I take it this is (at least partly) why providing some detail about default sleep tolerances is so complicated. I totally see why it would be hard, if not impossible, to assign the default tolerance a numeric value or range.

With that, I'm thinking that what I'd actually like is closer to quality of service rather than a numeric value. That way I'd be able to make an informed decision about how my tolerance requirements compare to the effort the clock will make to wake my code up at the proper time, and then either give the clock as much leeway as it would like, or instead pass my requirements on to the clock.

I'm still looking for a real world use case where WallTime clock can be useful without Calendar API.

The only sensible use proposed in previous discussion was to have a clock synchronised between devices, but as already said, the fact that timestamps are system-dependent makes this use case invalid (notwithstanding the fact that there is no guarantee the devices will be synchronised anyway).

A WallTime clock is a poor choice for computing durations, as there is no guarantee the clock will not completely change at any time (a WallTime clock is user-settable). If by omitting this clock we can encourage usage of SystemClock instead, I will consider not defining it in the standard library a good thing.


Imo that functionality is well positioned in Foundation. A way to tell wall clock time isn't necessarily going to be available on every platform Swift could one day be usable on. Consider an embedded chip without any form of hardware clock/networking/OS. Would the stdlib just not offer that API on that platform?

Keeping the stdlib to things that are going to be implementable on any imaginable platform, and keeping things that rely on some specific hardware or OS functionality in Foundation sounds like a good move to me.

Also, how can depending on Foundation be "an enormous cost" and Swift be described as "lacking this feature", if Foundation is, as you say, shipped with the toolchain?


This depends on the unit of the duration, the definition of the epoch, and whether you can ignore special relativity (GPS systems can’t!).

Assuming a non-relativistic system in which the epoch is an arbitrary fixed point in time known a priori by all clients, defining duration as “number of SI seconds that have transpired since the epoch” is indeed universal. But SI seconds are not wall clock seconds! Wall clocks incorporate leap seconds. The question “what was the date in New York 20 SI seconds after the epoch” can differ depending on how a particular implementation handles leap seconds.
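To make the divergence concrete, here is a toy calculation (the leap-second count is a real historical figure; the function is a sketch, not proposal API): a wall-clock interval measured in POSIX seconds undercounts the SI seconds that actually transpired.

```swift
// POSIX time ignores leap seconds: every day is exactly 86_400 s long.
// TAI counts SI seconds: 27 positive leap seconds were inserted into
// UTC between 1972 and the end of 2016, so over that span the two
// scales disagree by 27 seconds.
let leapSecondsInserted1972to2016: Int64 = 27

func siSecondsElapsed(posixSeconds: Int64, leapSecondsInSpan: Int64) -> Int64 {
    // The SI-second count exceeds the POSIX count by one second for
    // each positive leap second inside the interval.
    posixSeconds + leapSecondsInSpan
}
```

Two implementations that answer "what was the date N SI seconds after the epoch" can therefore disagree precisely when their leap-second handling differs.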

I like the updated proposal, the changes are well thought through and the proposed API feels like a great fit in the stdlib. :+1:

One important nit: I am still very much convinced that we need to implement (and expose!) a much finer resolution for the standard Duration type than mere nanoseconds.

Timer resolutions are on the order of 10ns even today, and setting the granularity at 1ns does not leave much headroom for future improvements. If Duration is supposed to be the currency type for dealing with duration values (why else would we be introducing it?), then it ought to be able to accurately express durations less than 1ns. In fact, iterative benchmarks are already routinely measuring (average) time intervals below 1ns for relatively high-level Swift operations such as Array access. I wish to be able to use Duration to accurately express such results.

The proposal does not leave much room for future expansions here, as Swift.Duration's internal representation is fully exposed via its Codable conformance. It's unlikely we'll be able to meaningfully change Duration's encoding after the initial release -- it is in practice part of the type's public API. (And it ought to be documented as such.)

Therefore, I strongly recommend changing Duration to use attosecond precision instead. (A quintillion attoseconds fit comfortably within an Int64 value, and attoseconds precision provides plenty of headroom for the foreseeable future.)

If implementing fixed-point arithmetic with full width integer operations is not in scope for the initial implementation, then at least we should change Duration's Codable implementation to use second+attoseconds instead of the current second+nanoseconds split.
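A minimal sketch of what such a seconds + attoseconds split could look like (a hypothetical type for illustration, not the proposed Swift.Duration), including the decimal conversion property mentioned below that each 1 ns interval is exactly 10^9 units long:

```swift
struct AttosecondDuration: Codable, Equatable {
    var seconds: Int64
    var attoseconds: Int64 // in 0 ..< 1_000_000_000_000_000_000

    static let attosecondsPerSecond: Int64 = 1_000_000_000_000_000_000
    static let attosecondsPerNanosecond: Int64 = 1_000_000_000

    init(seconds: Int64, attoseconds: Int64) {
        // Normalize so the fractional part stays within one second.
        self.seconds = seconds + attoseconds / Self.attosecondsPerSecond
        self.attoseconds = attoseconds % Self.attosecondsPerSecond
    }

    static func nanoseconds(_ ns: Int64) -> AttosecondDuration {
        // Each nanosecond is exactly 10^9 attosecond units.
        AttosecondDuration(
            seconds: ns / 1_000_000_000,
            attoseconds: (ns % 1_000_000_000) * attosecondsPerNanosecond)
    }
}
```

Because the synthesized Codable conformance encodes both fields, the wire format would carry attosecond precision from day one even if the arithmetic initially rounded to nanoseconds.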


Can we use the full 64 bits of precision for fractional seconds?
As a by-product we would get a Fixed64.64 fixed-point integer type that might be useful for other purposes. (Reminds me of the Fixed and Fract types that old Mac OS had.)

FWIW, I think switching to binary fractional seconds would make duration arithmetic somewhat easier to implement, but I believe it would make conversions from base-10 SI units unreasonably tricky -- e.g., I see value in ensuring that each 1ns interval is exactly 10^9 units long.

We have 128 bits to play with. If we treated this as a single Int128 counter, this would cover durations ±390 times the current age of the universe in attoseconds precision. However, if we went this way, then we'd need to do a full-width division just to extract whole seconds, which seems a bit rich to me -- so the current scheme of dedicating 64 bits to signed whole seconds and another 64 for fractional seconds seems like a nice pragmatic choice. (This still leaves us with a maximum duration of 2^63 s (or 21 times the age of the universe), which I think should be plenty for most applications.)

For the fractional half, given that time is usually measured/reported in base 10 units, it seems reasonable to choose the smallest decimal unit where 1s still fits in 63 bits, which leads to attoseconds. (2^63 as ≅ 9.223s) (If we went with an unsigned value and used the full 64 bits, then we could choose a unit as low as 100 zeptoseconds. But I think 1 attosecond is already plenty good.)

@scanon is the expert here, though, I'm just a numerical hobbyist. :-)

Semi-related: many hardware simulations (and HDLs like Verilog) can be run with 1-picosecond timescales (and smaller), and it's not inconceivable that someone would want to write this sort of simulator in Swift.


Sure, but such a system would probably also need to exclude large parts of the standard library anyway (including ContinuousClock and UptimeClock, which are part of this proposal).

IIRC (and I may be misremembering this), the minimum hardware requirements for porting Linux are a clock and an interrupt controller (beyond an instruction processor and some RW memory, which are required for the machine to be Turing-complete). A clock is very much a core component of any modern computer.

Because the part of Foundation where Date lives also comes with Calendar, DateFormatter, Locale, etc - all built on ICU. Just having the currency type for timestamps will come with a lot of extra baggage that you don't want or need.

If this is going to be the cost of wanting to rename Date, I'd say we just concede and go back to sinking it into the standard library, and let the Foundation team have their way despite the overwhelming community consensus. Swift libraries today need this, and we don't gain anything by having it in Foundation rather than the standard library.

I think relativistic concerns are very much out of scope for a SystemClock.

That's fine; I don't think SystemClock needs a precise definition for this, as long as it is internally consistent. The entire point is to leave those aspects as an implementation detail. A system clock inherently has only limited reliability, because it is an external source of data which the user is able to manipulate. Similarly, they could compile their own OS and manipulate the interfaces used by SystemRandomNumberGenerator so that it always returns 42. That's just how it is.

I cannot stress this enough - this feature :arrow_right: is going into the toolchain, either way :arrow_left:. Arguments about leap seconds are moot. The question is whether it is better for Swift to have these critical interfaces and currency types live in a secondary, monolithic library with a large number of heavyweight dependencies, or whether they should be added to the (much slimmer) standard library.

I think the latter is better for Swift. There are packages today which need the currency type, and adding a dependency on Foundation just for that is too high a cost.

I see the value in this, but does it need to be handled by Duration rather than a theoretical HighResolutionClock implementation down the line, with its own distinct DurationProtocol type?

The main downside I see in adding it to Duration is that units smaller than nano are, in my experience at least, fairly esoteric and likely to be a distraction to developers working outside of domains that require that level of timing. Personally, I know that atto, femto, and pico are all smaller than nano, but I couldn't tell you off the top of my head how they're ordered, whether I missed any prefixes between them, or how many of each are in a nano.


While I'm still in favour of this API over the one in the proposal, I was thinking some more about the comment I made earlier about quality of service. I'm now thinking that an API along these lines would be even better:

enum StrawmanQoS {
  case userInteractive
  case `default`
  case background
  // Other cases as appropriate...
}

func sleep(until deadline: Instant, qos: StrawmanQoS = .default) async throws
func sleep(until deadline: Instant, tolerance: Instant.Duration) async throws

That would allow the caller to either provide a specific tolerance, or to give the scheduler an idea of how much leeway it has to make its own choice.

While getting a timestamp may be useful, I hardly see how having a Clock implementation based on it can be "critical". This is just an unreliable way to measure Duration (unlike ContinuousClock and UptimeClock, which are both accurate and reliable).

Independent of our debate about whether my analogy is apt or not, and bearing in mind that this is a review thread, I think it's appropriate to examine this point above concretely—

What are examples of packages today that need Date for non-calendrical purposes independent of other Foundation APIs, and which cannot import Foundation? Of those use cases, which cannot be served by using the standard library facilities provided in this version of the proposal?


Attaching a numerical timestamp to log entries sounds like such a use case. Calendrical processing of those timestamps would be performed by consumers of those log entries, not the logger itself.
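A sketch of that pattern (hypothetical logger types, not any real package's API): the producer records only a raw offset from some agreed-upon epoch, and all calendrical interpretation is deferred to consumers.

```swift
struct LogEntry {
    // Raw duration since an agreed-upon epoch, in nanoseconds.
    // No calendar is needed to capture, compare, or sort these.
    let timestampNanoseconds: Int64
    let message: String
}

func isOrdered(_ a: LogEntry, _ b: LogEntry) -> Bool {
    // Consumers can merge entries from several sources by raw
    // timestamp; calendrical formatting happens elsewhere, if at all.
    a.timestampNanoseconds <= b.timestampNanoseconds
}
```

The logger itself never needs Calendar, Locale, or a formatter - only a currency type for the timestamp.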

Won’t formatting such a timestamp require a calendar library?