SE-0329 (Second Review): Clock, Instant, and Duration

My main takeaway from the first pitch and review threads is that a duration from an epoch is far from a universal concept once leap seconds are taken into consideration, and that different platforms have taken different stances on how they're handled.

Moving Foundation's wall clock to the standard library without also moving the related calendar and locale functionality would require the standard library to make a choice about how to handle leap seconds rather than delegating that choice to a calendar, and every potential option raised reasonable objections.

Personally, I'd be happy to have a standard library clock implementation that had some relationship to an external reference clock, but I'd be even happier if I could ignore leap seconds for the duration of this review.

Besides, if importing Foundation to get that kind of clock implementation is too onerous, one can always be added later under its own proposal.

3 Likes

I mean these three approaches are equivalent (aside from the obvious differences that the Optional version can't have more than one default value, and the "special values" version prohibits using certain bit patterns as an "explicit" tolerance).

Version1 (Optional):

func sleep(until deadline: Instant, tolerance: Instant.Duration? = nil) async throws {
    let realTolerance = tolerance ?? getDefaultTolerance()
    realSleep(... realTolerance)
}

Version2 (enum):

enum SleepTolerance {
    case explicit(Instant.Duration)
    case `default`
    case critical
}
func sleep(until deadline: Instant, tolerance: SleepTolerance = .default) async throws {
    let realTolerance: Instant.Duration
    switch tolerance {
        case .explicit(let duration): realTolerance = duration
        case .default: realTolerance = getDefaultTolerance()
        case .critical: realTolerance = getCriticalTolerance()
    }
    realSleep(... realTolerance)
}

Version3 (special values):

extension Instant.Duration {
    static let `default` = 0xFFFFFFFF // some reserved bit pattern, or equivalent
    static let critical = 0xFFFFFFFE
}

func sleep(until deadline: Instant, tolerance: Instant.Duration = .default) async throws {
    let realTolerance: Instant.Duration
    switch tolerance {
        case .default: realTolerance = getDefaultTolerance()
        case .critical: realTolerance = getCriticalTolerance()
        default: realTolerance = tolerance
    }
    realSleep(... realTolerance)
}

In the "special values" version it would be a mistake to do, say, .default + 2.

Ah! I take it this is (at least partly) why providing some detail about default sleep tolerances is so complicated. I totally see why it would be hard, if not impossible, to assign the default tolerance a numeric value or range.

With that, I'm thinking that what I'd actually like is closer to a quality of service than a numeric value. That way I'd be able to make an informed decision about how my tolerance requirements compare to the effort the clock will make to wake my code up at the proper time, and then either give the clock as much leeway as it would like, or instead pass my requirements on to the clock.

I'm still looking for a real-world use case where a WallTime clock can be useful without a Calendar API.

The only sensible use proposed in previous discussion was to have a clock synchronised between devices, but as already said, the fact that timestamps are system dependent makes this use case invalid (notwithstanding the fact that there is no guarantee the devices will be synchronised anyway).

A WallTime clock is a poor choice for computing durations, as there is no guarantee the clock will not change completely at any time (a WallTime clock is user settable). If by omitting this clock we can encourage usage of SystemClock instead, I would consider not defining it in the standard library a good thing.

3 Likes

Imo that functionality is well positioned in Foundation. A way to tell wall clock time isn't necessarily going to be available on every platform Swift could one day be usable on. Consider an embedded chip without any form of hardware clock/networking/OS. Would the stdlib just not offer that API on that platform?

Keeping the stdlib to things that are going to be implementable on any imaginable platform, and keeping things that rely on some specific hardware or OS functionality in Foundation sounds like a good move to me.

Also, how can depending on Foundation be "an enormous cost" and Swift be described as "lacking this feature", if Foundation is, as you say, shipped with the toolchain?

2 Likes

This depends on the unit of the duration, the definition of the epoch, and whether you can ignore special relativity (GPS systems can’t!).

Assuming a non-relativistic system in which the epoch is an arbitrary fixed point in time known a priori by all clients, defining duration as “number of SI seconds that have transpired since the epoch” is indeed universal. But SI seconds are not wall clock seconds! Wall clocks incorporate leap seconds. The question “what was the date in New York 20 SI seconds after the epoch” can differ depending on how a particular implementation handles leap seconds.

I like the updated proposal, the changes are well thought through and the proposed API feels like a great fit in the stdlib. :+1:

One important nit: I am still very much convinced that we need to implement (and expose!) a much finer resolution for the standard Duration type than mere nanoseconds.

Timer resolutions are on the order of 10ns even today, and setting the granularity at 1ns does not leave much headroom for future improvements. If Duration is supposed to be the currency type for dealing with duration values (why else would we be introducing it?), then it ought to be able to accurately express durations less than 1ns. In fact, iterative benchmarks are already routinely measuring (average) time intervals below 1ns for relatively high-level Swift operations such as Array access. I wish to be able to use Duration to accurately express such results.

The proposal does not leave much room for future expansions here, as Swift.Duration's internal representation is fully exposed via its Codable conformance. It's unlikely we'll be able to meaningfully change Duration's encoding after the initial release -- it is in practice part of the type's public API. (And it ought to be documented as such.)

Therefore, I strongly recommend changing Duration to use attosecond precision instead. (A quintillion attoseconds fit comfortably within an Int64 value, and attosecond precision provides plenty of headroom for the foreseeable future.)

If implementing fixed-point arithmetic with full width integer operations is not in scope for the initial implementation, then at least we should change Duration's Codable implementation to use second+attoseconds instead of the current second+nanoseconds split.
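
To make that concrete, here is a minimal sketch of what a seconds + attoseconds Codable split could look like. The type name and the unkeyed two-integer layout are made up for illustration; this is not the proposal's actual Duration implementation.

struct StrawmanDuration: Codable {
    var seconds: Int64      // whole SI seconds
    var attoseconds: Int64  // sub-second part, 0 ..< 1_000_000_000_000_000_000

    init(seconds: Int64, attoseconds: Int64) {
        self.seconds = seconds
        self.attoseconds = attoseconds
    }

    // Encode/decode as an unkeyed [seconds, attoseconds] pair,
    // so the wire format is just two integers.
    init(from decoder: Decoder) throws {
        var container = try decoder.unkeyedContainer()
        seconds = try container.decode(Int64.self)
        attoseconds = try container.decode(Int64.self)
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.unkeyedContainer()
        try container.encode(seconds)
        try container.encode(attoseconds)
    }
}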

15 Likes

Can we use the full 64 bits of precision for fractional seconds?
As a by-product we could have a Fixed64.64 fixed-point type that might be useful for other purposes. (Reminds me of the Fixed and Fract types old Mac OS had.)

1 Like

FWIW, I think switching to binary fractional seconds would make duration arithmetic somewhat easier to implement, but I believe it would make conversions from base-10 SI units unreasonably tricky -- e.g., I see value in ensuring that each 1ns interval is exactly 10^9 units long.

We have 128 bits to play with. If we treated this as a single Int128 counter, this would cover durations of ±390 times the current age of the universe at attosecond precision. However, if we went this way, then we'd need to do a full-width division just to extract whole seconds, which seems a bit rich to me -- so the current scheme of dedicating 64 bits to signed whole seconds and another 64 for fractional seconds seems like a nice pragmatic choice. (This still leaves us with a maximum duration of 2^63 s (or 21 times the age of the universe), which I think should be plenty for most applications.)

For the fractional half, given that time is usually measured/reported in base 10 units, it seems reasonable to choose the smallest decimal unit where 1s still fits in 63 bits, which leads to attoseconds. (2^63 as ≅ 9.223s) (If we went with an unsigned value and used the full 64 bits, then we could choose a unit as low as 100 zeptoseconds. But I think 1 attosecond is already plenty good.)
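
As a rough illustration of why the 64 + 64 split keeps things cheap (the constant names below are made up for this sketch): converting a base-10 quantity such as a nanosecond count into the two fields needs only 64-bit arithmetic, each 1ns maps to exactly 10^9 units, and extracting whole seconds is a plain field read rather than a 128-bit division.

let attosecondsPerSecond: Int64 = 1_000_000_000_000_000_000  // 10^18; upper bound of the fractional field
let attosecondsPerNanosecond: Int64 = 1_000_000_000          // 10^9, exact in base 10

let nanoseconds: Int64 = 2_750_000_000                        // a 2.75 s interval, as nanoseconds
let seconds = nanoseconds / 1_000_000_000                     // 2 whole seconds
let attoseconds = (nanoseconds % 1_000_000_000) * attosecondsPerNanosecond
// attoseconds == 750_000_000_000_000_000 (0.75 s); both fields fit comfortably in Int64
assert(0 <= attoseconds && attoseconds < attosecondsPerSecond)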

@scanon is the expert here, though, I'm just a numerical hobbyist. :-)

Semi-related: many hardware simulations (and HDLs like Verilog) can be run with 1 picosecond timescales (and smaller), and it's not inconceivable that someone would want to write this sort of simulator in Swift.

2 Likes

Sure, but such a system would probably also need to exclude large parts of the standard library anyway (including ContinuousClock and UptimeClock, which are part of this proposal).

IIRC (and I may be misremembering this), the minimum hardware requirements for porting Linux are a clock and interrupt controller (beyond an instruction processor and some RW memory, which are required for the machine to be Turing-complete). It's very much a core component of any modern computer.

Because the part of Foundation where Date lives also comes with Calendar, DateFormatter, Locale, etc - all built on ICU. Just having the currency type for timestamps will come with a lot of extra baggage that you don't want or need.

If this is going to be the cost of wanting to rename Date, I'd say we just concede and go back to sinking it into the standard library, and let the Foundation team have their way despite the overwhelming community consensus. Swift libraries today need this, and we don't gain anything by having it in Foundation rather than the standard library.

I think relativistic concerns are very much out of scope for a SystemClock.

That's fine; I don't think SystemClock needs a precise definition for this, as long as it is internally consistent. The entire point is to leave those aspects as an implementation detail. A system clock inherently has only limited reliability, because it is an external source of data which the user is able to manipulate. Similarly, they could compile their own OS and manipulate the interfaces used by SystemRandomNumberGenerator so that it always returns 42. That's just how it is.

I cannot stress this enough - this feature :arrow_right: is going into the toolchain, either way :arrow_left:. Arguments about leap seconds are moot. The question is whether it is better for Swift to have these critical interfaces and currency types live in a secondary, monolithic library with a large number of heavyweight dependencies, or whether it should be added to the (much slimmer) standard library.

I think the latter is better for Swift. There are packages today which need the currency type, and adding a dependency on Foundation just for that is too high a cost.

1 Like

I see the value in this, but does it need to be handled by Duration rather than a theoretical HighResolutionClock implementation down the line, with its own distinct DurationProtocol type?

The main downside I see in adding it to Duration is that units smaller than nano are, in my experience at least, fairly esoteric and likely to be a distraction to developers working outside of domains that require that level of timing. Personally, I know that atto, femto, and pico are all smaller than nano, but I couldn't tell you off the top of my head how they're ordered, if I missed any prefixes between them, or how many of each are in a nano.

2 Likes

While I'm still in favour of this API over the one in the proposal, I was thinking some more about the comment I made earlier about quality of service. I'm now thinking that an API along these lines would be even better:

enum StrawmanQoS {
  case userInteractive
  case `default`
  case background
  // Other cases as appropriate...
}

func sleep(until deadline: Instant, qos: StrawmanQoS = .default) async throws
func sleep(until deadline: Instant, tolerance: Instant.Duration) async throws

That would allow the caller to either provide a specific tolerance, or to give the scheduler an idea of how much leeway it has to make its own choice.
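
For instance (hypothetical usage of the strawman above; deadline is assumed to be an Instant obtained elsewhere, and .milliseconds(5) assumes the proposed Duration convenience constructors):

// Give the scheduler broad leeway for non-critical work...
try await sleep(until: deadline, qos: .background)
// ...or state an explicit bound when the timing actually matters.
try await sleep(until: deadline, tolerance: .milliseconds(5))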

1 Like

While getting a timestamp may be useful, I hardly see how having a Clock implementation based on it can be "critical". This is just an unreliable way to measure Duration (unlike ContinuousClock and UptimeClock, which are both accurate and reliable).

Independent of our debate about whether my analogy is apt or not, bearing in mind that this is a review thread, I think it’s appropriate to examine this point above concretely—

What are examples of packages today that need Date for non-calendrical purposes independent of other Foundation APIs, and which cannot import Foundation? Of those use cases, which cannot be served by using the standard library facilities provided in this version of the proposal?

2 Likes

Attaching a numerical timestamp to log entries sounds like such a use case. Calendrical processing of those timestamps would be performed by consumers of those log entries, not the logger itself.

1 Like

Won’t formatting such a timestamp require a calendar library?

No: maybe the numerical timestamp is sent raw (binary) over the wire. Or it is formatted/stored as a plain Double on some file/database/sink. As long as the calendrical processing of a timestamp is performed by some process/tool/binary which is not the one that produces it, we have one use case that fits @xwu's request. Such delayed processing can happen, for example, in the case of a logger.
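
A minimal sketch of that pattern, assuming a POSIX clock_gettime is available and using a made-up LogEntry type: the producer records only a raw Double, and whatever later reads the log does the calendrical work.

#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

struct LogEntry {
    var unixSeconds: Double   // raw timestamp; formatting is deferred to the log's consumer
    var message: String
}

func makeEntry(_ message: String) -> LogEntry {
    var ts = timespec()
    clock_gettime(CLOCK_REALTIME, &ts)
    let seconds = Double(ts.tv_sec) + Double(ts.tv_nsec) / 1_000_000_000
    return LogEntry(unixSeconds: seconds, message: message)
}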

2 Likes

Generating a timestamp does not require a Clock, only a gettimeofday() function (which is provided by any libc and just needs a light Swift stub to expose it in a more Swift-friendly way).
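
Something like this minimal stub, for example (the wrapper name is made up; gettimeofday itself comes straight from the C library):

#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Thin Swift wrapper over gettimeofday(): whole seconds plus microseconds
// since the Unix epoch, with no Clock or calendar machinery involved.
func currentTimestamp() -> (seconds: Int, microseconds: Int) {
    var tv = timeval()
    gettimeofday(&tv, nil)
    return (Int(tv.tv_sec), Int(tv.tv_usec))
}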

This whole thread is basically about making the attosecond-precise version of gettimeofday.

2 Likes