[Pitch] Clock, Instant, Date, and Duration

You seem to be assuming that the epoch is whenever you start the timer. Also, nanosecond precision is centered on the epoch, so cut that time in half.

3 Likes

Sorry, as I didn't see the actual real-world example, I had to guess. Would this be more correct?

I am scheduling a timer and telling it to fire exactly 7 seconds and a specified number of nanoseconds into the future, but for some reason my timer precision was only around 10 milliseconds! Note that I was performing this experiment in early 2020, and I remember timer precision was much higher in 2000 (corrected).

I may be off again guessing; please formulate the real-world example. However, if I am close enough this time, then maybe the problem of inexactness is due to some code at play that's doing (pseudocode):

    let date = Date(timeIntervalSinceNow: timeInterval)
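    // hypothetical accessor (pseudocode): reads the interval back after it has round-tripped through Date's lossy Double storage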
    let fireDateInNanosecondsFromNow = date.nanosecondsFromNow

instead of this:

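    // pseudocode: convert the interval directly, never touching Date's storage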
    let fireDateInNanosecondsFromNow = timeInterval.toNanoseconds

and it would be enough to fix that to resolve the yet-to-be-seen real-world issue?

Perhaps the best way to wrap up the concerns about Date's storage is this: the implementation will determine wall clock time from an API that returns a struct timespec, and the API with which to sleep also takes a struct timespec. The memory layout of struct timespec is 128 bits, and only 96 bits of that is really used. There is obviously a choice between favoring the existing Double or favoring layout-compatible storage for struct timespec; my choice is to favor the less lossy of the two, which has the added benefit that as Swift gains usage in more spots, any loss gets factored out.

After thinking on it for a while, the simpler answer is perhaps the right one: don't attempt to re-invent things like numeric storage to accommodate the peculiarities of storing an offset from a UTC reference point. I will still experiment with attempting to make this thinner, but in reality it is an easy pick to make; things that fit in under 2 words store quite nicely in Swift and often use registers for storage, so perhaps even a full 128 bits might be worthwhile for alignment. As with any implementation detail, this will need to be measured and tested to see what the actual impact really is, but at this point I don't think there is any more productive use in debating the merits of that storage, and we should focus on the utility of the surface area to make sure that is on point.
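
For concreteness, a minimal sketch (illustrative only, not the pitched implementation; the struct and field names are invented here) of what timespec-shaped storage looks like in memory:

    struct TimespecStorage {
      var seconds: Int64       // whole seconds from the UTC reference point
      var nanoseconds: UInt32  // sub-second component, 0..<1_000_000_000
    }

    print(MemoryLayout<TimespecStorage>.size)    // 12 bytes: the 96 bits actually used
    print(MemoryLayout<TimespecStorage>.stride)  // 16 bytes: padded to 128 bits by alignment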

16 Likes

Apologies in advance for being a bit brusque.

I'm going to put my foot down and say that the question of a 64b or 96b or 128b or Double representation is entirely off-topic and has completely derailed this thread. The pitch is about the API surface being proposed, not the specific representation details necessary to implement that API.

As a numerics expert, I feel quite comfortable saying that any of these options are fine from a computation and cache perspective, and can be made to work quite easily. The pitch should be focused on figuring out the API and the semantics of that API, and then I'll be happy to work with Philippe to find a way to implement those semantics numerically if he needs any assistance.

42 Likes

Yes, let's get back to the API. I have some questions that I've been waiting to ask:

  1. Naming

I'd like to suggest that we rename ClockProtocol to just Clock, and WallClock to SystemClock.

I feel that these names would fit better in the standard library; see, for example, RandomNumberGenerator and SystemRandomNumberGenerator. "System clock" is also more commonly used by other programming languages than "wall clock" (although the documentation for those often starts by calling it a "wall clock").

InstantProtocol could also be named TimePoint or PointInTime. That would lead us to a protocol hierarchy which says: every Clock has an associated Instant type, which is a PointInTime. Which is quite nice and intuitive, IMO.
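
For illustration, a hedged sketch of how that hierarchy could read under the suggested names (requirements trimmed down; the pitched protocols carry more, such as advancing instants and sleeping):

    protocol PointInTime: Comparable, Hashable {}

    protocol Clock {
      // every Clock has an associated Instant type, which is a PointInTime
      associatedtype Instant: PointInTime
      var now: Instant { get }
    }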

  2. MonotonicClock and sleep

My understanding of the pitch is that the MonotonicClock continues to tick even while the machine is asleep and events might not be responded to. That's quite an interesting choice.

On Linux, CLOCK_MONOTONIC does not continue to tick while the system is suspended; instead, there is a separate CLOCK_BOOTTIME clock for that. The reason behind it is sort of fascinating: the split started as a historical accident, and once the implementation changed in the kernel, they tried to change it so that the MONOTONIC and BOOTTIME clocks would be the same.

That patch was reverted almost immediately, due to some rather predictable issues:

  • systemd kills daemons on resume, after >WatchdogSec seconds
    of suspending (Genki Sky). [Verified that that's because systemd uses
    CLOCK_MONOTONIC and expects it to not include the suspend time.]
  • MATE desktop dims the display and starts the screensaver right after
    system resume (Pavel).

The Go programming language tried to make a similar change (having timers continue to tick on sleep), and an engineering lead on the Windows team asked them to revert that on Windows:

The bigger question to me is whether the change that introduced this regression was the right one at all. The Windows kernel team changed timer behavior in Windows 8 to stop advancing relative timeouts on wake. Otherwise when you open your laptop lid, every timer in the system goes off all at once and you get a bunch of unpredictable errors. Software is generally written to assume that local processes will make forward progress over reasonable time periods, and if they don't then something is wrong. When the machine is asleep, this assumption is violated. By making relative timers behave like threads, so that they both run together or they both don't, the illusion is maintained. You can claim these programs are buggy, but they obviously exist. Watchdog timers are well-known constructs.

This was a conscious design decision in Windows, and so it's disappointing to see the Go runtime second guess this several years later in a bug fix.

As far as behavior on Linux, there is clearly no consensus in issue #24595, which discusses this same problem. And indeed you can see that the CLOCK_MONOTONIC/CLOCK_BOOTTIME convergence was reverted in the kernel exactly because of the reason we stopped advancing time in Windows: random code has random failures due to timeouts.

Ultimately, Go abandoned making the change from MONOTONIC -> BOOTTIME on Linux, and (after a really long and somewhat heated discussion) the corresponding patch for Windows was reverted as well. This post summarises the differences quite nicely:

  1. "Real time", aka CLOCK_BOOTTIME on Linux or "interrupt time" on Windows. This measures the passage of real time and continues to pass when the system is suspended. This clock has meaning in the external world, such as to users and across networks and distributed systems.

  2. "Program time", aka CLOCK_MONOTONIC on POSIX or "unbiased interrupt time" on Windows. This also measures the passage of real time, but pauses when the system is suspended and no programs can make progress. This clock is more meaningful internal to a system.

I think it is worth examining this area of the proposal more closely. This is certainly a delicate issue, and the issues described by both the Linux kernel contributors and Microsoft kernel team make sense to me.

I'm not sure which side of the fence I sit on here, but I think it's worth asking: why are we choosing that the default MonotonicClock in Swift should continue to tick during system sleep? Should we perhaps change that, or should we offer both?

7 Likes

That is a fair critique as regards the protocol. I'd like to get commentary from the Darwin kernel folks on WallClock versus SystemClock (since, after all, they are a big player here that I am attempting to represent).

This somewhat follows the terminology I started off with, ReferencePoint (which was perhaps too general). One thing the Instant terminology offers that I am not sure PointInTime does is the non-timezone, non-calendrical, non-local-aware nature of the time value. The particularly sticky one here is the wall clock case; I would accept a naming where Clock.Time is required to be an Instant if we really wanted to go the route of "dropping the protocol suffix".

Yeah, this is not an easy concept to square between platforms. Darwin has the concepts of "absolute" time, "continuous" time, and realtime/wall-clock time. In that parlance, absolute time is closest to CLOCK_MONOTONIC from Linux; absolute time on Darwin is CLOCK_UPTIME_RAW (with a timebase adjustment). To follow, continuous time is closest to CLOCK_BOOTTIME on Linux, and on Darwin it is CLOCK_MONOTONIC.
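
As a quick illustration of those clock IDs (assuming a Darwin platform; this just reads both clocks, and per the description above only CLOCK_MONOTONIC keeps advancing across system sleep there):

    import Darwin

    var uptime = timespec()      // "absolute" time: stops while the machine sleeps
    var continuous = timespec()  // "continuous" time: keeps ticking through sleep
    clock_gettime(CLOCK_UPTIME_RAW, &uptime)
    clock_gettime(CLOCK_MONOTONIC, &continuous)
    print("uptime: \(uptime.tv_sec)s, continuous: \(continuous.tv_sec)s")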

I didn't feel it was apt/fair to name these clocks in that parlance of absolute versus continuous clocks and force that nomenclature upon all operating systems that Swift runs on. Admittedly, it is perhaps my personal bias of liking Linux as an OS that favored the Monotonic/Uptime prefixes for the naming, and my bias as an engineer working in Apple ecosystems that influenced their behavioral defaults.

The problems you outline w.r.t. ticks while sleeping are specifically why having both a clock that progresses while asleep and a clock that does not is useful out of the gate. However, I would dispute the conclusion that Go came to: transmitting boot time across a network could lead to security vulnerabilities, and in the end the boot times of two machines are never identical, so their boot-time-based clocks are going to be skewed. WallClock instants are really the only suitable basis for a distributed-system clock, since they are synchronized via NTP. (But perhaps their assertions were more about networks on the local machine?)

Absolutely; the proposal includes both an UptimeClock and a MonotonicClock (just with the Darwin and BSD definitions of their behavior).

5 Likes

I would suggest avoiding a really-overloaded-and-messy term like Time. "Time" has a whole bunch of different meanings depending on who you're talking to and perhaps even which kind of clock you're talking about.

A more precise name (that most people will almost never need to use anyway) like PointInTime is good. I agree with @Karl that "A clock has an instant that is a point in time" reads very nicely.

5 Likes

I think that's a significantly worse name. To me it reads like it's being named for its source (the operating system) rather than its semantics. It also breaks the pattern with the other proposed clock implementations. The distinction between SystemClock, MonotonicClock, and UptimeClock (or .system, .monotonic, and .uptime) as options wouldn't be as clear. After all, don't all these clocks come from the system?

It sounds to me like those other programming languages regret not going with "wall clock" to start with but can't change due to backward compatibility... :laughing:

10 Likes

Is there an estimate of how much precision could be lost?

1 Like

I was just testing some of the implementations of this today: for example, a Date encoded with today's timestamp was off by about 7ns (with my current implementation), and distantFuture was off by about 1500ns. The caveat is that this presumes we don't make any alteration to the current encoding strategies; I would expect we might add a non-lossy strategy for distributed actor transport encoding.
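
For a rough sense of where numbers like this come from (this is not the actual encoder, just a bound): the spacing of representable Double values near a given timestamp limits the error of any Double round-trip:

    import Foundation

    let today = Date().timeIntervalSinceReferenceDate            // ~7e8 seconds, as a Double
    print(today.ulp * 1e9)                                       // spacing near today: on the order of 100ns
    let far = Date.distantFuture.timeIntervalSinceReferenceDate  // ~6.3e10 seconds
    print(far.ulp * 1e9)                                         // spacing near distantFuture: several microseconds

The ~7ns and ~1500ns figures quoted above sit comfortably within half of those spacings, which is the worst case for a single rounding.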

4 Likes

Duration's maximum of 2⁶³ nanoseconds is barely 3 centuries, which is practically nothing compared to what Date will be able to represent.
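
A quick back-of-the-envelope check of that claim:

    let maxNanoseconds = Double(Int64.max)       // 2⁶³ − 1 ≈ 9.22e18 ns
    let seconds = maxNanoseconds / 1e9           // ≈ 9.22e9 s
    let years = seconds / (365.25 * 24 * 3600)   // ≈ 292 years, i.e. barely 3 centuries
    print(years)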

Given that Duration is proposed as the only duration type in the standard library, and that Date is proposed to be lowered from Foundation to the standard library, I assume that in addition to superseding TimeInterval, Duration will take on some of the functionality of Foundation's DateInterval, such as constructing a duration by comparing two Date instances, like DateInterval.init(start:end:). If this is true, then it seems to me that the ranges of Duration and Date should be aligned more closely, and I think giving them the same underlying storage structure would help with that.

5 Likes

I'd actually prefer if we kept ClockProtocol. Clock is very small compared to something like RandomNumberGenerator. This difference affects generic-parameter naming significantly: e.g. I can write <Generator: RandomNumberGenerator>, but I'd have to resort to less descriptive names with the proposed change: <C: Clock>. Arguably, other standard-library protocols are small as well (see Collection and Sequence). However, I don't see why we should follow a precedent established when single-character generic-parameter names were the norm.

1 Like

As an aside, I think InstantProtocol should have a static now property requirement. MonotonicClock, UptimeClock, and Date already have this property, so standardizing it would simplify the API and perhaps improve autocompletion.

As others have pointed out, that does not work for all potential custom clocks.

6 Likes

This is not a matter of taste. The naming convention has been formalized through a Swift Evolution proposal. Protocol is appended to the name of a protocol only when it is necessary to disambiguate (e.g.: IteratorProtocol). This is a deliberate change from the original naming convention when every protocol had a suffix (originally, *Type). We went through and stripped those out after a thorough review. You are free to name your generic parameters accordingly (e.g.: <ClockType: Clock>).

13 Likes

I'm aware of these naming guidelines; however, I think Clock is just too short. Of course, this is a nitpick, so I'm fine with either option.

Thanks, I hadn't considered this naming scheme.

The reason why I initially went with ClockProtocol was a worry about squatting on existing short names in other APIs: un-prefixed short names have the problem of potentially conflicting with application code. Naming it ClockProtocol sidesteps that to some extent - however, I am perfectly happy with defending WallClock as the canonical wall clock.

2 Likes

Slight update from my work implementing this for the proposal: there is a conflict with existing APIs when the measurement function is a global free-floating function - namely, it conflicts with XCTest's measure function, and it seems counterproductive for a tool meant to measure/benchmark to directly create ambiguity. The alteration is that the measure function will now exist as an extension on Clock.

extension Clock {
  public func measure(_ work: () async throws -> Void) reasync rethrows -> Duration
}
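
A hypothetical usage sketch (the workload is invented, and it assumes MonotonicClock is constructible as in the pitch; exact spellings may differ):

    func doSomeWork() {
      _ = (0..<1_000_000).reduce(0, +)  // placeholder workload
    }

    let elapsed = MonotonicClock().measure {
      doSomeWork()
    }
    print("took \(elapsed)")
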
9 Likes

If these types are defined by the Standard Library, shouldn't user-defined Clock symbols shadow the Standard Library and continue to build just fine?

1 Like

The shadows do work, but they pose an ergonomic issue; for example, the global measure function becomes ambiguous with XCTest's measure instance method, so folks would need to write either Swift.measure {...} or self.measure {...}, which is a sub-par experience - especially since performance measurement is exactly the kind of task for which interfacing with clocks would be ideal.

1 Like