[Pitch] Clock, Instant, Date, and Duration

Right; what I really meant there is that WallClock is defined in terms of Date and therefore timezone-agnostic.

1 Like

I think that was arguing about the length of a second, not whether one second is one second.

That would only imply that NSDate ignores leap seconds. A measurement system which ignores leap seconds can indeed define an hour to always be 3600 seconds.

The debate is whether all clocks will be consistent in this regard and, if not, what that would mean for an .hour() or .minute() convenience function. How does this particular clock handle leap seconds? Does it ignore them? Does it take the reference date into account when calculating the length of the next minute or hour? Something else? Do we care if .minute() always yields an interval of 60 seconds, or should it sometimes yield 61?

Spitballing here, and I know this is language design, not ideal OS design, so some of this might be going overboard...

WallClock: I think things would make more sense if it were called just Clock. It would be the thing for displaying time, the one that knows what hours and days are. It can change by an hour automatically twice a year; you can change it by setting your time zone, or set it manually five minutes ahead, like your clock radio, to fool yourself into being late less often.

Whatever the system time keeper is (MonotonicClock, I suppose, in this pitch), I don't think there should be any more relation between it and a Clock than between it and a calendar. So I think it should be called something other than Clock.

A point on the Clock should be a Time, and measuring the duration between any two of them should be as specialized an API as finding next Wednesday on a calendar. It doesn't make sense to have an API that can run an animation for half a second on a Clock. Maybe it shouldn't even have a now? Similarly, a point on a Calendar should be a Date, without question.

A system time keeper should be used for that half second of animation and any application duration-timer kind of thing. Maybe it has related Instant and Duration types. There absolutely needs to be some way to distinguish elapsed time from execution time that ignores process or system sleep, but maybe this shouldn't be a different type of time keeper so much as a different kind of duration?

If such a system time keeper is too complicated for high-level code or beginners, maybe there should also be a Stopwatch type, as sketched below.
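Here is a very rough sketch of that separation in Swift. Every name and shape below is hypothetical, my own invention rather than anything from the pitch:

// Hypothetical shapes only; none of these names come from the pitch.

/// The thing for displaying time: knows about time zones and user adjustment.
struct DisplayClock {
    var timeZoneOffsetSeconds: Int   // changes by an hour twice a year
    var userAdjustmentSeconds: Int   // "five minutes fast", clock-radio style
}

/// The system time keeper: hands out opaque instants and measures between them.
struct SystemTimeKeeper {
    struct Instant { let rawTicks: UInt64 }
    struct Duration { let nanoseconds: Int64 }

    func now() -> Instant { Instant(rawTicks: 0) }   // placeholder value
    func elapsed(from start: Instant, to end: Instant) -> Duration {
        Duration(nanoseconds: Int64(end.rawTicks) - Int64(start.rawTicks))
    }
}

/// A beginner-friendly wrapper over the time keeper.
struct Stopwatch {
    private let keeper = SystemTimeKeeper()
    private var start: SystemTimeKeeper.Instant? = nil

    mutating func begin() { start = keeper.now() }
    func lap() -> SystemTimeKeeper.Duration? {
        start.map { keeper.elapsed(from: $0, to: keeper.now()) }
    }
}

The point of the shape is just that civil-time concerns (time zones, hours, days) never appear in the measurement path.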

This would make more sense to me.

Just for the sake of the discussion, can anyone provide a use case that involves leap seconds but does not involve using a Calendar API?

1 Like

I have one in which leap seconds are involved in so far as they should never get in the way:

1 Like

To schedule an animation, you don't use the WallTime Clock, but instead a MonotonicClock that stops when the computer goes to sleep.

So if the user suspends the computer in the middle of the animation, it will resume as expected when the computer wakes up.

Leap seconds are not involved in this use case.

3 Likes

I'm developing a sports application, a chronometer. The app measures the elapsed time between two presses of a button. Requirements:

  • Sub-second precision is required, because I'm measuring the performance of high-level athletes.
  • Leap seconds must not alter my precise measurement. We may register a new world record!
  • Device sleep must not alter my precise measurement. Sleep may happen due to the built-in energy-saving policy, or because I inadvertently hit the physical "sleep" button when I put the device in my pocket. Neither of those events should invalidate the measurement - we're live on TV during the Olympics!
3 Likes

Afaics, MonotonicClock should be replaced with "monotonic clock" :nerd_face: — or UptimeClock (those two should be structs).

That would be a job for MonotonicClock... WallClock.measure is really not that useful.

No, because I added the requirement of insensitivity to device sleep.

Directly from the proposal:

When instants are for local processing only and need to be high resolution without the encumbrance of suspension while the machine is asleep MonotonicClock is the tool for the job.

Just curious; what have you been using, and does it not have the same problem?

It seems like your interpretation is in direct conflict with this one.

Nothing. It's even possible my requirements are out of scope (read: not implementable). I'm trying to help the author of the proposal understand how confused people are. IMHO, this is because the proposal is written in a way that makes any practical conclusion require difficult derivation from poorly written axioms.

I certainly do not want to blame the OP for the limits of the initial writing of the pitch. I just want to stress how much more care should be put into the drafting of an actual proposal that people can relate to and evaluate.

1 Like

You do have a product, I presume? So you have to be using something. Knowing that would help us get a better idea of what is acceptable, given that an ideal SI-second clock is impossible without hardware support, which Swift doesn't assume.

I'm not hiding information, I promise :slightly_smiling_face: My example use case was just the product of my imagination. But it is not rhetorical: if it happens to describe something realistic that falls within the scope of the pitch, then I have fulfilled my goal.

1 Like

Whatever the clock names, this proposal includes both a clock that pauses during sleep and one that doesn't.

Both are based on hardware clocks that tick at a fixed, predefined interval. They are not related in any way to the "WallTime Clock", and so are not affected in any way by user changes to the system time or by leap seconds.

@Tino, I wrote:

Re-reading the proposal:

This description is enhanced later:

If UptimeClock "does not increment while the machine is asleep" then I conclude that MonotonicClock does (maybe "without the encumbrance of suspension while the machine is asleep" could be made more clear).

I have also read the "a monotonic clock" expression as a way to refer to both MonotonicClock and UptimeClock (that's how I understand this expression now), so please pardon my slow crawling to the conclusion; @Tony you were quite right.

So my Olympics chronometer needs to use UptimeClock:

let start = UptimeClock.Instant.now
// later
let end = UptimeClock.Instant.now
let duration = UptimeClock.duration(from: start, to: end)
let nanoseconds = duration.nanoseconds // OK, that I can display

Since @Jean-Daniel is still looking for a "use case that involves leap seconds but does not involve using a Calendar API", I may further refine my scenario, and just make it involve two devices this time:

  • One device records the time at the exact moment it triggers the signal that unleashes the athletes (RUN!)
  • Another device is located near the finish line and records the time at the exact moment a camera notices that the line is crossed.

We have left the scope of UptimeClock and MonotonicClock, so we must now use the "transmittable" time of WallClock. Again, I want my final duration measurement to be independent of leap seconds. Is it possible in the current state of the pitch? Or is it still in flux?

2 Likes

Depending on your precision needs, I'm not sure the WallTime Clock is appropriate in the first place. Even with a good NTP setup, you hardly get millisecond precision. If you want very high accuracy, you need a dedicated distributed clock, as discussed here. Moreover, you can't trust the WallTime clock, as it may be adjusted by the user; but if your two devices are on the same wired network and use the same local NTP servers, it may do the trick (provided users don't have access to the system time settings).

That said, the WallTime Clock as proposed is implementation-dependent. So there is no guarantee about how it handles leap seconds, and, if I recall correctly, it has no means of reporting a leap second to the user, which makes it inappropriate for sub-second accuracy.

This raises the question: is the WallTime Clock the right tool to synchronize distributed events with very high accuracy? If the answer is yes, then we should definitely enhance the API to report a leap second, and not return the same timestamp for both 23:59:59 and 23:59:60.

Finally, I too will refine my question: still using only the Clock API and not the Calendar API, is there a real use case where defining .hours as exactly 3600 seconds would cause an issue?

4 Likes

Thank you @Jean-Daniel, this is very clear.

I really wish this kind of fact were clearly stated in the proposal, even if it sounds boring to knowledgeable people, even if it does not add any information on top of what Foundation.Date already does, even if some would say "this is trivially derivable". No, it is not trivially derivable. A good proposal, on such a complex topic, should debunk as many foreseeable misconceptions as possible, so that the reader's mental model is as clear as possible. This would also avoid "if I recall correctly" phrases that cast the shadow of a doubt for no good reason.

12 Likes

To refine my answer as succinctly as possible:

The objection to defining .hours in any way in a non-calendrical API for general use is that an "hour" refers to a calendrical unit in many use cases. Users who are unaware of the difference will naturally gravitate towards using the simplest API that is the most widely available; when the standard library allows one to write Clock.now + .hours(3), they will be guided to use this non-calendrical API even when they actually need calendar-aware operations.
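To make the hazard concrete with existing Foundation types (Clock.now + .hours(3) above is the pitched spelling, not shipping API), here is the analogous trap one unit up, where it is easiest to reproduce: fixed seconds versus a calendrical unit across a DST transition.

import Foundation

var calendar = Calendar(identifier: .gregorian)
calendar.timeZone = TimeZone(identifier: "America/New_York")!

// Noon local time on 2021-03-13, the day before the US springs forward.
let start = DateComponents(calendar: calendar,
                           year: 2021, month: 3, day: 13, hour: 12).date!

// Fixed-duration arithmetic: exactly 24 * 3600 elapsed seconds later,
// which lands at 13:00 local time because an hour was skipped overnight.
let fixedSeconds = start.addingTimeInterval(24 * 3600)

// Calendar-aware arithmetic: "the same time tomorrow" as civil time
// understands it, i.e. 12:00 local on March 14, only 23 * 3600 seconds away.
let nextCivilDay = calendar.date(byAdding: .day, value: 1, to: start)!

Exposing .hours or .days on the non-calendrical side invites people to write the first form when they actually mean the second.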

Consider by analogy Swift's String APIs: The standard library doesn't offer locale-aware operations, which are provided by Foundation instead, but the standard library's APIs are all Unicode-aware. We don't offer an O(1) count that can be used to count characters for ASCII strings without CRLF but really only counts UTF8 code units and a separate O(n) unicode.count for Unicode grapheme clusters. You could similarly ask, using only UTF8 code unit APIs and not Unicode grapheme cluster APIs (e.g., when you have a known ASCII string with normalized line endings), is there a real use case where defining a "character" as exactly one UTF8 code unit would cause an issue?

But that is missing the point, which is that for general use, defining a "character" as a UTF8 code unit would cause users who really need Unicode-aware APIs to reach for the wrong set of APIs by default. This is not an objection to offering UTF8 code unit APIs in addition to Unicode-aware APIs (as we do), nor denying that for ASCII strings a fundamentally useful property is that each code unit is its own character (eliding the CRLF issue), just that it is not appropriate to generalize that relationship by defining a "character" as part of the UTF8 code unit APIs.
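For the record, here is the line Swift actually draws today; the code-unit view stays available, but it does not get to define what a "character" is:

let flag = "🇨🇦"           // one visible character, two Unicode scalars
flag.count                 // 1, Characters (grapheme clusters)
flag.unicodeScalars.count  // 2
flag.utf8.count            // 8, UTF-8 code units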

Similarly, it is well and good to define an "hour" as exactly 3600 seconds for use with SI units. There is no objection to offering clock-based APIs as distinct from calendrical APIs (as we here propose to do), nor denying that it is useful for use with SI units and even for other use cases* that 3600 seconds is equal to an hour. The objection is that it is not appropriate to generalize that relationship by defining an "hour" as part of the clock-based APIs.

[*] Perhaps even a great many use cases, just as with ASCII strings.

14 Likes

I unfortunately have not kept up with this topic, and there hasn't been a clear attempt to summarize the discussion so far, so I may repeat ideas already voiced.

I would argue that a Clock is a measurer of time, and that there are broadly different mechanisms by which it does so. We should represent that as a property or set of properties on the Clock itself, e.g.:

enum ClockIncrementation {
    /// Can jump forward or backward, e.g. a wall clock adjusted by the user or NTP.
    case disjoint
    /// Only moves forward, but stops incrementing while the machine is asleep.
    case pausing
    /// Only moves forward and keeps incrementing through sleep.
    case continuous
}

Monotonically increasing (pausing or continuous) time is useful for knowing that the delta between two Instants is a minimum Duration, but I do not know whether this is useful in a generic sense. Or, to put it differently: is there a programmatic reason to define monotonic clocks, or do they exist only to give a common name to differing operating-system behaviors?
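One programmatic reason I can see, sketched with purely hypothetical protocol names: generic deadline or timeout code can require, at the type level, a clock whose now never goes backwards, so a loop like the one below cannot be confused by the user adjusting the system time.

// Hypothetical protocols; these names are illustrative, not from the pitch.
protocol HypotheticalClock {
    associatedtype Instant: Comparable
    var now: Instant { get }
}

/// Refinement that promises `now` never decreases.
protocol HypotheticalMonotonicClock: HypotheticalClock {}

/// Polls `body` until it succeeds or the deadline passes.
/// Accepting only monotonic clocks makes the deadline comparison trustworthy.
func poll<C: HypotheticalMonotonicClock>(
    until deadline: C.Instant,
    on clock: C,
    _ body: () -> Bool
) {
    while clock.now < deadline {
        if body() { return }
    }
}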

Second, my question is whether all the leap-second behaviors interact consistently with the calendar system - that is, whether, around a leap second, an adjustment of one hour into the future is always 3600 seconds, or sometimes 3599 and sometimes 3601.

If the leap-second behavior of a clock changes how it is interpreted by a calendaring system, we need to be really careful to capture this for serialization, as a lot of systems serialize to a seconds-since-epoch format in transit.
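For what it's worth, the most common wire format today is exactly that kind of count: Foundation's Date round-tripped through its Unix-epoch offset, which (like Unix time generally) does not represent leap seconds, so any clock whose interpretation depends on them loses that information in transit.

import Foundation

// A typical seconds-since-epoch round trip. Unix time counts 86,400 seconds
// per day and has no representation for 23:59:60, so a leap second cannot
// survive this encoding.
let timestamp = Date().timeIntervalSince1970          // seconds since 1970, as a Double
let decoded = Date(timeIntervalSince1970: timestamp)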