[Pitch] Clock, Instant, Date, and Duration

@Dante-Broggi's proposal of a ProcessClock also highlights an issue with Duration:

"15 seconds" on a ProcessClock is an indeterminate length of time, since we have no way of knowing how much physical time will elapse; the process could be suspended indefinitely, for all we know.

This also suggests that calling a Clock's duration "seconds" is problematic. For some clocks it might really be "seconds", but perhaps "tick" would be a better term?

5 Likes

I would expect tick to be the smallest unit of resolution the clock can measure. For example, a tick on an M1 Mac would differ from one on an Intel Mac. This is in fact a very useful concept for certain systems-programming applications, so maybe it makes sense to define a second in terms of ticks?
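To make the idea concrete, here is a minimal sketch, with hypothetical names that are not part of the pitch, of a clock that counts in native ticks and derives seconds from an advertised frequency:

// Hypothetical sketch: a clock that counts in hardware ticks and
// exposes its frequency, so seconds are derived rather than assumed.
protocol TickClock {
    /// Ticks elapsed since some fixed origin.
    var nowTicks: UInt64 { get }
    /// Ticks per second; hardware-dependent, so an M1's value can
    /// differ from an Intel Mac's.
    var ticksPerSecond: UInt64 { get }
}

extension TickClock {
    /// Seconds derived from the native tick count.
    var nowSeconds: Double {
        Double(nowTicks) / Double(ticksPerSecond)
    }
}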

I still have some hope that we'll have programming languages much more advanced than today's by the time the first human lands on another planet (but I guess we'll still be poking around in C…).

However, unless people completely rely on computers for even simple tasks like deciding when to have lunch, I guess actual wall clocks on Mars would differ from what we have on Earth today (which is another point that reinforces my opinion that WallClock is not the best name).
But if you change the length of an hour, you also introduce a conflict with common units of speed (cars would get faster… I guess Elon Musk would like that :smiley:), so this would be a good motivation to finally switch to a fully decimal system and have a hundred [whatever the unit would be called] per Mars day... though it could also be an argument for having nothing but seconds (but even those have been scaled :face_with_spiral_eyes:)

Back on Earth, I really think we should limit the first iteration to "stopwatch time", because that is the right choice for most tasks (measurement, timeouts); the concept pitched as WallClock (or a variation of it) certainly has merit, but without a calendar, it is seriously hampered.

On top of that, calendars could resolve some of the challenges, like those non-SI seconds: when you convert to something like TAI, the actual clock used for scheduling does not need to know about all the quirks.

Imho it is not only useful: not every device capable of executing code has a built-in clock, but many still have ticks.
As this pitch makes a real clock mandatory, you would need a dialect of Swift to target such systems (and AFAIR, dialects are still something Swift really does not want).
I can also think of other counters that could serve as a clock, and explicitly referring to seconds would make Duration awkward in such contexts.

Can you provide an example of a platform which has a “tick” that isn’t defined in terms of seconds? I don’t doubt one exists but it would be nice to have a concrete example.

As far as dialects are concerned, the standard library already makes certain APIs unavailable based on platform. For example, ManagedBuffer.capacity is unavailable on OpenBSD because it doesn't have malloc_size. Perhaps this discussion is an indication that the split between "wall" clocks and "monotonic" clocks is not well modeled by two concrete instances of the same protocol.
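As an aside, the gating itself can be as simple as a compilation condition; a toy illustration (not the actual stdlib source), assuming a malloc_size-style primitive is available:

#if canImport(Darwin)
import Darwin

struct ToyBuffer {
    let pointer: UnsafeMutableRawPointer
    // Compiled only where malloc_size exists (e.g. Darwin); on a
    // platform without it, the property is simply absent.
    var capacity: Int { malloc_size(pointer) }
}
#endif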

To throw another example out there: a multiplayer real-time strategy game might implement a “game clock” in which ticks have no fixed duration but are monotonically increasing to order in-world events.
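A minimal sketch of such a game clock (all names hypothetical):

// Ticks have no fixed physical duration; they only increase
// monotonically, which is enough to totally order in-world events.
struct GameClock {
    private var tick: UInt64 = 0
    init() {}

    /// The current tick; later events always observe a larger value.
    var now: UInt64 { tick }

    /// Called once per simulation update; the real time between calls
    /// may vary with load, but `now` never goes backwards.
    mutating func step() { tick += 1 }
}

var clock = GameClock()
clock.step()
let spawnedAt = clock.now   // orders this event relative to others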

3 Likes

Sorry, I have to pass on that, because as far as I can see, Arduino has built-in conversion; I guess I should have written RTC instead of just clock.
Maybe chips that are not deployed with a single, fixed frequency?

Commenting on this quote from the pitch thread over here as I don’t have a complete review of the pitch to contribute:

@Karl :

It also looks totally weird at the call-site:

DispatchQueue.main.asyncAfter(deadline: .now.advanced(by: .seconds(3), clock: .wall))

I also agree that it looks weird, although I see this more from the choice of clock for the parameter name. The pitch defines a clock as “the mechanism in which to measure time”, so it almost feels to me like writing a temperature function like so:

someTemperature.increased(by: 2, thermometer: .celsius)

Specifically calling it out as wall time would make it clearer to me, something like:

.now.advanced(by: .seconds(3), of: .wallTime)

This is a great idea! I'm working on a Swift package to convert Measurable types. If it's possible, I'd love to see your Duration replace my Time type and inherit the Measurable protocol to easily convert to different units of time.

As a user of any clock API, I would expect "now plus 3 hours" (.now.advanced(by: .hours(3))) to behave like a stopwatch set to count 3 hours, i.e. 3 * 60 * 60 seconds, no ifs, ands, or buts. Days and months start to stretch the limits of the stopwatch analogy, so it's not clear where the convenience functions should stop. I don't see there being much call for "now plus 5 centuries", but maybe I'm wrong.

If I started a stopwatch at 9:00:00 PM and it ended one second off of midnight due to a leap second, that would be totally expected. Similarly, advancing 3 hours from an arbitrary time point like now should give me the same duration in seconds in each case.

Conversely, if .now.advanced(by: .hours(3)) gave me a time 3 hours and 1 second from now because of a leap second, that would be unexpected. More illustratively, if it were called at 11:30 PM and returned a time either 2 or 4 hours away depending on a time change, that would be very unexpected. If I wanted behavior like that, I'd use a calendar API; hopefully it would be easy for beginners to discover that.
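For the record, Foundation's Calendar is the kind of calendar API meant here, and it already exists:

import Foundation

// Calendar-based advancement: the result is defined by the calendar's
// rules (time zone, DST, and so on), not by a fixed count of elapsed
// seconds. Returns nil if the calendar cannot represent the result.
let threeHoursLater = Calendar.current.date(byAdding: .hour, value: 3, to: Date())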

I would expect this of either a monotonic or a wall-time clock. An uptime clock could make sense with different expectations, perhaps with caveats about inaccuracy across periods of sleep. One that skews its seconds to run faster or slower would obviously have that skew applied.

Perhaps naming these functions elapsedMinutes and elapsedHours would help portray this intent (and similarly for sub-second units).

6 Likes

I'm surprised that nobody in this thread seems to mind the clock type having a sleep function. I would expect a clock to have only a now and calculation primitives, and to look elsewhere (to process- and thread-level APIs, like the noted extensions to DispatchQueue) for operations like sleep. A measure function ... I'm on the fence about that one, because the obvious implementation merely takes the difference between now before and after the work (see the sketch below).
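For illustration, that obvious implementation, sketched against a hypothetical protocol that offers nothing beyond now and a difference operation:

// Minimal hypothetical clock surface, just enough for the sketch.
protocol StopwatchClock {
    associatedtype Instant
    associatedtype Duration
    var now: Instant { get }
    func duration(from start: Instant, to end: Instant) -> Duration
}

// measure falls out of `now` alone: read, run the work, read, subtract.
func measure<C: StopwatchClock>(on clock: C, _ work: () -> Void) -> C.Duration {
    let start = clock.now
    work()
    return clock.duration(from: start, to: clock.now)
}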

Although if this functionality is to be common across all platforms, maybe it does make some kind of sense for it to live on the clock type, but it still seems backwards to me. I'd appreciate some discussion of this; sorry if it's come up already in this thread.

P.S.: is everyone also fine with advanced(by:)? Can't we get a + operator? And can 3.hours be a thing?
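Both spellings are easy to sketch against stand-in types (nothing below is the pitched API):

struct MyDuration { var seconds: Double }

struct MyInstant {
    var offset: Double
    // `+` as sugar for advanced(by:).
    static func + (lhs: MyInstant, rhs: MyDuration) -> MyInstant {
        MyInstant(offset: lhs.offset + rhs.seconds)
    }
}

extension Int {
    // `3.hours` as a computed property on Int.
    var hours: MyDuration { MyDuration(seconds: Double(self) * 3600) }
}

let later = MyInstant(offset: 0) + 3.hours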

3 Likes

Note that, as proposed, the wall clock will do exactly that—it will give you a time that is 3 hours and 1 second from now when a leap second is inserted.

I think this is a great example of how different people have absolutely irreconcilable expectations for what happens when you advance “now” with respect to a clock by “3 hours.” It’s not a convenience function when there is no easy consensus on what it’s even supposed to be convenient for. The solution, then, is not to expose such an API at all.

2 Likes

I don't think .hours(1) or .minutes(1) have anything to do with this problem. You'll have the same problem with .seconds(1) occasionally meaning 2 seconds with the wall clock. The irreconcilable expectation is caused by the clock's flow of time changing during a leap second. And the solution should be obvious: use a clock that does not do this when you don't want that behavior.

If it's important to make people realize those are two distinct types of durations, then clocks with different time flows should simply not share the same duration type.
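That separation is straightforward to express with an associated type; a hedged sketch (names hypothetical, not the pitched protocol):

// Each clock brings its own Duration, so values from clocks with
// different flows of time cannot be mixed by accident.
protocol FlowClock {
    associatedtype Instant
    associatedtype Duration
    var now: Instant { get }
    func advance(_ instant: Instant, by duration: Duration) -> Instant
}

struct WallSeconds { var value: Double }       // may repeat or stretch
struct MonotonicSeconds { var value: Double }  // strictly elapsed time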

8 Likes

My opinion is that the concept pitched as WallClock actually does more harm than good because of all those issues; and that isn't a new claim, so there has been plenty of time to come up with compelling use cases.
There is one I can think of (updating GUI elements according to the current system time), but that is not even a good fit for the Clock API...

2 Likes

With .seconds(1) moving over a leap second, the human-displayed time might not be what one expected, but it would still be one second later, even if it's the 60th second of the same minute instead of the zeroth second of the next minute.

However, with .hours(3), it may or may not be 10,800 seconds later, depending on clock, implementation, and the presence of a leap second occurring during that interval.

As far as I understand, that is just not correct, because human-displayed time is authoritative in this case.
You can't rely on anything else with system time, no matter whether you advance one second, two seconds, or 3600 seconds; and calling 3600 seconds an hour does not change that.
That is because the clock does not actually measure passing time, but works with whatever is set by software. You could configure a timeout of 100 ns and still have time to get yourself a coffee and return to your desk before it is triggered.

Something like .now.advanced(by: .seconds(3)) has to be turned into an actual point in time, and when that point is reached is defined by the clock that is used. If that clock is adjusted while waiting, a conflict with other clocks (a stopwatch, say) is unavoidable.

I don't understand the significance. If I schedule something for .now() + .seconds(1) at 23:59:59, there are a few possibilities:

  • there is a leap second in this minute, so the timer will fire at 23:59:60
  • there is no leap second in this minute, so the timer will fire at 00:00:00
  • there is a leap second, but the clock doesn't do leap seconds, so the timer will fire at 00:00:00

In all cases, the timer fires at whatever time the clock considers to be 1 second later.

If I schedule something for .now() + .hours(3) at 21:00:00, there are a few possibilities:

  • there is no leap second in that interval, so the timer will fire 10800 seconds later
  • there is a leap second in that interval, but the clock assumes hours are always 3600 seconds long, so the timer will fire 10800 seconds later
  • there is a leap second in that interval, and the clock accounts for it when determining the length of an hour, so the timer will fire 10801 seconds later

In other words, a second is always a second, even if it's not the same second you thought it would be, but an hour has two interpretations: 3600 seconds or the number of seconds to move the clock from the current hour:minute:second to (hour+1):minute:second.
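In numbers, assuming a single positive leap second inside the interval:

let fixedHours = 3 * 3600          // 10800 s: an hour is always 3600 s
let clockFaceHours = 3 * 3600 + 1  // 10801 s: 21:00:00 to 00:00:00
                                   // across an inserted 23:59:60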

That is true no matter how one defines any unit of time. If the clock can be changed, no interval is safe. The only question that seems worth worrying over is how to deal with expected variations (leap seconds) or changes (DST) to the current time.

So far, there has been a big focus on leap seconds, but I don't think those are actually that relevant:
they are rare, and, well, it's just a second.
Time adjustments, on the other hand, can not only be significantly larger; they also happen much more often.

This example depends on hitting 23:59:59 with nanosecond precision (which might be even less likely than encountering a leap second).
As soon as you are a single ns later, the clock will reach 00:00:00 (but won't trigger the timer), then it will be adjusted back to 23:59:59 (just before the creation time of the instant), and the timer will trigger after another second.

That is still true, as it is with any other interval and any clock. Nothing special happens as long as you stay in one time domain, but there is always real wall-clock time, which is what causes the conflict.

Edit: Does 23:59:60 even exist?

1 Like

According to Wikipedia, yes:

When it is mandated, a positive leap second is inserted between second 23:59:59 of a chosen UTC calendar date and second 00:00:00 of the following date.
...
The extra second is displayed on UTC clocks as 23:59:60.

That won’t happen if the clock honors leap seconds. If the clock is bouncing back for some other reason, that falls into unexpected adjustments, and no policy around what unit methods to provide can account for those.

See [Pitch] Clock, Instant, Date, and Duration - #222 by lorentey
The clock repeats a (leap) second, and for NSDate, all days have the same number of seconds. You can only avoid all those nasty consequences by using a monotonic clock.

1 Like