[Pitch] Clock, Instant, Date, and Duration

I know how hard it is to document ambiguous concepts.

You start by carefully writing the rules, maybe with some illustrative examples, and think you have done your job.

Then you realize that users are unable to derive correct conclusions from the rules. They jump to wrong conclusions, forget about edge cases, misunderstand key concepts, and generally, happily, shoot themselves in the foot.

Time makes it even more difficult, since the behavior of time isn't constant. Most code usually works, until it doesn't.

When you think you've avoided all DST-related problems by pushing them into calendrical computations, boom, leap seconds bite you in the face.

One way I know to avoid this trap is to not only document the rules, but also guide users through "real-life" use cases.

For example, can I schedule some code to execute at the end of an animation that lasts 1 second? I have never heard of a "1 second" animation that should last anywhere between 0 and 2 seconds of absolute time because a leap second happens to be inserted or removed at the most unexpected moment. So can I execute some code in one second of "animation clock" and get perfect synchronization with my user interface? Which one of the clocks described in this proposal should I use? Does it even exist?

There are many more use cases that the fertile imagination of the community can invent. How does this proposal address them?

Answering such questions would help the proposal authors feel confident that their proposed API addresses users' needs. And it would also help the community share a common understanding, and feel confident that their needs are addressed. I know this is a lot of work. This work should be done by the proposal authors.


How does the clock know that the first second (yay English!) has passed? Why can’t that mechanism be used to correctly schedule the timer?

If the clock displays the same time for two seconds, it has to have an independent concept of the passing of time.


This is true for every unit. .seconds(1) may occur hours later if you are using a clock that suspends during computer sleep.

Human display time is a Calendar concept. As soon as you are using human display time, you are manipulating Calendar Time, not Timestamps, and that is out of scope for the Clock API, IMHO.

I would class that as an outside event that is not under the control of the clock implementation. However, even in that case, the clock knows a second has elapsed. It just couldn't do anything while it was suspended.

Very few timer implementations guarantee exact delivery of events, but we don't consider that as being inconsistent with tracking time. That's just the reality of non-real time systems.


You're talking about delivery, not the definition of a second. The definition of an hour depends on the specification of the clock. A second is a second.

And an hour is an hour, that is 3600 seconds.

Using the number of seconds between 2 moments when an astronomical clock displays 00 seconds as the definition of an hour seems overly complicated and, as I said, out of scope for this API, since astronomical time is not a concept that should be part of the Clock API but handled by a Calendar API.

That's the crux of the discussion, though. An hour might be 3601 seconds.

There is no clean separation, and that makes things tough:
When you set your system clock from 13:00:00 to 11:00:00, WallClock.now would move back in time two hours as well.
The same can also happen because of leap seconds, and even if NSDate / NSCalendar does not care for leap seconds, I think this is definitely a calendrical concept.
My main point is that there are no seconds which are counted by the WallClock, but "hidden" from the user; there is a direct link.

I have my quarrels with this formulation: Does any clock know anything? A Clock as pitched is a source of time, and it is free to diverge from what is actually happening in the physical world as its creator decides:
It can be slower or faster than TAI (like for each nanosecond in the "real world", a custom Clock could advance a whole hour), it can ignore time passing in the real world (UptimeClock), or do other strange things like WallClock.

Note that there is absolutely no need for a Clock to "know" about things like hours: It just has to count nanoseconds (or another unit; but the pitch is quite specific in this respect), but it does not have to do that in the same, simple way as a stopwatch does.
As I learned in this thread, adding multiples of 24 * 60 * 60 seconds to any given NSDate will preserve the time of day from the starting point, no matter how many leap seconds are encountered.
So when you query WallClock, no hours with 3601 WallClock-seconds exist — that deviation only happens in another reference system.
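To make that point concrete, here is a small sketch in Python, whose naive datetime arithmetic ignores leap seconds just like NSDate does: adding whole multiples of 24 * 60 * 60 seconds always preserves the time of day, even when the span crosses the leap second inserted at the end of 2016.

```python
from datetime import datetime, timedelta

# A leap second was inserted at 2016-12-31T23:59:60 UTC. Leap-second-
# ignoring arithmetic (NSDate-style) never sees it: adding whole
# multiples of 24 * 60 * 60 seconds preserves the time of day.
start = datetime(2016, 12, 30, 12, 0, 0)
later = start + timedelta(seconds=5 * 24 * 60 * 60)  # spans the leap second

print((later.year, later.month, later.day))          # (2017, 1, 4)
print((later.hour, later.minute, later.second))      # (12, 0, 0)
```

The time of day comes back unchanged, which is exactly the "no hours with 3601 WallClock-seconds" behavior described above.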

Not even that is without controversy: Look for "Mars" in this thread.

However, not only can you define an hour as 3600 seconds; it is very common to do so. Did anyone ever encounter a traffic sign with a speed limit that cares about leap seconds? ;-)


There’s a distinction that’s getting lost here (and which I ignored when I made a comparison between DST and leap seconds earlier): the proposal is, very indirectly, specifying a WallClock that’s in UTC. Big leaps of hours at a time will only happen when the clock is being adjusted, because it’s somehow showing the wrong time. Things like DST or time zone changes will not create discontinuities in WallClock (except in rare cases like the user manually adjusting the clock instead of changing the time zone setting). NTP time adjustment should be handled by gradual skewing rather than jumps unless the clock is very far out.

This means that questions like “what happens when you add three hours across a DST change” are no different for WallClock than they are for MonotonicClock; WallClock is monotonic modulo adjustments, and the timeout will happen after 10800 wall clock seconds, unless there is special logic to reinterpret the duration.

By contrast, leap seconds are an issue because the underlying system clock (on Apple systems) has a defective definition which cannot correctly represent positive leap seconds. Unix time (on Apple platforms) represents the number of seconds since the 1970 epoch, minus the number of positive leap seconds that have occurred since then, plus the number of negative leap seconds. In order to maintain this definition, there is a discontinuity in the Unix clock (which is non-monotonic for positive leap seconds).
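A small worked example of that definition, using Python's leap-second-ignoring UTC conversion to stand in for the Unix clock:

```python
import calendar

# Unix time counts seconds since the 1970 epoch *minus* positive leap
# seconds. The leap second 2016-12-31T23:59:60 UTC therefore has no
# representation: the two instants bracketing it are one Unix second
# apart, even though two real seconds elapsed between them.
before = calendar.timegm((2016, 12, 31, 23, 59, 59, 0, 0, 0))
after = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))

print(after - before)  # 1, not 2
print(after)           # 1483228800
```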

This means that a one-hour interval that spans a leap second (or the point where the leap second adjustment is applied) will effectively be 3601 elapsed clock seconds, unless there is special logic to reinterpret the duration.

On a platform that used 24-hour smearing, you would instead get an interval of 3600 elapsed clock seconds that approximate 3600.041666… real-time seconds.
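The smear arithmetic, spelled out (the 86401/86400 factor is the standard 24-hour linear smear; treat the numbers as an illustration, not as any particular platform's guarantee):

```python
# 24-hour linear smear: the inserted leap second is spread evenly over
# the 86400 clock seconds of the day, so each smeared clock second
# lasts 86401/86400 real seconds.
real_seconds_per_clock_second = 86401 / 86400

# A one-hour (3600 clock-second) interval inside the smear window:
real_elapsed = 3600 * real_seconds_per_clock_second

print(real_elapsed)  # 3600.0416666...
```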

(This could be avoided by having the source-of-truth clock count in TAI and have UTC clock APIs that perform leap second adjustments, but imposing that design on all supported systems retroactively is out of scope for swift-evolution.)

tl;dr: leap seconds are a different kind of problem than other WallClock discontinuities.


Well, ackshually (I think you and I agree and are ultimately saying the same thing, but to clarify for those reading later): since WallClock's behavior is based on Unix time APIs, its handling of leap seconds makes it explicitly not a UTC clock, though the deviation from UTC occurs specifically during inserted leap seconds.

Right; what I really meant there is that WallClock is defined in terms of Date and therefore timezone-agnostic.


I think that was arguing about the length of a second, not whether one second is one second.

That would only imply that NSDate ignores leap seconds. A measurement system which ignores leap seconds can indeed define an hour to always be 3600 seconds.

The debate is whether all clocks will be consistent in this regard and if not, what would that mean for an .hour() or .minute() convenience function? How does this particular clock handle leap seconds? Does it ignore them? Does it take the reference date into account when calculating the length of the next minute or hour? Something else? Do we care if .minute() always yields an interval of 60 seconds, or should it sometimes yield 61?

Spitballing, and I know this is language design not ideal OS design so some of this might be going overboard...

WallClock (I think things would make more sense if that were called just Clock) would be a thing for displaying time; it knows what hours and days are. It can change by an hour automatically twice a year; you can change it by setting your time zone, or set it manually 5 minutes ahead, like your clock radio, to fool yourself into being late less often.

Whatever the system time keeper is (MonotonicClock, I suppose, in this pitch), I don't think there should be any more relation between it and a Clock than between it and a calendar. So I think it should be called something other than Clock.

A point on the Clock should be a Time, measuring the duration between any two of them should be as unique an API as finding next Wednesday on a calendar. It doesn't make sense to have an API that can run an animation for 1/2 second on a Clock. Maybe it shouldn't even have a now? Similarly a point on a Calendar should be a Date, without question.

A system time keeper should be used for that 1/2 second of animation, or any application duration-timer kind of thing. Maybe it has related Instant and Duration types. There absolutely needs to be some way to wrangle elapsed time vs. execution time ignoring process or system sleep, but maybe this shouldn't be a different type of time keeper but a different kind of duration?

If such a system time keeper is complicated for high-level code, or for beginners, maybe also add a Stopwatch type.

This would make more sense to me.

Just for the sake of the discussion, can anyone provide a use case that involves leap seconds but does not involve using a Calendar API?


I have one in which leap seconds are involved in so far as they should never get in the way:


To schedule an animation, you don't use the WallClock, but instead a MonotonicClock that stops when the computer goes to sleep.

So if the user suspends the computer in the middle of the animation, it will resume as expected when the computer wakes up.

Leap seconds are not involved in this use case.
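A sketch of the same idea, using Python's analogous stdlib clock since the pitched Swift API is not available to run; time.monotonic() stands in for the monotonic clock here:

```python
import time

# time.monotonic() plays the role of a monotonic clock: it never jumps
# when the system wall clock is adjusted, so it is the right tool for
# driving animation and timeout intervals.
mono_start = time.monotonic()
time.sleep(0.05)  # stand-in for part of an animation's run time
mono_elapsed = time.monotonic() - mono_start

# The measured interval reflects elapsed run time, not wall-clock edits.
print(mono_elapsed >= 0.04)  # True
```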


I'm developing a sport application, a chronometer. The app measures the elapsed time between two presses of a button. Requirements:

  • Sub-second precision is required, because I'm measuring the performances of high-level athletes.
  • Leap seconds must not alter my precise measurement. We may register a new world record!
  • Device sleep must not alter my precise measurement. Sleep may happen due to built-in energy-saving policy, or because I inadvertently hit the "sleep" physical button when I put the device in my pocket. None of those events should invalidate the measurement - we're live on TV during the Olympics!
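As a sketch of what those requirements ask from the platform (Python on Linux; CLOCK_BOOTTIME is an assumption about the host OS, and plain CLOCK_MONOTONIC may pause during suspend, which would violate the third requirement):

```python
import time

# The chronometer needs a clock that (a) ignores wall-clock adjustments
# and leap seconds, and (b) keeps counting while the device sleeps.
# On Linux that is CLOCK_BOOTTIME; the guard below is because the
# constant is platform-specific.
if hasattr(time, "CLOCK_BOOTTIME"):
    press_1 = time.clock_gettime(time.CLOCK_BOOTTIME)
    # ... athlete runs, device possibly sleeps in my pocket ...
    press_2 = time.clock_gettime(time.CLOCK_BOOTTIME)
    elapsed = press_2 - press_1
    print(elapsed >= 0.0)  # True
```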

Afaics, MonotonicClock should be replaced with "monotonic clock" :nerd_face: — or UptimeClock (those two should be structs).

That would be a job for MonotonicClock... WallClock.measure is really not that useful.

No, because I added the requirement of insensitivity to device sleep.

Directly from the proposal:

When instants are for local processing only and need to be high resolution without the encumbrance of suspension while the machine is asleep MonotonicClock is the tool for the job.
