[Pitch] Clock, Instant, Date, and Duration

This makes sense! A static minimum bound seems far more useful, and it’s also easier to implement.

This gets muddy very quickly for WallClock.Instant. How likely is a developer to think the system will delay a timer set for .seconds(10) if it was set immediately before 2am on the last Sunday in October? What about .hours()? Does it depend on whether the call site makes it explicit that it’s passing a WallClock.Instant?

The description of WallClock doesn’t explain why MonotonicClock is not suitable for that purpose. The example doesn’t go into detail about what happens if a WallClock.Instant is shared between two systems and one system’s WallClock slews. Perhaps better names are SharedReferenceClock vs. LocalReferenceClock?

java.time.Instant and java.time.Duration:

The Java Time-Scale divides each calendar day into exactly 86400 subdivisions, known as seconds. These seconds may differ from the SI second.

Java can use UTC-SLS (smoothed leap seconds) or Unix Time, as far as I can tell.

Swift.Date might be more "date-like" by supporting durations of days, if possible.

Just to make sure: My question about leaving Date out of this pitch wasn't rhetorical — I'd really appreciate an explanation.

In the meantime, as no one else gave a direct answer to your question:
I guess there was no response because it makes no sense in the future we(?) aim for:
The vision is not to have both Date and Timestamp and having to choose — it's not having any real successor of NSDate at all (and maybe having Timestamp instead).

Problems with NSDate
Date has a wrong format and a wrong name... so what is left to justify cementing it in the stdlib? I cannot count how many colleagues have been shocked that the Date they just initialized as next Friday is shown as Thursday by the debugger, and I expect it's similar all over the world (except maybe if you work in Greenwich ;-).

Why bother?
I see that those problems can't be resolved easily, but I don't think this pitch needs to make that cleanup harder than it already is:
The Clock concept is based on protocols, and Date could easily conform to Instant without being part of the stdlib. We have been living with its flaws for years, and we don't have to tie the solution to this pitch.

WallClock
Obviously, if WallClock needs Date and should be part of the stdlib, Date has to be there as well.
But is it actually that useful to have a non-monotonic clock at this level?
It's even dangerous, because accidental use can lead to really nasty bugs!
Does it make sense to use measure with a WallClock (maybe if you want to use daylight saving time to cheat on a record ;-)?
Do you want requests to time out because your server got an NTP update?
Are there any examples that need WallClock in the stdlib? (also a real question).

4 Likes

This part is definitely needed. The reasoning for a wall-clock-based instant is to provide a type suitable for deadlines in distributed actor scenarios. The moment a transport communicates a deadline across the wire to a distributed actor, the two machines involved need to have a shared agreement on time.

Obviously using an uptime clock in that case is distinctly flawed for a number of reasons, not least of which is that the two machines will never have been booted at precisely the same time; it also exposes a potential security vulnerability. Monotonic time likewise has a non-shared frame of reference, similar to uptime (this varies a bit per platform, but in practice it is often a boot-as-zero reference point style of instant).

The next logical question is: why do we need to use instants as the transport for deadlines? Why not timeouts (read: Duration)? A deadline based on a timeout is not composable; any attempt to compose it would accumulate strange drift on the deadline, where each access to now takes time out of that deadline. Using a distinct instant means that the actual deadline for the work is preserved without needing to be recalculated per frame.
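To make the drift concrete, here is a minimal sketch (using Foundation's Date as a stand-in for a wall-clock instant; none of this is the pitched API):

import Foundation

// Forwarding an instant preserves the deadline exactly:
let deadline = Date().addingTimeInterval(2) // "finish by T0"
let forwarded = deadline                    // still exactly T0 on arrival

// Forwarding a timeout re-anchors it at each receiver's "now", so the
// cost of the hop itself (transmission, decoding, scheduling) silently
// shifts the effective deadline away from T0.
func receive(timeout: TimeInterval) -> Date {
    Date().addingTimeInterval(timeout) // later than T0 by the hop's cost
}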

So in order to offer the transport layer for distributed actors, we need a concrete type suitable for expressing a shared and agreed-upon frame of reference for measuring time. All of Swift's supported target operating systems (and pretty much every one I can even think of) have a concept that does precisely this. The root derivation, clock_gettime(CLOCK_REALTIME, ...) or its equivalent, exists on all platforms (granted, the API names may vary a bit here and there). That all boils down to the fact that we need something below (or at the same layering level as) distributed actors to offer them the right type. This means the Concurrency library/standard library is the right place for that thing.
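For reference, that call is directly reachable from Swift today on both Darwin and Linux (a minimal sketch, not part of the pitched API):

#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Read the realtime ("wall") clock: seconds and nanoseconds since the
// Unix epoch, i.e. the shared frame of reference two machines can agree on.
var ts = timespec()
clock_gettime(CLOCK_REALTIME, &ts)
print("epoch seconds: \(ts.tv_sec), nanoseconds: \(ts.tv_nsec)")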

I am quite certain the concerns have been heard, and it is well understood by now that there is a conflation between dates for computers and calendrical dates. I have some time set aside today to talk with some of the responsible parties, and I will make sure to report back here once I have more details. Suffice it to say, there are a lot of considerations to make here: we need to offer APIs that make sense while also balancing support for existing usage.

To be clear, this is just the start of a few proposals and work in this area; I know it can be frustrating to feel there is an opaque layer involved, but moving types down is a good way to start removing some of those barriers. Everyone benefits (in my personal opinion) from things being more open.

6 Likes

Thanks for the detailed response — I already had the feeling that there must be some kind of "it's groundwork for the next WWDC"-motivation here ;-)

But as usual, answers just produce new questions...

Not sure how to understand this… I guess "composable" refers to the problem that transmission takes time, so your actual deadline is always slightly later than intended, right?
That obviously wouldn't be nice, but it's impossible to get two clocks in perfect sync (at least usual clocks), so there will always be some difference.
I just tried to find some common values but only found vague answers ("should be better than 100 ms"); then I remembered that I had to tweak the configuration of an NTP client in the past because it was sending emails every day… so, from real-life experience, I can say there was not a single day in the logs where the needed adjustment was smaller than 1.2 seconds(!).

I'm not sure what a typical timeout will be, but I'm sure you cannot guarantee that the difference between two computers' clocks is small enough to express timeouts correctly using absolute time.
So are you going to have an additional clock which delivers wall-clock time plus some delta to compensate for the differences? But if you do that, you could also use MonotonicClock, couldn't you?

The "two computers" scenario also made me think that WallClock might not be a good name: When you call someone in Australia and ask what time their wall clock is displaying, you'll end up with a difference bigger than any common timeout ;-); maybe WorldClock would be a better fit?

I can speak to that, since it matters for where we want to use deadlines (not "timeouts"!) in Swift Concurrency.

Imagine you have two tasks, and they may happen next to each other in parallel, but they may also happen sequentially. You get an incoming request in a server (or app) and set a deadline for it that "this must complete within 2 seconds, or I don't care".

MOCK API:

await Task.withDeadline(in: .seconds(2)) { // MOCK SYNTAX; no idea how we'll expose it yet
  let result = await first()
}

Now... if first() is implemented like this:

func first() async {
  await Task.withDeadline(in: .seconds(10)) { <<slow stuff here>> }
  << other stuff >>
}

It would be terrible if the actual time that <<slow stuff here>> gets to run without cancellation were always 10 seconds just because the first() func said so. That doesn't make much sense, since the "entire task" that is calling first() is known to have a deadline of "2 seconds from the moment the original 'outer' task was started".

I.e. what we want is the following semantics:

  • Task0: set deadline to now() + 2 seconds == T0
  • Task0 / Task1: attempt to set deadline to now() + 10 seconds == T1, BUT that would be past the parent's deadline, so that does not make sense and instead reuse the parent's deadline T0

This means that "inner" tasks are free to set earlier deadlines, but may not extend the deadlines indefinitely - longer than the parent wanted to wait for them.

The term WallClock is a good name for the "two computers" scenario -- it evokes the right intuitions about the clock, i.e. that it cannot be truly relied on but is a good approximation. Wall-time clocks cannot be relied on as absolute truth, but they're often best-effort good enough for many things.

The wall clock is the same UTC time regardless of where it is read; don't mix time zones into this :-)

2 Likes

But that is what real wall clocks do all the time, isn't it?
The vast majority of (visible, classic) clocks are configured to display local time, and I have even seen wall clock time used as a synonym for local time.

Wikipedia does not relate the concept to UTC either, but rather to a stopwatch:

Elapsed real time, real time, wall-clock time, or wall time is the actual time taken from the start of a computer program to the end. In other words, it is the difference between the time at which a task finishes and the time at which the task started.

So it is even worse than I thought: Following the official (I think Wikipedia has some authority) definition, people will assume that the "WallClock" is the right one to use for measurements — and they may get wrong results because of that!

I guess my favorite name would be UTCClock: For those who know about the prefix, this would be clear — and for those who don't know, the acronym might be scary enough to make them use something else ;-).

1 Like

UTCClock implies this clock has some time zone knowledge, which is wrong, as it is just a counter of seconds from a predefined point in time.

Converting this counter into an hour:minute time should be done using a locale-aware Calendar.
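With Foundation today that looks roughly like this (a sketch; the epoch value and time zone are arbitrary):

import Foundation

// The instant is just seconds since a reference point; only a calendar
// plus a time zone turns it into an hour:minute reading.
let instant = Date(timeIntervalSince1970: 1_000_000_000)
var calendar = Calendar(identifier: .gregorian)
calendar.timeZone = TimeZone(identifier: "Australia/Sydney")!
let parts = calendar.dateComponents([.hour, .minute], from: instant)
print(parts.hour!, parts.minute!) // different per time zone, same instant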

Great!

In the internal debates, please don't forget the "experts" coming from other languages, or those without development experience at all (but with a clear understanding of what "date" means).
We have some seasoned Darwin developers here, and yes, I guess they would actually be surprised to see NSDate replaced... but in a positive way! There's an impressive consensus that a cleanup would be a good idea, and afaics, none of the presented downsides made people move away from their opinion.

12 Likes

Picking up from SE-0329: Clock, Instant, Date, and Duration - #9 by davedelong
I'm fine with minute and hour as well.
Yes, those can cause trouble — but using seconds isn't safe either when you are using the wrong clock (and imho this is the actual danger).

Years and months are definitely not a good addition, simply because there is no proper translation to seconds.

For weeks and days, I'd say the connection (1 day equals 24 * 60 * 60 seconds, and a week has seven days) is quite strong (because exceptions are really rare), but the connection to a calendar is stronger:
The length of a day is location dependent, and we just don't visit Mars or Venus often enough to encounter conflicting interpretations.

For hours and minutes, I've never seen such a conflict, and all stopwatches I know make the same cut.

4 Likes

I finally articulated this morning why I've got an issue with the proposed Duration, thanks to your post @Tino:

The unit is ambiguous.

There are two kinds of durations we deal with: physical seconds and clock seconds. Most of the time these are equivalent, but they do not have to be.

Past Clocks

A bunch of historical clocks had variable-length seconds, because the length of a second was derived from how much daylight there was that day. So if anyone wanted to implement a historical clock for a game or other emulator, what would their Duration value be?

Present Clocks

An NTPClock might have a Duration value that is slightly different from a MonotonicClock's, if the NTP server chooses to smear leap seconds over the course of a day instead of injecting a :60 second at the end of the day. This means that the length of an NTPClock's second is slightly longer (by 1/86400th of a physical second) than a MonotonicClock's second. (This also means that applying a Duration to an Instant needs to happen at the Clock level and not the Instant level, because the physical length of a Duration changes depending on the properties of the Clock.)
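The smear arithmetic, for concreteness (assuming one leap second spread evenly over a single 86,400-second day, as some NTP deployments do):

// One extra physical second spread over the 86_400 "seconds" of that day:
let smearedSecond = 86_401.0 / 86_400.0
// ≈ 1.0000116 physical seconds per clock second, i.e. each clock
// second runs long by exactly 1/86_400 of a physical second.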

Future Clocks

You mentioned Mars and dismissed it, but I think it's worthy of consideration (especially since we're considering an instantaneous type with hundreds of billions of years of range at nanosecond [or finer] precision). Martian days are about 39 and a half minutes longer than Earth days, and most attempts at Martian timekeeping take an approach similar to the NTP clock by smearing the extra 39-odd minutes over the 24 hours. This, in effect, is equivalent to saying that "1 second on a Martian clock is equivalent to 1.027 seconds on a terrestrial clock".
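That factor falls out of the same arithmetic (assuming a sol of roughly 88,775 SI seconds):

// A sol smeared across an 86_400-"second" clock day:
let marsSecond = 88_775.0 / 86_400.0 // ≈ 1.0275 physical seconds per Mars clock second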


So... what is Duration measuring? If it's measuring physical seconds, then there needs to be more work to consider what it means to apply a physical second duration to clock-specific instants. If it's measuring clock-specific seconds, then there needs to be more work to consider how to translate "equivalent" durations between clocks.

8 Likes

@Dante-Broggi's proposal of a ProcessClock also highlights an issue with Duration:

"15 seconds" on a ProcessClock is an indeterminate length of time, since we have no way of knowing how much physical time will elapse; the process could be suspended indefinitely, for all we know.

This also suggests that calling a Clock's duration "seconds" is problematic. For some clocks it might be "seconds", but perhaps "tick" would be a better term?

5 Likes

I would expect tick to be the smallest unit of resolution the clock could measure. For example, a tick on an M1 Mac would differ from that on an Intel Mac. This is in fact a very useful concept for certain system programming applications, so maybe it makes sense to define second in terms of tick?
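Apple platforms expose exactly this ratio via Mach (a small Darwin-only sketch, not part of the pitched API):

import Darwin

// Mach reports the tick-to-nanosecond conversion as a fraction; it is
// 1/1 on Intel Macs but, for example, 125/3 on Apple silicon.
var info = mach_timebase_info_data_t()
mach_timebase_info(&info)

let ticks = mach_absolute_time()
let nanos = ticks * UInt64(info.numer) / UInt64(info.denom)
print("1 tick = \(info.numer)/\(info.denom) ns; uptime ≈ \(nanos) ns")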

I still have some hope that we'll have programming languages much more advanced than what we have today by the time the first human lands on another planet (but I guess we'll still be poking around in C…).

However, unless people completely rely on computers for even simple tasks like deciding when to have lunch, I guess actual wall clocks on Mars would be different from what we have on Earth today (which is another point that consolidates my opinion that WallClock is not the best name).
But when you change the hour, you'll also introduce a conflict for common units of speed (cars would be faster… guess Elon Musk would like that :smiley:), so I think this would be a good motivation to finally switch to full decimal and have a hundred [whatever the unit would be called] per Mars day... but it could also still be an argument to have nothing but seconds (though even those have been scaled 😵‍💫).

Back on Earth, I really think we should limit the first iteration to "stopwatch time", because that is the right choice for most tasks (measurement, timeouts); the concept pitched as WallClock (or a variation of it) certainly has merit, but without a calendar, it is seriously hampered.

On top of that, calendars could resolve some challenges like those non-SI seconds: When you convert to something like TAI, the actual clock used for scheduling does not need to know all the quirks.

Imho it is not only useful: Not every device capable of executing code has a built-in clock — but many still have ticks.
As this pitch makes a real clock mandatory, you would need a dialect of Swift to target such systems (and afair, that is still something Swift really does not want).
I can also think of other counters that could be used as a clock, and explicitly referring to seconds would make Duration awkward in such contexts.

Can you provide an example of a platform which has a “tick” that isn’t defined in terms of seconds? I don’t doubt one exists but it would be nice to have a concrete example.

As far as dialects are concerned, the standard library already makes certain APIs unavailable based on platform. For example, ManagedBuffer.capacity is unavailable on OpenBSD because it doesn't have malloc_size. Perhaps this discussion is an indication that the split between "wall" clocks and "monotonic" clocks is not well modeled by two concrete instances of the same protocol.

To throw another example out there: a multiplayer real-time strategy game might implement a “game clock” in which ticks have no fixed duration but are monotonically increasing to order in-world events.
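A sketch of what such a clock might look like (a hypothetical type, nothing from the pitch):

// Ticks carry no fixed physical duration; they only impose a total
// order on simulation events shared by all players.
struct GameClock {
    private(set) var tick: UInt64 = 0
    mutating func advance() { tick += 1 } // called once per simulation step
}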

3 Likes

Sorry, I have to pass because as far as I could see, Arduino has built-in conversion; I guess I should have written RTC instead of just clock.
Maybe chips which are not deployed with a single, fixed frequency?

Commenting on this quote from the pitch thread over here as I don’t have a complete review of the pitch to contribute:

@Karl :

It also looks totally weird at the call-site:

DispatchQueue.main.asyncAfter(deadline: .now.advanced(by: .seconds(3), clock: .wall))

I also agree that it looks weird, though for me this stems more from the choice of clock as the parameter name. The pitch defines a clock as "the mechanism in which to measure time", so it almost feels to me like writing a temperature function like so:

someTemperature.increased(by: 2, thermometer: .celsius)

Specifically calling it out as wall time would make it clearer to me, something like:

.now.advanced(by: .seconds(3), of: .wallTime)