[Pitch] Clock, Instant, Date, and Duration

Do they? On the one hand we've got Date, which is effectively a typealias for "a Double of the number of seconds since the reference date", and on the other hand we're talking about a new type that has femtosecond precision for billions of years. Those are very different things. The former is the misnamed basis for calendaring APIs, not suited to precision timing. The latter is much closer to a true instantaneous timestamp.

We're already largely in agreement about an instantaneous type with several orders of magnitude more precision than Date; why not fix the name as well?

14 Likes

Omitting hours doesn’t seem arbitrary to me… most people on the planet experience two hour-based discontinuities per year. There have been serious bugs caused by daylight saving transitions.

Minutes, on the other hand, seem equivalent to seconds in terms of risk. Leap seconds are rare enough that for all intents and purposes seconds are treated as monotonic, and a minute is treated as exactly sixty seconds.

2 Likes

I do hope we can think of a better name for InstantProtocol. It's a crucial part of the whole design.

What's interesting is how a clock's Instant type differs from a collection's Index. They are both opaque associated types produced by a parent, but with indices, the collection decides which indices appear in which order; with instants, the order is fixed and new instants can be created without involving any specific clock. A particular clock only decides when it arrives at future instants.

public protocol InstantProtocol: Comparable, Hashable, Sendable {
   func advanced(by duration: Duration) -> Self
   func duration(to other: Self) -> Duration
}

This is basically Strideable, except that the Stride (Duration) is not a SignedNumeric.
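
For reference, the core of Strideable (simplified from the stdlib declaration) has the same shape, but constrains its Stride to SignedNumeric, which Duration deliberately is not:

public protocol Strideable: Comparable {
   associatedtype Stride: Comparable, SignedNumeric
   // roughly InstantProtocol.duration(to:)
   func distance(to other: Self) -> Stride
   // roughly InstantProtocol.advanced(by:)
   func advanced(by n: Stride) -> Self
}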

(fun side-note: the docs for Strideable demonstrate using it for a type Date, which represents a calendar date :upside_down_face: )

The name InstantProtocol implies that this doesn't represent a significant abstract "thing" by itself, that it is defined by the clock whose instant it is. But the fact that it can apply quantities of time (e.g. seconds) to an instant implies that it models something significant about how time progresses.

Also, it sounds like a microwave/"just add water!" protocol. Don't mix with :fish:able.

Because then we'll have two types...

If we're going to have two types I think we need to answer these questions:

I think this is a really good idea; it seems distinctly reasonable to have that as a requirement. However, is the resolution of a clock variable, or is it static? The problem with a variable resolution is that if it can change from one point to the next, callers are immediately vulnerable to a race condition. So perhaps I am in favor of the resolution being a minimumResolution; that name implies the value should not change from call to call.
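
To sketch what I mean (assuming the pitched Clock/InstantProtocol shapes; minimumResolution here is just the suggestion from this post, not settled API):

public protocol Clock: Sendable {
   associatedtype Instant: InstantProtocol
   var now: Instant { get }
   // Lower bound on the granularity of `now`; callers may assume it
   // does not change from call to call.
   var minimumResolution: Duration { get }
   func sleep(until deadline: Instant) async throws
}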

edited; changed my mind on requiring static

1 Like

For me, the question to answer is actually "why should Date in the proposed form exist at all as part of Swift?".
If you believe Swift will flourish beyond the Apple-realm, I think it will be hard to explain to someone with, for example, a Linux background, that one of the types of the stdlib has a quite misleading name and strange persistence rules.
Building a language for the future instead of for the past would be a strong enough argument for me (and, as far as I can see, for the majority of people discussing here who don't work at Apple) not to include NSDate. But even with a strict focus on iOS, afaics it does not buy us much to take that step:
It's not a core part of what is proposed here, so why cling so eagerly to Date in a pitch about scheduling (which in many cases does not need dates at all)?

15 Likes

The values being stored are not affected by daylight saving, since they are not calendrical values, so hours still seem reasonable; even to say .hours(3), as @ktoso mentioned, for server-side usage. I agree that .days is definitely a boundary I want to avoid, since that gets into the question of "what is a day", which immediately brings in the civil/legal/religious connotations of calendars.

4 Likes

This makes sense! A static minimum bound seems far more useful, and it’s also easier to implement.

This gets muddy very quickly for WallClock.Instant. How likely is a developer to think the system will delay a timer set for .seconds(10) if it was set immediately before 2am on the last Sunday in October? What about .hours()? Does it depend on whether the call site makes it explicit that it’s passing a WallClock.Instant?

The description of WallClock doesn’t explain why MonotonicClock is not suitable for that purpose. The example doesn’t go into detail about what happens if a WallClock.Instant is shared between two systems and one system’s WallClock slews. Perhaps better names are SharedReferenceClock vs. LocalReferenceClock?

java.time.Instant and java.time.Duration:

The Java Time-Scale divides each calendar day into exactly 86400 subdivisions, known as seconds. These seconds may differ from the SI second.

Java can use UTC-SLS (smoothed leap seconds) or Unix Time, as far as I can tell.

Swift.Date might be more "date-like" by supporting durations of days, if possible.

Just to make sure: My question about leaving Date out of this pitch wasn't rhetorical — I'd really appreciate an explanation.

In the meantime, as no one else gave a direct answer to your question:
I guess there was no response because it makes no sense in the future we(?) aim for:
The vision is not to have both Date and Timestamp and having to choose; it's to have no real successor of NSDate at all (and maybe have Timestamp instead).

Problems with NSDate
Date has the wrong format and the wrong name... so what is left to justify cementing it in the stdlib? I cannot count how many colleagues have been shocked that the Date they just initialized as next Friday is shown as Thursday by the debugger, and I expect it's similar all over the world (except maybe if you work in Greenwich ;-).

Why bother?
I see that those problems can't be resolved easily, but I don't think this pitch needs to make that cleanup harder than it already is:
The Clock concept is based on protocols, and Date could easily conform to Instant without being part of the stdlib. We have been living with its flaws for years, and we don't have to tie the solution to this pitch.

WallClock
Obviously, if WallClock needs Date and should be part of the stdlib, Date has to be there as well.
But is it actually that useful to have a non-monotonic clock at this level?
It's even dangerous, because accidental use can lead to really nasty bugs!
Does it make sense to use measure with a WallClock (maybe if you want to use daylight saving time to cheat on a record ;-)?
Do you want requests to time out because your server got an NTP-update?
Are there any examples that need WallClock in the stdlib?? (also a real question).

4 Likes

This part is definitely a needed item. The reasoning for a wall clock based instant is to provide a type suitable for deadlines in distributed actor scenarios. The moment a transport communicates a deadline across the wire to a distributed actor that means that the two machines involved need to have a shared agreement on time.

Obviously using an uptime clock in that case is distinctly flawed for a number of reasons, not least that the two machines will never have been booted at precisely the same time, but also because it exposes a potential security vulnerability. Monotonic time likewise has a non-shared frame of reference, similar to uptime (this varies a bit per platform, but in practice it is often an instant with boot as the zero reference point).

The next logical question is: why do we need to use instants as the transport for deadlines? Why not timeouts (read: Duration)? A deadline based on a timeout is not composable; any attempt to compose it gains strange drift on the deadline, because each access to now ends up taking time out of that deadline. Using a distinct instant means that the actual deadline for the work is preserved without needing to be recalculated at every step.
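
A rough sketch of that drift, written against the pitched InstantProtocol (the function names and the now() parameter are purely illustrative, not proposed API):

// Timeout style: each hop re-derives "now", so scheduling and transport
// time silently shifts the effective end point at every hop.
func forward<I: InstantProtocol>(timeout: Duration, now: () -> I) {
   let deadline = now().advanced(by: timeout) // recomputed at this hop
   _ = deadline
}

// Deadline style: the instant is fixed once by the original caller and
// survives any number of hops unchanged.
func forward<I: InstantProtocol>(deadline: I) {
   // every hop compares work against the same fixed instant
}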

So in order to offer the transport layer for distributed actors, we need a concrete type suitable for expressing a shared, agreed-upon frame of reference for measuring time. The operating systems Swift targets (and pretty much all of them that I can even think of) have a concept that does precisely this: the root derivation of clock_gettime(CLOCK_REALTIME, ...) or its equivalent exists on all platforms (granted, the API names may vary a bit here and there). That all boils down to the fact that we need something below (or at the same layering level as) distributed actors to offer them the right type. This means that the Concurrency library/standard library is the right place for that thing.

I am quite certain the concerns have been heard, and it is well understood by now that there is a conflation between dates for computers and calendrical dates. I have some time set aside today to talk with some of the responsible parties, and I will make sure to report back here once I have more details. Suffice it to say there are a lot of considerations to make here: we need to offer APIs that make sense, but we also need to balance supporting existing usage.

To be clear, this is just the start of a few proposals and work in this area; I know it can be frustrating to feel there is an opaque layer involved, but moving types down is a good way to start removing some of those barriers. Everyone benefits (in my personal opinion) from things being more open.

6 Likes

Thanks for the detailed response — I already had the feeling that there must be some kind of "it's groundwork for the next WWDC"-motivation here ;-)

But as usual, answers just produce new questions...

Not sure how to understand this… I guess "composable" refers to the problem that transmission takes time, so that your actual deadline is always slightly bigger than intended, right?
That obviously wouldn't be nice, but it's impossible to get two clocks in perfect sync (at least usual clocks), so there will always be some difference.
I just tried to find some common values, but only found vague answers ("should be better than 100ms"). Then I remembered I had to tweak the configuration of an NTP client in the past because it was sending emails every day… so, from live experience, I can say there was not a single day in the logs where the needed adjustment was smaller than 1.2 seconds(!).

I'm not sure what a typical timeout will be, but I'm sure you cannot guarantee that the difference between two computers' clocks is small enough to express timeouts correctly using absolute time.
So are you going to have an additional clock which delivers wall clock time plus some delta to compensate for the difference? But if you do that, you could also use MonotonicClock, couldn't you?

The "two computers" scenario also made me think that WallClock might not be a good name: When you call someone in Australia and ask what time their wall clock is displaying, you'll end up with a difference bigger than any common timeout ;-); maybe WorldClock would be a better fit?

I can speak to that, since it matters for where we want to use deadlines (not "timeouts"!) in Swift Concurrency.

Imagine you have two tasks; they may run next to each other in parallel, but they may also run sequentially. You get an incoming request in a server (or app) and set a deadline for it: "this must complete within 2 seconds, or I don't care".

MOCK API:

await Task.withDeadline(in: .seconds(2)) { // MOCK SYNTAX; no idea how we'll expose it yet
  let result = await first()
}

Now... if first() is implemented like this:

func first() async {
  await Task.withDeadline(in: .seconds(10)) { /* slow stuff here */ }
  // other stuff
}

It would be very terrible if the actual time the "slow stuff" gets to run without cancellation were always 10 seconds just because first() said so. That doesn't make much sense, since the "entire task" that is calling first() is known to have a deadline of "2 seconds from the moment the original 'outer' task was started".

I.e. what we want is the following semantics:

  • Task0: set deadline to now() + 2 seconds == T0
  • Task0 / Task1: attempt to set deadline to now() + 10 seconds == T1, BUT that would be past the parent's deadline, so that does not make sense and instead reuse the parent's deadline T0

This means that "inner" tasks are free to set earlier deadlines, but may not extend the deadlines indefinitely - longer than the parent wanted to wait for them.
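
In code, that rule could look roughly like this (a hedged sketch against the pitched InstantProtocol; effectiveDeadline is a hypothetical helper, not proposed API):

func effectiveDeadline<I: InstantProtocol>(parent: I?, now: I, requested: Duration) -> I {
   let child = now.advanced(by: requested)
   guard let parent = parent else { return child }
   // Inner tasks may move the deadline earlier, never later than the parent's.
   return min(parent, child)
}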

The term WallClock is a good name for the "two computers" scenario: it evokes the right reactions about the clock, i.e. that it cannot be truly relied on, but is a good approximation. Wall clocks cannot be trusted as absolute truth, yet they're often best-effort good enough for many things.

The wall clock is the same UTC time regardless of where it is read; don't mix time zones into this :-)

2 Likes

But that is exactly what real wall clocks do all the time, isn't it?
The vast majority of (visible, classic) clocks are configured to display local time, and I have even seen wall clock time used as a synonym for local time.

Wikipedia does not relate the concept to UTC either, but rather to a stopwatch:

Elapsed real time, real time, wall-clock time, or wall time is the actual time taken from the start of a computer program to the end. In other words, it is the difference between the time at which a task finishes and the time at which the task started.

So it is even worse than I thought: Following the official (I think Wikipedia has some authority) definition, people will assume that the "WallClock" is the right one to use for measurements — and they may get wrong results because of that!

I guess my favorite name would be UTCClock: For those who know about the prefix, this would be clear — and for those who don't know, the acronym might be scary enough to make them use something else ;-).

1 Like

UTCClock implies this clock has some time zone knowledge, which is wrong, as it is just a counter of seconds from a predefined point in time.

Converting this counter into an hour:minute time should be done using a locale-aware Calendar.

Great!

In the internal debates, please don't forget the "experts" coming from other languages, or without development experience at all (but a clear understanding of what "date" means).
We have some seasoned Darwin developers here, and yes, I guess they would actually be surprised to see NSDate replaced... but in a positive way! There's an impressive consensus that cleanup would be a good idea, and afaics none of the presented downsides made people move away from that opinion.

12 Likes

Picking up from SE-0329: Clock, Instant, Date, and Duration - #9 by davedelong
I'm fine with minute and hour as well.
Yes, those can cause trouble — but using seconds isn't safe either when you are using the wrong clock (and imho this is the actual danger).

Years and months are definitely not a good addition, simply because there is no proper translation to seconds.

For weeks and days, I'd say the connection to seconds (1 day equals 24 * 60 * 60 seconds, and a week has seven days) is quite strong (because exceptions are really rare), but the connection to a calendar is stronger:
A day is location dependent, and we just don't visit Mars or Venus often enough to encounter conflicting interpretations.

For hours and minutes, I've never seen such a conflict, and all stopwatches I know make the same cut.

4 Likes

I finally articulated this morning why I've got an issue with the proposed Duration, thanks to your post @Tino:

The unit is ambiguous.

There are two kinds of durations we deal with: physical seconds and clock seconds. Most of the time these are equivalent, but they do not have to be.

Past Clocks

A bunch of historical clocks had variable-length seconds, because the length of a second was derived from how much daylight there was that day. So if anyone wanted to implement a historical clock for a game or other emulator, what would their Duration value be?

Present Clocks

An NTPClock might have a Duration value that is slightly different from a MonotonicClock's, if the NTP server chooses to smear leap seconds over the course of a day instead of injecting a :60 second at the end of the day. This means that the length of an NTPClock's second is slightly longer (by 1/86400th of a physical second) than a MonotonicClock's second. (This also means that applying a Duration to an Instant needs to happen at the Clock level and not the Instant level, because the physical length of a Duration changes depending on the properties of the Clock.)
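
The arithmetic behind that 1/86400 figure, spelled out (illustrative only):

let physicalSecondsPerDay = 86_400.0
// Smearing one leap second evenly across the day stretches every
// smeared second by 1/86400 of a physical second.
let smearedSecondLength = (physicalSecondsPerDay + 1) / physicalSecondsPerDay
// ≈ 1.0000115740740741 physical seconds per smeared second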

Future Clocks

You mentioned Mars and dismissed it, but I think it's worthy of consideration (especially since we're considering an instantaneous type with hundreds of billions of years of range at nanosecond [or finer] precision). Martian days are about 39 and a half minutes longer than Earth days, and most attempts at Martian timekeeping take an approach similar to the NTP clock by smearing the extra 39 minutes over the 24 hours. This is, in effect, equivalent to saying that "1 second on a Martian clock is equivalent to about 1.027 seconds on a terrestrial clock".
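
Spelling out that ratio (illustrative only; a Martian sol is roughly 24 h 39.5 min):

let earthDaySeconds = 24.0 * 60 * 60               // 86_400
let marsSolSeconds = earthDaySeconds + 39.5 * 60   // ≈ 88_770
let marsSecondStretch = marsSolSeconds / earthDaySeconds
// ≈ 1.027: one "smeared" Martian clock second ≈ 1.027 terrestrial seconds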


So... what is Duration measuring? If it's measuring physical seconds, then there needs to be more work to consider what it means to apply a physical second duration to clock-specific instants. If it's measuring clock-specific seconds, then there needs to be more work to consider how to translate "equivalent" durations between clocks.

8 Likes