SE-0329: Clock, Instant, Date, and Duration

I agree. Dates (instants) are spot-on in discussions about lossless serialization. I do not want to say that lossless has no value. But I want to say that lossless is not as obvious a requirement as it looks. When a value is serialized, you never know which system will later decode it. It may be the exact same program that encoded it, but not always. The date may be stored in a database that will perform date computations (and may require a lossy format to do so). The date may be sent to another operating system that has a different "native" date representation (and will lose precision when decoding). The date may be decoded by a future version of your app that uses a different programming language, with its own native date representation as well. In all those situations, lossless date encoding cannot be achieved, and is thus not a goal at all.

My point of view is that serialized data is much more important than the language and the API that process it. Dates are very particular because they do not have an obvious serialization format (unlike ints, strings, and other trivial scalars). Among the various design constraints we face in this discussion, I'm not sure lossless round-tripping has the highest priority.

Even if allowing lossless encoding is an obvious requirement, whether to use this lossless encoding by default is what must be weighed against the other constraints.

1 Like

Right, I forgot about Timestamp; I would be more than OK with that too, and I don't see a problem with it being confused with rubber-stamping in this context.

3 Likes

I have already braced myself for Date being moved to the stdlib despite all the backlash :frowning: — but there has been much constructive critique, and serious questions beyond that topic have not been addressed at all.
Those may seem minor, but while it is easy to improve a proposal, it is tough to correct a mistake once it's released (just imagine we had NSTimestamp as a starting point... but there's more than bikeshedding about names here and in the pitch).

Your mileage may vary, but I'd be really, really disappointed if all the input from the community is summarized with an empty phrase like "There has been some critique about minor details, but after thorough consideration, the core team decided to move on anyways"...

11 Likes

I have been putting off the evaluation of the proposal to the last minute as I tend to do with all tasks I'm not relishing. I know that lots of work went into putting together the initial pitch, so I'm saddened to have to share that I feel that the proposal in the form in which it has come to review has not adequately reckoned with the concerns laid out during community discussion of that pitch.


First, a point of order on the process of this also—and I do hope the core team will address this in their review summary. During the pitch stage, multiple references were made to the "responsible parties" at Apple who have contributed to the design, such as the following:

To my knowledge, there hasn't been the promised reporting back, but that is not even my top concern here. Rather, I think we need to address the role of proposal authors and their relationship to the proposal.

It has been stated that an expectation is that authors should be available and responsive to the community during the stages of Swift Evolution, and by all measures the named authors have been. However, it has been made clear that several relevant design decisions here have been shaped by "responsible parties" unnamed with design constraints unstated. Their relationship to the proposal text has been just as authorial (in the sense of authorship commonly used in scientific and technical contexts, whether or not they have typed out any of the words contained in the document) as the named authors, as evidenced during the pitch phase when the discussion has turned to those areas over which these parties have exerted authorship and the named authors have become intermediaries promising to confer and report back.

It is obviously not possible fully to engage in a fluent discussion about design tradeoffs in this manner. I do not believe that it is in the spirit of the review process, nor adequate, that named authors should not among them have collective authorial control over all parts of the proposal. Put another way, relevant parties with authorial control were not readily available and engaging with the community during this process.

Some have said here that "decisions are made by those who show up"; I think this is an eminently important cornerstone and principle of fairness in an open-source and transparent project. It should not be the case (in my view) that certain people have special dispensation to make decisions without showing up. If it is not possible for whatever organizational reason for "responsible parties" to participate like everyone else here, then I would argue that their proposal isn't ready to come to review or be incorporated into the Swift open-source project.


I agree substantially with @davedelong's feedback and am grateful that he took the time to write it out. I will name some specifics with which I utterly agree:

  • Regarding the presence of .minutes and .hours in Duration, I agree that they are attractive nuisances and would urge (as I did in the pitch phase) dropping them. It is overwhelmingly the natural thing to reach for clock.now + .hours(3) when you want something three hours from now, and it'd be wrong, and for that reason we should not be exposing an API in this form.

  • Enough has been said about Date's naming specifically, with which I agree. I would like to call out some points additionally which I don't believe have been said:

First, while it is certainly acceptable to weigh pros and cons and then come down against renaming or creating a distinct type (though I'd disagree), it is simply incorrect to characterize renaming as "needless churn for no clear advantage" (emphasis mine). Taken at face value, the authors are stating not that they have weighed pros and cons, but that they have dismissed this choice as having no pros, which discards vast swaths of feedback given. This goes back to my overall feedback that this proposal in its current form has not adequately reckoned with the feedback at the pitch phase.

Second, a central con (actually, the only con) listed besides "needless churn" in the proposal text is that having a type with a new name would lead users to have a quandary of "which type should I use." I and others have noted that the possibility of implicit conversion at API boundaries, exactly like CGFloat and an approach both initially mooted for a different type as part of this proposal and explicitly allowed for by the core team's previous guidance in this area, would address the points about both churn and user quandary. This does not seem to have been reckoned with even as an alternative.

As has been noted by all involved, there is going to be some sort of implicit conversion in the proposal as it is, an unavoidable consequence that arises from our general agreement that the underlying storage of the proposed type must be different from that of Foundation.Date. In the present proposal, this issue emerges with serialization behavior. Although I can accept that lossless serialization may not be a sine qua non, we should be careful here to note that the design as proposed injects uncertainty as to whether serialization will be lossless due to mismatches in Swift version. At least for me, reasoning about future and past Swift versions interacting at runtime is significantly more difficult than reasoning about implicit conversion at API boundaries, which is determined at compile time. I think the latter alternative is significantly more within the reasoning capacity (and, therefore, control) of the author at the time they're writing their code.

So, I'd urge consideration of implicit conversion among two distinct types as an alternative even to the "same-type-different-name" approach detailed by @Douglas_Gregor. If there is agreement on both different underlying representation and a different name for our future Date replacement, then for the reasons above I think allowing each type to keep its own serialization behavior and offering seamless conversion at API boundaries may be a better user experience while still addressing the churn-and-quandary problem cited by the authors.


I want to bring up an issue not mentioned by @davedelong or, unless I'm mistaken, anyone else in this review thread. And that's the issue of the design proposed with respect to leap seconds. The proposal states:

It has been considered that Date should account for leap seconds, however after much consideration this is viewed as perhaps too disruptive to the storage and serialization of Date and that it is better suited to leverage calendrical calculations such as those in Foundation to account for leap seconds since that falls in line more so with DateComponents than Date.

Despite the statement that this opinion is the product of "much consideration," it is almost verbatim unchanged from what was first expressed in the pitch thread when the question of leap seconds was brought up, even as much was revealed in the course of that discussion:

Initially, I took it at face value that the issue of leap seconds was purely calendrical based on the claim that Date would represent actual time elapsed since a reference date. If that were the case, then the authors are correct that adjusting dates for leap seconds is purely calendrical, as it would just be about converting seconds and nanoseconds elapsed to local dates and times.

However, it has been later clarified that there is no possibility for an implementation on Apple platforms actually to include leap seconds among the seconds elapsed since a reference date without hitting the file system or network; by necessity, the standard library must elide leap seconds—and, as @lorentey discovered, during the leap second the system's wall clock actually repeats the previous second (or as he put it, "silliness" that isn't even "sort of halfway-sensible").

While it may be true that we just have to accept a discontinuity on Apple platforms at present, during the pitch discussion, we also talked at some length about how this is not the case either for Linux or for Windows. In the case of Linux, clock_adjtime provides the information necessary to distinguish a repeated leap second from the prior second, and on Windows, the system wall clock has been entirely reworked (since some release of Windows 10) not to elide leap seconds.
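
For reference, here is a minimal sketch of how that information can be queried from Swift on Linux. It assumes the Glibc overlay exposes `clock_adjtime(2)`, the `timex` struct, and the `TIME_*` state constants (all standard C declarations, though I have not verified their Swift import on every distribution):

```swift
#if canImport(Glibc)
import Glibc

// Query the kernel clock state via clock_adjtime(2). On Linux the
// return value (not errno) reports leap-second status: TIME_OOP means
// a leap second is being inserted right now, so the wall clock's
// "repeated" second can be told apart from the one before it.
func leapSecondInProgress() -> Bool {
    var tx = timex()  // zero-initialized; tx.modes == 0 means read-only query
    let state = clock_adjtime(CLOCK_REALTIME, &tx)
    return state == TIME_OOP
}
#endif
```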

It's simply not correct that not incurring a discontinuity during leap seconds (where supported by the platform) would be "too disruptive" to the storage and serialization of Date. Without any change in storage or serialization—that is, within the existing design where nanoseconds are stored as a UInt32 and ordinarily normalized to be less than 1 billion—it would be possible to use the range 1_000_000_000..<2_000_000_000 to indicate an inserted leap second based on information available from the Linux system wall clock. This can be deserialized correctly even on Apple platforms where the same value can't be generated, and with a little bit of care in the underlying implementation all operations on it should, to my understanding, just work.
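
To make that concrete, here is a hedged illustration of the encoding just described. The type and property names are mine, not the proposal's:

```swift
// Hypothetical sketch: the nanoseconds field ordinarily stays below
// 1 billion, so values in 1_000_000_000..<2_000_000_000 are free to
// mark an inserted leap second without changing the storage layout or
// serialized size at all.
struct LeapAwareInstant {
    var seconds: Int64
    var nanoseconds: UInt32   // 0..<2_000_000_000

    var isInInsertedLeapSecond: Bool {
        nanoseconds >= 1_000_000_000
    }

    // The view a leap-second-eliding platform (e.g. Apple's) would
    // produce when deserializing: the marker collapses back onto the
    // repeated second, so round-tripping stays well defined.
    var elided: (seconds: Int64, nanoseconds: UInt32) {
        isInInsertedLeapSecond
            ? (seconds, nanoseconds - 1_000_000_000)
            : (seconds, nanoseconds)
    }
}
```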

I stress that this idea wasn't invented by me in the moment; rather, it has already been discussed elsewhere on the Internet by folks who have been trying to tackle this problem. It would not have taken much to discover it had the authors been moved to look, but nowhere in the text is there evidence that it was considered. This goes back again to my overall impression that the review has not adequately reckoned with the concerns laid out during community discussion in the pitch phase.

That both Linux and Windows have made changes to mitigate the problem of leap second representation in their system wall clock is, to me, good supporting evidence that this is a relevant issue which needs to be accounted for even in the non-calendrical APIs available in Swift. The bare minimum, in my view, is that a type designed for the future should not be discarding the improvements already made in this area by the underlying system; rather, it should ideally gracefully accommodate them wherever available and in a way that future improvements on other platforms can similarly be incorporated without having to overhaul these types again.

Undoubtedly yes, and for many-faceted reasons.

See above. I share concerns with others regarding the trajectory (e.g., future users versus backwards compatibility) that is inherent to choices regarding sinking Date into the standard library with the design as proposed. Similarly, I have such concerns regarding leap second handling.

I have obviously needed to deal with time, but I do not have as much depth in this area as @davedelong and others. I have studied both the prior art detailed in the proposal text as well as looked at some of these same aspects in JavaScript, but not in enough detail to evaluate comparatively.

I put in some in-depth study in the pitch phase, and then read through the proposal text twice. My intention during this review period was to spend some time to study the proposal design in even greater detail and maybe even "noodle" with some alternatives, and to study up on prior art along the way. However—echoing again my overall appraisal of this proposal—it was extremely discouraging that in those areas where I did study up and put in great effort at feedback during the pitch stage, all of that effort was apparently dismissed in the proposal text without apparent consideration. It was sufficient to raise questions as to whether any further effort along these lines would be to any effect.

42 Likes

Surely, it must — by physical necessity — be the other way around.

All time-keeping hardware available to humankind, including that found in MacBooks and iPhones, uses some kind of physical approximation to measure actual elapsed time (since powered on).

Whether it be a quartz crystal or some other high frequency alternative, it keeps humming away at regular intervals, progressing the system clock linearly and continuously.

I agree with most feedback given in this thread so far, but I want to just very quickly add this: I find it highly questionable that a representation of time where a purported duration of one second could in fact be anything from zero to two seconds could prove a solid default type for deadlines etc.

My country recently switched from summer to winter time, and—wouldn't you know it—precisely as the radio station I was listening to announced the jump had just taken place, they lost the ability to play any clips or prerecorded content for a good half an hour.

If we can't expect developers to handle this very well-known and precisely scheduled snag in date processing, I am certain that something as small and irregular as leap seconds messing with how long a "second" is has not a snowball's chance in hell of being handled properly, without causing anything from minor curiosities to major outages.

Or to put it another way: if I set a timeout for a task that is whatever the API I'm using calls one second long, I would deem it quite unacceptable if that timeout fired either 0.5 seconds or 2 seconds in, no matter what particular second of what particular day it is, and regardless of the particular mood the Earth's rotation has displayed for the preceding few months.

2 Likes

In that case, you shouldn’t set your timeout in terms of WallClock but rather one of the other two clocks provided.

Since my previous response touched on quite a few different topics, I’d like to restate my most important objections that haven’t been raised elsewhere:

1: The names MonotonicClock and UptimeClock don’t adequately communicate the distinction between the two clocks, and they rest on platform-specific conventions that will be very confusing on other platforms. This is made clear in the Definitions section of the proposal, but the confusing names are then used without any discussion or motivation.

2: The initializer Date.init(converting uptimeInstant: UptimeClock.Instant) is listed but not specified. As far as I can see, this conversion cannot be implemented in a reasonable way that gives meaningful results for times other than “now”.

9 Likes

I forgot to make a review so I'll make this quick.

  • What is your evaluation of the proposal?

-1, for all the reasons outlined in this thread. For me, the Date move is the biggest issue. I think there's a dichotomy in how Date is typically used within and outside Apple: within, it seems to be used for lower-level deadline APIs like RunLoop; outside, it's mostly used to parse dates from backends and then show them to the user. These are two vastly different use cases and so should be two different types.

Like others, I also feel that alternatives to moving Date into the standard library were not fully explored. Namely, the alternative of creating a new type should be explored without the assumption that it would replace all usages of Date. It would not and should not. Instead, I'd like to see a new type replace only the lower level uses of Date, like RunLoop, where the value proposition would be highest. Adding something like Timestamp to Swift and then adding additional API to the low level types which need it should be much easier than attempting to replace Date across the board. This would also create an explicit bifurcation between computer dates and human dates.

  • Is the problem being addressed significant enough to warrant a change to Swift?

Yes, integration of core time representations is important.

  • Does this proposal fit well with the feel and direction of Swift?

For the most part the low level APIs are well designed. I simply disagree with the Date move. If this proposal gets a second revision I'll take a closer look at the other aspects.

  • If you have used other languages or libraries with a similar feature, how do you feel that this proposal compares to those?

I used NSDate extensively in Obj-C, if that counts. The history of that type in Obj-C, where it had many of the same misuses as it does in Swift, demonstrates quite strongly that it's not appropriate for low level use.

  • How much effort did you put into your review? A glance, a quick reading, or an in-depth study?

Involvement in the pitch, reviewed the proposal.

15 Likes

As I understand the proposal, that's not an option when setting a deadline for operations across distributed systems. Only WallClock is defined in a way to be meaningful in that case.

That’s true, but for fundamentally the same reasons expecting a 1 second “distributed timeout” to be exactly one second (for some value of “exactly”) is going to lead to disappointment one way or another.

That said, distributed timeouts do need to take leap seconds into account in some way to minimize this disappointment; it wouldn’t be great if a client is handling leap seconds by ignoring them while the server is handling them by smearing over a day.

Actually… this will happen if WallClock times are interpreted using the system clock on both ends; the alternative, for the transport to reinterpret WallClock meanings on top of NTP, would mean that WallClock instants have different meanings in different contexts. Wouldn't it make more sense, in that case, for the transport to have its own clock type?

I'm not sure I get your point here. If you remove this API, users will simply write clock.now + .seconds(3 * 3600) instead of clock.now + .hours(3).

Can you elaborate on why clock.now + .hours(3) is not a proper way to express a time 3 hours from now?

1 Like

Not every hour has 3600 seconds. If you want 3 * 3600 seconds explicitly relative to a clock, that’s something the standard library can deliver on. Calendrical APIs on Foundation (or another library) might be able to give you 3 hours, but since the standard library cannot, it would not be accurate to promise such—and it would make it impossible for a Foundation API (or another library) to offer such an API with the natural spelling.
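
Concretely, the distinction looks like this. The sketch is written against the Clock API as it eventually shipped (ContinuousClock rather than the proposal's MonotonicClock name), and the commented-out .hours spelling is the contested API:

```swift
// A monotonic clock measuring elapsed time, with no calendrical claims.
let clock = ContinuousClock()

// Honest: exactly 10,800 SI seconds from now, as measured by this
// clock. The spelling promises nothing about calendar hours.
let deadline = clock.now + .seconds(3 * 3600)

// The contested sugar reads as "three hours from now" but could only
// ever mean the same 10,800 seconds:
// let deadline = clock.now + .hours(3)
```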

My point is precisely that it is the proper way to express a time 3 hours from now. The standard library APIs are explicitly non-calendrical, so any definition of “hour” that it can use necessarily differs from what you or I would mean by an hour.

To return to an analogy I’ve used elsewhere, I agree that “Full Self-Driving” is an excellent name for a full self-driving feature in a car. But since it is not possible actually to offer full self-driving in a car, we should not call any feature that turns the steering wheel “Full Self-Driving.”

3 Likes

As I understand the objection, there are two levels of problem, both about the relation of elapsed time (measuring how much time has passed) to calendrical time (our various systems of recording dates and times). I may not be using these terms correctly; hopefully someone can correct me if so.

  • The first level is that time units larger than a second are not fixed measures of elapsed time. This is most obvious with a day, which (defined calendrically) can contain either 23, 24, or 25 hours. A minute is defined to be one of 60 portions of an hour, and because of leap seconds, it can be any of 59, 60, or 61 seconds long. (I believe a leap second has never yet been removed, but it's allowed.) An hour can therefore be any of 3599, 3600, or 3601 seconds long. You can give these units defined values of elapsed time according to their vastly-most-common measurements, but if someone expects there to be a relationship between those durations in elapsed time and the "calendrical" times we present to users — for example, that 23:59 + 60s is always 0:00 — they're in for trouble.

  • The second level is closely related to that last sentence, which is that the duration type here is defined as a fractional number of seconds, which can make it error-prone for doing calendrical time manipulations, which is what a lot of use-cases want to do. Again, this is clearest to see at a larger scale. If you advance the abstract calendrical time-and-date "November 6th, 2021 at 1:15pm" forward by one day, you should get "November 7th, 2021 at 1:15pm"; if you instead advance it by 24 hours, you may get "November 7th, 2021 at 12:15pm" because (in some places) that crosses a DST line. Similarly, if you advance a time by 24 * 60 * 60 seconds, and the previous day ended in a leap second, the result won't necessarily be the same time of day on the next day. As a result, abstractions built around elapsed time are not necessarily suitable for calendrical time, and I think people are concerned that the duration type here is meant to be a currency type that will eventually be used for calendars.
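
The day-level example in the second point can be reproduced directly with Foundation, assuming a US timezone where DST ended overnight on November 7th, 2021:

```swift
import Foundation

var calendar = Calendar(identifier: .gregorian)
calendar.timeZone = TimeZone(identifier: "America/New_York")!

// November 6th, 2021 at 1:15pm local time; DST ends overnight,
// so November 7th is a 25-hour day here.
let start = calendar.date(
    from: DateComponents(year: 2021, month: 11, day: 6, hour: 13, minute: 15))!

// Advancing by one calendrical day preserves the local time of day...
let nextDay = calendar.date(byAdding: .day, value: 1, to: start)!

// ...but advancing by 24 * 60 * 60 elapsed seconds does not, because
// the clocks fall back an hour that night.
let plus24Hours = start.addingTimeInterval(24 * 60 * 60)

let formatter = DateFormatter()
formatter.dateFormat = "HH:mm"
formatter.timeZone = calendar.timeZone
formatter.locale = Locale(identifier: "en_US_POSIX")
// formatter.string(from: nextDay) is "13:15",
// while formatter.string(from: plus24Hours) is "12:15".
```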

Have I captured these concerns correctly?

22 Likes

This is an excellent summary :clap:

There's some more discussion about the topic:

and for issues like this

there are some thoughts in Clock synchronization - #8 by Tino (I'm not sure about results... but so far, I'm still convinced that system time alone is not a good choice for timeouts in distributed contexts)

The conclusion was that they're not.

However we also have cross-process situations (which distributed actors also are designed to handle) where they would be desirable, since it is the same clock.

The problem with leap seconds is a WallClock-specific issue. You can make the same argument about Duration.seconds when using a clock that suspends while the computer is sleeping.

Moreover, WallClock should almost never be used. If you want to specify a point in time, you should use a Calendar API. If you want to express a duration from now, you should use a monotonic clock.

Its primary usefulness is with distributed systems, and you won't even be able to guarantee that each system is treating the leap second the same way. Moreover, using Duration with such a system would be inaccurate by design because of network delay.

Should we really design the API based on this specific use case?

1 Like

Overall, I like the proposal and I think these APIs will be a great addition to the language. My main area of concern is the move of the Date type for the reasons already mentioned in the thread, specifically:

  1. The type name. In my 9 years of working with Foundation, I've seen so many young iOS developers, myself included, totally misunderstand and try to misuse the type, thinking it has something to do with months, timezones, and other calendar matters just because it's called Date. I think the proposed TimePoint, Timestamp, or Clock.Instant alternatives would lead to fewer assumptions about the type, even if they are not great names.
  2. Coding. I think this needs to be defined if the storage is to be changed. Can we have a lossless journey from NewDate -> JSON -> NewDate? This seems important to me.
  3. Does it really have to be moved? Is having a separate (Wall)Clock.Instant type that much of an issue?
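
On the lossless round-trip question in point 2: a round trip can be made exact by construction if the encoded representation carries the integer storage directly. A hypothetical sketch (the type and field names are mine, not the proposal's):

```swift
import Foundation

// Hypothetical sketch: encoding the integer storage fields directly
// makes a JSON round trip exact, with no floating-point precision
// caveats of the kind Foundation.Date's Double storage invites.
struct NewDate: Codable, Equatable {
    var seconds: Int64       // whole seconds since some fixed epoch
    var nanoseconds: UInt32  // 0..<1_000_000_000
}

let original = NewDate(seconds: 978_307_200, nanoseconds: 123_456_789)
let data = try JSONEncoder().encode(original)
let decoded = try JSONDecoder().decode(NewDate.self, from: data)
// decoded == original holds exactly for every representable value.
```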

Additionally, there is one thing that I don't quite feel alright with. Some of the concrete Instant types provide the value of now. How is this going to be implemented? Reading the value from some kind of Clock singleton? It doesn't seem like the right thing to do, even though it might be easy to implement.

5 Likes

I think there's been some confusion here. A distinct issue with WallClock as currently proposed is that it propagates Unix time's odd, discontinuous behavior during intercalated seconds as though the clock were turned back by one second—even on platforms where it's possible not to do that because there exist additional APIs that can disambiguate. But this isn't what we're talking about here.

In fact, the issue that arises with clock.now + .hours(3) potentially being off by a full second specifically doesn't apply to WallClock, because that clock elides leap seconds. It is exactly when you use a monotonic clock that you run into trouble, because an hour isn't fixed to be 3600 seconds with respect to an arbitrary monotonic clock (as opposed to WallClock).

As to your question specifically, there are two ways to think about it:

From a practical standpoint, your question boils down to this—Is it optimal (or even acceptable) to design an API where users of, say, monotonic clocks who ask for a time x hours from now in the most intuitive way instead may get a time that is x hours less one second, in scenarios that aren't easily testable ahead of time? For certain applications, being off by one second is no big deal. However, for other applications (some folks in the pitch phase were interested in making sure that the types in this proposal can represent sub-nanosecond durations precisely), being off by one second is very much a big deal. So it would not be optimal or even acceptable.

The existence of network delay and other types of error related to the measurement or distribution of time isn't different in kind from the errors inherent to measuring any other physical quantity. By contrast, the necessarily inconsistent relationship among astronomical units of time (e.g., days and their subdivisions, hours and minutes, and years and their subdivisions), which requires intercalation determined by civil authorities, and the fixed, order-of-magnitude relationship between the SI second and its subdivisions are definitional. From a philosophical standpoint, then, your question boils down to this: should we bother to model the relationship among units of time in a way that adheres as closely as possible to standards-based definitions, given that machines will always have inaccuracies in measuring time? Yes, just as we should and do offer IEEE 754-compliant floating-point APIs even though floating-point representation, and indeed any machine representation, cannot perfectly model the reals.

3 Likes