[Pitch] Clock, Instant, Date, and Duration

That is not possible; it would require the stride to be Numeric, which would then allow a Duration to be multiplied by another Duration.

1 Like

Hmm. Would that be illogical?

Pretty much; what does it mean to multiply 5 seconds by 4 seconds? It results in 20 seconds^2. This has been a problem with TimeInterval and poses a real correctness problem for code dealing with time. Now... if there were a base protocol just below Strideable (or if Strideable could somehow be changed to not require Numeric), then yes, I wholeheartedly agree that the spirit of that protocol should apply. However, that seems out of scope here (and a potential future addition if and when that protocol issue is resolved).

1 Like

What if you defined the raw value as being unitless? That is, one nanosecond is equivalent to one unitless raw value, and you’re never actually multiplying or adding with units?

Stride requires SignedNumeric because it needs negate(), by the way. I would wholeheartedly support refactoring the number protocols in Swift 6, because they are a mess.

To be clear, 20 seconds^2 itself is not a problem. The problem is that Numeric requires a*b to have the same type as a and b, so now we have to be able to add seconds to seconds-squared, and that's obviously bogus.
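To make the type-level issue concrete, here is a minimal sketch; the Seconds type is hypothetical and not part of the pitch:

```swift
// Hypothetical wrapper, purely to illustrate the constraint.
struct Seconds {
    var rawValue: Int64
}

extension Seconds {
    // Addition is dimensionally fine: seconds + seconds = seconds.
    static func + (lhs: Seconds, rhs: Seconds) -> Seconds {
        Seconds(rawValue: lhs.rawValue + rhs.rawValue)
    }

    // Scaling by a plain integer is the multiplication that actually makes sense.
    static func * (lhs: Seconds, rhs: Int64) -> Seconds {
        Seconds(rawValue: lhs.rawValue * rhs)
    }

    // Numeric, by contrast, would force `* (Seconds, Seconds) -> Seconds`,
    // silently treating a seconds-squared quantity as plain seconds, at
    // which point it could be added to real seconds values.
}
```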

8 Likes

I'm not sure I follow completely. Are you suggesting that Duration is just a raw Int64 or something like that? Then we are back at the problem with TimeInterval. Perhaps I am misunderstanding.

That type having a distinct structure is the crux of this and a number of other efforts that will fall out from this change, so I am not sure we really want to take that path. It seems to me that fixing the protocols would be a fine "next step" among the associated work. Clock/Instant/Date/Duration are obviously not the end here; this opens up a whole world of possibilities that we can explore together.

2 Likes

Alternatively, you could trap? I definitely see your point, but it’s better than recreating Strideable from scratch.

A trap is precisely the problem; it is better to prevent the misuse via the type system than to crash at runtime. Giving a developer a chance to spot the error immediately prevents the crash from becoming a failure that users of that developer's app face.

3 Likes

Could you provide some examples that illustrate how cumbersome it is when Duration is per Clock?

I guess, from the perspective of "Most importantly, Swift is designed to make writing and maintaining correct programs easier for the developer.", this seems like a pretty high bar to clear when consciously choosing to make it harder for the developer to write a correct program.

I'm not saying that bar is insurmountable, I just hope that we aren't doing this just because it makes it easier for certain frameworks to adopt this.

This is really great; thanks for all the hard work and the clear proposal!

I want to echo what others have mentioned: in my experience, the ergonomics around testing are as important to me as, or more important than, how the production code call sites look.

It would be amazing if some kind of manual clock for testing could make it into the proposal. Imagine jumping into a project with a giant set of unit tests for some gnarly time/date-adjacent system (not hypothetical!) and seeing a familiar setup for manipulating time and checking async stuff rather than the usual bunch of Date extension boilerplate.
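Something along these lines is what I imagine; every name here is made up for illustration and is not meant to match the pitched protocols:

```swift
// A manual clock that tests drive explicitly instead of sleeping.
final class ManualClock {
    struct Instant: Comparable {
        var offsetInNanoseconds: Int64
        static func < (lhs: Instant, rhs: Instant) -> Bool {
            lhs.offsetInNanoseconds < rhs.offsetInNanoseconds
        }
    }

    private(set) var now = Instant(offsetInNanoseconds: 0)
    private var scheduled: [(deadline: Instant, resume: () -> Void)] = []

    // Register work to run when the clock reaches `deadline`.
    func schedule(at deadline: Instant, _ resume: @escaping () -> Void) {
        scheduled.append((deadline, resume))
    }

    // Advance time deterministically and fire anything that came due.
    func advance(byNanoseconds delta: Int64) {
        now.offsetInNanoseconds += delta
        let due = scheduled.filter { $0.deadline <= now }
        scheduled.removeAll { $0.deadline <= now }
        due.forEach { $0.resume() }
    }
}
```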

4 Likes

Basically the problem boils down to a few cases similar to this:

Let's say a developer wants to use a timeout. To do that, they are immediately tossed into the deep end of time calculations to understand which clock is right for that duration. It also means they must understand that the function needs to be generic over a clock, which means the clock must be passed along with the duration, expanding the signature to two parameters (compared to the single piece of information they initially expected to need).

But the other, associated rub is that this makes the duration type part of the ABI contract of that function. And if they later realize that the clock they initially chose is wrong... well, they are stuck with that interface as long as they want to maintain ABI compatibility. This particular problem is perhaps most egregious for framework developers, but it is a general ABI-stability consideration.
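To put that signature difference into code, here is a rough sketch; all of the names are hypothetical and none of this is the pitched API:

```swift
// Per-clock durations: even a simple timeout helper must be generic over
// the clock and take it as a parameter, and that shape is baked into its ABI.
protocol HypotheticalClock {
    associatedtype Duration
}

func withTimeout<C: HypotheticalClock>(
    _ duration: C.Duration,
    on clock: C,
    operation: () async throws -> Void
) async rethrows {
    // Timeout machinery elided; only the shape of the signature matters here.
    try await operation()
}

// A shared duration type: one parameter, and which clock enforces the
// timeout remains an implementation detail that can change later.
struct SharedDuration {
    var nanoseconds: Int64
}

func withTimeout(
    _ duration: SharedDuration,
    operation: () async throws -> Void
) async rethrows {
    try await operation()
}
```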

So the question then became: what is the failure mode if durations are shared? How can they be misused when we lack that type information? The primary case is measuring a duration from one clock and then either applying it to another clock or comparing it to a duration from another clock. Applications won't deadlock when misused this way, nor will they crash.
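Concretely, the misuse looks something like this with today's TimeInterval-style values (the clock reads below are just stand-ins to show the semantics):

```swift
import Dispatch
import Foundation

// DispatchTime is uptime-based on Darwin; Date is the wall clock and can
// jump when the system time changes.
func uptimeSeconds() -> Double {
    Double(DispatchTime.now().uptimeNanoseconds) / 1_000_000_000
}
func wallSeconds() -> Double {
    Date().timeIntervalSinceReferenceDate
}

let workStart = uptimeSeconds()
// ... some work ...
let elapsedOnUptimeClock = uptimeSeconds() - workStart

let requestStart = wallSeconds()
// ... some other work ...
let elapsedOnWallClock = wallSeconds() - requestStart

// This comparison compiles, and the two values have subtly different
// semantics, but the failure mode is a possibly surprising result rather
// than a crash or a deadlock.
if elapsedOnUptimeClock < elapsedOnWallClock {
    print("the first chunk of work finished faster")
}
```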

So the cost is the slightly incorrect ability to compare seconds elapsed on a monotonic clock to seconds elapsed on a wall clock, versus making the initial experience of using time complicated. After discussing this at great length with some of the maintainers of Dispatch, the Darwin POSIX layer, and the kernel, plus a number of Linux experts, and drawing on my own background, we concluded that this is a sensible middle ground: it offers a forgiving introduction, both in complexity and in ABI impact, in exchange for a correctness concern that is perhaps not that big an issue in the end.

The other added benefit is that this way we can consider more expansive impact, like the interaction with TimeInterval, which basically lets us correct something that, with hindsight, perhaps should have been handled differently. Being able to learn from the experience so far and push for something a bit better is a pretty attractive characteristic.

6 Likes

There are a few extra parts this needs before it is fully productionized, but a simplistic version is here.

I definitely consider that part of the next steps after we identify the base requirements and types.

3 Likes

Thanks, I appreciate the explanation.

I think if a developer mistakenly compares a duration to a duration from a different clock (without realizing that this is most likely a mistake), then the result of that comparison could, and quite likely would, be unexpected. I'd argue that this could result in a crash (if you are lucky), a deadlock (unlikely, but still possible), or any number of other outcomes. If the outcome is not a crash, then the developer has to track down the source of the bug, and they may eventually realize that they should not have been blindly comparing durations from separate clocks.

Now, perhaps, absolutely crystal clear documentation is sufficient to address this concern.

Clearly this is a significant tradeoff, and it seems that most aspects of the trade have been considered. Anyway, thank you for the explanation.

@lorentey just reminded me that ClockProtocol and InstantProtocol must be Sendable. I've updated the pitch to reflect that requirement.

3 Likes

585 billion years around 1970?!

to put this into perspective:

  • age of universe is 14 billion years. we'll be able to measure dates some 280 billion years prior to the big bang, wow!

  • earth will die in 4 billion years

  • sun will die in 5 billion years

  • Milky Way will die in 4-6 billion years

  • 96 bits of precision is enough to count all the atoms in a mole of matter and to do this roughly 100_000 times.

looks like serious overengineering to me.

i appreciate that you've put some serious time and effort into this... but i can't see why 64 bits isn't enough for a reasonable time span with a reasonable precision. just because rust does it isn't a good justification for me. and believe me, 100 years from now people will change this whole thing anyway, no matter how future-proof it looks now. and 4 billion years from now we will have more serious issues to deal with.

having said that, switching from float to fract is a good idea for time and space measurements while we are on Earth.

I cannot refute that it is a bit overkill; however, the reasoning is that we need to be compatible with existing ranges. Date in particular utilizes ±63113904000 seconds around Jan 1 2001. That exceeds the range we can store at full nanosecond resolution in 64 bits.
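For the arithmetic behind that claim (plain numbers, nothing pitch-specific):

```swift
// Date's ±63_113_904_000-second range expressed in nanoseconds, compared
// with what a signed 64-bit integer can hold.
let requiredNanoseconds = 63_113_904_000.0 * 1e9        // ≈ 6.31e19
let int64MaxNanoseconds = Double(Int64.max)             // ≈ 9.22e18

// A signed 64-bit nanosecond count only covers about ±292 years.
let int64RangeInYears = Double(Int64.max) / 1e9 / (365.25 * 86_400)

print(requiredNanoseconds > int64MaxNanoseconds)        // true
print(int64RangeInYears)                                // ≈ 292.3
```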

That all being said: first off, this is an implementation detail that I was considering, and I decided it was reasonable to share it so that folks understand its impact. Secondly, this is only one approach; another is to store the value using 62 bits for the value, 1 bit for the sign, and 1 bit as a marker to re-interpret the value as stored extrema such as .distantFuture, .distantPast, and even more bizarre (but seen in the wild) uses like Date(timeIntervalSinceReferenceDate: .infinity).

Scaling back out: systems programming has an issue looming in the relatively near future, the 2038 rollover of 32-bit seconds-since-1970 counters. Basically, the overkill here is a forward-looking design with more than enough range to cover that.

As a side note, struct timespec is 128 bits of storage; it stores a value that, when normalized, takes up 94 bits, and rounding up to the closest even representation yields 96 bits.
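The bit counting there goes roughly like this:

```swift
// tv_sec is a 64-bit count of seconds and tv_nsec holds 0..<1_000_000_000,
// so struct timespec occupies 128 bits on LP64 platforms.
let nanosecondFieldBits = 64 - UInt64(999_999_999).leadingZeroBitCount   // 30

// Folding both fields into one normalized nanosecond count needs
// 64 + 30 = 94 bits; padding to a whole number of 32-bit words gives 96.
let normalizedBits = 64 + nanosecondFieldBits                            // 94
print(nanosecondFieldBits, normalizedBits)
```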

This particular implementation detail is rather mutable; if there is a suggestion that keeps nanosecond precision, at least encompasses Jan 1 2001 ± 63113904000 seconds, and (bonus points) can represent silly values like ±infinity, then I am quite open to changing it.

4 Likes

glad you are open to changes.

what use cases do you have in mind that require nanosecond resolution for dates around 1 AD or 2000 years from now? is it important for, say, a dispatch async timer to fire with nanosecond resolution when it is scheduled to run 1000 years in the future, or is millisecond or sub-second resolution quite enough for such distant time spans? if you are thinking about, say, predicting planet positions with pinpoint accuracy 100 years from now, or measuring distances with light to sub-meter accuracy, that's a very special use case, and even nanosecond accuracy is not good enough for those tasks; special means would be used for them.

it's a 32-bit overkill :grin:

The use case for that is the utility of Date as it currently interfaces with Calendar. So it is perfectly reasonable to have a date that is a few thousand years in the past or a few thousand years in the future. Granted, that instant can probably afford to lose precision as it moves further out (I mean, who needs nanosecond scale at 400 BC?); hence the existing storage as a Double.

good to have some common ground. what exactly is the problem with Double?
(yep, we can still fix it here and there, e.g. prohibit multiplying two things that have a double inside, allow adding a date and an interval, or two intervals, but not two dates, etc.)

The issue with Double is that, as the date moves away from the reference point, it becomes lossy due to floating-point math and the storage issues around floating point in general. So seemingly reasonable values may have timer-accuracy issues. If we ever hope to fully address that, then we need lossless storage: hence a rational or integral representation.
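As a quick illustration (plain Swift, nothing from the pitch): the farther a Double-based interval gets from zero, the coarser its resolution becomes.

```swift
// ulp is the spacing between adjacent representable Double values.
let nearReferenceDate = 1.0                    // 1 second after the reference date
let farFromReferenceDate = 63_113_904_000.0    // ~2000 years out

print(nearReferenceDate.ulp)      // ≈ 2.2e-16 s, far below a nanosecond
print(farFromReferenceDate.ulp)   // ≈ 7.6e-6 s, several microseconds of slop
```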

One potential alternative is to play the same trick as String does: when the value would need extra storage, it uses an indirection. That would make most reasonable values take only 64 bits, but when you need a high-precision, unreasonable Date, it faults to larger storage via an object.
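Roughly the shape I mean, with made-up names and an arbitrary wide representation:

```swift
// Common dates fit in an inline fixed-width payload; only extreme or
// higher-precision values spill into a reference-counted box, similar in
// spirit to String's small/large split.
enum HypotheticalDateStorage {
    final class WideStorage {
        let seconds: Int64
        let nanoseconds: Int64
        init(seconds: Int64, nanoseconds: Int64) {
            self.seconds = seconds
            self.nanoseconds = nanoseconds
        }
    }

    case inline(nanosecondsSinceReference: Int64)  // roughly ±292 years, lossless
    case wide(WideStorage)                         // everything else, via one allocation
}
```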

2 Likes