I think it really needs to be brought up that accounting for leap seconds would incur a file read from disk. On Darwin and Linux the source of truth for those adjustments to APIs like gettimeofday or clock_gettime is housed in /usr/share/zoneinfo/leapseconds. I would imagine that a random file read during time-based calculations would be quite unexpected. Also, the drift from leap seconds is not deterministic, so any future date that gets scheduled could, when serialized and deserialized, end up askew from newer updates to that database.
NSDate is very similar to Unix time in this regard; see the table that shows how the leap second is crossed.
The more I think about it, the more I realise that the leap seconds concept was a mistake that should never have been made in the first place and should be corrected ASAP (I hope as soon as 2023). That timeIntervalSince1970 on a whole-day boundary is divisible by 24x60x60 is "a good thing" that I hope will carry over to the NewDate API. Leap years, daylight saving time switching, and changes to those switching rules (which can happen due to country boundary changes and other politics) are resolved at levels higher than this proposal (Calendar / DateComponents / DateFormatter).
The new and old Date APIs will have to coexist for quite some time. We'll need ways to convert between new and old dates; ideally a new date round-tripped through an old date should come back unchanged, and vice versa. Old API of the form:
func foo(_ oldDate: Date) { ... }
could be called with:
foo(Date(newDate))
and face-lifted to provide a version that supports new dates (leaving the old date version as is for now):
func foo(_ newDate: NewDate) { ... } // same but for new dates
old API can be reimplemented on top of the new API:
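For example, a rough sketch of the delegation (NewDate here is a stand-in, not the pitched API; the stored property and initializer are invented purely for illustration):

import Foundation

// Purely illustrative stand-in for the pitched new type.
struct NewDate {
  var secondsSince1970: Double
  init(_ date: Date) { secondsSince1970 = date.timeIntervalSince1970 }
}

// The old entry point becomes a thin wrapper...
func foo(_ oldDate: Date) {
  foo(NewDate(oldDate))
}

// ...that delegates to the new-date overload holding the real implementation.
func foo(_ newDate: NewDate) {
  print(newDate.secondsSince1970)
}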
This is probably the least complex part of the proposal: providing facelift versions of a number of APIs (some 200 or 300 of them). Initially we can facelift a small number of the most frequently used APIs.
I believe that the coolness of the Time name outweighs these concerns, and if it's not used for this purpose of being the "new Date currency", I see no better use for it. Timestamp is not too bad; Date is quite bad (but reusing the Date name is not possible for a number of reasons, starting with source and ABI compatibility, so that's not really an option unless we really want to name the type, literally, NewDate, which I hope we don't).
Are the gettimeofday and clock_gettime APIs required to use Unix Time (i.e. excluding leap seconds) for POSIX compliance?
SQLite date and time functions are implemented by converting Unix Time to a Julian Day Number. This might also be a good strategy for Swift.Date (possibly by storing julianDay: Int64 and nanoseconds: Int64). It should then be possible to convert to/from an ISO 8601 date-time string (with fractional seconds) for LosslessStringConvertible conformance (and Codable conformance, if this is a new type). Duration could have .days(_:), .hours(_:), and .minutes(_:) again.
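For instance, a minimal sketch of that storage idea (the type and conversion below are illustrative, not part of the pitch):

struct JulianInstant {
  var julianDay: Int64 // whole Julian Day Number (days counted from noon UTC)
  var nanoseconds: Int64 // nanoseconds elapsed within that Julian day

  // The Unix epoch 1970-01-01T00:00:00Z is JD 2440587.5, i.e. half a day past
  // the start of JD 2440587. (Sketch assumes instants at or after JD 0.)
  init(unixSeconds: Int64) {
    let secondsPerDay: Int64 = 86_400
    let sinceJDZero = unixSeconds + 2_440_587 * secondsPerDay + secondsPerDay / 2
    let (day, secondOfDay) = sinceJDZero.quotientAndRemainder(dividingBy: secondsPerDay)
    julianDay = day
    nanoseconds = secondOfDay * 1_000_000_000
  }
}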
Using that particular quote from that article seems to be arguing in bad faith, since the article is largely about physical stamps. Additionally, every single example format included has a time and date, and the point @davedelong seemed to be making anecdotally was referring to the confusion around Date including time information.
Finally, and most importantly, @davedelong's comment was implicitly limited to the context of programming/computers and not rubber stamps.
If the naming is not actually up for debate, the proposal should likely be updated to include that requirement.
@allenh, your answer is much too severe. The debate involves people of good will, and it does move forward. I express my gratitude and support to @Philippe_Hausler, as well as to the other participants.
Sure, I'm making a case for a native date type, bound to a physical property. I was commenting on the specific semantics for Date that some people were advocating upthread.
And it wouldn't have to, if Date represented actual elapsed time. Then the difference would always be the actual elapsed time, with no need for any leap second handling. Handling would only be needed when converting some huge number into a human-readable string (or when converting into calendar-specific, machine-readable date components).
Yeah, that wouldn't be very nice, and it would be very unexpected.
As always, there is an engineering trade-off to be made. (Then again, it is probably reasonable to precache that data once when a process starts, or at the first calculation, if we want to support this; the file is less than 4KB on my macOS box, mostly consists of comments, and currently has just 27 entries...)
For the record, I like the overall direction of the pitch a lot, and thanks for the work being done here. My only major concern, as already expressed, is with the aforementioned Date naming, but I have nothing to add there that hasn't been said already (Instant is not too bad IMHO).
(And with regard to @allenh's comment, I disagree with that position and think there is good will. It's obviously the case that people's experience and usage will influence their view here. I come from a server-side, non-Foundation use case, so I care more about understandable naming that makes intuitive sense than about how it would interact with (in my view) legacy APIs; for someone who needs to handle those on a daily basis, I can see a different viewpoint, even if I'd disagree.)
As an ordinary Joe who uses Swift exclusively on Apple platforms today, I'd like to give my two cents:
I have absolutely no problem with renaming Date (or creating a new Date-like type that is not called Date). I have written a niche app that uses Date pervasively, but almost never as a calendar date. And the times it does, it only wants the year-month-day, and not the time at all.
I think it's worth noting that if we ignore leap seconds, the base unit is no longer the second, but the day. Or more precisely 1/86400 of a day, which is commonly called a second but is never exactly an SI second. In fact, the difference in duration between an SI second and 1/86400 of a day is variable, which is why leap seconds are unpredictable.
In my own little library for handling date and time (mostly for calendaring purposes), I have these types to denote various kinds of points in time:
TimePoint: number of seconds since 1 Jan 1970 00:00 UTC (ignoring leap seconds)
DatePoint: number of days since 1 Jan 1970 (integer, no time of day)
DayTime: number of seconds since midnight (no date), for displaying wall clock time
I find this easy to work with. Conversions between these types always involve a time zone, but are otherwise unrelated to a calendar:
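A compressed sketch of what I mean (not my actual library; the offset lookup leans on Foundation's TimeZone purely for brevity):

import Foundation

struct TimePoint { var secondsSince1970: Int64 } // ignoring leap seconds
struct DatePoint { var daysSince1970: Int64 } // integer, no time of day
struct DayTime { var secondsSinceMidnight: Int64 } // no date

// Flooring division so instants before 1970 land on the correct day.
private func floorDivide(_ value: Int64, by divisor: Int64) -> (quotient: Int64, remainder: Int64) {
  var (q, r) = value.quotientAndRemainder(dividingBy: divisor)
  if r < 0 { q -= 1; r += divisor }
  return (q, r)
}

extension TimePoint {
  // Only a UTC offset is needed; no calendar is involved.
  func dateAndDayTime(in timeZone: TimeZone) -> (DatePoint, DayTime) {
    let offset = Int64(timeZone.secondsFromGMT(for: Date(timeIntervalSince1970: Double(secondsSince1970))))
    let (day, secondOfDay) = floorDivide(secondsSince1970 + offset, by: 86_400)
    return (DatePoint(daysSince1970: day), DayTime(secondsSinceMidnight: secondOfDay))
  }
}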
If I had to deal with SI seconds, I could introduce another type:
ExactTimePoint: exact number of seconds since 1 Jan 1970 00:00 UTC (including leap seconds)
And some conversions:
exactTime.time // -> TimePoint, number of seconds shifted to remove leap seconds
time.exactTime // -> ExactTimePoint, number of seconds shifted to add leap seconds
And then I'd need three duration types:
TimeDuration: duration in common seconds (1/86400 of a day)
Can't convert directly between TimeDuration and ExactTimeDuration here: a range of TimePoint is needed so leap seconds can be inserted or removed. Although if you're just running a timer for a few minutes on your computer it'd be bad to insert a leap second in there. So there's no perfect rule for that conversion.
All this to say I'm not sure what the standard library should do. I never much liked Foundation.Date, so it's fine by me if we make a clean break to define better abstractions. SI seconds may not need to be part of it; I think the second as 1/86400 of a day is ideal for most uses.
This doesn't reflect how (most) systems based on Unix time implement leap seconds; it's not smeared across the whole day. We must be alert not to adopt uncommon approximations of time standards that conflict with underlying platform data sources and common practice.
Hmm, it certainly does seem that this is not possible to do at this time without a file read on Apple platforms.
However, if I understand correctly, clock_adjtime on Linux will reflect whether a leap second is currently being inserted, and therefore if that is propagated in the underlying storage of Date it would be sufficient to uniquely identify the repeated second for comparison and hashing purposes. (My understanding is that CLOCK_TAI is not usable on Linux to get an actual TAI time because the offset from UTC defaults to zero on system startup, requiring a custom-configured ntpd to set the correct offset.)
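For what it's worth, a rough sketch of how that might be read from Swift on Linux (this assumes the Glibc overlay exposes clock_adjtime and the TIME_* constants from <sys/timex.h>, which I haven't verified):

#if canImport(Glibc)
import Glibc

// Query the kernel clock state alongside CLOCK_REALTIME; a result of TIME_OOP
// would indicate that a leap second is currently being inserted.
func currentClockState() -> Int32? {
  var tx = timex() // zero-filled; modes == 0 means "read only"
  let state = clock_adjtime(CLOCK_REALTIME, &tx)
  return state == -1 ? nil : state
}
#endif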
Meanwhile, my readings indicate that Windows has switched to TAI-based timekeeping as of some release of Windows 10, with APIs converting to UTC only when needed, so that Windows now handles leap seconds in a very standards-compliant way; therefore, retrieving the actual seconds since the epoch should be trivial, although I haven't explored the APIs available in great depth.
Additionally, C++20 now has std::chrono::utc_clock, which is explicitly specified to keep time since the epoch including leap seconds (as well as tai_clock and gps_clock), and conversion routines to_sys and from_sys are supposed to do the exact accounting for leap seconds that you mention here.
I certainly agree with you that it is fine for the system clock to represent leap seconds as a discontinuity just as though a user turned the clock back by one second on platforms where no other information is available without, say, performing a file read. So what you propose for Apple's platforms makes practical sense to me, particularly since we have no say in Apple's product roadmap with respect to its clock_gettime implementation.
However, for a new API that's meant to be a solid foundation for the next n years (where n hopefully is as long or longer than the number of years that Date has been around), I also think it is valid to be concerned that the design should have the capacity to accommodate gracefully "upgrading" on non-Apple platforms to support disambiguation of the leap second discontinuity.
It could certainly be a distinct clock, as C++20 has done, or (what seems more Swifty to me) the default clock could do the most correct thing available on each platform, just as String incorporates the latest Unicode revisions to grapheme breaking without requiring users to adopt a Unicode14String. For that to be possible, I think Date should have some way to store the TIME_OOP flag on platforms that report it (it would seem to suffice to use the nanosecond field and simply exceed 1000000000, as others have proposed Linux should do with a hypothetical CLOCK_UTC), as well as use leap second-aware offsets on platforms where those can be accessed as readily as leap second-omitting offsets. When robust implementations of the C++20 APIs are available on all platforms, perhaps we could just wrap those down the line.
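Something along these lines could serve as the storage trick (all names here are hypothetical; the overflowing-nanoseconds convention is just the one proposed for a hypothetical CLOCK_UTC):

// A wall-clock instant that disambiguates an inserted leap second by letting
// the nanosecond field run past 1_000_000_000 while the kernel reports TIME_OOP,
// so the repeated second still compares and hashes uniquely.
struct LeapAwareInstant: Hashable, Comparable {
  var secondsSinceEpoch: Int64
  var nanoseconds: UInt32 // 0..<1_000_000_000 normally; up to 2_000_000_000 during an inserted leap second

  static func < (lhs: Self, rhs: Self) -> Bool {
    (lhs.secondsSinceEpoch, lhs.nanoseconds) < (rhs.secondsSinceEpoch, rhs.nanoseconds)
  }
}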
You bring up some interesting thoughts. I have been looking more into the facilities on Linux w.r.t. CLOCK_TAI as well as discussing them w/ the Darwin kernel/dispatch teams. One requirement is that we need to make sure that serialization, either to disk or in a precise form, can be transmitted across operating systems.
That means that existing file formats (read: Date's Codable options and serialization form) cannot break. I am not willing to let end-user files get skewed, or worse yet have applications crash when opening them, or corrupt files such that they can't be opened. I'm sure it is safe to say that is a non-starter, but that does not preclude any new potential options that we decide are worthwhile.
The other rub to consider is what that offset means. Just as String breaks on grapheme clusters, as you aptly pointed out, leap seconds currently act by skipping over that meaning (somewhat similar to grapheme breaks). Date ends up being only meaningful when shoved into Calendar for any leap day or month. Currently ICU4C has no discernible facilities for accounting for leap seconds. Maybe that could change? Could this be a driver for changing that? To be quite honest, I have yet to tread into that set of discussions with folks in any depth. However, again we run into a problem that may cause a skew: for example, it would incur a skew of 32 leap seconds on kCFAbsoluteTimeIntervalSince1970, which is the basis of conversion from gettimeofday and NSDate.
As a point of reference Rust has a similar conundrum. Perhaps the solution is really the "if you need leap second aware calculations then use Calendar with this leap second aware InstantProtocol thing". That seems like an interesting future direction, but is honestly a TON of work to sign up for (also beyond the scope of the proposal).
Having an additional type beyond the CLOCK_REALTIME style sources (gettimeofday, clock_gettime(CLOCK_REALTIME, ...), and mach_get_times) feels like the right answer in the end. The interfaces should not have surprising edges going from one platform to another, as much as possible. In some places we cannot avoid it, but if (per your example) String behaved differently with regards to grapheme breaking on Darwin than it does on Linux, personally I would consider that a bug. To that end there have been efforts to excise ICU as a dependency from the standard library, partially to reduce the library load footprint but also to ensure a uniform set of behaviors. Clocks should be no different; they should behave as closely as possible across the platforms we intend to support. If, for example, Linux grows a fully supported mechanism to get higher resolution than Darwin, that is likely a tolerable difference. But if the Linux side is skewed by 30-some-odd seconds, that seems like a case where it might cause problems.
Per storage: we have more than just a few bits. After looking at the performance side of things, it seems like we have plenty of room to store extra information if we need to. My worry is of course how to calculate that, the perf impact of calculating and reading that data, and whether that type is the right place to calculate it. It is worth noting that the information might be lost across bridging to NSDate (unless we go to great lengths to preserve it).
(Side note: Even though I was one of the people who raised the naming thing, I have no real problem with the name Date for WallClock.Instant -- as long as WallClock implements the exact same clock as Foundation.Date. We aren't designing things in a vacuum, and there is plenty of precedent for using this name for this purpose, even in other languages.)
I agree completely!
IIUC, the (implicit) clock behind Foundation.Date is counting seconds with the same strategy for handling leap seconds as Unix time -- i.e., they get merged with the subsequent normal second, temporarily slowing down the clock by a factor of 2.
If we wish to make the stdlib WallClock.Instant type compatible with the existing Foundation.Date, then it follows that WallClock must guarantee to implement the same behavior -- this should be a documented requirement. (On all platforms.)
Restating this in another way: if the consensus is that this merging behavior is undesirable, then it follows that WallClock.Instant cannot be the same type as Foundation.Date -- so e.g., it must not be possible to pass it directly to any API that currently takes a Date.
I think the pragmatic decision would be to accept that WallClock handles leap seconds using this nonuniform time scale strategy, and to document this behavior. Wall clock time is inherently non-uniform anyway, what with gradual/sudden NTP adjustments and whatnot.
I really don't see why the WallClock.Instant type would need to carry a dedicated bit that indicates whether it is in one of those double-length "seconds". (In theory, the clock can always recover this information from the instant data by consulting the same table as it uses to generate the instants in the first place.)
I believe it would also be important to document that MonotonicClock does not use Unix time. (I sure hope it doesn't, as a nonuniform scale would make it unreliable for use in simple benchmarking.)
* * *
On a related note, assuming we want to sink Date into the stdlib, I want to raise the issue of our current inability to change Date's Codable representation.
IIUC, Date currently encodes itself as a Double value, counting Unix seconds since 2001-01-01T00:00:00Z. Switching to a different clock seems very undesirable / practically impossible, but we can and should do something about the encoding.
I can't imagine we'd want to represent wall clock instants in the stdlib as a Double -- but a proper fixed point representation* would mean that we would no longer be able to reliably serialize/deserialize Date values without losing some information.
(* which must necessarily be extravagantly generous about representable values; I don't think covering, say, 10 trillion years with attosecond precision (as can be done in 128 bits) would be unreasonable at all)
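(Quick check of that envelope: 10^13 years ≈ 3.2 × 10^20 seconds, and at 10^18 attoseconds per second that is ≈ 3.2 × 10^38 distinct values, which indeed just fits under 2^128 ≈ 3.4 × 10^38.)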
One way to help us dig ourselves out of this hole is to finally add Date as an explicitly supported scalar type for Codable.
protocol SingleValueEncodingContainer {
  ...
  @available(tbd)
  mutating func encode(_ value: Date) throws
}
extension SingleValueEncodingContainer {
  @_alwaysEmitIntoClient
  @_warnIfUsedInConformance // This is not a thing :-(
  mutating func encode(_ value: Date) throws {
    // By default, use a lossy encoding for compatibility
    let secondsSinceReferenceDate: Double = ...
    try encode(secondsSinceReferenceDate)
  }
}
// Ditto for `SingleValueDecodingContainer`.
This will encourage people working on encoder/decoder implementations to think about how they wish to represent wall clock instants. I think most properly designed serializers need to implement something like JSONEncoder.dateEncodingStrategy, but there is nothing about the protocols that alerts people that this is a thing they need to be doing.
This way, existing encoders will continue to work as before, preventing serialized data from becoming incompatible with older releases. However, future encoders (and future versions of existing encoders) will be more likely to do the right thing, allowing round-tripping Dates without information loss.
I think it would be best if the other instant types did not conform to Codable, or if we even added unavailable conformances to prevent people from implementing them on their own. (With potentially conflicting definitions.)
* * *
Finally, I want to raise the issue of Duration's representation: (Apologies if this has been raised before -- I couldn't find it in the discussion.)
The pitch currently gives Duration a public API that defines it as a 64-bit count of nanoseconds.
public struct Duration: Sendable {
  public var nanoseconds: Int64
  public init<T: BinaryInteger>(nanoseconds: T)
}
I think this is not nearly wide or fine-grained enough -- things are already happening in our computers on a shorter scale than 1ns*, and 600 or so years doesn't seem like a wide enough interval to cover, either. It would make sense to extend this type to more bits and switch to a (much) finer grade.
(* Timer resolution is currently larger than 1ns, but it's uncomfortably close (10--40ns or so). And durations around/below 1ns can be easily measured, although in aggregate. For example, on the (obsolete) system I'm typing this in, a simple Swift for-in loop over an Array<Int> that just passes the values to an opaque noop function takes about 1.16ns per iteration on average -- I think it would be desirable for Duration to be able to precisely represent such measurements.)
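(For reference on the range: an Int64 count of nanoseconds spans 2^63 ns ≈ 9.2 × 10^18 ns ≈ 9.2 × 10^9 s ≈ 292 years on either side of its zero point -- roughly the 600-year total interval mentioned above.)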
In addition to widening it, I think it would probably be a good idea to also define Duration as a @frozen type with well known layout.
@frozen
public struct Duration {
  var _seconds: Int64
  var _femtoseconds: UInt64 // or whatever
}
We currently have no way to promise the compiler that a non-frozen struct will never contain non-trivial stored properties, and AIUI this means that such structs incur some potential overhead that would probably be unfortunate in this particular use case. (IIUC, one issue is that the compiler must assume that destroying a non-frozen struct value might release things, so it often needs to emit extra retain/release calls to protect other things from getting deinitialized as a side effect.)
Looking at it from an API design standpoint, it also seems preferable for Duration's public API to cover any reasonable future use case right from the start. I believe we already know that measuring durations in whole nanoseconds is not good enough -- I think newly designed API ought to directly acknowledge this.
I think this is particularly important for Duration's Codable conformance -- I expect we will need to add one, and I expect we'll want to encode it as a pair of integers, not as a single Double. This is going to be a departure from TimeInterval (and it will be a potential hazard for people who wish to migrate their existing code to Duration), but since we're introducing a new type, we ought to be free to do the right thing. Explicitly exposing Duration's representation also helps set expectations about its serialized encoding.
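To illustrate what I mean by a pair of integers, here is a sketch against the strawman _seconds/_femtoseconds layout above (not a concrete proposal for the actual encoding):

extension Duration: Codable {
  public init(from decoder: Decoder) throws {
    var container = try decoder.unkeyedContainer()
    self.init(_seconds: try container.decode(Int64.self),
              _femtoseconds: try container.decode(UInt64.self))
  }

  public func encode(to encoder: Encoder) throws {
    var container = encoder.unkeyedContainer()
    try container.encode(_seconds)
    try container.encode(_femtoseconds)
  }
}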
(The same considerations apply to all the new Instant types introduced in this pitch -- I think they too ought to be trivial structs with frozen representations and they should be wide enough to cover any foreseeable practical use case, with plenty of headroom. I don't expect we can both sink the Date type from Foundation and also make it @frozen without major complications; but perhaps it'd be okay to leave that particular type unfrozen for now and hope that we'll eventually gain a way to declare it trivial.)
(Edit: I withdrew the suggestion to make standard Instant types @frozen -- it's probably a better idea to let them directly use whatever representation the best native clock uses on the system, deferring normalization until the instants get converted to durations or human readable dates. It would be really nice if we could tell the compiler that these will always be trivial types and/or constrain their layout in other ways -- but I guess we can always add such an attribute later, if it gets implemented.)
AIUI, this is definitely not the case. The wall clock produces instants that approximate real time, but these are never exact, and the clock needs to be regularly synced with a more exact time source, which makes this clock very unreliable at small scales.
The wall clock is also explicitly not monotonic.
let start = WallClock.now
...
let end = WallClock.now
precondition(end - start >= .zero) // This can fail
Leap smearing and other techniques that eliminate small discontinuities can also make the wall clock run significantly faster or slower than what a real world timer would measure.
let absoluteTime1 = getAbsoluteTime()
let wallClock1 = WallClock(from: absoluteTime1)
....
let absoluteTime2 = getAbsoluteTime()
let wallClock2 = WallClock(from: absoluteTime2)
assert(absoluteTime2 >= absoluteTime1) // this can never fail
assert(wallClock2 >= wallClock1) // this can fail
where getAbsoluteTime is exactly "a number of seconds / nanoseconds since 1970", on some ideal eternal unbreakable stopwatch, no ifs, no buts.
And during the conversion to WallClock or other higher-level calendar dates, leap seconds, leap days, daylight saving effects, etc. can be added or subtracted.
In this approach the time difference between two absolute times can indeed be as trivial as an integer subtraction.
Regarding the need to express times some hundreds (let alone millions) of years from now with nanosecond or femtosecond precision, I'm still skeptical, but hopefully that's just me.
I don't understand how WallClock instances would be comparable. Do you mean WallClock.Instant, a.k.a. Date?
Assuming that's the case, AIUI the problem with this is that while getAbsoluteTime would indeed be extremely desirable to have, it is sadly impossible to implement. There is simply no way to reliably get the current "absolute" time. This isn't for lack of trying -- it's literally impossible. (Leap seconds do contribute to this (by fuzzing the meaning of future instants), but they're a trivial problem compared to getting the current absolute instant on, say, a device without an active network connection.)
As far as I understand, MonotonicClock gives us a way to measure elapsed time in a (relatively) reliable way. WallClock is more or less completely useless for this purpose, no matter how it handles leap seconds.
The pitch includes a way to convert wall clock instants to monotonic clock instants, but (unless I missed it) not the other way round. I expect the conversion will inherently involve some hand waving / approximation (for both future and past instants); and the resulting values will sometimes vary wildly when the conversion is later repeated.
It is possible, in a way... Consider a simple circuit that adds, say, 1 to an integer counter every nanosecond. It simply can't go backwards (wrapping aside), so it is strictly monotonic. It's not important that it is "not exact": it can pause for a number of nanoseconds, or produce a few "extra" nanoseconds according to your exact-nanosecond atomic wristwatch; the app and the rest of the system just believe that the counter is a "source of true nanoseconds". At some points the system will get external times as correction deltas, but those are added to a table of corrections whose values only matter at higher levels (calendar, wall clock, etc.). They are never added directly to the counter itself to make it go backwards or skip forwards.
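A toy sketch of that separation (entirely hypothetical names; the point is only that corrections accumulate in a table and never touch the counter):

struct CorrectedClock {
  // External time corrections learned from NTP etc.; they never modify the raw counter.
  private(set) var corrections: [(asOfTick: UInt64, deltaNanoseconds: Int64)] = []

  mutating func recordCorrection(asOfTick tick: UInt64, deltaNanoseconds: Int64) {
    corrections.append((asOfTick: tick, deltaNanoseconds: deltaNanoseconds))
  }

  // The raw tick count stays strictly monotonic; only this derived wall-clock
  // value can appear to jump. (Sketch assumes the tick fits in Int64.)
  func wallClockNanoseconds(forTick tick: UInt64) -> Int64 {
    let delta = corrections
      .filter { $0.asOfTick <= tick }
      .reduce(Int64(0)) { $0 + $1.deltaNanoseconds }
    return Int64(tick) + delta
  }
}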
It would be very unfortunate but unavoidable that conversion between absolute time and calendar time would give different results at different times. The GMT offset rules of a region can change due to politics, for example (we've seen this many times), or new leap seconds can be introduced (I hope that will never happen again).
IIUC, this is of course very much doable. The pitch calls this sort of clock MonotonicClock.
I expect that MonotonicClock.now generates instants based on the system's real time clock, e.g., something like a temperature-corrected oscillation count of some piezoelectric crystal -- i.e., a reasonably reliable source of elapsed time measurements over short time intervals.
If the difference between two instants generated by MonotonicClock.now measures as exactly 3s, then I expect that the actual elapsed time was indeed very close to 3s. (But not exactly 3s, as my computer does not have an atomic clock, it has a rather coarse tick, and the act of reading the clock isn't instantaneous, so it introduces some random noise. I do expect that it'll be close enough to 3s for most practical purposes.)
I expect WallClock to usually also measure the same interval as 3s, but depending on what's going on with time synchronization and manual adjustments of the system clock, sometimes it may come out as 2s, 2.999s, 3.001s, -2 hours, or +several decades.
For it to be useful for measuring elapsed time, I'd expect that the monotonic clock would never pause, or skip ahead while the system is running, or intentionally run fast or slow. It should tick at as close to a constant rate as the hardware implements. (I don't know if this expectation is reasonable in practice. I hope it is.)
I don't believe it is reasonable to expect that such a persistent database of past corrections exists on any of our supported platforms. This also seems unreasonable for the stdlib to try to implement on its own.
Worse, IIUC, the table wouldn't even be very useful -- it would need to be updated whenever the system gets synced up with a reliable time source, retroactively invalidating previous conversions for instants since the last sync. Any conversion of future instants would be pure guesswork, as the mapping obviously cannot have reliable information on how the clock might drift in the future.
I'm glad we're on the same page. If I understand correctly, this is exactly what the pitch is proposing.
It gives us a reasonably reliable monotonic clock for measuring machine-local elapsed time
It gives us a wall clock that makes a best effort guess at the current UTC time
It gives us a best effort API to convert between instants of the two (albeit one way only)
Time zones, leap years, the concept of an August, or an imperial epoch etc. are entirely irrelevant to these clocks: they are merely counting seconds (on some scale) from some reference instant. If the region I'm in arbitrarily decided to switch to UTC-7:13 this midnight, none of these clocks would be perturbed in any way -- they'd happily keep on ticking, exactly like before.
Leap seconds are relevant to some of these clocks -- because those clocks tweak the length of a second near/around leap seconds so that a day always comes out as precisely 86,400 seconds. (This is a bit of a problem because (1) it means that a "second" in such clocks will sometimes significantly differ from the standard second, and (2) every time a new leap second is added, we replace the definition of the existing clock with a slightly different new one, changing the meaning of instants after the newly inserted second. If UTC inserted a leap second at the end of 2021, then the real time clock implementation behind WallClock (but not MonotonicClock!) would presumably need to be updated to account for it, as well as any code that converts between clock instants, or converts them into human readable forms.)
However, no matter how silly it is, we need to have at least one clock that implements this weird way of counting seconds, as this is what Foundation.Date uses, and we Swift developers collectively have tons of code that is implicitly relying on this scale whenever it passes around date values. Further, I think it also makes perfect sense to have the "default" WallClock implement this behavior, rather than relegating it to a hidden corner of the stdlib -- I believe doing otherwise would be a disservice to Swift developers, who would then be forced to learn about the difference.
(Unix time allows most code that naively processes dates to ignore that leap seconds exist and still produce usable results. To me, this feels very similar in spirit to how UTF-8 was designed specifically to allow Unicode data to be passed through (and even minimally processed by) 8-bit clean C code that knows nothing about Unicode.)
Aargh, this is completely wrong. I misremembered how Unix time "works" -- I thought it defined some sort of halfway-sensible continuous behavior, rather than simply repeating a second. That's not ideal.
(I assume I subconsciously refused to accept this silliness and substituted it with less terrible behavior.)
If the system's wall clock doesn't implement anything but Unix time, then I'm not sure what the stdlib can do other than to accept this.
Ideally we should also have a UTCClock that is guaranteed to handle leap seconds in a platform-independent way, but is that actually implementable? I suspect this (and similar clocks like TAIClock) should be first explored in a package outside the core stdlib.
Sadly I think my conclusion is still largely valid -- I think WallClock.Instant ought to remain compatible with Foundation.Date, and it's pretty clear that that type uses Unix time.
The proposal I'm looking at does not define what instant WallClock uses for its epoch, nor does it specify whether it uses Unix time or not.
OK, so Windows FILETIME implemented proper accounting for leap seconds a few years ago. AIUI, they did it by only counting future leap seconds, leaving previous ones overlapping with a neighboring regular second. (Edit: Or did they?) Interestingly, it seems the routine that splits such filetimes into calendar components implements the same sort of 2x slowdown by default as I described -- perhaps that's where I got the idea? In any case, this seems like a pretty good setup.
Nit: Which standard do you mean when you say "standards-compliant"? (Is there a standard that specifies that operating systems should use this particular behavior?)
Important note: corelibs-foundation on Windows is using FILETIME to implement Date(). If I understand correctly, this means that Foundation.Date on current versions of Windows is going to either count future leap seconds or temporarily slow down the clock around them (depending on process configuration), which will potentially make its behavior (and encoded values) incompatible with other platforms.
I need to update the proposal to reflect that it ended up needing to be defined as:
@frozen
public struct Duration: Sendable {
  @inlinable
  public var seconds: Int64 { get }
  @inlinable
  public var nanoseconds: Int64 { get }
}
I am leery of supplying accessors at femtosecond scale, given that none of the reasonably achievable clock implementations today could provide such a scale. Under the hood it splits those seconds and nanoseconds into its own representation to account for some edges w/ negative values.
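For illustration, the kind of normalization I assume is meant there (my own sketch, not the actual implementation) keeps both components on the same side of zero:

func normalized(seconds: Int64, nanoseconds: Int64) -> (seconds: Int64, nanoseconds: Int64) {
  // Fold whole seconds out of the nanosecond component first.
  let (carry, remainder) = nanoseconds.quotientAndRemainder(dividingBy: 1_000_000_000)
  var secs = seconds + carry
  var nanos = remainder
  // Then make sure the two components don't carry opposite signs,
  // e.g. (1 s, -500_000_000 ns) becomes (0 s, 500_000_000 ns).
  if secs > 0 && nanos < 0 { secs -= 1; nanos += 1_000_000_000 }
  if secs < 0 && nanos > 0 { secs += 1; nanos -= 1_000_000_000 }
  return (secs, nanos)
}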