[Pitch] Clock, Instant, Date, and Duration

Do you update the first post or is there a url to a proposal draft somewhere?

The most recent update is being tracked here. I will have some more updates for the duration changes, based on implementation discoveries, later today. Perhaps I should also update the proposal in the pitch's initial post.

1 Like

Are those APIs proposed to go directly into the stdlib, or do we get them as a package first?
Didn't we say that things which don't require compiler magic should be introduced as a package?

This is proposed to be housed in the standard library (and the concurrency library) because it is required for interacting with Task itself. This really cannot be implemented properly as a package.

1 Like

I have no recollection of any such rule, and that sounds far more absolute than I would expect the Core Team to be. Do you have a reference for where you saw this policy?

Doug

2 Likes

I think we are on the same page on this one: obviously the monotonic clock must do its best to keep up with real time, and any departures from real time would be unintentional.

Perhaps not a table or a database, just a single correction number, persisted on the device so it can be used after a restart.

We need to consider some use cases here to know what to correct and what not to... e.g. "file dates shown in Finder shall not update when a leap second is added, or when we manually adjust the time, or when a DST shift happens, but they shall be updated when we switch the time zone". Things like that.

I realised I didn't understand what "WallClock" is... I literally thought it was a clock similar to the one on the wall of my living room: the clock that is adjusted twice a year (either manually or automatically) for the daylight-saving switch, and that gets adjusted to a new time zone when I take it with me on a trip. This raises the question of whether "WallClock" is a good name, but, again, maybe that's just me.

I really hope this can be simpler, and that leap seconds will be abolished completely in 2023... Without leap seconds, translation between WallClock and the monotonic clock (which I stubbornly want to call absolute time) always gives the same result, which is a goal worth fighting for. That would still leave the question of what we do about dates between 1972 and 2016 (probably we just leap-second-correct them using the hardcoded 27-entry table, or we retroactively undo the leap seconds that were introduced).

I got that impression from this past announcement: Swift.org - Standard Library Preview Package

It's an option, not a requirement.

Doug

This is how I've always understood this term: "WallClock time" is the human time that you would expect to see in a physical clock hanging on your wall. Such a clock should reflect leap seconds, DST, current timezone, etc.

A quick web search suggests this is the common usage.

Of course, a clock that only counts seconds since the start of some epoch necessarily avoids this, so I suppose the term works okay in that context.

The key distinctions here are between

  • a "monotonic" clock that never goes backwards and attempts (within the limits of the hardware) to provide consistent interval measurements,
  • a clock that is updated to match some external reference and therefore may change rate (or even go backwards) in order to do so, and
  • a synthetic clock that may be entirely divorced from any external time concept (for testing or simulation purposes).
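To make the third kind concrete, here's a minimal sketch of a synthetic clock for testing; the names here (ManualClock, advance) are illustrative, not the pitched API:

// A synthetic clock for tests: time only moves when the test says so.
final class ManualClock {
  private(set) var now: UInt64 = 0   // ticks since an arbitrary origin

  // Drive time forward deterministically from the test.
  func advance(byTicks ticks: UInt64) {
    now += ticks
  }
}

let clock = ManualClock()
let start = clock.now
clock.advance(byTicks: 1_000)       // simulate 1000 ticks elapsing
assert(clock.now - start == 1_000)  // deterministic, independent of host time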
5 Likes

Well, I thought of it as more of a best practice than just an option, but okay.

Sure, it's unlikely computers will gain femtosecond-resolution timers. However, my main point was not to suggest a concrete grain size for Duration -- rather, it was to point out that counting elapsed time in whole nanoseconds doesn't leave much, if any, headroom for future improvements, even now.

  1. The clock resolution on supported platforms is already on the order of 10ns. (IIRC, it's 14ns-ish on x86_64, and 42ns or so on Apple Silicon.)
  2. CPU clock speeds are measured in multiples of GHz these days -- so a core running at full speed is typically ticking at fractions of a ns.

It doesn't seem entirely unreasonable to expect that monotonic clock resolution might improve by as much as a factor of 10-100x in the foreseeable future. Are we going to come to regret Duration if/when that happens? Why not just get ahead of the game and add plenty of headroom right now?

(To be absolutely clear, I don't have any insight into any future hardware plans. I just want to make sure we aren't painting ourselves into a corner for no good reason.)

I really do think that the internal representation (as well as Duration's Codable encoding) ought to use a significantly finer scale than 1ns. Any reason not to use picoseconds at least? It's not like we're short on bits...
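To put rough numbers on that headroom argument (a back-of-envelope illustration, not from the pitch), note how quickly a single signed 64-bit count runs out of span as the grain gets finer:

// Representable span of an Int64 tick count at various grains:
let maxTicks = Double(Int64.max)           // ≈ 9.22e18 ticks
let secondsPerYear = 365.25 * 24 * 3_600   // ≈ 3.16e7
print(maxTicks / 1e9 / secondsPerYear)     // nanoseconds:  ≈ 292 years
print(maxTicks / 1e12 / 86_400)            // picoseconds:  ≈ 107 days
print(maxTicks / 1e18)                     // attoseconds:  ≈ 9.2 seconds

A finer grain therefore goes hand in hand with a wider representation -- and as noted, we're not short on bits.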

Speaking of clock resolution -- I think ClockProtocol ought to have a requirement that exposes a minimum bound for the clock's effective resolution, as in clock_getres or std::chrono's period values.

public protocol ClockProtocol: Sendable {
  ...
  /// A lower bound on the smallest interval this clock can meaningfully measure.
  var resolution: Duration { get }
}

It'll be rather difficult to, e.g., properly scale benchmarks without having some idea about the resolution of the clock.
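For instance, here's a rough sketch of how resolution could feed into benchmark scaling (the heuristic is mine, not a proposed API): run enough iterations per sample that quantization error stays below ~1%.

// Require total elapsed time >= 100x the clock's resolution, so the
// +/- 1 resolution-unit quantization error is at most ~1% of the sample.
func minimumIterations(resolutionNanos: UInt64, perIterationNanos: UInt64) -> Int {
  let needed = (100 * resolutionNanos + perIterationNanos - 1) / perIterationNanos
  return max(1, Int(needed))
}

// e.g. a 42ns clock measuring a ~5ns operation needs >= 840 iterations:
print(minimumIterations(resolutionNanos: 42, perIterationNanos: 5))  // 840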

8 Likes

CPUs probably won't anytime soon, but we know how to build optical clocks with this level of precision (and higher!) today. It is not terribly far-fetched to believe that access to such clocks on computational devices will become more common, even if the TSC on a CPU is never operating at that granularity. I would not want to limit myself to ns granularity in a general-purpose standard library date type that we expect to use for the next few decades.

14 Likes

For backwards compatibility:

public struct Date {
  // Legacy Float64 value (aka TimeInterval), readable by existing Foundation APIs.
  private var _secondsSince2001: Float64
  // Extra sub-second precision in attoseconds (1e-18 s), for the proposed Swift APIs.
  private var _attoseconds: UInt64
}
  • One attosecond is 1e-18 seconds (similar to NTPv4 Date resolution).
  • 60 bits can store the maximum of 999_999_999_999_999_999 attoseconds.
  • The existing Foundation APIs can read from _secondsSince2001 only.
  • The proposed Swift APIs can use _secondsSince2001 and _attoseconds.
  • The negative-zero edge case is partly handled by Float64 (aka TimeInterval).
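
A quick check of the 60-bit claim (illustration only):

let maxAttoseconds: UInt64 = 999_999_999_999_999_999  // largest sub-second count
assert(maxAttoseconds < (1 << 60))                    // fits: 2^60 ≈ 1.15e18
print(String(maxAttoseconds, radix: 2).count)         // 60 bits needed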

Thanks for the great work here Philippe!

I spent some time looking into the proposal, the work-in-progress implementation, and the implications for the Swift Concurrency functions we'd like to implement using these types, as well as how they'd play with distributed actors. I also attempted to read most of the thread, but maybe I missed some points; apologies if so.

Here's a small summary of my notes:


The two functionalities that set the context for how I'm looking at the proposal:

  • Task deadlines causing task cancellation: Most notably, we'll want to use these types to set "deadlines" on tasks. A deadline would cause cancellation, though not proactively: only when isCancelled is called would we check whether the deadline "point in time" has been exceeded, in which case the task shall be considered cancelled (see the sketch after this list).
  • Distributed Actors, HTTP requests, etc., and a way to implement best-effort "not worth your time to even work on this outdated request anymore" logic in servers:
    • Similar in spirit to what Go's Context deadlines allow, we'd like to "set a deadline on the task" and have it carry through distributed calls; those may be IPC, or across network boundaries. This means that whatever type the task deadline was stored as, we'd like to get it as wall-clock time, in a best-effort attempt to carry it to another host. This is best effort, since the clocks of two distinct machines are not guaranteed to be perfectly in sync, but realistically, in server environments we do assume they're at least synced closely enough for such purposes.
    • Since it is up to a transport to implement "restoring" the deadline into the receiving task, we could ignore such wall-clock deadlines if they seem pretty far off, or if the transport is some client/server setup and we don't want to trust the client to set any deadlines.
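
A minimal sketch of that passive-cancellation behavior (the names and shape here are an assumption, not the pitched API):

import Foundation

// Nothing fires at the deadline; expiry is only observed lazily,
// whenever isCancelled happens to be consulted.
struct DeadlineAwareTask {
  var explicitlyCancelled = false
  var deadline: Date?   // a wall-clock instant, so it can be carried across hosts

  var isCancelled: Bool {
    if explicitlyCancelled { return true }
    if let deadline = deadline, Date() > deadline { return true }
    return false
  }
}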

Primary notes:

  • I agree that durations must be comparable, as the proposal suggests; not making them so is splitting hairs IMHO.
  • By my reading of the proposal, we're saying that the actual stored instant would be the Date, since from that we can derive a monotonic instant if we want to, but we can also serialize it if we want to.
    • I really would like to avoid "it only propagates if you happened to set it using the "right" clock type".
    • And by settling on a specific type, we'll be able to implement the following important semantic: if we are in a task that already has a deadline set, and someone wants to set a deadline further out in the future, we want to ignore that "further out in the future" deadline and keep the current one (see the sketch after this list). We're able to implement this because we know we're working with WallClock.Instants, and therefore we're able to compare the instants and make the right decision on them right away. Does this match your expectation of the use of WallClock.Instant / Date with regard to deadlines and preventing accidentally extending them beyond the initial deadline?
  • I like that this allows us to provide the same API in a "by default uses clock X, but if you really want to, here's one that takes a generic clock" style.
  • I'm very much in favor of changing the internal representation of Date to integers; the proposed representation sounds good to me :+1:
    • The 96 bits are fine... to be honest, 128 would be totally fine as well IMHO :wink: but I know you've done your research here, so I'll trust you and the other numerics experts on the thread.
  • Leap seconds -- I honestly don't care about them cross-node, because the nodes may be drifting apart by much more and it's all best-effort anyway for the use cases I have in mind (for WallClock.Instant); let's leave them to Calendar and friends.
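
Here's the "never extend an existing deadline" rule from above as a sketch, using Foundation.Date as a stand-in for WallClock.Instant:

import Foundation

// When a new deadline is proposed, keep whichever expires first;
// a child scope must never outlive its parent's deadline.
func effectiveDeadline(current: Date?, proposed: Date) -> Date {
  guard let current = current else { return proposed }
  return min(current, proposed)
}

// e.g. parent deadline at +5s, child asks for +30s: the +5s deadline wins.
let parent = Date().addingTimeInterval(5)
let child = Date().addingTimeInterval(30)
assert(effectiveDeadline(current: parent, proposed: child) == parent)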

Minor notes / nitpicks:

  • Minor usability improvement idea: I don't love that we have to spell deadline: .now.advanced(by: .seconds(3)); I'd rather say deadline: .in(.seconds(3)). Luckily, that's just a convenience extension that does exactly "now + ..." (see the sketch after this list); I hope we could provide it, as it'd result in nice APIs I believe.
    • On that note, every time I work with durations I miss Scala's ability to say 3.seconds; it just reads so nicely :wink: But we're very used in Swift to the .something() notation, so let's stick to it.
  • There has been some discussion that "Date" is ill-named. While that's pretty true, and a name like Instant would be "more true", I understand Tony's arguments -- I don't think it's really a hill worth dying on, and we can always just call it WallClock.Instant if we want to, since that is Date :slight_smile: In other words, if the teams working on this and the core team feel this is the right trade-off, I'm happy to support that.
  • "Removed .hours(_:) and .minutes(_:) creation for Duration" are these still removed? I honestly do have real use cases for deadlines which are like "30 minutes" or even "a day", for doing background cleanups for tasks like "remove this node from seen-nodes-set" in clustered environments, even if they're applied passively, not even by scheduling an active task "in 30 minutes".

This is all excellent work, I'm looking forward to it @Philippe_Hausler !

6 Likes

So .days is definitely at risk of conflation with calendrical calculations. If we have a compelling reason for hours and minutes, I think it is reasonable to bring those back. Perhaps you can show some cases where they are useful in real-world use for folks? I know there were a few objections to those constructors.

5 Likes

Unless there's a technical reason why Date and WallClock.Instant should be different types, I think the proposal makes the right choice in sinking Foundation.Date into the standard library.

It's better to have a single type:

  • A single type will prevent an unnecessary (and potentially confusing) schism in the library ecosystem.
  • A single type will prevent wasted energy debating (with teammates or oneself) which is the right type for a data model or an API.

I think the benefits of a strongly-typed Duration are sufficient to justify replacing TimeInterval. However, I don't think any of the alternative names to Date are beneficial enough to justify the downsides of having two types.

1 Like

I'm not really arguing for days, but omitting hours and minutes feels very arbitrary.

An example I have is "quarantining" nodes in a cluster: when faced with flaky nodes/connections, you often want to quarantine a node for a while and make sure it doesn't come back -- keeping a set of such quarantined nodes, with estimated deadlines for when we deem it safe to remove them from the set (i.e. we know a node will shut down by itself after a minute or two if it receives no heartbeats, so we can prune it from this set after some extra time, say 2x of that, or 10 minutes or more).

Multiple minutes and hours are not unheard of for such "timeouts". They're not active timeouts that need to fire at a specific time, nor do they have to be super precise, since they're often "2x of the expected thing" anyway. In any case, I think it's useful to be able to take an instant and write add(gracePeriod: .minutes(10)) without jumping through any extra hoops.
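
For illustration, the constructors being argued for reduce to trivial multiples of the pitched .seconds(_:); a stand-in Duration is defined here only so the sketch compiles:

// Stand-in so the sketch is self-contained; the pitched type would replace it.
struct Duration {
  var seconds: Int64
  static func seconds(_ s: Int64) -> Duration { Duration(seconds: s) }
}

extension Duration {
  // The factory methods in question, defined purely in terms of seconds:
  static func minutes(_ m: Int64) -> Duration { .seconds(m * 60) }
  static func hours(_ h: Int64) -> Duration { .seconds(h * 3_600) }
}

let gracePeriod: Duration = .minutes(10)   // 600 seconds, no manual unit math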


6 Likes

I haven't done much backend work, but why would you use this level of API for the work you describe? Rather than scheduling an absolute deadline in memory, subject to crashes, skew, and other persistence and sync issues, don't most systems schedule such work through a system with built-in resilience, like persistence, or through a completely separate system, so that issues with the main server don't affect scheduled tasks? Simply scheduling through a (hypothetical) perform(in: .hours(4)) seems extremely fragile. Extending this pitch from high-precision monotonic time into APIs used for server scheduling seems to make it extremely broad.

This confusion already exists; it just lives within the Date type itself at the moment. Separating the local, high-precision instant from the type that Foundation primarily uses to parse and display dates for humans will correct one of the biggest, decades-long sources of confusion in the entire framework. Solving the issue by separating the concept into two types is simpler than you seem to think. And in the end, I don't think most users will operate at this level anyway; Date will continue to be used most often, since it can interoperate with other date formats. And finally, finally, we'll have a simpler way to explain the difference between human dates and computer timestamps.

Personally, I consider sinking Foundation types into the Swift language itself to be misguided. Swift must be allowed to solve these problems in ways optimized for the language and for modern development practices. It shouldn't be saddled with Foundation's baggage, for better or worse.

14 Likes

You're implying that sinking Foundation types and providing solutions optimized for modern development practices are mutually exclusive. Yet this pitch is simultaneously sinking Foundation.Date and upgrading its internal representation to be more modern.

I'm just as eager as the next person to correct past mistakes (the introduction of Duration in favor of TimeInterval does just that!), but as nice as it would be to always start from a blank canvas, we have a duty to weigh existing code and users when making design choices.

I understand folks may be concerned that sinking Foundation types could set a bad precedent or even that we're being "forced" to accept the Foundation types against our will. This is not the case.

In fact, I think this change sets a good precedent that sinking of Foundation types will be carefully considered: the opportunity will be taken to upgrade their internals, any incongruous functionality will be left behind in Foundation, and if necessary the type will be replaced with a modern alternative.

I still fail to see the value in providing two different types that behave exactly the same. Why is this an important distinction for the user to be making if there is no observable difference between choosing Date or Timestamp? What are the mistakes users are making today because they only have Date and how will they be fixed in the future due to the introduction of an additional type?

1 Like