Reviews are an important part of the Swift evolution process. All review feedback should be either on this forum thread or, if you would like to keep your feedback private, directly to the review manager via the forum messaging feature. When contacting the review manager directly, please keep the proposal link at the top of the message.
What goes into a review?
The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of Swift. When writing your review, here are some questions you might want to answer:
What is your evaluation of the proposal?
Is the problem being addressed significant enough to warrant a change to Swift?
Does this proposal fit well with the feel and direction of Swift?
If you have used other languages or libraries with a similar feature, how do you feel that this proposal compares to those?
How much effort did you put into your review? A glance, a quick reading, or an in-depth study?
Big +1 from me on this one; it's a no-brainer to add this, and there's no risk in doing so.
Adding a BinaryInteger-constrained factory method (mirroring seconds, nanoseconds, etc.) probably also makes sense, but because this proposal is so focused, I do not want to start piling other API onto it, even API that makes good sense. We should let small, focused proposals remain small and focused.
No particular reason. Initially I just had it because, when building my local toolchain, I did not know which version I'd write in `before:`, and after that I had no compelling reason to change it.
I asked because it seems to me that, even though @backDeployed was introduced quite some time ago, the unofficial @_alwaysEmitIntoClient still gets used sometimes, so maybe I was missing some nuance. Even if it doesn't make a difference here, I do feel that @backDeployed should be preferred and encouraged in general. Sorry for nitpicking.
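For reference, here is a minimal sketch of how the two spellings differ, assuming an ABI-stable library built with library evolution; the type, method, and availability versions are placeholders, not the proposal's actual values:

```swift
@available(macOS 12.0, *)
public struct Widget {
    public init() {}

    // @_alwaysEmitIntoClient: the body is only ever emitted into the client,
    // so the library exports no ABI entry point for it at all.
    //
    // @backDeployed: the library exports a real symbol, and a fallback copy of
    // the body is emitted into clients so the API stays callable on OS
    // versions that predate that symbol.
    @available(macOS 12.0, *)
    @backDeployed(before: macOS 15.0)
    public func doubled() -> [Widget] { [self, self] }
}
```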
A big +1 on adding Int128 support for attoseconds. I'm glad to see this come through.
However, I do find the API choice (init rather than a factory method, as used by all other units) a bit puzzling. I guess I differ with @scanon in this regard in that it seems to me there's no compelling reason to treat attoseconds specially in this way.
I agree that floating-point attosecond values make no sense and that therefore there should be no .attoseconds() method that accepts floating point. Durations are measured at attosecond precision, and attoseconds can't be subdivided. But that isn't an API inconsistency, it's just an accurate reflection of the underlying model.
Beyond that, attoseconds are a perfectly reasonable and ordinary unit for subsecond durations. (Some might find Wikipedia's list of attosecond-scale events entertaining.)
Floating-point aside, the proposal's explanation for avoiding factory methods is:
A BinaryInteger overload introduces additional complexity. Since it would need to support types other than Int128, arithmetic operations would be necessary to ensure correct scaling and truncation, negating the simplicity and precision that the Int128-specific initializer aims to provide.
Why is this more so for attoseconds than for other units? You can't pack more than a few seconds' worth of time into an Int64 at attosecond resolution (Int64.max is about 9.2 × 10^18 attoseconds, i.e. roughly 9.2 seconds). But if you need a longer duration, just use Int128.
Is there a technical reason why an .attoseconds() factory method with an Int128 overload would be materially less simple and precise than an init method?
I hope I got your question right. I think the most compelling reason why I'd argue against replacing the initializer with the BinaryInteger static method is that it cannot directly initialize Duration without involving some sort of splitting/shifting/truncating (and, I think, some other binary operations as well), because the concrete type is not known. That makes it less efficient for Int128 arguments. Efficiency for the other unit scales is already compromised because they have to scale.
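To make that concrete, here is a rough sketch of the difference being described; the generic factory is hypothetical and the bodies are illustrative only, not the actual implementation:

```swift
extension Duration {
    // Hypothetical BinaryInteger factory: the concrete argument type is not
    // known, so the value first has to be converted (with a bounds check) to
    // Int128 before it can become the raw attosecond value.
    static func attoseconds<T: BinaryInteger>(_ value: T) -> Duration {
        Duration(attoseconds: Int128(value))
    }
}

// The proposed initializer takes the raw Int128 directly, with no conversion:
let oneSecond = Duration(attoseconds: 1_000_000_000_000_000_000)
```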
There's also another downside pointed out by @xwu in the pitch. This:
... does not compile. The literal is a perfectly valid Int128, but the type checker defaults it to Int, for which it is too big:
Integer literal '1000000000000000000000000000000000000' overflows when stored into 'Int'
This of course also happens with the other unit scales, but for attoseconds you are more likely[1] to use those huge literals.
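For illustration, the same literal-defaulting problem is reproducible today with the existing BinaryInteger factory methods, while the proposed Int128 initializer types the literal directly (sketch, assuming the proposed initializer):

```swift
// The generic BinaryInteger parameter makes the unannotated literal default to
// Int, so this fails to type-check:
// let a = Duration.nanoseconds(1_000_000_000_000_000_000_000_000_000_000_000_000)
// error: integer literal '1000000000000000000000000000000000000' overflows when stored into 'Int'

// With a concrete Int128 parameter, the same literal is typed as Int128 and fits:
let b = Duration(attoseconds: 1_000_000_000_000_000_000_000_000_000_000_000_000)
```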
In the end, Duration is attoseconds as an Int128 (deconstructed). The Int128 is its raw value, so an initializer is, at least to me, the most obvious way to spell that.
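As a small illustration of that "raw value" view (assuming the proposed initializer), the existing components API exposes exactly the same value, deconstructed:

```swift
let d = Duration(attoseconds: 1_500_000_000_000_000_000)   // 1.5 seconds
// d.components == (seconds: 1, attoseconds: 500_000_000_000_000_000)
```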
I think having an extra BinaryInteger factory method to keep symmetry is a valid point. However, we kind of have to make sure that the performance implications of using .attoseconds(_:) vs. init(attoseconds:) are clearly pointed out to the user (or at least that the two are not the same implementation-wise).
This seems like a no-brainer to me. IIRC the only reason Duration didn't already have this initializer was that we didn't yet have Int128 when we added Duration.
This review caught my attention because ... there is no attosecond.
I understand the motivation, and I agree that Int128 is a convenient type for a logical time duration (as a mathematical construction), so I have no objection to this implementation.
I have only one piece of advice on how not to use this API: do not extend logical time duration to physical time duration; in other words, do not give users the wrong impression that they now have access to high-precision physical time quantities.
At the big end, the maximum duration in this type is 2^63 sec. The age of the universe is estimated to be less than 2^59 sec, with a precision of only about 7 bits (the remaining 52 bits are noise, never mind the lower 64 bits).
At the small end, a reasonable absolute precision for a computer clock is on the order of a milli-sec, assuming it is synchronized regularly (otherwise it drifts on the order of a sec per year or so). An atto-sec is far, far too small for any reasonable timestamp coming from a computer clock, and also for any difference between two timestamps.
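As a rough sanity check of the big-end magnitudes above (approximate figures, just to ground the argument):

```swift
let secondsPerYear = 365.25 * 86_400          // ≈ 3.16e7 s
let ageOfUniverse  = 13.8e9 * secondsPerYear  // ≈ 4.35e17 s
let twoToThe59     = Double(1 << 59)          // ≈ 5.76e17 s
let maxSeconds     = Double(Int64.max)        // ≈ 9.22e18 s ≈ 2^63 s
```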
A few pitfalls when logical timestamps and logical durations are taken literally:
If the same event is timestamped by two different computers with a resolution of a micro-sec or less, and assuming that both computers are synchronized regularly, it is highly unlikely that the two readings are equal.
If two events A and B have timestamps u_A and v_B (taken by two different computers as above), u_A > v_B does not imply that A happened after B. If the logical duration u_A - v_B is small, it has nothing to do with the physical duration between A and B. Small here means on the order of a micro-sec.
A physical duration of 1 atto-sec is a joke. At fine resolution, time is modeled with noise and jitter, which is on the order of a nano-sec to a pico-sec for reasonably good clocks. If someone asks: "the timestamp of A is 1 atto-sec bigger than that of B, did A happen after B?", I would answer: "I would like to say that they happened at the same moment, but there is a bigger problem. Apparently you did something very wrong if the difference is only 1 atto-sec, so I cannot answer."
"Atomic clocks have precision on the order of atto-secs." Yes, but we don't employ one in our applications. To get this precision you need super-expensive equipment and super-fast connections. With synchronization over an internet protocol, an atto-sec is a joke. Even with connections that are extremely good for other purposes, there is no hope of "seeing" 1 atto-sec. On paper, of course, you can write any number.
What? No. Normal computers have clocks that deliver nanosecond measurements. My physicist friends regularly deal with units of femtoseconds. Attoseconds are mostly still out of reach, but there is absolutely a use for programs to be able to talk about sub-nanosecond durations.
Duration is just an abstract duration, intended for any vaguely general-purpose use. It is not inherently tied to any physical clock, nor is it only intended for network timestamps. This is a fundamental misapprehension of the purpose of the type.
I wrote about the absolute precision. It is defined as the error between a DUT (device under test) and a (theoretical) ideal clock, or the best clock you can find in the lab.
In more detail: computer clocks are based on crystal oscillators. They have a frequency accuracy of a few ppm (parts per million). In one sec they drift on the order of a micro-sec.
What you know about nano-sec readings is something different: it means that the counter which runs off the clock has nano-sec resolution. To get 1 nano-sec resolution you need a 1 GHz clock. That is not the problem; modern clocks are faster than this. But if you measure their readings and compare them to an ideal clock, they drift over time, plus there is a small fluctuation from cycle to cycle (which is less of a problem at the scale of a sec, but dominant at the scale of a nano-sec).
A more practical explanation: leave a computer (or a watch) without synchronization for a year. How precise is it?
Duration is just a duration; absolute accuracy with respect to some clock is not in question. Even duration with regard to any physical clock is not in question.
What is the meaning of a duration of 1 atto-sec? It may mean that you define some duration to have this value. This is a logical (mathematical) construction, and it is perfectly fine. What it cannot mean is that you physically measured two timestamps and their difference is 1 atto-sec. Physically this statement is nonsense, for the reasons I explained.
My warning is to not extend logical duration to physical duration.
You could, for example, write a program in Swift to simulate protein folding and use a Duration value to keep track of the simulated time, or the timestep of the simulation. That would be one way to have a duration on the order of magnitude of attoseconds with physical meaning.
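A quick sketch of that idea, assuming the proposed initializer (the timestep and step count are made-up illustration values):

```swift
// A 2-femtosecond timestep, expressed exactly in attoseconds (1 fs = 1_000 as).
let timestep = Duration(attoseconds: 2_000)
var simulatedTime = Duration.zero

for _ in 0..<1_000_000 {
    // ... advance the simulated system by one step ...
    simulatedTime += timestep
}
// simulatedTime now represents 2 nanoseconds of simulated physical time.
```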
I commented about physical time, measured by physical instruments or devices. In a computer the most common (if not the only) such device is its clock.
And I didn't say that you cannot measure small durations (there are instruments for this purpose). I said that there are several pitfalls if you interpret mathematical constructions as if they were physical quantities. Of course this is not only about time: 128 bits of accuracy is overkill for any physical quantity I can think of (at least in electronics).
I just don't think this has a lot to do with the design of the Duration type, which is nothing but a currency type for applications to exchange "attosecond"-typed values.