Although this is unfortunately not the case for class properties or globals. It might make sense to still provide a tailored diagnostic for these cases.
Yeah, that's the intention. We don't want to close the door by naming this `Atomics` and then have to introduce another module if we decide to add locks or concurrent data structures. (As well as there now being a module name conflict between the atomics package `Atomics` and the stdlib `Atomics`.)
Yeah, relaxed has no effect when used as a fence. We could potentially introduce a fourth structure for memory orderings, `AtomicFenceOrdering`.
I don't want to sidetrack this thread too much, and I'm happy to start a new thread if there's lots to say on the topic, but I want to address this future direction since it's been mentioned a few times now. This style of compile-time query has come up in the past, and it seems like it would be a pretty useful feature for many reasons. The conclusion of those threads was that although this would be an intuitive syntax, it seems infeasible to support generic compile-time queries for the presence of a declaration, because it implies a cycle between parsing and semantic analysis (resolution of the name in the query requires semantic analysis, which requires parsing, which would in turn need to resolve these conditionals). Perhaps it's simpler if the queries were restricted to type names, but I'd prefer a more general solution. An alternative way to get this kind of functionality that I've been thinking about would be to allow modules to declare compile-time feature flags for clients to test. The flags would be explicitly designed to be resolvable unambiguously by the module loader during parsing, sidestepping the need for semantic analysis.
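For what it's worth, Swift already ships one compile-time query with exactly that "resolvable during parsing" property: `#if canImport(...)`, which the module loader can answer without semantically analyzing the queried module. A sketch of how client code uses it today (the `Synchronization` module name here is just an illustrative choice):

```swift
// `canImport` is resolved by the module loader at parse time, with no
// semantic analysis of the queried module's contents -- the same property
// the module-declared feature flags described above would need.
#if canImport(Synchronization)
import Synchronization
let hasStdlibAtomics = true
#else
let hasStdlibAtomics = false
#endif
print(hasStdlibAtomics)
```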
This is something that was brought up, and I think it's an unfortunate thing for the stdlib. We really want there to be an explicit import of some module to get atomics, because we don't want these things available at the top level for free, but you're right that the standard library itself really wants to use atomics. The best solution in my opinion is submodules, which would let the stdlib proper use these new atomic types while still requiring folks to `import Swift.Synchronization` or something. I played around with something for submodules, but it's too large a feature and too unrelated to atomics to let it block finally shipping these things. Hopefully in the future, if we do get this feature, we can somehow migrate these APIs back into the stdlib, but for now yes, the stdlib will either have its own internal `Atomic<Value>` or just keep doing what it's doing now and call the builtins.
We should probably do something like this, yeah. You can technically do the following:
```swift
struct MyAtomicValue: AtomicValue {
  typealias AtomicRepresentation = UInt8.AtomicRepresentation

  var asUInt8: UInt8 {
    ...
  }

  static func encodeAtomicRepresentation(
    _ value: consuming Self
  ) -> AtomicRepresentation {
    UInt8.encodeAtomicRepresentation(value.asUInt8)
  }

  ...
}
```
Although, this sort of does solve the problem that you describe here: nobody needs to interface with the `AtomicStorageNN` types and can instead just use `UInt8`'s or `Int8`'s encode function (thus solving the multiple overloads problem).
This makes sense given Karoy's rationale that things like `Int8`, `UInt8`, and `Bool` can all use the same type. I'll go ahead and make this update to the pitch soon.
`DoubleWord` was introduced as a portable name, yes. Even if we had just `AtomicStorageDoubleWord` as the portable `AtomicStorage`, we'd still need a type for the `AtomicValue` conformance for use with `Atomic`. We can make the storage itself conform to `AtomicValue`, but `Atomic<AtomicStorageDoubleWord>` doesn't roll off the tongue.
Hi all, apologies for the silence there! I've been taking all of the feedback into account and have updated the proposal with some significant changes to the API.
Some notable changes include:
- New atomic storage types, separate from the standard integer types, named `AtomicInt8Storage`, `AtomicInt16Storage`, etc. Some platforms do not guarantee the alignment needed for atomic operations on the standard integer types, so we need separate types whose alignment we can force, and these storage types are just that.
- The removal of the `AtomicStorage` protocol. This protocol was weird because it was an implementation detail of the standard library, and we really don't want to expose public API like that. We've come up with a slightly different design that removes this protocol (and the associated conformance from `AtomicValue.AtomicRepresentation`) and instead conditionalizes the fundamental atomic operations on the atomic representation being equal to one of the fundamental atomic storage types. This means that in generic contexts you must write something like `where T.AtomicRepresentation == Int.AtomicRepresentation` to access atomic operations like `load`, `store`, etc.
- The addition of the `AtomicOptionalWrappable` protocol. Similar to getting rid of the `AtomicStorage` protocol, we really didn't want the optional conformance constraints to be implementation details, so we designed a protocol that captures being able to be an atomic value while wrapped in an optional. Like before, users are not allowed to extend this list, but we wanted to be open about the requirement.
- The addition of the `loadThenMin`, `loadThenMax`, `minThenLoad`, and `maxThenLoad` specialized integer operations on `Atomic`. These atomic operations are supported on recent CPUs, and LLVM has supported them for a while now.
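To make the second and fourth bullets concrete, here is a rough sketch of a use site under the revised pitch. This is not compilable code: `Atomic`, `AtomicValue`, and the constrained `load`/`maxThenLoad` operations are the APIs as pitched in this thread, not shipping Swift.

```swift
// Sketch only -- names are from the pitch, not an existing toolchain.

// Generic code has to pin the representation to a fundamental storage type
// before the atomic operations become available:
func acquireFlag<T: AtomicValue>(_ flag: borrowing Atomic<T>) -> T
    where T.AtomicRepresentation == Bool.AtomicRepresentation {
  flag.load(ordering: .acquiring)
}

// One of the new specialized integer operations:
let highWater = Atomic<Int>(0)
// Atomic max; per the naming, "then load" returns the value observed after
// the operation is performed.
let newMax = highWater.maxThenLoad(42, ordering: .relaxed)
```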
You can find the updated text here:
In addition to revising the proposal, I've also been working on an implementation that can now be found here: [WIP] [stdlib] Atomics by Azoy · Pull Request #68857 · apple/swift · GitHub
Please let me know what you think!
Looking at the Alternative Designs for Memory Orderings section, how about utilizing static members of protocols and type inference, like SwiftUI? Something like this:
```swift
protocol AtomicLoadOrdering {}

struct _RelaxedAtomicLoadOrdering: AtomicLoadOrdering {}
struct _AcquiringAtomicLoadOrdering: AtomicLoadOrdering {}
struct _SequentiallyConsistentAtomicLoadOrdering: AtomicLoadOrdering {}

extension AtomicLoadOrdering where Self == _RelaxedAtomicLoadOrdering {
  static var relaxed: Self { _RelaxedAtomicLoadOrdering() }
}
extension AtomicLoadOrdering where Self == _AcquiringAtomicLoadOrdering {
  static var acquiring: Self { _AcquiringAtomicLoadOrdering() }
}
extension AtomicLoadOrdering where Self == _SequentiallyConsistentAtomicLoadOrdering {
  static var sequentiallyConsistent: Self { _SequentiallyConsistentAtomicLoadOrdering() }
}

extension Atomic where Value.AtomicRepresentation == AtomicIntNNStorage {
  public borrowing func load<O: AtomicLoadOrdering>(
    ordering: O
  ) -> Value
}
```
There is still a danger of unspecialized generics degrading performance. Perhaps some underscored attributes like `@_specialize` could help prevent that.
I don't feel the top-level API surface is so vast that it actually requires a separate module. The only part that could be seen as a bit bloaty is the `Atomic{X}Storage` types living at the top level -- but then, that's a trivially solvable problem.
Atomic operations are fundamental building blocks -- at least as fundamental as anything else the standard library offers (like `Int`, `String`, pointers, and the `SIMD` types). Especially when the proposal says things like:
The memory orderings introduced here define a concurrency memory model for Swift code that has implications on the language as a whole.
I think the proper place for such a fundamental thing is the language's lowest-level library.
We want to be able to use these atomics to implement pieces of the stdlib, so in practice they will have to go in the stdlib themselves.
Yeah that's what I asked for, but it's all about how that happens. Alejandro's answer from before:
I'm not really thrilled with the idea of the stdlib having internal copies of all of this (I'm sure nobody is). It's additional code to maintain, and it means that all of the future directions mentioned in the proposal (atomic strong references, FP atomics, tearable atomics, consume ordering, etc...) will have to be duplicated both for the public atomic types and the internal copies used by the stdlib -- or, more likely, they will diverge, with the stdlib only copying what it happens to use, making it harder for contributors to use any of those other features should they have reason to.
And so it's worth revisiting the presumption that atomics should require some sort of import statement to use: why must that be true? What exactly is the criteria that means atomics shouldn't live top-level in the stdlib?
- Is the argument that they are too specialised? Well, most people don't use SIMD, but we have a whole suite of vector types in the stdlib, without requiring additional imports. I think atomics are at least as useful to your average programmer as SIMD. A simple atomic counter or boolean flag can be very easily understood and is useful in all kinds of applications.
- Is it that they are too complex? It's true, memory ordering introduces complexity that programmers will need to learn if they are to achieve their desired results -- but then, floating point numbers are also full of complexity, and don't even get me started on Unicode. Many programmers are much more comfortable handling atomic values than they are handling Unicode strings.
So, I think this module split gives us costs without any tangible gains. It seems fairly arbitrary, and I don't think we need it. Just have the atomic types in the real standard library. Make `Atomic<Int>` just as available (even internally throughout the toolchain) as regular `Int` is.
If only we had some sort of way to separate names that we don't want to be global into their own explicitly accessed section. A "name space", if you will. Then atomics could be part of the main standard library module without cluttering up the top level with niche symbols.
So we actually experimented with a different approach where the orderings are overloads and have static properties that return values of the thing:
```swift
public enum AtomicMemoryOrdering {
  public struct Relaxed {
    public static var relaxed: Relaxed {
      Relaxed()
    }
  }

  ...
}

extension Atomic where Value.AtomicRepresentation == AtomicIntNNStorage {
  public borrowing func load(
    ordering: AtomicMemoryOrdering.Relaxed
  ) -> Value {
    // No switch statement -- there's only a single kind of atomic op
    // that this overload could be.
  }
}
```
In fact this is what the initial implementation does.
I think this approach has several benefits. Each overload has a single canonical atomic operation, from the combination of the specific atomic storage extension (which gives us the size of the atomic op) and the ordering overload. We preserve the nice "enum-like" API when using these operations, and we'd only have overloads for each ordering that is actually supported (this was already the case with `AtomicLoadOrdering` and such, but `atomicMemoryFence` wouldn't have a relaxed overload, resolving the no-op case you mentioned, @wadetregaskis). Additionally, we can remove any special handling done by the compiler to ensure that the orderings passed to these functions are constant expressions (which is not itself a bad thing, but it does feel a little better to be less magical).

The big reason this approach is desirable in my mind is that in debug builds there's no question about whether the optimizer will eliminate the ordering switch statement: the switch statement doesn't exist. Of course, this does put more pressure on the type checker to resolve the overload, but the type checker always gives us either a right or a wrong answer, whereas the optimizer can give us right, wrong, or technically right but not what we wanted. (In fact, clang doesn't eliminate the memory ordering switch for `std::atomic` at `-O0`: Compiler Explorer.)
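To illustrate why the overload approach removes the runtime switch entirely, here is a self-contained toy version of the mechanics (deliberately not the real stdlib API): each ordering is a distinct type, so the compiler selects the overload statically at the call site.

```swift
// Toy model of the overload-per-ordering design; these are NOT the real
// stdlib types, just a minimal demonstration of the resolution mechanics.
enum ToyOrdering {
  struct Relaxed {
    static var relaxed: Relaxed { Relaxed() }
  }
  struct SequentiallyConsistent {
    static var sequentiallyConsistent: SequentiallyConsistent {
      SequentiallyConsistent()
    }
  }
}

struct ToyAtomic {
  // One overload per ordering: no switch over orderings exists anywhere.
  func load(ordering: ToyOrdering.Relaxed) -> String { "relaxed load" }
  func load(ordering: ToyOrdering.SequentiallyConsistent) -> String {
    "seq_cst load"
  }
}

let a = ToyAtomic()
// `.relaxed` infers ToyOrdering.Relaxed, so overload resolution picks the
// relaxed overload at compile time.
print(a.load(ordering: .relaxed))                  // prints "relaxed load"
print(a.load(ordering: .sequentiallyConsistent))   // prints "seq_cst load"
```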
Considering this, I'm going to update the proposal to support this design to align with what's currently implemented.
Our intention with the separate module was that using atomics should be an explicit choice, not something that can occur out of the blue when reading some source. You're right that these APIs being in the stdlib proper means the stdlib doesn't need to duplicate anything for its own usage (but implementation details of the stdlib don't really affect how users interact with the language, nor should they drive how we design APIs). Atomics have the potential for misuse like any complex or hard-to-understand stdlib API, and they can still lead to data races if used incorrectly, so if they were in their own module you'd have made an explicit choice to import it, giving readers some indication that the file uses synchronization primitives.
All of that said though, I think a separate module is not really what we want either. If the language had support for submodules, we could have atomics in the stdlib proper so there's no duplication AND still require an explicit import like `import Swift.Synchronization`, giving us the best of both worlds. Considering this, I think you're right: it would be best if they were in the stdlib instead of a separate module, which would at least give us the flexibility to make a source-breaking change in another major version of the language if we get submodules.
But do actual atomics really encourage any worse behaviour than what people are probably already doing by just assuming things like `Int64` are atomic?
I imagine that most Swift programmers today don't even know what an atomic is. At best they're familiar with locks and GCD queues or (increasingly) actors. There's got to be fewer and fewer people left that still even know about atomics, right?
I suppose it's possible that, if they're imported by default, Xcode et al will surface them in auto-complete suggestions, and people might learn of them through that. Would that actually lead to more abuse, though?
I also think it'd be odd (echoing what @Karl said earlier) if SIMD types are "in" by default but atomics aren't, given the latter are surely more widely used?
Ultimately I don't care much at all where they land in this respect - just that we get them! - but I do think it'd be odd to go out of one's way to put them into a separate module if there's not really a clear need to do so.
Ok, I've updated the pitch to remove the `Synchronization` module in favor of just putting these in the standard library, along with the memory ordering change I mentioned here: Atomics - #35 by Alejandro

Link to the proposal: Low-Level Atomic Operations · GitHub
(I'll update the implementation shortly)
i think including these types in the standard library is perfectly sensible and this change is an improvement to the proposal.
Will this stdlib-targeted revision include the performance annotations (`@noLocks` and `@noAllocations`), even if still the underscored versions?
The current Swift Package couldn't include them because they're still underscored and using them would require unsafe flags in the package manifest.
But I'm assuming that wouldn't be an issue here. That way projects with the freedom to use unsafe flags could begin to make use of them.