It sounds like the proposal is mainly interested in cases where reads and/or writes have side-effects, or where there's some other reason why the exact number of reads & writes is significant? As opposed to shared memory (e.g. multi-threading), for which we already have atomics and similar, or more general instruction ordering, for which we could have separate memory barrier methods?
The size of volatile accesses
What about the other pointer types, e.g. UnsafeMutableBufferPointer?
Is that accurate? Device memory (in ARM parlance) is volatile but isn't necessarily registers (e.g. memory-mapped but non-coherent SRAM or DRAM). There can still be a need for the semantics that C-style volatile provides, most notably ensuring reads & writes happen exactly as written, with no repetition (e.g. reloading a spilled register) and no elision. Repetition might "merely" be a performance concern, but I think it's important even as such. And elision is of course a functional concern.
Granted, you can maybe build the necessary abstractions atop primitive word-sized reads, but that might not give you the performance you need. e.g. what about SIMD loads & stores?
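To make that concrete, here's a minimal sketch. `volatileLoad(from:)` is a hypothetical stand-in for whatever word-sized primitive might end up being provided, not a real API:

```swift
// Hypothetical stand-in for a word-sized volatile load primitive; not a real API.
func volatileLoad(from pointer: UnsafeMutablePointer<UInt32>) -> UInt32 {
    pointer.pointee // placeholder body
}

// Reading 16 bytes out of a device FIFO as four separate 32-bit loads.
// If the hardware really wants a single 128-bit (SIMD) transaction, there's
// no way to express that on top of a word-sized primitive.
func readFIFOBlock(from base: UnsafeMutablePointer<UInt32>) -> SIMD4<UInt32> {
    SIMD4(volatileLoad(from: base),
          volatileLoad(from: base + 1),
          volatileLoad(from: base + 2),
          volatileLoad(from: base + 3))
}
```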
Type vs variable vs access modifiers
I agree that C has a lot of flaws in its implementation, but that doesn't prove to me that the type modifier approach is inherently wrong. I can see how tying volatility only to the pointer types could help prevent some kinds of misuse, but it's also pretty limiting: it's intuitive to me to declare a type that represents a section of address space and mark some or all of it as volatile. It's also much nicer to have a few high-level pointers to composite types than myriad pointers to individual, piecemeal values.
Swift has a more powerful type system than C; perhaps it could do a better job with a volatile type modifier (or equivalent). e.g. maybe it could simply refuse to do problematic things like coalescing volatile accesses (re. your UInt8 + UInt32 struct example) or otherwise changing the load/store width.
I'm not saying the pitch is wrong in this respect, I'm just saying I don't think it proves it's right, yet. It'd be helpful to have some elaboration as to why the type & variable modifier approaches are intrinsically wrong, not merely implemented wrong in some existing languages.
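For concreteness, the kind of declaration I have in mind looks roughly like this (today's Swift can't actually express per-member volatility, so the intended semantics live only in the comments):

```swift
// Sketch only: one type describes a block of device address space, and only
// the members that need it would be marked volatile.
struct UARTRegisters {
    var data: UInt32    // would be volatile: reading pops the RX FIFO, so every access matters
    var status: UInt32  // would be volatile: hardware updates this behind the program's back
    var scratch: UInt32 // plain RAM-backed scratch word; ordinary accesses are fine
}
```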
Relationship to atomicity
As @Joe_Groff mentioned, volatility and atomicity are quite often required together. So I find Joe's suggestions appealing in that respect.
But, volatile is not necessarily about shared access, in the way atomics are. Often it's just to prevent the compiler from unwittingly creating problems even in simple serial code. So I do think the concept needs to be applicable to more than just AtomicRepresentables (unless I misunderstand what types can actually, plausibly conform to AtomicRepresentable?).
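A minimal sketch of that serial-code case, using the same kind of hypothetical `volatileLoad(from:)` stand-in as above (not a real API):

```swift
// Hypothetical stand-in for a word-sized volatile load; not a real API.
func volatileLoad(from p: UnsafeMutablePointer<UInt32>) -> UInt32 { p.pointee }

// Single-threaded polling of a device status register. No other thread is
// involved, but without volatile semantics the optimizer is entitled to hoist
// the load out of the loop and spin on a stale value forever.
func waitUntilReady(_ status: UnsafeMutablePointer<UInt32>) {
    let readyBit: UInt32 = 1
    while volatileLoad(from: status) & readyBit == 0 {
        // spin; the device sets the bit when the operation completes
    }
}
```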
Conceptually, I don't think all volatile accesses have to be atomic. It could be perfectly fine to split a read up into smaller pieces, or even to reorder those smaller loads. The point may simply be to ensure the overall read actually happens when & where it's supposed to.
Conversely, not all atomic accesses are volatile. It's perfectly valid to need only a consistent view of a given set of bits, without caring how often or precisely when those bits are read from memory. (If I understand correctly, Joe is not proposing that constraint - I'm just noting it.)
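For example (again with a hypothetical stand-in for the load primitive):

```swift
// Same hypothetical volatileLoad(from:) stand-in as above; not a real API.
func volatileLoad(from p: UnsafeMutablePointer<UInt32>) -> UInt32 { p.pointee }

// A 64-bit device ID exposed as two 32-bit registers. The value is stable, so
// a "torn" (non-atomic) composite read is harmless; what matters is that both
// loads actually reach the device, exactly once each.
func readDeviceID(low: UnsafeMutablePointer<UInt32>,
                  high: UnsafeMutablePointer<UInt32>) -> UInt64 {
    let lo = volatileLoad(from: low)   // two independent, non-atomic loads
    let hi = volatileLoad(from: high)  // (they could even be reordered)
    return (UInt64(hi) << 32) | UInt64(lo)
}
```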
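For instance, something like this statistics counter (using the standard library's Synchronization module) only needs untorn, consistent accesses; nothing about it requires guarantees on when or how many times memory is actually touched:

```swift
import Synchronization  // Swift 6's standard-library atomics

// A shared statistics counter: accesses must be atomic (untorn, consistent),
// but nothing here cares exactly when or how often memory is really touched,
// so the optimizer is conceptually free to combine or reschedule the accesses.
final class RequestStats: Sendable {
    private let count = Atomic<Int>(0)

    func recordRequest() {
        count.wrappingAdd(1, ordering: .relaxed)
    }

    func currentCount() -> Int {
        count.load(ordering: .relaxed)
    }
}
```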
What is this 'volatile' thing, anyway?
This might be a chance to break from the name 'volatile', given it has very inconsistent meanings across popular languages. A new, distinct name might also help streamline the proposal & review by preventing people from bringing assumptions and preconceptions about the proposal's intent, based on their personal experience with other languages.
Depending on who you ask, 'volatile' is used to:
- Ensure reads & writes are not reordered (and with respect to what is a further source of disagreement).
- Ensure reads & writes are not elided or duplicated.
- Ensure reads & writes are atomic (as in not torn).
- Ensure reads & writes are not cached (not merely conceptually, as implied by some of the other points, but explicitly bypassing processor caches a.k.a. non-temporal loads & stores).
- Communicate that reads and/or writes might have unspecified side-effects.
- Prevent speculation & prefetching.
- Probably other stuff I'm not thinking of right now, or not even aware of.
Whether it actually does some or all of those things in any given language and toolchain is yet another fun question. Suffice to say, it's a mess. Maybe Swift can avoid that confusion entirely.