Exposing the Memory Locations of Class Instance Variables


I’ve thought we needed something in this space for a long, long time. Bravo.


Why not implement this safely-packaged version of the functionality ourselves? Or implement the with* version of it while documenting that it’s okay to escape the pointer as long as you keep the base alive? An unsafe-by-default API seems like it’s inviting mistakes.


Thanks for addressing this important problem! I don't think this is quite the right approach, though, because fundamentally the storage you want to allocate isn't a property from Swift's perspective. The exclusivity expectations are fundamentally at odds with concurrency primitives like locks. And in what little memory model Swift does have, it is not allowed to mix formal accesses with accesses through pointers; our existing escapes like withUnsafe*Pointer must be used mutually exclusively with the property that's "locked" by the with. This means that, if you need raw memory inline in a class instance, and you allocated that storage using a property, it would pretty much always be a bug to access that property directly. In other words, in:

final class Counter {
    private var _lock = os_unfair_lock_s()
}

It would never be correct to access self._lock, because it might copy, and it would assert exclusive access on the storage and interleave formal accesses with raw pointer accesses.
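To make the hazard concrete, here's a sketch (not from the pitch) of the anti-pattern being described. It uses a pthread mutex so it also compiles on non-Apple platforms, but the same reasoning applies to os_unfair_lock_s:

```swift
import Foundation

final class BrokenCounter {
    private var _mutex = pthread_mutex_t()  // lock stored as an ordinary ivar
    private var count = 0

    init() { pthread_mutex_init(&_mutex, nil) }

    func increment() -> Int {
        // HAZARD: `&_mutex` is a formal (exclusive) access to the property.
        // The compiler is allowed to materialize a temporary copy and pass a
        // pointer to *that*, so two threads can end up locking two different
        // copies of the mutex -- and the exclusivity assertion on the property
        // interleaves with the raw pointer access inside the C call.
        pthread_mutex_lock(&_mutex)
        defer { pthread_mutex_unlock(&_mutex) }
        count += 1
        return count
    }
}
```

The insidious part is that this appears to work fine under single-threaded testing.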

I think it would be better to expose raw storage inside a class instance as just that, raw storage, and make the pointer the primary interface to the storage, since that's what you want anyway. We could express this as something that looks like a property wrapper, maybe:

final class Counter {
    @RawStorage private var _lock: UnsafeMutablePointer<os_unfair_lock_s>
}

That makes it much harder to misuse, because there's no way to accidentally access the storage as a property, and also means we don't have to expose similarly brittle APIs for getting pointers to arbitrary class properties. We can also relax the requirements on backing storage allocated this way so that it isn't subject to interference by exclusivity semantics, making it more appropriate for building low-level primitives or interfacing with existing ones.


Thanks so much @lorentey, I'm very strongly +1 on this. And for me, often doing low-level, performance-focused programming, this is one of the most important proposals ever.

That's exactly the plan -- but I felt it was important to first introduce the primitive on which the safe(r) construct would be built. The @Anchoring property wrapper is going to be running into some pretty hard limits on what's reasonably achievable in this area on the library level; I expect (hope!) it will spur people to design something better.

This pitch highlights some lower-level issues that deserve serious consideration, and I don't want to distract attention from those by introducing them at the same time as a brand new, somewhat shaky abstraction.

If the followup abstraction is successful enough, we may well end up not needing to expose this API as public. (I think that would in fact be the outcome I prefer.)

I was wrestling with this very question for days. What put me over the edge is that I think it would be unwise to teach people that it's sometimes okay to escape pointers out of closure-scoped pointer APIs. I prefer to add one unsafe API instead of slightly weakening a general pattern.

We could in theory have the new MemoryLayout API return a "safe" pointer:

struct AnchoredPointer<Pointee> {
  private let _anchor: AnyObject
  private let _pointer: UnsafeMutablePointer<Pointee>

  init(_ pointer: UnsafeMutablePointer<Pointee>, in anchor: AnyObject) {
    self._anchor = anchor
    self._pointer = pointer
  }

  func withScopedAddress<Result>(
    _ body: (UnsafeMutablePointer<Pointee>) throws -> Result
  ) rethrows -> Result {
    try withExtendedLifetime(_anchor) {
      try body(_pointer)
    }
  }
}

extension MemoryLayout where T: AnyObject {
  static func anchoredPointer<Value>(
    to key: ReferenceWritableKeyPath<T, Value>,
    in root: T
  ) -> AnchoredPointer<Value> { ... }
}

But this doesn't seem like particularly useful API on its own. (It would need to be wrapped into constructs that give it the high-level operations that are actually useful.) I'm especially worried about the retain/release traffic around _anchor -- which would clearly be unacceptable for atomic operations. I have a hare-brained scheme to try eliminating this overhead in the upcoming @Anchoring pitch, but I don't think we can reasonably do that if the underlying primitive insists on returning a strong reference.

This is the sort of thing that it'd be nice to have explicit invocation of accessor coroutines for. If you were forced/strongly encouraged to access the property in a scoped access, using strawman syntax like this:

with lockAddr = self.anchoredProperty {
  // body runs during the coroutine access; lockAddr is valid here
}

where anchoredProperty would be implemented using a read coroutine, and with would force the body of the with statement to be run during the coroutine access, then that should naturally establish a lifetime relationship between self and the pointer provided by anchoredProperty without explicit retain/release traffic.

Without coroutines, it seems like a with*-style higher order function could still manage this, maybe exploiting coroutines under the surface.

SIL does have an instruction for introducing ad-hoc lifetime dependencies between values. Another possibility could be that we expose that to the standard library as a builtin, to increase the likelihood that naive uses of the storage pointer are bound to the lifetime of the containing object. The effectiveness of that approach would be limited by how far the optimizer inlines, though, so you'd still need withExtendedLifetime if you're returning the pointer up the callstack.
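To illustrate that last point, here's a minimal sketch (Buffer is a made-up stand-in for an object holding inline raw storage) of a caller manually extending the anchor's lifetime after the pointer has escaped up the callstack:

```swift
// A toy anchor object that owns manually allocated memory and frees it
// in deinit -- standing in for an object with inline raw storage.
final class Buffer {
    let base = UnsafeMutablePointer<UInt8>.allocate(capacity: 16)
    init() { base.initialize(repeating: 0, count: 16) }
    deinit {
        base.deinitialize(count: 16)
        base.deallocate()
    }
}

func fill(_ buffer: Buffer) {
    let p = buffer.base            // the pointer escapes the accessor's scope
    withExtendedLifetime(buffer) {
        // `buffer` -- and therefore the memory `p` points into -- is
        // guaranteed to stay alive for the duration of this block, even if
        // the optimizer would otherwise release it after its last formal use.
        p[0] = 42
    }
}
```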


@lorentey Thanks so much for putting this together. Very happy to see the development in this area and very much looking forward to the nice things we can build on top.


Agreed. However, the Law of Exclusivity, as introduced in SE-0176, doesn't care what form an access takes, or what sort of variable it touches -- overlapping mutating access is currently strictly verboten in Swift. Restricting such access to dynamic variables doesn't keep us out of jail at all -- they too are illegal under the Law.

Obviously, this cannot stand; we need to be able to deal with concurrently mutable shared state in Swift, even if we'd only use this to provide a higher-level concurrency model (such as actors) to Swift applications. In order to do this, we need to punch carefully shaped holes into the Law of Exclusivity; specifically, we need to seriously start talking about variables, memory locations, atomic operations and memory orderings.

Swift code is already using instance/global/static/captured/dynamic variables to represent mutable state shared across concurrent threads of execution -- it calls out to C & C++ code (such as Dispatch) to try escaping the Law, but this practice arguably still violates it. To wit, it implicitly assumes that C++'s memory orderings also apply to Swift code. (We need to explicitly state that this is indeed the case. The first step to doing that is that we need to admit that certain language constructs are mapped to specific memory locations, so that we can understand how to apply memory orderings to them. This pitch is really just an excuse to start talking about this.)

The holes we need to punch into the Law of Exclusivity will need to include allowances for overlapping calls to certain primitive atomic operations (i.e., we'll need to add a new "atomic" access to go with the current "read"/"assign"/"modify" set). I would find it weirdly asymmetric if this new access would be artificially restricted to dynamic variables, while the existing ones are defined on all sorts.

I may be missing something here, but as I understand it, overlapping read accesses are explicitly okay, no matter what form they take. So this code is perfectly fine:

class Foo { var value: Int = 42 }

let foo = Foo()

// thread A
print(foo.value)

// thread B
print(foo[keyPath: \.value])

// thread C
withUnsafePointer(to: foo.value) { print($0.pointee) }

(Even assuming withUnsafePointer would provide the actual address of the ivar.)

Objection, your honor! There are two places in particular where access through the property syntax is not just correct, but also highly desirable: during init and deinit.

These are guaranteed to be outside of any overlapping access, and in fact going through the usual exclusivity checks may provide an extra layer of protection against lifetime issues. (If nothing else, TSAN could use the exclusivity assertion as an input signal.)
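For instance, with a pthread mutex -- which, unlike os_unfair_lock, needs explicit setup and teardown -- init and deinit are precisely where direct property access is both necessary and safe (a sketch, not from the pitch):

```swift
import Foundation

final class LockedBox {
    private var _mutex = pthread_mutex_t()

    init() {
        // No other thread can observe `self` yet, so this formal access
        // cannot overlap with anything; the exclusivity check is pure upside.
        pthread_mutex_init(&_mutex, nil)
    }

    deinit {
        // Likewise, deinit is guaranteed to run strictly after every
        // other access to the instance has completed.
        pthread_mutex_destroy(&_mutex)
    }
}
```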

I did seriously consider this approach, but I found it raises more questions than it answers.

  • How would this storage get initialized? We don't want to lose the statically enforced guarantee that all ivars must get created before init returns.
  • How would this storage get deinitialized? Some synchronization constructs will definitely include strong references, and these should really just behave like regular strong references when it comes to deinitialization. (We'll introduce an atomically initializable lazy reference in the first batch, and I'm pretty sure we'll also build a full AtomicReference type when we add double-wide atomics.)
  • How would @RawStorage compose into other language features? How would it enable us to provide an ergonomic but lightweight UnsafeLock construct in the stdlib (or some concurrency module)?

The last point is essential -- we do not want to force developers to carefully manage initialization/deinitialization of every single synchronization construct, or to litter unsafe pointers throughout their code.

APIs like pthread_mutex_destroy will cause problems anyway, since property wrappers cannot currently hook into class deinitialization. However, there is a large swath of constructs that can be modeled with a general-purpose property wrapper built around unsafeAddress(of:in:).

Yes yes yes! If I understand correctly, my hare-brained scheme is going to try doing something very much like that.

class Foo {
  @Anchoring var counter: AtomicInt
}

let foo = Foo()

print("Current value: \(foo.counter.load())")

The @Anchoring property wrapper would yield (not return) the AtomicInt struct (which would effectively be a wrapper around AnchoredPointer), and this would hopefully be a constrained enough setup that we could add an optimization that gets rid of the retain/release in the common case where the yielded value doesn't escape the coroutine, and the operation doesn't call isKnownUniquelyReferenced(). (As long as everything is fully @inlinable and @frozen).

We'd still rely on the strong reference inside the AtomicInt struct (in its full retain/release glory) in case someone copies the value into, say, a local variable (which we cannot practically prevent in today's Swift).

let myQuestionableCopy = foo.counter

myQuestionableCopy.wrappingIncrement() // This needs to be safe

I'm working on formalizing the @Anchoring pitch as quickly as I can. (As always, writing these things is exquisite agony.)


The Law of Exclusivity only governs operations that fall under its formal model, like properties and variables. Things like raw memory accesses are mostly outside of the model, and don't rest on the semantics of any language-managed storage, which is why you can get away with putting locks in raw allocated memory, doing atomic operations on them, etc., and why things like withUnsafePointer are so restricted in how they allow managed things to be temporarily treated like raw memory. I think we should avoid involving the mechanics of regular stored properties in this mechanism as much as we possibly can, because it leads to a much more understandable semantic model—you just have the "usual" pitfalls of raw memory access, without an additional layer of higher-level semantic guarantees.

This is of a different nature to what you're proposing, though; the sequencing semantics we're getting by calling out to C happen around Swift formal accesses, and synchronize on entities that are not managed by Swift's semantics such as queues or locks stored on raw memory.

We try really hard not to saddle regular high-level code with unnecessary low-level semantics that are irrelevant to most people by default. We don't want to fall into the trap C and C++ have where they try to have their low-level cake and eat their high-level optimizations too with rules that are overly complex and satisfy nobody. By all means, objects are already effectively tied to a memory address, and it makes sense to be able to allocate unmanaged storage with a similarly-fixed address inline inside objects, but I think getting that sort of raw storage ought to be something opt-in on the storage and not something that potentially any property can be turned into.

When we get move-only types, atomic accesses should slot into the existing model naturally like they do in Rust, where atomic mutation operations apply to nonexclusively read-borrowed references.

Well-typed nonatomic reads are probably practically fine, but you would need more than that for anything where you'd want to use this feature. It's a bit tricky, though, because the compiler can generally copy its way out of situations where formal accesses might interfere with reads. We're more than likely going to copy foo.value to a temporary in this example so that we don't block off writes to foo.value during the withUnsafePointer block.

I think there are valid arguments in the other direction, that you want to be able to initialize/deinitialize the storage on your own terms in init/deinit without the usual restrictions on initializing properties. There could be other use cases for this feature where the object wants to allocate some raw storage that it leaves uninitialized in some conditions. Ideally we could obsolete at least some uses of ManagedBuffer for that purpose with this feature.

If you have a pointer to the storage, you can still use UnsafeMutablePointer's .initialize()/.destroy() methods to initialize and destroy the storage as if it were a regular property.
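(In current Swift those methods are spelled initialize(to:) and deinitialize(count:); destroy() was the pre-Swift-4 name.) A minimal round trip over manually allocated storage:

```swift
let storage = UnsafeMutablePointer<Int>.allocate(capacity: 1)
storage.initialize(to: 42)        // storage now holds a managed value
storage.pointee += 1
let result = storage.pointee
storage.deinitialize(count: 1)    // runs the destructor (trivial, for Int)
storage.deallocate()
```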

I don't think you can get there until we get move-only types. But it seems like you could associate RawStorage with a protocol for initializing types from pointers, like:

protocol PointerToStorage {
  associatedtype Storage

  init(storage: UnsafeMutablePointer<Storage>)
}

struct UnsafeLock: PointerToStorage {
  var storage: UnsafeMutablePointer<pthread_mutex_t>
}

It would also make sense to me to allow RawStorage inside structs, and have the same guarantees about the raw storage inside the struct when the struct itself is transitively allocated in RawStorage, which would allow you to compose other wrappers around it.


I argue that a dynamic variable created by allocating and initializing storage through an UnsafeMutablePointer isn't any less a variable than an ivar -- it must have just as well-defined semantics. The Law of Exclusivity must necessarily apply to it, like it does to all variables:

UnsafeMutablePointer.pointee is mutable memory. Array.subscript's addressor returns mutable memory. There is nothing truly raw about these -- it's true they have their own unique flow for creation/destruction, but so does every other form of variable.

(It's also true that exclusivity violations through unsafe pointers typically won't get diagnosed in regular production use. I don't see how this is a problem: these pointers are marked unsafe for a reason.)

There is a distinction to be made between variables that are backed with a stable memory location and variables that aren't. (E.g., the latter inherently cannot participate in any concurrent access.)

My point is that sadly this ship has already sailed. Swift encourages people to synchronize access on shared mutable state using C and C++ constructs, so Swift has de facto adopted the C++ concurrency model, including the scary bits about dependency-ordered-before this and inter-thread-happens-before that. We just neglected to specify exactly what this means -- which is a standing invitation to concurrency bugs.

How can os_unfair_locks or dispatch queues (or indeed, the existing internal atomic operations in the stdlib) be safe to use in Swift code without specifying how (say) a regular assignment to an everyday Swift closure capture variable interacts with a subsequent releasing store to an os_unfair_lock?

I don't think it's viable for us to keep being coy about how sharable variables (such as class ivars) work. By merely observing that Swift apps have been able to successfully use locks like os_unfair_lock to synchronize access to shared data stored in these language constructs, and knowing nothing whatsoever about the inner workings of the Swift compiler, we can already deduce that

  1. Shared variables must be associated with a memory location (because addressable memory is the only way to share mutable state across threads).
  2. All participating threads agree on the location of this memory (or it wouldn't behave like a shared variable at all).
  3. The last value assigned to a shared variable that precedes an unlock operation on the lock is guaranteed to get translated to a corresponding store to its known memory location, and that store is guaranteed to get executed before the unlock operation issues its releasing atomic store on the mutex variable. (Otherwise locks wouldn't work.)
  4. The first read of any shared variable that follows a lock operation (which we know issues an acquiring load on the mutex variable) must be translated into a regular load operation on the variable's memory location. (Otherwise locks wouldn't work.)

So class ivars, closure captures, global variables, etc. evidently do all have stable addresses, and their accesses must follow a sensible memory model that is compatible with the memory orderings defined by all those smart people in C++ land. Again, all this follows from the mere fact that locks imported from C APIs demonstrably work in Swift.
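The four observations above are exactly what makes a loop like the following correct in practice today (a sketch using Foundation's NSLock, written for Swift 5 language mode; any lock imported from C behaves the same way):

```swift
import Dispatch
import Foundation

final class Shared {
    let lock = NSLock()
    var value = 0          // an ordinary class ivar, shared across threads
}

let shared = Shared()
let group = DispatchGroup()
for _ in 0..<4 {
    DispatchQueue.global().async(group: group) {
        for _ in 0..<1_000 {
            shared.lock.lock()     // acquiring operation (observation 4)
            shared.value += 1      // plain load + store of the ivar's location
            shared.lock.unlock()   // releasing operation (observation 3)
        }
    }
}
group.wait()
// All 4,000 increments are visible afterwards: the ivar behaved as a
// stable, lock-protected memory location.
```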

I sincerely hope none of this is surprising or controversial!

(...is it?) :worried:

I wholeheartedly agree with this last part. Let's make the ivar thing opt-in, as long as we can agree on a nice way to implement it. (The API in this pitch is not critical to make public -- I have no objection to limiting its (direct) use to the stdlib, as long as we provide safer abstractions that provide the same benefits. We do badly need to access specific ivar locations within the stdlib.)

But I really really wouldn't want to treat synchronization constructs as unmanaged raw storage. These things aren't built out of special artisanal bits that are only meaningful to C++ code: they contain regular everyday types that we already model in the stdlib. They should be (and already are!) initialized/destroyed as regular Swift values. os_unfair_lock_s is imported as a boring struct holding an integer ivar; it's no more special than, say, NSRange is.

The only thing that makes these types special is the operations they expose for use between their init/deinit. Given that Swift is a "high-performance system programming language", my expectation is that I should be able to implement these operations directly in Swift, and if anything, this should be less difficult than it is in C++. Requiring people to mess around with raw pointers is definitely not the right direction for that.

I agree that non-copyable types would be the appropriate abstraction to represent these constructs. Unfortunately, we aren't currently modeling them in Swift. On the other hand, just as unfortunately, we're badly overdue for adding usable atomics, and I think it would be a mistake to delay it further.


One benefit of having this opt-in and explicit is that it would be possible to clearly define its "API resilience", rather than relying on the user of such a field to guess whether it will work and having to check every usage.

If this pitch is only a workaround for current Swift limitations, I'd prefer this API to remain private, so we can replace it with the proper construct later.

I have assumed for a couple years now that native atomics would only happen after move-only types arrived. @lorentey and @Joe_Groff appear to agree that move-only types would be the better model.

This raises the question: do we have an idea how far away move-only types are?

What are other uses for the functionality pitched here? I personally have only wanted/needed this for atomics.

Another question, especially to Karol and Joe: if this is "only" about a workaround and it were to go through as an API private to the standard library, would it prevent a public atomics implementation from landing in the standard library? If not, would such an implementation be hamstrung later on?

It's kind of disgusting, but we can technically already do it with this, right?

{ lockAddr in

Edit: Just realized that only works for _modify, not _read

Like our other primitives, the transition from unmanaged to managed memory in these cases is temporary and well-scoped. pointee and Array.subscript provide managed mutable storage backed by unmanaged memory only for the duration of the access, essentially the inverse of withUnsafePointer. They go back to being unmanaged memory when you aren't accessing them.

I suppose that means you could say something similar about object instance memory, that the instance storage's ground state is unmanaged memory, and that property accessors temporarily present a managed, exclusivity-governed interface to part of the instance memory. That at least brings the problem of specifying what happens when you interleave formal property accesses with other raw forms of access down to being the same problem with pointee (which is similarly ill-specified; I wouldn't recommend people interleave pointee accesses with os_unfair_lock_* API calls on a pointer either!)

I think we're talking past each other here. I agree with everything you say, but I see what you're talking about as an orthogonal issue to what's being discussed here. I agree, you can use C and C++ constructs to establish semantic ordering constraints that aren't yet precisely defined in Swift, and we need to more precisely define what that means, as well as unlock the ability to express these constraints in Swift. However, when you're using C or C++ to operate on those primitives, you're only using C/ObjC/C++ calls to interact with the storage for their underlying concurrency primitives, so there should not be overlapping formal Swift accesses and C operations outside of Swift's current purview on the same object. That's what I'm more concerned about.

I agree we should expose the ability to implement concurrency primitives directly in Swift, but without move-only types, I think we can only at best expose them as unsafe constructs over raw storage. The best "safe" API you could build over one would be a class wrapper, because objects are our only means for unique non-copyable data with destructors, and I think that's true whether we go with the "address of ivar" approach or the "raw storage" approach. It seems to me like anyone who wants to avoid that indirection is going to have to fall back to unsafe primitives either way, until move-only types give us a composable model for managing these things in inline storage.

Which is not to say this is a total dead end, though—it seems to me like either mechanism would still have a place in moveonly structs as a primitive implementation mechanism, and they definitely make the situation better in the interim, since it at least becomes possible to inline raw memory into objects. I just don't think we can make it safe to do so yet.


As I mentioned in my reply above, I think a mechanism similar to what Karoy is proposing would be similar to what we would want for move-only types, so it's not "only" a workaround. I would be concerned about taking up the "good names" for atomics, locks, and other standard library concurrency primitives before we get move-only types, though.


Sure. This should work for a _read as well, though it's hard to force the compiler not to copy and _read a temporary instead of the original memory today.

I had an impression that this pitch was very much a workaround; thanks for clarifying.

If we need to wait three more years, it would suck not to have atomics.
If it lasts only one more year, the current workarounds aren't so bad.
In any case, I would vote to reserve the "good names" for the feature in its (intended) permanent form.


Well, we should be able to expose atomics in one form or another as operations on UnsafePointer if nothing else. The biggest immediate issue that I see in Swift today is that there isn't a way to allocate pointable storage as part of class instances, which would be what Karoy's proposal addresses. Maybe we can still make an incrementally friendlier interface too.


I think we need to introduce (at least rudimentary) support for concurrency primitives before Swift gains support for move-only types.

Indeed -- atomics and other synchronization constructs are the only use cases we care about.

No and no! However, any names we introduce now for the interim types won't be (easily) available to the eventual "proper" implementations of these concepts. This is a good argument for making these interim types not too fancy.

A set of explicitly unsafe UnsafeAtomicFoo types that are boring no-nonsense wrappers around unsafe pointers is still vastly preferable to not having atomics at all -- and it would leave the AtomicFoo names available for properly move-only atomics later.

It seems we're in full agreement here, if from slightly different viewpoints. Mixing "regular" and "atomic" access is a big no-no, independent of what construct we use to implement storage. One of my base expectations is that any interim concurrency solution still needs to fully protect against such mixed access.

Custom destructors would definitely be nice to have for things like POSIX mutexes, but I don't mind waiting until move-only types to get them. The "address of ivar" approach acts as a compromise between raw memory and custom destructors. While it would not let us automatically call pthread_mutex_destroy on destruction, at least it would still give us the default nontrivial destructor behavior for things such as atomic reference types.

(It also allows debuggers and heap analysis tools to (easily) understand that these things are always initialized (like ivars), and to figure out through the regular reflection facilities if some of them hold strong references.)

This is very much possible. My problem is that this doesn't satisfy my base expectation that any API we add needs to protect against mixed use of atomic and non-atomic operations -- mixing atomic operations along with things like pointee is very much the opposite of that:

let pointer: UnsafeMutablePointer<Int> = ...
// We should only be able to spell one of these, but not both:
let value1 = pointer.pointee
let value2 = pointer.atomicLoad(ordering: .relaxed)

This seems more appropriate as an internal implementation detail than as any actual public API.

A viable alternative would be to provide atomic operations on a trivial wrapper type around a pointer value; I had implemented this previously, and while it is unsafe, I still find it vastly preferable to the "method soup" approach. It does also have the nice property that the nicer AtomicInt etc. names would remain available for the eventual move-only approach.

let pointer: UnsafeMutablePointer<Int> = ...
let value1 = pointer.pointee

let atomicInt = UnsafeAtomicInt(pointer)
let value2 = atomicInt.load(ordering: .relaxed)

If we're happy with this approach, we should find a way to let us use these types to declare properties in class types. Joe's @RawStorage and PointerToStorage constructs (names to be bikeshed) would let us do that:

struct UnsafeAtomicInt: PointerToStorage {
  typealias Storage = Int
  let _storage: UnsafeMutablePointer<Storage>
  init(storage: UnsafeMutablePointer<Storage>) {
    _storage = storage
  }
  func load(ordering: AtomicLoadOrdering) -> Storage { ... }
  func store(_ value: Storage, ordering: AtomicStoreOrdering) { ... }
}

class Foo {
  @RawStorage var counter: UnsafeAtomicInt
}

let foo = Foo()
foo.counter.wrappingIncrement(ordering: .relaxed)
print(foo.counter.load(ordering: .relaxed))

Question: how would we set an initial value for counter above? The obvious answer would be to require that @RawStorage properties get initialized with values of their Storage types.

@RawStorage var a: UnsafeAtomicInt = 42
@RawStorage var b: UnsafeAtomicInt

init() {
  self.b = 23
}

Is this weird?

Double-word atomics introduce more complications -- they sometimes distinguish between the `Storage` type and the logical `Value` that is returned by `load()`. (For example, we expect a fully general strong `AtomicReference` would use an `(T?, Int)` tuple for storage, but it would preferably load/store `T?` values.) This is mostly irrelevant at this point, except it complicates initialization, too -- ideally we'd want to use the `Value` type for initializing these rather than `Storage`. But we can live without all this: the `Value`/`Storage` distinction can definitely wait until we have move-only types.