Swift pointers to value types

I am trying to understand Swift pointers. I'm going to describe my needs, describe my understanding of Swift, and ask questions about how to solve my problem:

I'm making a regressor in Swift. Various parts of my code have their own data model: some structs, some classes. I would like my regressor to work everywhere and be agnostic to the data model. I also don't want to insert my regressor into the middle of the target function's input->method->output flow, disrupting the local code. I want to probe arbitrary variables and mutate them during regression. To keep the code agnostic, I want to pass variables into the regressor as UnsafeMutablePointer(s), which can then mutate the arbitrary probed variables. A trivial example of the API I want would look like this:

struct MutableData {
    var variableA: Double
    var variableB: Double
}

Regression.add(Function: SomeFunction(MutableData))
Regression.add(Pointer: &MutableData.variableA)
Regression.add(Pointer: &MutableData.variableB)
Regression.fit()

It appears to me that Swift will not give me a pointer to a struct element that is guaranteed valid outside of a withUnsafeMutablePointer closure. My assumption is that Swift cannot guarantee the lifetime of value types because instances are deallocated aggressively. Is that true?

  1. Then the first question is: can one "bind" a pointer to a value type to mark it as in-use?
  2. Or can we extend the lifetime of a value type until we are done with our pointer?
  3. Or, if not, can we introspect the memory of the pointee and tell whether the memory metadata is still the same?
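For reference, the closure-scoped pattern that does work looks like this (a trivial sketch):

```swift
struct MutableData {
    var variableA: Double
    var variableB: Double
}

var data = MutableData(variableA: 1.0, variableB: 2.0)

// The pointer is only guaranteed valid inside this closure;
// letting it escape is undefined behavior.
withUnsafeMutablePointer(to: &data.variableA) { pointer in
    pointer.pointee += 10.0
}
// data.variableA is now 11.0
```

What I want is for the pointer to remain usable after the closure returns.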

Hi,

First of all, in Swift pointers are a low-level construct not meant to be used in day-to-day programming. They are usually wrapped in types that can be safely used, not exposing the pointer to the user of that type.

There are multiple reasons why a pointer is not guaranteed to be valid after the expression that obtained it. The value instance could have been deleted, as you mentioned, but it could also, for example, have been moved to a CPU register, which means it doesn't have an address anymore.

I don't know the details of your project, but it sounds like you need reference types. Another thing you might want to have a look at is keypaths:
https://www.hackingwithswift.com/example-code/language/what-are-keypaths
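For example (a minimal sketch using the MutableData struct from the question), a key path lets you read and write a property without ever forming a pointer:

```swift
struct MutableData {
    var variableA: Double
    var variableB: Double
}

var data = MutableData(variableA: 1.0, variableB: 2.0)

// \MutableData.variableA is a WritableKeyPath<MutableData, Double>
let path = \MutableData.variableA
data[keyPath: path] = 42.0
// data.variableA is now 42.0
```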

To answer your questions:

  1. No, that's not possible.

  2. You already mentioned withUnsafeMutablePointer, which is meant to be used for that. As an alternative, you could allocate the memory for your value types yourself and deallocate it when you're done, but that's not what you want.

  3. I don't think that's possible, and it sounds like a dangerous thing to do. Also, if it's not the same, then what?


The withUnsafe*Pointer APIs are the only ways to get a pointer to a variable. If you want to do anything more involved than that, you'll need to allocate the memory yourself. That's probably not a good user-facing API, though. It seems to me you could provide a safer API that doesn't require pointers. You could have a key-path-based API instead, for instance, where you add the keys you want to regress and then apply the fit function to the data separately:

regression.add(function: f)
regression.add(key: \MutableData.varA)
regression.add(key: \MutableData.varB)
regression.fit(to: &mutableData)

which fits the value semantics model without pointer manipulation.
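A minimal sketch of what such a type could look like (all names are illustrative, and the "fit" is a dummy update rather than a real minimizer):

```swift
struct MutableData {
    var varA: Double
    var varB: Double
}

struct Regression<Model> {
    private var function: ((Model) -> Double)?
    private var keys: [WritableKeyPath<Model, Double>] = []

    mutating func add(function: @escaping (Model) -> Double) {
        self.function = function
    }

    mutating func add(key: WritableKeyPath<Model, Double>) {
        keys.append(key)
    }

    // Dummy update: a real implementation would iterate here,
    // minimizing `function` over the registered parameters.
    mutating func fit(to model: inout Model) {
        for key in keys {
            model[keyPath: key] += 1.0
        }
    }
}

var regression = Regression<MutableData>()
regression.add(function: { $0.varA + $0.varB })
regression.add(key: \MutableData.varA)
regression.add(key: \MutableData.varB)

var data = MutableData(varA: 0.0, varB: 0.0)
regression.fit(to: &data)
// data.varA and data.varB are both 1.0
```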


Hi Joe,

Thanks, nice use of key paths.
Question: on the face of it, it isn't clear that the final inout "fit(to: &mutableData)" would still be referencing the same original "MutableData" that the key paths would access, and not just a copy of the struct.

Orobio,

I understand your concern. However, pointers are all we systems programmers have had for decades. They weren't awesome, but they worked. If Swift can provide full-featured, high-level, and safe constructs with systems-level guarantees, that is awesome. Where we hit edge cases, we need to at least be able to implement things ourselves using pointers. If we can get higher-level smart pointers, even better.

My concern is that value types work 95% of the time, and we want those guarantees. But there comes an edge case for nearly all value types in practical code... one that makes value types, well, impractical. Which could turn everything into a reference type over time as you refactor.

What we need to make sure of is that value types have good enough ergonomics to fulfill their promise. We may still be learning how to architect with idiomatic Swift with regard to value types. That said, our team is highly experienced in many languages and two years into Swift, and we are still hitting walls on relatively trivial problems.

Part of the problem, I think, is that there is not enough documentation or examples showing how to use structs in the real world. Structs at first glance offer very conventional ergonomics, but you quickly hit a wall using them without drastically changing all your APIs into quirky, harder-to-follow closure APIs, and even then you may not be able to do it. It also means the users of an API must work very differently with it depending on whether it is class- or struct-based. That isn't ideal, and it makes developer uptake slower.

My example above is trying to have a universal API for either classes or structs with relatively straightforward ergonomics. I'd think this should be a general language goal. I'd be curious: before key paths in Swift 4, how would you have solved this?

The keypaths don't refer to any specific value; they refer abstractly to the property relative to a type. You'd be providing the value when you invoke the fit(to:) method.
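For instance, the same key path can be applied to any number of distinct values:

```swift
struct MutableData {
    var varA: Double
}

// `key` names the property relative to the type, not any particular value.
let key = \MutableData.varA

let first = MutableData(varA: 1.0)
let second = MutableData(varA: 2.0)

// Applying the same key path to different values reads each one independently.
let a = first[keyPath: key]   // 1.0
let b = second[keyPath: key]  // 2.0
```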

what you need to do is something like this

regression.add(f: function)

// cannot use let; no guarantee you won’t cast the pointer to a mutable pointer
var mutableData: MutableData = .init(varA: ..., varB: ...)
withUnsafePointer(to: &mutableData.varA)
{
    (a: UnsafePointer<Double>) in

    regression.add(a)

    withUnsafePointer(to: &mutableData.varB)
    {
        (b: UnsafePointer<Double>) in

        regression.add(b)
        regression.fit()
    }
}

it looks ugly but it’s really the “swift way” and in /theory/ it will run more efficiently than anything you could write in C or C++ (assuming the Swift compiler were perfect which it obviously isn’t) since Swift by design makes no guarantees about anything unless you explicitly ask for them (hence the 2 layers of lifetime barriers).

My statement probably wasn't correct. I was just thinking about how in your example you wanted a valid reference (pointer) to a value. Using reference types everywhere just for that doesn't seem to be a good idea, so I think keypaths make the most sense here.

It's hard to say without knowing the tradeoffs you're willing to make. One thing I'm thinking is that perhaps the nested withUnsafePointer solution shown by @taylorswift could be rewritten in a functional programming style to make it less ugly. I don't know what problems you'll run into, though. I think we should just be grateful that we have key paths now :wink:

i'm not sure why they need references to two doubles in the first place. why not just store the Doubles inside the Regression structure and modify them in place?

Fair question. The goal is to have a generalized regressor.

The architectural goal is that a regressor shouldn't get in the middle of the relationship between an arbitrary fitness function and its arbitrarily complex data model. It's none of the regressor's business. The regressor should only be involved in being given an n-dimensional list of data-model probes (to free parameters) and evaluating the outputs.

That requires a process of adding probes to the value type data model at one time, and regressing it at a later time.

One option would be to create a class wrapper over a type to make it a reference:

class Ref<T> {
    
    var value: T

    init(_ value: T) {
        self.value = value
    }

}

This allows your api to explicitly state that it needs a reference:

struct MutableData {
    var variableA: Ref<Double>
    var variableB: Ref<Double>
}

Regression.add(Pointer: MutableData.variableA)

This way you may assume that anything that is not a Ref is a value type.
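One caveat worth noting with this approach: because Ref is a class, copying the struct copies the reference, so all copies share the same underlying storage. A quick sketch:

```swift
final class Ref<T> {
    var value: T
    init(_ value: T) { self.value = value }
}

struct MutableData {
    var variableA: Ref<Double>
}

let original = MutableData(variableA: Ref(1.0))
let copy = original            // the struct is copied...
copy.variableA.value = 99.0    // ...but the Ref inside is shared

// original.variableA.value is now 99.0 as well
```

Depending on your regressor, that sharing may be exactly what you want, or a subtle source of bugs.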

Joe,

I really appreciate the insights here. But I am trying to understand some ambiguity (to me at least) of key paths. Key paths would seem to shine in this type of use case, where you have an Any object and want to specify a path to some value in the object.
If I have:

func api(object: inout Any, key: AnyKeyPath)
{
    doSomething(object[keyPath: key])
}

I can get that to work (sometimes).

The first question: normally 'Any' isn't resolved at compile time, but with the key path it seems the compiler has everything it needs to resolve it statically. So: does it? Obviously generics could be used instead, but the syntax gets messier.

The second question is related to the "sometimes" part. I'm trying to understand the apparent inconsistency in compiler errors when using inout with "Any". This is what I see:

func mutate(data: inout Any) {
        // do something...
}
struct Wrapper {
    mutating func mutateCaller(data: Any) {
        mutate(data: data)
    }
}
var object = 10
mutate(data: &object)     // ERROR! when used directly "cannot pass immutable..."
var wrapper = Wrapper()
wrapper.mutateCaller(data: object)  // Just fine when object is passed through a struct 

I'm not sure what is different here.

You can't mutate an Int value as an inout Any argument, since that would mean the caller could change the dynamic type of the argument to something that isn't an Int. You may want to make these functions generic to be able to match the types up at compile time. Instead of:

func api(object: inout Any, key: AnyKeyPath)

you could write:

func api<Object, Property>(object: inout Object, key: KeyPath<Object, Property>)

And instead of:

func mutate(data: inout Any)

You could write:

func mutate<T>(data: inout T) 

var object = 10
mutate(data: &object)
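One more detail: a plain KeyPath only permits reads; if the api function needs to write through the key path, it should take a WritableKeyPath. A minimal sketch (the newValue parameter is illustrative):

```swift
struct MutableData {
    var varA: Double
}

// Writing through a key path requires WritableKeyPath;
// a plain KeyPath only permits reads.
func api<Object, Property>(object: inout Object,
                           key: WritableKeyPath<Object, Property>,
                           newValue: Property) {
    object[keyPath: key] = newValue
}

var data = MutableData(varA: 1.0)
api(object: &data, key: \MutableData.varA, newValue: 5.0)
// data.varA is now 5.0
```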

Thanks for clarification...

I just noticed Chris's "@parameter" in Swift for TensorFlow, which appears at first glance (documentation is sparse) to be what we've been trying to build as a generalized probe mechanism for optimizers. I need to dig into it to better understand what they are doing there.

Where do you see a @parameter attribute? It's news to me (unless it was part of the autodiff prototype, in which case it is super subject to change).

-Chris

Ha. Thanks for the input Chris; sounds like it's just my misunderstanding... I haven't been able to play with Swift for TensorFlow yet (I'm on Xcode 9.4/10, waiting for compatibility).

Here are the code examples I found.

From Parameterized Protocol:

  public protocol Parameterized
  The type representing all parameters, synthesized from stored properties marked with @parameter

From the Learnable protocol (it appears to synthesize a getter/setter pair for each marked variable), an example:

  public struct Perceptron : Learnable {
  @parameter var w: Tensor2D<Float>
  @parameter var b: Tensor1D<Float>

  // The synthesized `Parameters` struct is:
  // public struct Parameters : Differentiable {
  //     public var w: Tensor2D<Float>
  //     public var b: Tensor1D<Float>
  // }
  //
  // public var parameters: Parameters {
  //     get {
  //         return Parameters(w: w, b: b)
  //     }
  //     set {
  //         w = newValue.w
  //         b = newValue.b
  //     }
  // }
  }

Hi @Troy_Harvey, what you saw in the code example was part of the "high level deep learning API prototype" that got removed (see announcement). @parameter is not related to autodiff, but it is a proposed solution to the "aggregate parameter update" problem in designing ML optimizers in Swift for TensorFlow. It is not directly related to the "pointer to value type" topic.

That said, I hope the following will be possible someday, with inout x as a valid pattern:

struct Neuron {
  var w: Float
  var b: Float

  var parameters: [Float] {
    get { return [w, b] }
    set { w = newValue[0]; b = newValue[1] }
  }
}

var neuron: Neuron = ...
let gradients: [Float] = ...
for (inout theta, g) in zip(neuron.parameters, gradients) {
  theta -= g * learningRate
}
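Until then, a version that compiles today can mutate through indices and write back through the computed property (a sketch; note the setter indexes into the array):

```swift
struct Neuron {
    var w: Float
    var b: Float

    var parameters: [Float] {
        get { return [w, b] }
        set { w = newValue[0]; b = newValue[1] }
    }
}

var neuron = Neuron(w: 1.0, b: 2.0)
let gradients: [Float] = [0.5, 0.5]
let learningRate: Float = 1.0

// Read the parameters, update them in place, write them back.
var params = neuron.parameters
for i in params.indices {
    params[i] -= gradients[i] * learningRate
}
neuron.parameters = params
// neuron.w == 0.5, neuron.b == 1.5
```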

Thanks Richard. It is related in that my original question in this thread is based on our effort to build generalized code probes for the same purpose: optimizers and regressors. We're looking at the best means for ML metaprogramming and optimization of Swift algorithms.

We started by building a getter/setter class (like the one autogenerated by @parameter). It felt convoluted and verbose. We're currently looking at key paths, or breaking our original plan of not getting the optimizer in the middle of the relationship between a data object and its consuming method.

We saw your optimizer APIs and thought maybe we should hop on what you're doing rather than build our own thing. But since it's been removed, that sounds a ways off.

In traditional metaprogramming ML frameworks, optimizer APIs have never been a hard problem, because an optimizer builder always runs a host-language loop to build graph components for each parameter update at runtime. However, in Swift for TensorFlow there is no way to build generalized ML optimizers today, because the TensorFlow extension is a "non-metaprogramming ML framework". Beyond synthesizing a computed property getter/setter and supporting the inout pattern, we also need the compiler to be able to reliably unroll loops like `for inout theta in parameters`. Here are some of the problems:

  1. If we replace floats with tensors in the snippet I gave above, we'd expect Graph Program Extraction to turn each parameter update statement into TensorFlow graph operators without generating any send/receive communication with the host Swift program. This would require the compiler to unroll the loop before the partitioning phase of Graph Program Extraction. The foundation for that is compile time evaluable constant expressions.

  2. Not all parameter tensors have the same element type. So if we had the computed property synthesis via @parameter:

    struct Model : Parameterized {
      @parameter var v1: Tensor<Float32>
      @parameter var v2: Tensor<BFloat16>
    }
    // ...
    for (inout theta, g) in zip(model.parameters, gradients) {
      theta -= g * learningRate
    }
    

    We would expect the type of each parameter to be a protocol existential type, generalized over math and other tensor operations. This alone requires generalized existentials. To support existentials, we'll also need to extend the constant expression model further.

Therefore, high-level ML APIs, including optimizers, are non-trivial in Swift for TensorFlow. We decided to remove the optimizer prototype to make clear that we have not solved the ML optimizer problem, and the project has to focus on the fundamental technology, including some building blocks mentioned above, to make generalized high-level APIs possible in the future.
