[Pitch #3] SE-0293: Extend Property Wrappers to Function and Closure Parameters

Does the proposal support or extend casting syntax for disambiguation?

If so, then the above example I made would actually have a workaround solution:

let value = 42
foo($bar: value as W<String, Int>, b: Int.self)

// or do it manually
foo($bar: W<String, Int>(projectedValue: value), b: Int.self)

The value that automatically gets wrapped would require an explicit wrapper type for disambiguation.

That would not be possible. The type of $bar is B; the storage type isn't involved at all in foo($bar:). To solve that disambiguation, I think we need to be able to explicitly set foo's generic parameter types at the call site:

foo<String, Int>($bar: 42, b: Int.self)

Well I disagree. We're talking about compiler magic and code synthesis here.

The proposal has this example:

let history: History<Int> = ... 
log(value: 10) 
log($value: history)
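
For context, here is a rough, self-contained sketch of the declarations these calls presumably rely on. The names Traceable, History, and log come from the proposal's example, but the member details below are my own stand-ins rather than the proposal's exact code:

// Minimal stand-in for the proposal's History type (illustrative only).
struct History<Value> {
    private(set) var values: [Value]
    var currentValue: Value { values.last! }
    init(initialValue: Value) { values = [initialValue] }
    mutating func append(_ value: Value) { values.append(value) }
}

@propertyWrapper
struct Traceable<Value> {
    private var history: History<Value>

    var wrappedValue: Value {
        get { history.currentValue }
        set { history.append(newValue) }
    }

    // Projects the underlying history, and can be rebuilt from one.
    var projectedValue: History<Value> { history }

    init(wrappedValue: Value) { history = History(initialValue: wrappedValue) }
    init(projectedValue: History<Value>) { history = projectedValue }
}

// A function with a wrapped parameter, as in the proposal's example.
func log<Value>(@Traceable value: Value) {
    print(value)    // the wrapped value
    print($value)   // the projected History<Value>
}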

The compiler will inject a call to the appropriate property-wrapper initializer into each call to log based on the argument label, so the above code is transformed to:

log(value: Traceable(wrappedValue: 10)) 
log(value: Traceable(projectedValue: history))

Therefore log($value: history) doesn't really exist, but is reinterpreted as log(value: Traceable(projectedValue: history)).

That's why in my opinion log($value: history as Traceable<Int>) should probably be valid.

If I'm not mistaken, the proposal doesn't state that the compiler will generate an overload func log($value:) that would call into func log(value:).

The explicit cast would tell the compiler how to determine the wrapper type from the value being wrapped (not to be confused with the actual wrappedValue here).

let history: History<Int> = ... 
log(value: 10) 
log(value: 10 as Traceable<Int>) // same as above
log($value: history)
log($value: history as Traceable<Int>) // same as above

Furthermore, what if we had another func log(@MyWrapper value: ...) which would create a collision?

Either we say that the above is possible, or we would need to always write the boilerplate code manually again.


The issue is that a regular coercion on the argument expression still needs to work. I don’t see a compelling reason to add a different kind of special property-wrapper coercion that looks the same as a regular coercion (and causes the compiler to figure out which one the programmer meant) when I don’t expect this to be an issue in practice anyway.
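
For illustration (my own example, reusing the generic log sketch from above), here is the kind of regular coercion on the argument expression that must keep working; the `as` here merely pins down the literal's type before the wrapped-value transformation is applied:

log(value: 10 as Int8)   // regular coercion: Value is inferred as Int8, and the call
                         // becomes log(value: Traceable(wrappedValue: 10 as Int8))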


My opinion on this matter is that a cast expression should give you an expression of the cast type, and that this feature shouldn't behave like a string-replacement macro. Furthermore, the ability to specialize a generic function is more general and valuable on its own (its absence is one of the reasons we have Task.withGroup(resultType: SomeType.self) { ... } instead of Task.withGroup<SomeType> { ... } in the Structured Concurrency proposal); it doesn't apply only here.

This is a point that hasn't been discussed yet, as far as I know. I don't think there should be overloads. With n wrapped parameters having init(projectedValue:), you'd have 2^n overloads. OTOH, Swift code completion doesn't provide incremental/partial completions, so it looks like it's the user's duty to add $ where needed?
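
As a sketch of that blow-up (my own illustration, reusing the hypothetical History type from the sketch above): if the $-label sugar were modeled as real overloads, a function with two wrapped parameters would already need 2^2 = 4 synthesized entry points, one per combination of wrapped vs. projected argument:

func send(a: Int, b: Int) {}                     // send(a:b:)
func send(a: History<Int>, b: Int) {}            // send($a:b:)
func send(a: Int, b: History<Int>) {}            // send(a:$b:)
func send(a: History<Int>, b: History<Int>) {}   // send($a:$b:)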

Property wrappers do not change the variable (in this case parameter) type, so the following functions will trigger an "invalid redeclaration error":

func log(@WrapperA value: Int) {}
func log(@WrapperB value: Int) {}

I'm pretty sure this is not correct. The type of each of these overloads is different. The above is sugar for:

func log(value _value: WrapperA) {}
func log(value _value: WrapperB) {}

The collision I meant in my previous post, though, was the ambiguity at the call site for log(value:). As per @hborla's reply, the solution would require us to type out the entire boilerplate manually:

log(value: WrapperA(wrappedValue: 42))
log(value: WrapperB(wrappedValue: 42))

That's right - this feature isn't modeled as overloads in the implementation because of the performance impact. Everything is done via name lookup and argument-to-parameter label matching.

I think we can still make code completion suggest results with $.

Yeah, overloading by property-wrapper attribute would result in an error. Even though these functions have different types under the hood, they have the same signature as far as the Swift type system and overload resolution are concerned.

I'll add a section about overload resolution of these functions in the proposal to clarify, thanks!

To clarify, I was not implying that manually initializing the property wrapper should be used for disambiguation (and this would have the same problem as trying to implement a special coercion feature). Manually initializing and passing the property wrapper storage is intentionally not supported in this proposal.

Wait a second, why? Manually passing a property wrapper is the only solution for the overloading collision shown above. Not being able to do it manually would block you from implementing any reasonable workaround. You cannot simply assume that collisions will never happen in real-world codebases.

You just need two different modules that extend the same type with an identical-looking method but a different PW, from what I just read. That can't raise an error when each module is compiled, because the collision only occurs at import. So the compiler won't let you use that API anymore, as there is no way to resolve the ambiguity, not even manually as I just suggested.

What am I missing?

I may have misunderstood the proposal; I thought that the compiler "backing" function was not available to users.

OTOH, in some circumstances it is referred to as sugar, meaning that it would be available to users.

@hborla, should the backing function be spelled in a different way? Should it be impossible to call/refer to?

I don't see a reason why the underlying synthesized function with the correct signature should be hidden. It's only the sugared call site that gets automatically expanded into the boilerplate.

func foo(@W value: Int)
// compiler would create
func foo(value _value: W) 

foo(value: 42) // okay
// the above line is interpreted as
foo(value: W(wrappedValue: 42))

// writing the above line manually should be totally valid as well

Here is the example which I think would require manual boilerplate to resolve:

// module A
extension Int { public func foo(@X value: Int) {} }

// module B
extension Int { public func foo(@Y value: Int) {} }

// module C
import A
import B

Int(0).foo(value: ...) // now what without the manual boilerplate?

The solution is really straightforward:

Int(0).foo(value: X(wrappedValue: 1))
Int(0).foo(value: Y(wrappedValue: 2))

It's not pretty but it resolves the collision.


I don't see how that is different from

// module A
extension Int { public func foo(value: Int) {} }

// module B
extension Int { public func foo(value: Int) {} }

You missed that the parameter type differs. It's either X or Y, which is a valid overload even in a single module.

I just wanted to show that the call-site collision doesn't only occur when you write the code yourself. You could be importing two modules that happen to extend fairly similar functionality with different wrapper types. However, as has been said before, this will end in a compiler error or something similar and won't let you use the API at all. In other words, the current proposal creates an artificial restriction that is totally unnecessary, as it will definitely cause situations like the one I just mentioned.

It does remind me of the original PW restriction where each PW type had to be generic to expose the wrapped type. That was artificial and not needed, as we can let the compiler figure out the type just by aligning the access modifiers of the PW and its wrappedValue.

So this proposal generates the correct base function but artificially hides it from you.

Please correct me if I misunderstood something, but that's my impression from the above conversation.

To reiterate, going backwards through the conversation:

Above, it's been said that passing an entire property wrapper to the base function is not possible, i.e. artificially restricted.

Here is a code sample which I could write today:

@propertyWrapper
struct S<V> { var wrappedValue: V }

func baz(value _value: S<String>) { 
  var value: String {
    _value.wrappedValue
  }
  ...
}

baz(value: S(wrappedValue: "swift"))

After the proposal I might want to iteratively clean up this boilerplate.

We would only clean up the main function for the sake of the discussion.

func baz(@S value: String) {
  ...
}

That part was trivial; however, the line that used to pass the PW manually to the function would suddenly produce a compiler error and no longer work. Why? The new form of the function declaration generates the same boilerplate function that I would write today. This is a restriction that would break my code if I had written it that way.
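
Concretely (my reading of the proposal's rules, not a quoted diagnostic), the old call site would have to change:

baz(value: S(wrappedValue: "swift"))   // no longer accepted: the declared parameter type is now String
baz(value: "swift")                    // the call has to be rewritten like this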


This came up in the previous pitch/review as well, and I think I'd also prefer this formulation. I think it was @hborla who brought up the issue of preserving reference semantics with init(projectedValue:): an init would always allocate a new object, whereas Wrapper.make(fromProjectedValue:) could hand back an existing instance.

Of course, I would also like to see init(projectedValue:) (or Wrapper.make(fromProjectedValue:)) be less inexorably tied to projectedValue at all; to me, this feels unnecessarily restrictive. The justification from the proposal makes a good case for why there's often a sort of synergy between a wrapper's projected API and the wrapper storage itself, but I think the init(projectedValue:) feature is potentially useful far beyond just the projectedValue of a particular wrapper.

Suppose, for instance, that we had a Box reference type. Binding vends itself as the projectedValue, but it's perfectly reasonable to initialize a Binding from a box:

takesBinding(arg: Binding(get: { someBox.value }, set: { someBox.value = $0 }))

IMO, it wouldn't be unreasonable for Binding to offer this as projected initialization API via an additional init(projectedValue:) overload, so that users could just write:

takesBinding($arg: someBox)
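
Here is a sketch of the overload being described. It is hypothetical: Binding doesn't declare such an initializer today, and under the proposal's same-type rule it wouldn't enable the $arg: sugar; Box stands in for the reference type from the example above:

import SwiftUI

// Hypothetical reference-type box from the example above.
final class Box<Value> {
    var value: Value
    init(_ value: Value) { self.value = value }
}

extension Binding {
    // Hypothetical overload: builds a Binding that reads and writes the box.
    init(projectedValue box: Box<Value>) {
        self.init(get: { box.value }, set: { box.value = $0 })
    }
}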

@hborla is right, of course, that simply lifting the same-type restriction between projectedValue and init(projectedValue:) could be done at a later time in a (probably, mostly) source-compatible way, but the restriction feels artificial to me and I'm curious if there's a good reason not to simply lift it as part of the initial feature.

Changing init(projectedValue:) to Wrapper.make(fromProjectedValue:), though, is not a change we'd be able to make later.

Not according to the rules under this proposal:

Although, I am curious about what happens in the following case, where there's an overload which conflicts with the argument's storage type:

func foo(@Wrapper arg: Int) { ... }
func foo(arg _arg: Wrapper<Int>) { ... } // error, or no?

This is related to the point I raised in the previous review, which was about which type functions with wrapped parameters are intended to traffic in semantically: the storage type, or the wrapped type. Based on the core team feedback and the latest iteration of the proposal, it seems as though the answer has come down solidly on the side of the wrapped type.

IIUC, that means that in the general case, if you already have func baz(value _value: S<String>), it is not necessarily the case that it will be suitable to replace this with func baz(@S value: String). In particular, if the ability to pass the storage type directly is an important conceptual part of your API, and not supported by S via init(projectedValue:), then your API is not a good fit for function argument wrappers.

I invite the authors to correct me if I'm off-base on any point here. :slight_smile:


If you have more than one overload of init(projectedValue:), you would have more than one type for projectedValue. Are you suggesting implementing variable/property overloads?

If OTOH you just want to be able to initialize a wrapper instance from different type instances, then a Wrapper.make(from:) would be more appropriate, but you would lose the ability to use $-prefixed argument labels and projected values semantics.
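
A sketch of that factory alternative (hypothetical API, reusing SwiftUI's Binding and the hypothetical Box type from the example above); the conversion is spelled out explicitly at the call site, with no $-label involvement:

extension Binding {
    static func make(from box: Box<Value>) -> Binding<Value> {
        Binding(get: { box.value }, set: { box.value = $0 })
    }
}

takesBinding(arg: .make(from: someBox))   // explicit; no $-prefixed sugar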

Today there's a unique error for redeclaring a property wrapper's synthesized backing storage:

struct S {
  @W var x: Int
  var _x: SomeOtherType  // Invalid redeclaration of synthesized property '_x'
}

I suppose the same will apply.


I see your point, but I don't think it outweighs the tradeoffs of passing the backing storage, which, unless explicitly indicated, is meant to be private. Nevertheless, I think what you wrote is not going to be a major problem in most cases. That is, if your function is only called 2-3 times, then you'd probably be fine if you updated your function declaration and calls to use the behavior outlined in the proposal. On the other hand, if your function is called in many places, you'd probably be better off doing the following:

@propertyWrapper
struct S<V> { 
  var wrappedValue: V 

  var projectedValue: Self { self }
  init(projectedValue: Self) { self = projectedValue }
}

func _baz(@S value: String) { 
  ...
}

func baz(value: S<String>) {
  _baz($value: value)
}
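
With that arrangement, call sites would presumably look like this:

_baz(value: "swift")                    // wrapped-value entry point
_baz($value: S(wrappedValue: "swift"))  // pass an existing wrapper via its projected value
baz(value: S(wrappedValue: "swift"))    // storage-type spelling, forwarding to _baz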

I don't think this is a lot of boilerplate if this function is called in many places, especially considering this is a rare case, since the author of a commonly-used function would probably expose just the wrapped type to the call site.

Can you please elaborate on the tradeoffs you've mentioned? What kind of tradeoffs are there in exposing the func foo(value _value: W) function, which the compiler generated from func foo(@W value: Value), to the user?

The user can pick whether they want to pass an instance of type Value or an instance of the wrapper type. The compiler will either call the function directly, if you passed the wrapper type, or wrap the instance into the wrapper for you. What's the problem with that?

Exposing the synthesized function solves more problems than hiding it does. You can make the tooling present it as func foo(@W value: Value) for convenience, but as a user you would be granted more options at the call site rather than more restrictions.

Hmm, I don't follow. I'm proposing that the same-type requirement between the parameter of init(projectedValue:) and the projectedValue property be eliminated, so that any overload of init(projectedValue:) would enable the foo($arg: val) call syntax. I.e.:

@propertyWrapper
struct W {
    var wrappedValue: Int
    var projectedValue: Self { self }

    init(projectedValue: Self) { ... }
    init(projectedValue: Double) { ... }
}

func foo(@W arg: Int) { ... }
foo($arg: W(wrappedValue: 3)) // ok
foo($arg: 0.0) // also ok

Seems totally reasonable to me!

Functions with different argument types are allowed, though.

func foo(arg: Int) { ... }
func foo(arg: Wrapper<Int>) { ... } // ok

However, if they have the same ABI, we may need to disallow such redeclaration :thinking:.


What I said is that if you want to be able to define an init(projectedValue: T) with arbitrary T, you aren't initializing a wrapper instance from a projected value anymore. There would be no semantic connection between the projected value and the initializer. So the $-prefixed sugar for passing the projected value would just be syntactic sugar for passing anything you provide. It would be no different from the concerns raised during the previous pitch phase regarding the overlapping semantics of $-prefixed variables for projected values and $-prefixed parameter labels for wrapper instances.
