BTW, that might be yet another alternative syntax for calling the super init. But, personally, I would prefer this to be somewhere at the top.
All of your examples of the Java style syntax have included property declarations with captured initial values and none included a capture list. For example:
let eq = struct Equatable & Foo {
    var x = x
    var y = x + 1
    func doIt() {
        print("doing...")
    }
}
instead of:
let eq = struct Equatable & Foo { [var x, var y = x + 1] in
    func doIt() {
        print("doing...")
    }
}
or
let eq = struct Equatable & Foo {
    func doIt() {
        // use x somewhere in the body here, implicitly capturing it
        print("doing...")
    }
}
Are you suggesting this syntax should support a capture list in addition to property declarations that are able to use context in their initial values? Are you suggesting that implicit capture in code blocks in the body of the anonymous type should be supported?
IMO, implicit capture is a useful shorthand in short closures where the relationship to the context is immediately clear. In larger closures with multiple declarations I think it becomes more questionable.
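For example, a small, runnable illustration of implicit capture in a short closure (the values are just for illustration):

let offset = 10
let shifted = [1, 2, 3].map { $0 + offset }   // `offset` is captured implicitly
// shifted == [11, 12, 13]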
The property declaration syntax is slightly more verbose than a capture list but it’s not that much additional syntax. If we go with the Java-like syntax I think we should restrict what code is allowed to cause a capture to happen. For example, the meaning of context used in the initial value of a stored property is straightforward and it’s clear how capture happens.
If we want to support capture list syntax for multi-declaration anonymous types I think the syntax included in the proposal is the right direction. It just adds an additional layer to the many layers of syntactic options that already exist for closures. The use of the capture list for stored properties is a natural fit in this approach.
I don't think anything's been decided yet. I like that options are being discussed, but a few agreeing opinions expressed within a few hours certainly don't constitute consensus.
Yep, that's what I meant. With properties created for implicitly captured variables not accessible as regular properties, to avoid conflicts with shadowing. And with the capture list used as it is used now for closures: to specify what kind of reference should be used, or to enforce capture by value. The explicitly declared properties were intended to implement the protocol and/or support the private logic of the anonymous type. Meaning that you can have them, but you don't have to.
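For intuition, here is a rough hand-written approximation in today's Swift of what that desugaring could look like; all names are hypothetical and this is only a sketch, not the proposal's specified behavior. The implicitly captured value becomes a private stored property initialized from the enclosing context, so it is not exposed as regular API:

protocol Greeter {
    func greet()
}

// Roughly what an anonymous struct conforming to Greeter and implicitly
// capturing `name` might desugar to (hypothetical, hand-written stand-in).
struct CapturedGreeter: Greeter {
    private let name: String   // backs the implicit capture; not public API
    init(name: String) { self.name = name }
    func greet() { print("Hello, \(name)!") }
}

func makeGreeter(name: String) -> some Greeter {
    CapturedGreeter(name: name)
}

makeGreeter(name: "Swift").greet()   // prints "Hello, Swift!"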
Currently both anonymous and named local functions can capture context arbitrarily deeply. The following code compiles and prints 6:
func makeF(x: Int) -> () -> () -> Void {
    let y = x * 2
    func f() -> () -> Void {
        func g() {
            print(y)
        }
        return g
    }
    return f
}

let f = makeF(x: 3)
let g = f()
g()
Sorry, I didn't mean to offend anyone. Just trying to make sure the discussion keeps going - there are other aspects that also need to be discussed. If you have any comments on the topic of disambiguating self or any other aspect of the proposal, feel free to share them. I, and I think other participants too, would be glad to hear them.
Then I think we disagree on this point. I don't think it makes sense to support implicit capture or the capture list with the Java style syntax. Here's an example of the kind of thing I'm concerned about:
let initialOffset = 42
let delegate = class DelegateBase & UIScrollViewDelegate {
    var lastOffset: CGPoint
    init() {
        lastOffset = .zero
        super(data: data)
    }
    override func reset() {
        // use of `initialOffset` from the context is potentially surprising here
        scrollView.contentOffset = lastOffset + initialOffset
        self.scrollView.flashScrollIndicators()
    }
    func scrollViewDidScroll(_ scrollView: UIScrollView) {
        ...
    }
}
Using the capture list for explicit capture is better:
let initialOffset = 42
let delegate: some DelegateBase & UIScrollViewDelegate = class { [initialOffset] in
    // ...
}
But if we support this syntax, I think it should be an alternative to or replacement for stored property declarations. With that change, the only difference from the proposal as written is the syntactic location of the struct/class keyword.
The above syntax with explicit capture in the capture list doesn't really save much relative to writing out the property declarations explicitly with capturing initial value expressions:
let initialOffset = 42
let delegate: some DelegateBase & UIScrollViewDelegate = class {
    let initialOffset = initialOffset
}
The only issue with the above is that this syntax currently gives a "Variable used within its own initial value" error. If we could teach the compiler to recognize and use the contextual initial value I don't see a reason to support capture lists in this syntax.
I love this. I can think of a huge number of improvements in library design and general Swift usage if this feature is implemented.
The main axis I see this feature moving along is the tension between using, as function parameters, actual functions versus instances of (nominal) types or protocols (that is, defunctionalization). To draw from one of the examples used in the proposal, consider the familiar map method:
extension Array {
    func map<A>(_ f: (Element) -> A) -> Array<A> { ... }
}
Requiring a function as the parameter allows using a lambda, which is short and clean, and makes this code expressive. Still, this approach is usually met with skepticism by long-time OOP developers, who would consider the Iterator approach preferable, that is, passing an instance of some Iterator class that processes the Element values. It would be much more verbose, to the point of blurring the underlying transformation logic, which is what matters here, but the traditional, pattern-based approach has some advantages that I find myself weighing from time to time.
To see how, let's consider a more relevant example. Let's assume we want to validate the elements of an Array, and return a Result that's successful only if all the elements are valid:
extension Array {
    func validate(_ validator: (Element) -> Result<Element, Error>) -> Result<[Element], Error> { ... }
}
Again, the closure-based syntax is short and clear, but using a protocol-based approach here has advantages, because a "validator" is often some kind of entity that contains business rules and that we might want to identify, compare with others and store.
But if it was like the following:
protocol Validator {
    associatedtype ToBeValidated
    associatedtype ValidationFailure: Error
    func validate(_ value: ToBeValidated) -> Result<ToBeValidated, ValidationFailure>
}
extension Array {
    func validate<V>(_ validator: V) -> Result<[Element], V.ValidationFailure> where V: Validator, V.ToBeValidated == Element { ... }
}
we would lose the expressivity of closures in cases where we don't care about validator identity.
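For comparison, this is roughly the boilerplate the protocol-based design forces on an ad-hoc rule today. It assumes the Validator protocol and the protocol-based validate extension above are implemented; ValidationError and NonNegative are made-up names:

enum ValidationError: Error { case negative }

// A one-off rule that today has to be spelled out as a full nominal type.
struct NonNegative: Validator {
    func validate(_ value: Int) -> Result<Int, ValidationError> {
        value >= 0 ? .success(value) : .failure(.negative)
    }
}

let checked = [1, 2, 3].validate(NonNegative())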
This proposal lets us get the former's expressivity with the latter's flexibility, at basically zero cost. Honestly, while reading the proposal I recalled many times when I started designing an API with just functions and then gradually shifted towards wrapping them in structs because I needed to identify and compare functions (for example, by assigning them some kind of ID).
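As a concrete illustration of that shift, here is a minimal sketch of the "wrap the function in a struct" pattern described above; all names here are made up:

// The wrapper gives an otherwise anonymous function an identity that can be
// stored, logged, and compared by id.
struct IdentifiedValidator<Value> {
    let id: String
    let isValid: (Value) -> Bool
}

let nonEmpty = IdentifiedValidator<String>(id: "non-empty") { !$0.isEmpty }
print(nonEmpty.id)                 // "non-empty"
print(nonEmpty.isValid("hello"))   // true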
I also agree that this proposal's main goal is to make protocol-oriented programming more doable, and I think one place where Swift protocols fall short is precisely when dealing with functions as "first-class citizens": the protocol Function considered in the proposal, if the latter is implemented, can help solve at least some of the limitations that undermine Swift in this area. I can't count the times I wanted to add "methods" to a function, or make it "conform to a protocol". For example, in Kotlin you can actually extend a function type with methods, and it really helps when defining some kind of combinator or composition operator, because it lets you use the nice "dot" syntax, along with autocompletion, on a function. But if this proposal is implemented, designing APIs with protocols and using anonymous structs would make the code equally expressive but much more extendable and composable.
I really like this idea coming into the Swift ecosystem, but I don't like the syntax it took. I would prefer a TypeScript-style shape type:
typealias Anonymous = {
    var x: Int
    var y: Double?
    lazy var closure: (Any) -> ()
}
Finally, it should not have methods or conform to protocols (unless Swift were to allow extensions on typealiases to add conformances), and it should not be extendable through extensions.
Let me know if you like the idea.
Thanks for your time☺
With regard to the "closures conforming to protocols" side of this, it still makes sense to me to constrain this to protocols with callAsFunction requirements.
For instance, I can see this getting really interesting when combined with one of the "Future directions" items discussed in the proposal for callAsFunction, namely Functions as a type constraint, which would allow for a protocol's callAsFunction requirement to be declared like this:
protocol MyProtocol: (Int)->(Int) {
    ...
}
Here it seems really interesting: I would be able to write a function like this:
func foo(_ block: (Int)->(Int)) { ... }
And call it in any one of these ways:
Using a standard closure:
foo { x in
    x + 1
}
Using a protocol instance:
struct Bar: MyProtocol {
    func callAsFunction(x: Int) -> Int { ... }
}
foo(Bar())
Using an anonymous struct conforming to MyProtocol:
foo { x in struct: MyProtocol // strawman syntax
    x + 1
}
And maybe even an anonymous struct only conforming to the function signature itself:
foo { x in struct // implicitly conforming to (Int)->(Int) based on the signature of foo
    x + 1
}
edit:
What gives me pause about using single-requirement protocols to infer the meaning of the body of the anonymous struct is that it seems like it would make local reasoning at the call site much more difficult.
For instance, to take the example given in your proposal, consider if we have a protocol like this which is used to infer the meaning of the closure body:
protocol Predicate: Hashable {
    associatedtype ValueType
    func evaluate(_ x: ValueType) -> Bool
}
At a very basic level, if I'm reading code which uses Predicate in an anonymous struct, I have to look at the actual protocol definition to understand what method is being called. In this case that's pretty clear, but what if my protocol has ten function requirements, with nine of them covered by default implementations provided by extensions implemented in different source files or even different modules? It starts to become difficult to trace exactly what is happening.
Now imagine that this is used in a large project, where this protocol is used in several dozen places in a few different modules. What's going to happen if I modify this protocol to add a function requirement, for instance:
protocol Predicate: Hashable {
    associatedtype ValueType
    func evaluate(_ x: ValueType) -> Bool
    func someOtherFunction()
}
What error message is going to appear at all the call sites where evaluate was being inferred as the single function requirement? Is it going to be clear and easy to understand what's going on here?
As another example, consider the case I described above, where the single unfulfilled requirement occurs as the result of a protocol with all but one of its functions covered by protocol extensions:
protocol Foo {
    func bar1()
    func bar2()
    func bar3() // bar3 is the single un-fulfilled requirement
    func bar4()
    func bar5()
}

extension Foo {
    func bar1() { ... }
    func bar2() { ... }
    func bar4() { ... }
    func bar5() { ... }
}
Now let's say for some other reason, in some other place, we introduce an extension which provides a default implementation for that last method:
extension Foo {
    func bar3() { ... }
}
What is going to happen to any anonymous structs which were inferring bar3 as their body?
Basically it seems like this feature has the potential to make protocol design fragile, and to create a lot of spooky "action at a distance" issues when modifying protocols.
It seems to me that restricting the feature to callAsFunction requirements obviates a lot of these issues, and makes it much more explicit what's happening with an anonymous struct used in a closure.
Regarding the Functions as a type constraint direction: you may want to check out the original discussion in Equality of functions. That's what we tried initially, but it raises a lot of additional questions that we were not able to answer.
Can you concisely explain what the sticking points were? Is it just this issue of ambiguity with respect to function types and argument labels? I am reading through this thread but I'm not quite getting what would be a show-stopper for this approach.
I strongly disagree with this. There was already a subthread discussing this topic.
Fwiw, this future direction is orthogonal to the issue of whether the proposal is restricted to callAsFunction protocols.
I agree, which would be a good reason to avoid using the sugar in a context where the design of a protocol and / or library doesn't make it abundantly clear which requirement needs to be implemented explicitly. Syntactic sugar in Swift is always optional, never required. It is up to our judgement to decide when it adds clarity or when it is better avoided.
I would expect the compiler to produce an error informing you (in programmer friendly language) that more than one requirement must be implemented explicitly, thus precluding the use of the single-requirement syntax. If the compiler was able to match the body with one of the requirements it could even offer a fixit to refactor the existing code, including stubs for the additional requirements that need to be implemented.
If the compiler was able to match a single requirement based on the types involved in the body, one option would be to continue inferring that requirement. But there is a good case that this behavior would be too subtle.
Aside from that possible inference, the compiler would require you to write out the more verbose syntax that includes the requirement name. Fixits could be provided as mentioned above.
It shouldn't be used with protocols that include ad-hoc lists of requirements that are likely to change over time. It is best used with protocols like SwiftUI's View and Monoid from this proposal, where the structure of the protocol is extremely clear and stable.
I don't think it does. Remember that a protocol could in theory include as many callAsFunction requirements as it wants. All of the issues you raised would still exist, when rephrased in terms of callAsFunction overloads. I think a restriction like this will result in code that uses callAsFunction where some other name would be more clear, access to the closure-like sugar being the only reason callAsFunction was used. This is a situation I wish to avoid.
There aren't any showstoppers. The issues @Nickolas_Pohilets is referring to have to do with argument labels. For example, consider a constraint (Int, Int) -> Int and a type that provides callAsFunction(foo: Int, bar: Int) -> Int. Does the type meet the constraint despite the fact that it includes argument labels that are not stated in the constraint? We need to make and justify an answer to this question before we can introduce the ad-hoc function constraint syntax.
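A small, runnable illustration of the labels question, using callAsFunction as it exists today (the type name is made up):

struct Adder {
    func callAsFunction(foo: Int, bar: Int) -> Int { foo + bar }
}

let add = Adder()
print(add(foo: 1, bar: 2))   // 3; the labels are required at this call site

// The open question: should Adder satisfy a hypothetical ad-hoc constraint
// written as (Int, Int) -> Int, which carries no argument labels?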
I think this constraint sugar is also an important feature. But it's orthogonal to the current proposal. Generic code tends to be library code so sugar for constraints mostly impacts library authors whereas the current proposal mostly impacts user code, and in doing so opens up design space for library authors. This makes the current proposal a much higher priority in my mind.
I don't find it a terribly convincing argument to say that we should only consider the best-case usage of a given language feature. It's worth considering worst-case usage too, as well as the problems which might present themselves when a feature like this is used in a large and evolving codebase.
What I see in this pitch, as well as in the equality of functions thread this pitch has come out of, is that it seems to be heavily shaped by how nice it would be for SwiftUI and for another FRP framework which is being discussed in the other thread. The risk I see here is that this feature could be over-fit to a specific use case at the expense of others, and at the expense of the clarity and readability of the language as a whole.
I understand what you're saying with respect to the possibility that more than one callAsFunction requirement exists, but I strongly disagree that these issues exist in the same measure while the scope is limited to callAsFunction requirements. The big advantage I see in limiting the scope in this way is that when I see a protocol being used for an anonymous trailing closure, I can look at the protocol and immediately know with a high degree of certainty which protocol requirement is being instantiated by that anonymous closure. If it can be any of the requirements, including functions or variables (as in the case of the View.body requirement), then I as the developer have to mentally model the compiler's inference process to understand which one of these requirements is being instantiated. This places significantly more cognitive burden on the developer.
As you say, it's possible to have protocols with more than one callAsFunction requirement, but this is going to be significantly more niche than protocols with more than one requirement in general. Also, in the case of protocols with a mix of callAsFunction requirements and other requirements, this limitation already narrows down the possibilities to consider when trying to parse this type of code.
The other motivation I can see for this limitation is that it makes this feature a more incremental change in the language. As @Joe_Groff has described it, with the callAsFunction requirement this feature is essentially just adding more power to closures. It would be possible to look at code using the feature, and it would look almost identical to current closure usage, with only one new concept to absorb. There's also no reason this limitation couldn't be relaxed in the future if it becomes obvious that broader inference would be useful. But it might be worth introducing the feature in a more limited way at first, and seeing how APIs evolve to react to the feature before introducing it in a way which is hyper-targeted to the APIs we have now.
The risk I see is that this feature has the potential to inject a lot of magic and cleverness into Swift code. Flexibility comes at a cost. For instance, I would like for Swift to avoid the type of situation you have in a Javascript codebase where someone has done something very clever with the object model, or for instance walking into a very esoteric Scala codebase. I think there's evidence in this thread that the feature as currently proposed is a step in that direction, based on the initial confusion this proposal was met with by some posters.
I agree. I'm not arguing that we shouldn't consider worst-case usage. We should. But many features are subject to abuse. We have to make a judgement call about whether the benefits of a given feature outweigh the potential drawbacks. In this case I think they do.
This is a misperception. I have had countless use cases for this feature in the last several years. It is a very general feature that would support protocol-oriented library designs with user syntax that is just as convenient as closure-based designs. IMO, this is a significant step forward for the language that fits well in the theme of Improving the UI of Generics.
I first had the idea for this feature long before I knew of SwiftUI. The FRP framework is a private framework @Nickolas_Pohilets works on, one which I have never even seen.
Tooling should be able to address this by showing what requirement is implemented. Aside from that, as I have mentioned previously, this is a feature which is best used in generic contexts that have been intentionally designed for use with it. In those contexts it would be obvious what requirement is being implemented. Imagine a Monoid & Hashable context similar to the example in the pitch. It is not hard to see that combine is the requirement that is implemented by the body.
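For reference, a sketch of the kind of Monoid protocol being referred to (the pitch's exact definition may differ); with a shape like this, combine is the only function requirement a body could plausibly implement:

protocol Monoid {
    static var empty: Self { get }
    func combine(_ other: Self) -> Self
}

// Example conformance, just to show the shape in use.
extension String: Monoid {
    static var empty: String { "" }
    func combine(_ other: String) -> String { self + other }
}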
This would be disappointing to me. It would immediately cut off most of the use cases I have, or require me to compromise the protocols by changing meaningful names to callAsFunction. I would strongly prefer not to go down this path.
Hey, I'm also interested in this feature, but I see there has been no activity for a couple of years. Is this topic still relevant?
I was trying to implement this, but eventually got completely lost in the depths of the constraint solver. And after adopting SwiftUI and dropping the custom framework, it was no longer relevant for my company.
I ended up pitching this as part of a larger strategy to recast closures as syntactic sugar over protocols. The thread is here.
I really want to be able to create anonymous types that conform to protocols or are subclasses of a class, and that close over their context, like this:
protocol MyProtocol {
    func doThis()
}

func doSomething(aLocal: Int) {
    let anonymous = struct: MyProtocol {
        func doThis() { print("Look at my capture! \(aLocal)") }
    }
}
Since I have a growing list of ideas for language features, and I want to be more helpful than just talking about them excessively, and I want to get more familiar with compilers, I'm planning on trying to implement one of them in the near future. This feature, at least a minimal form of it (one that doesn't deal with escape analysis, so that all anonymous types are "escaping" the same way nominal types are; I do want the MVP to support closing over context, though), is at the top of my list for what to start with.