With regard to the "closures conforming to protocols" side of this, it still makes sense to me to constrain this to protocols with `callAsFunction` requirements.
For instance, I can see this getting really interesting when combined with one of the "Future directions" items discussed in the proposal for `callAsFunction`, namely "Functions as a type constraint", which would allow a protocol's `callAsFunction` requirement to be declared like this:
```swift
protocol MyProtocol: (Int) -> Int {
    ...
}
```
Here it would be really interesting if I were able to write a function like this:

```swift
func foo(_ block: (Int) -> Int) { ... }
```
And call it in any one of these ways:
Using a standard closure:
```swift
foo { x in
    x + 1
}
```
Using a protocol instance:
```swift
struct Bar: MyProtocol {
    func callAsFunction(_ x: Int) -> Int { ... }
}

foo(Bar())
```
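As a side note, part of this already works today: since Swift 5.2, a `callAsFunction` requirement lets conforming instances be called with function syntax. A minimal sketch of that existing behavior (the implicit conversion that would let `foo(Bar())` satisfy a `(Int) -> Int` parameter is the part that still needs the proposed feature):

```swift
// Works in Swift 5.2+ today: callAsFunction gives instances call syntax.
protocol MyProtocol {
    func callAsFunction(_ x: Int) -> Int
}

struct Bar: MyProtocol {
    func callAsFunction(_ x: Int) -> Int { x + 1 }
}

let bar = Bar()
print(bar(5)) // sugar for bar.callAsFunction(5); prints 6
```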
Using an anonymous struct conforming to `MyProtocol`:
```swift
foo { x in struct: MyProtocol // strawman syntax
    x + 1
}
```
And maybe even an anonymous struct only conforming to the function signature itself:
```swift
foo { x in struct // implicitly conforming to (Int) -> Int based on the signature of foo
    x + 1
}
```
edit:
What gives me pause about using single-requirement protocols to infer the meaning of the body of an anonymous struct is that it seems like it would make local reasoning at the call site much more difficult.
For instance, to take the example given in your proposal, consider if we have a protocol like this which is used to infer the meaning of the closure body:
```swift
protocol Predicate: Hashable {
    associatedtype ValueType
    func evaluate(_ x: ValueType) -> Bool
}
```
At a very basic level, if I'm reading code which uses `Predicate` in an anonymous struct, I have to look at the actual protocol definition to understand what method is being called. In this case that's pretty clear, but what if my protocol has ten function requirements, with nine of them covered by default implementations provided by extensions implemented in different source files or even different modules? It starts to become difficult to trace exactly what is happening.
Now imagine that this is used in a large project, where this protocol appears in several dozen places across a few different modules. What's going to happen if I modify the protocol to add a function requirement? For instance:
```swift
protocol Predicate: Hashable {
    associatedtype ValueType
    func evaluate(_ x: ValueType) -> Bool
    func someOtherFunction()
}
```
What error message is going to appear at all the call sites where `evaluate` was being inferred as the single function requirement? Is it going to be clear and easy to understand what's going on here?
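To make the concern concrete, here's a hedged sketch of the breakage (strawman syntax, not real Swift, and `filter` is a hypothetical function taking a `Predicate`), assuming the single-requirement inference from the proposal:

```swift
// Before the change: evaluate is the single unfulfilled requirement,
// so the closure body is inferred to be its implementation.
filter { x in struct: Predicate // strawman syntax
    x > 0
}

// After someOtherFunction() is added, there are two unfulfilled
// requirements, so the inference is ambiguous. The compiler can
// presumably only report something like "type does not conform to
// 'Predicate'" -- with no type name or method name to point at.
```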
As another example, consider the case I described above, where the single unfulfilled requirement occurs as the result of a protocol with all but one of its functions covered by protocol extensions:
```swift
protocol Foo {
    func bar1()
    func bar2()
    func bar3() // bar3 is the single unfulfilled requirement
    func bar4()
    func bar5()
}

extension Foo {
    func bar1() { ... }
    func bar2() { ... }
    func bar4() { ... }
    func bar5() { ... }
}
```
Now let's say for some other reason, in some other place, we introduce an extension which provides a default implementation for that last method:
```swift
extension Foo {
    func bar3() { ... }
}
```
What is going to happen to any anonymous structs which were inferring `bar3` as their body?
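Concretely, a hedged sketch of that situation (strawman syntax again, with a hypothetical `useFoo` call site):

```swift
// Before the extension exists, bar3 is the single unfulfilled
// requirement, so this body is inferred to implement it:
useFoo { struct: Foo // strawman syntax
    print("inferred to be bar3")
}

// Once extension Foo { func bar3() { ... } } is added in some other
// file, the protocol has zero unfulfilled requirements, so there is
// nothing left for the closure body to satisfy. Does this call site
// now fail to compile, or silently shadow the new default
// implementation? Either answer is surprising at a distance.
```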
Basically it seems like this feature has the potential to make protocol design fragile, and to create a lot of spooky "action at a distance" issues when modifying protocols.
It seems to me that restricting the feature to `callAsFunction` requirements obviates a lot of these issues, and makes it much more explicit what's happening with an anonymous struct used in a closure.