Allowing non-public methods to satisfy public protocol requirements

TLDR: Should we relax the access restriction for satisfying protocol requirements, at the cost of a small bit of indirection? Jordan reluctantly votes yes.

Back in Swift 1, we had a rule that for a method to satisfy a protocol requirement, it had to be at least as visible as the less visible of the protocol and the conforming type:

  • public type conforming to internal protocol: method must be at least internal
  • internal type conforming to public protocol: method must be at least internal
  • public type conforming to public protocol: method must be public

The rationale here was that the method effectively was public, since you could always access it through the protocol.
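As a minimal sketch of how that rule played out (the names here are hypothetical, not from the original post):

```swift
public protocol Greeter {
  func greet() -> String
}

public struct Host: Greeter {
  // Rejected under the Swift 1 rule: a public type conforming to a
  // public protocol must satisfy the requirement with a public method.
  internal func greet() -> String { return "hello" }
}
```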

In Swift 2, however, along came protocol extensions, and suddenly people (particularly @dabrahams) wanted to do things like this:

public protocol ImportantProto {
  func difficultRequirement() -> String
}

internal protocol SpecialCaseImplementationDetail {
  func muchSimplerRequirement() -> Int
}
extension SpecialCaseImplementationDetail {
  /*!*/ func difficultRequirement() -> String {
    return "\(muchSimplerRequirement())"
  }
}

public struct ConcreteTypeA: ImportantProto, SpecialCaseImplementationDetail {
  internal func muchSimplerRequirement() -> Int { … }
}
public struct ConcreteTypeB: ImportantProto, SpecialCaseImplementationDetail {
  internal func muchSimplerRequirement() -> Int { … }
}

That is, share the implementation of ImportantProto across multiple types that happen to have a common implementation, but not expose that common implementation. The problem? The default implementation of difficultRequirement isn't public. It can't be, since it depends on the internal protocol SpecialCaseImplementationDetail.

In Swift 3, SE-0025 came along and included this particular clause:

The compiler should not warn when a broader level of access control is used within a type with more restrictive access, such as internal within a private type. This allows the designer of the type to select the access they would use were they to make the type more widely accessible. (The members still cannot be accessed outside the enclosing lexical scope because the type itself is still restricted, i.e. outside code will never encounter a value of that type.)

Whether or not this is a good thing isn't something to discuss right now. The important point is that there were bugs in the implementation. If you mark the default implementation of difficultRequirement above as public…the code compiles. Despite the method not actually being public. Uh oh!

Did it always work? No, this very often resulted in linker errors. But it worked in enough cases that we couldn't just add a diagnostic in Swift 4.2. So @Slava_Pestov put in a practical fix: if the user wrote this, they wanted it to work. Therefore, count the requirement as satisfied, but require that any clients use dynamic dispatch when calling the method through the protocol (no "devirtualization" optimization).

At this point, though, we've got a public that SE-0025 says is supposed to have no effect, but which does actually change compilation. And we also now have a reason to allow a distinction between "is public" and "satisfies a public requirement": library evolution. That is, if the requirement is public but my implementation is not, then it's perfectly okay to remove my implementation later and fall back to the default. If the implementing method is public, that's not an option, because clients could be using it directly.

(As with all library evolution / "resilience" features, this only applies to the standard library and overlays at this time—code planned to be shipped by Apple within our OSs, rather than bundled with an app.)
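A hypothetical sketch of that library-evolution distinction (all names here are invented for illustration):

```swift
public protocol Shape {
  func area() -> Double
}

extension Shape {
  // Public default that clients can always fall back to.
  public func area() -> Double { return 0 }
}

public struct Circle: Shape {
  public var radius: Double
  // Internal, so clients can only reach it through the protocol.
  // A future library version could delete this and fall back to the
  // default above without breaking any client.
  internal func area() -> Double { return .pi * radius * radius }
}
```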

So, what do people think of removing the access check for satisfying protocol requirements? When the compiler encounters such a scenario, it will make sure any calls to that requirement use dynamic dispatch if they aren't able to access the implementation directly. That handles both the "shared implementation" pattern shown above and the newer "library evolution" enhancement, and it gets us back to consistency with SE-0025 while still having an explicit declaration of intent: the conformance to the public protocol.

P.S. I stuck this in Discussion because I'm not sure it needs a full-on proposal. Once I've gotten some feedback, I'll ask a core team member.

10 Likes

+1. I think I already ran into this as [SR-6433] Cannot declare public default protocol impls based on shadowed associated types · Issue #48983 · apple/swift · GitHub

+1. I have run into this, too.

This definitely seems far from ideal. I've got an alternative suggestion and one question about the proposed solution:

Firstly, would it be possible to work around this by making the concrete types re-export difficultRequirement from the protocol extension as a public method? Since SpecialCaseImplementationDetail is internal, the conforming type must have access to the implementation of difficultRequirement in the extension; it seems like it should therefore be able to copy the implementation and implicitly promote it to public to satisfy the protocol requirement of ImportantProto.

Under the proposed solution, when exactly would dynamic dispatch occur where static dispatch happened previously? For example, if SpecialCaseImplementationDetail were public, would the protocol requirement (difficultRequirement) still be statically dispatched? If so, that would lessen my concerns about this feature.

The following compiles in a playground:

internal protocol SpecialCaseImplementationDetail: ImportantProto {
    func muchSimplerRequirement() -> Int
}
extension SpecialCaseImplementationDetail {
    public func difficultRequirement() -> String {
        return "\(muchSimplerRequirement())"
    }
}

Note that SpecialCaseImplementationDetail extends ImportantProto, so that difficultRequirement is dynamically dispatched and the compiler knows the method comes from ImportantProto.

I'm not sure whether this should compile, or whether it makes the extension public - I only have an iPad handy!

Therefore two possibilities:

  1. The above is valid and does not make the extension public - i.e. all good, it solves the problem.
  2. However, I suspect that the above makes the extension public, which is not what is wanted. In that case, I suggest that when implementing an existing public protocol method (but not when adding a new one), the implementation be allowed to be public even while the extension itself has more restrictive access.

Static dispatch and inlining have always felt like they should be side-effect-free performance improvements; when the compiler cannot guarantee that, it should fall back to dynamic dispatch and let the runtime do its job and ensure correctness. So +1.

1 Like

I added the implementation from the beginning of the topic (ConcreteTypeA) and put it all in a playground, in a file called ImportantProto.swift:

public protocol ImportantProto {
    func difficultRequirement() -> String
}

internal protocol SpecialCaseImplementationDetail: ImportantProto {
    func muchSimplerRequirement() -> Int
}
extension SpecialCaseImplementationDetail {
    public func difficultRequirement() -> String {
        return "\(muchSimplerRequirement())"
    }
}

public struct ConcreteTypeA: ImportantProto, SpecialCaseImplementationDetail {
    public init() {}
    internal func muchSimplerRequirement() -> Int { return 1 }
}

Then, inside the main playground file, I wrote the following:

let impl: ImportantProto = ConcreteTypeA()
impl.difficultRequirement()

This code compiles and returns "1", and SpecialCaseImplementationDetail is not visible. What's interesting, though, is that if I write let a = ConcreteTypeA(), I don't see any methods available on ConcreteTypeA. To me this looks like unintended behaviour.

This relaxation feels a bit strange to me, and I'd be curious to see further concrete examples (esp. of library evolution) where this is truly necessary, bugs in the implementation aside.

For instance, as @hlovatt's comment alludes to, I would expect that for SpecialCaseImplementationDetail to correctly offer a default implementation for ImportantProto, it would need to inherit from ImportantProto in the first place. The following seems totally reasonable to me:

internal protocol SpecialCaseImplementationDetail : ImportantProto {
  func muchSimplerRequirement() -> Int
}
extension SpecialCaseImplementationDetail {
  public func difficultRequirement() -> String { ... }
}

and I would expect that internal protocols could vend public default implementations as dynamic-dispatch landing spots for the public protocols they refine. I would not expect this to be possible were SpecialCaseImplementationDetail and ImportantProto unrelated, since internal extensions could end up exposing unrelated methods by accident, and I would expect at least a warning, if not an error, from this:

internal protocol SpecialCaseImplementationDetail {
  func muchSimplerRequirement() -> Int
}
extension SpecialCaseImplementationDetail {
  public func difficultRequirement() -> String { ... }
}

As @hlovatt also mentions, it's not clear whether the current behavior (first code snippet, which works today) is an exemplification of the bug you mention, or the correct behavior. It does work, but it would be good to know why.


More generally: you don't mention this specifically, but would the intent here be to relax just the public and internal restrictions, or to allow exposure of private and fileprivate methods as well? The latter, I feel, would definitely be going too far, and I assume is not what we're doing here. [Rather than "internal can sometimes expose things as public, so let's make things consistent by making that happen for private and fileprivate too", I think we should put restrictions in place to fix the underlying issue for internal in the first place.]

The idea is that there are several concrete types that could in theory all share the same implementation, but were prevented from doing so directly in Swift 2. (You could always rename the shared implementation and write little forwarding methods in each concrete type, but that's still a pain.)
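That pre-existing workaround might be sketched like this, reusing ImportantProto from the original example (the other names are hypothetical): rename the shared implementation in the internal protocol extension, then forward to it from each concrete type:

```swift
internal protocol SharedImpl {
  func simpleRequirement() -> Int
}
extension SharedImpl {
  // Renamed so it doesn't collide with the public requirement.
  internal func sharedDifficultRequirement() -> String {
    return "\(simpleRequirement())"
  }
}

public struct ConcreteA: ImportantProto, SharedImpl {
  internal func simpleRequirement() -> Int { return 1 }
  // The "little forwarding method", repeated in every concrete type.
  public func difficultRequirement() -> String {
    return sharedDifficultRequirement()
  }
}
```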

The simplest implementation would say "if the implementing method is less visible than the requirement, the compiler cannot guarantee static dispatch" and leave it at that. This doesn't seem like an unreasonable approach to take, but it is a little funny when the compiler allows you to write public without a complaint.

It is possible to implement the feature so that if the implementing method is visible at the use site (with some caveats around inlining), it will be used whenever the requirement is called on a concrete type. It's slightly trickier but not impossible to allow this to happen when a protocol value or generic parameter is statically known to have a certain type. It's not possible to have direct dispatch all the time because the implementation simply isn't going to be visible outside the module (because it depends on the internal protocol).
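A sketch of how dispatch might shake out under this scheme, using the types from the original example (this describes the intended behavior, not a guaranteed optimization):

```swift
// Inside the defining module the implementation is visible,
// so the compiler may devirtualize:
let a = ConcreteTypeA()
_ = a.difficultRequirement() // direct call possible

// Outside the module only the requirement is visible, so the
// call must go through the protocol witness table:
let p: ImportantProto = ConcreteTypeA()
_ = p.difficultRequirement() // dynamic dispatch
```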

It actually does neither. According to SE-0025, the method is still formally internal, with the public being ignored. However, we forgot to account for that in Swift 3 when checking protocol conformances. That's where this whole problem comes from.

I won't comment at the moment on whether I think that's a good or bad idea, only note that that would be a language change from what SE-0025 says explicitly, with all the process that entails.

That's a possible restriction of the rule, sure: only allow this if the protocol containing the default implementation is a refinement of the protocol with the requirement. It feels a little weird to me if there's one common property that allows default-implementing a whole bunch of protocols, though, and it doesn't address the "resilient struct/class/enum wants to provide an implementation but not promise it" part. (I'll admit that's pretty niche, though. No one has actually asked for it yet.)

I was definitely proposing that. internal isn't special.

I'll note that this is one of those things I wouldn't normally propose, but in this case (1) it's been asked for by several different people, and (2) it already exists, by accident, in a form that doesn't match how the language is supposed to work. I'd really like to get to something consistent.

3 Likes

I realise that, but is there any reason why the compiler couldn't duplicate the shared implementation automatically and export it as public on each concrete type? It must already know that the method satisfies a public protocol requirement, and it has access to the method body.

Ah, the shared implementation might be in another file, in which case it does not have access to the method body. It's also unclear whether that's the right code size / performance trade-off; normal default implementations aren't duplicated around like that (though they are permitted to be inlined and specialized when the body is visible, and these would be too).

If this is the direction we're going in, I think this would be a completely non-trivial change to the semantics of access control in the language. Allowing dynamic dispatch from public call sites to private or fileprivate while preventing that same access statically feels to me like it dilutes the meaning of access control:

// -- File 1 --
public protocol Foo {
    func foo()
}

fileprivate struct Bar : Foo {
    private func foo() { print("Bar!") }
}

public func getAFoo() -> Foo {
    return Bar()
}

// -- File 2 --

// Can't access Bar statically:
let b = Bar() // <- use of unresolved identifier 'Bar'
b.foo() // <- even if so, foo is inaccessible due to 'private' protection level

// This would still be allowed
let f = getAFoo()
f.foo() // Bar!

Why should these be accessible dynamically but not statically? Are the existing issues we have enough to justify a change like this to the language? (I think a change like this would warrant an actual Swift-Evo proposal)

Theoretically possible in WMO builds, then, but not in general? Or I suppose, since the compiler has access to at least the signature of the extension method, it would be able to automatically insert those forwarding methods, which could be automatically optimised in WMO.

e.g., something like:

public protocol SomePublicProtocol {
    func printTest()
}

protocol SomeInternalProtocol {
    
}

extension SomeInternalProtocol {
    func printTest() {
        print("Protocol forwarding test.")
    }
}

public class SomeType : SomePublicProtocol, SomeInternalProtocol {
    public func printTest() { // Automatically inserted by the compiler; fulfills the requirement of `SomePublicProtocol`
        (self as SomeInternalProtocol).printTest()
    }
}

The design really seems like it should be:

If a default method in a protocol extension satisfies the requirement for a public protocol, the access level for that default method should be promoted to public, regardless of the access level of the protocol it's extending.

It does feel like I'm designing a tangle of complexity for the compiler implementation to achieve that, though, and it’s not much different from a dynamic dispatch for callers outside the module.

In that case, the proposed approach seems reasonable. It definitely falls into the category of unexpected and surprising behaviour, but it's also a problem I've run into in the past, so perhaps it's less surprising to have it simply work than to have to explain why it doesn't work.

You'd be allowed to use foo statically too, because it's available on the protocol. It would formally do a dynamic dispatch to get to it, though.

Isn't this defeating access control? As a developer, why would I mark a method as private if it's still both statically and dynamically callable (but at a perf cost)? Worse, if I accidentally mark the method as private instead of public, will it "just work" (again, at the perf cost of dynamic dispatch) without warning me?

What would happen if I retroactively conformed a type to a new protocol with methods that were previously hidden at my access level, but now it conforms — are those methods then made public too? I would really hope not.


The dynamism here feels very "Obj-C" to me, and I say this in a good way :slight_smile: But to me this also feels like the tail wagging the dog — I'd rather call things as they are and require the method to be public as it is today (because it would effectively be public) and instead resolve the ambiguity we have with internal right now. I think a language change to require developer clarification on their meaning of internal would be more useful than going in the other direction.

These are definitely fair concerns. I guess your proposal would be something like:

Within a protocol extension, a member may be marked with a broader level of access control than the protocol being extended. This allows the member to be used to satisfy requirements in inherited protocols only with that broader level of access control, permitting the following: [example from above]

Does that sound correct? Note that this also allows members in extensions of private protocols to be marked internal or even public for the purpose of satisfying internal or public requirements.

7 Likes

Yes, thank you! I’d add the clarification that the scope of the default implementation being provided must follow the general rules we have today — the method implementation must be at least as public as the target protocol (e.g. a private protocol can add a public default implementation for a public protocol requirement because types implementing that protocol would themselves need to satisfy that requirement publicly too). Essentially, I think we can offer a clear extension to the current rules here: protocol requirement has access level x? A protocol inheriting from that can offer a default implementation at access level y >= x, same as other types would have to offer.
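A sketch of what that extended rule would permit (hypothetical names):

```swift
public protocol PublicProto {
  func requirement() -> String
}

private protocol Helper: PublicProto {
  func detail() -> Int
}
extension Helper {
  // Broader access than the private protocol being extended; under
  // the proposed rule this is allowed because it satisfies a
  // requirement of the inherited public protocol.
  public func requirement() -> String { return "\(detail())" }
}
```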

2 Likes

I'm also in favor of the solution proposed by @itaiferber. +1

This sums up my feeling well.