Global functions can't be used if a member function shadows its name, even if the argument labels are different

I was quite surprised to discover that the following code fails to compile:

struct Demo {
    func foo(bar: Int) -> Int {
        foo(notBar: bar, baz: true)
    }
}

func foo(notBar: Int, baz: Bool) -> Int {
    baz ? notBar : -notBar
}

The error is:

Use of 'foo' refers to instance method rather than global function 'foo(notBar:baz:)' in module 'MyModule'

Use 'MyModule.' to reference the global function in module 'MyModule'
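As the fix-it suggests, qualifying the call with the module name does sidestep the shadowing. A minimal sketch, assuming the code lives in a module named `MyModule` (so this fragment is not runnable as a standalone script):

```swift
struct Demo {
    func foo(bar: Int) -> Int {
        // Module qualification forces lookup past the shadowing
        // instance method 'foo(bar:)'.
        MyModule.foo(notBar: bar, baz: true)
    }
}

func foo(notBar: Int, baz: Bool) -> Int {
    baz ? notBar : -notBar
}
```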

I assume this isn't a regression (because I see how much effort goes into avoiding breaking changes), but I'm extremely surprised that I've never once stumbled upon this restriction in my many, many years developing exclusively in Swift.

Can anyone shed some light?


It is unexpected indeed. I've seen this before when I tried to declare
func print(param: Int)

This is indeed not a regression, but rather one of the longest-standing open Swift "bugs", e.g.:


You wrote the word “bugs” in what I assume are scare-quotes, which would generally indicate that you are not sure if it should truly count as a bug. However, the discussion of this issue in SR-2450 points out that, while this behavior might originally have been technically working as intended, after SE-0111 was accepted it became a clear bug.

Since SE-0111 makes argument labels part of a function’s name, it follows that foo(notBar:baz:) and foo(bar:) have different names. Thus, when the compiler looks for a function whose name matches the call-site, it is incorrect to stop that search after finding a function with non-matching argument labels.

The labels are part of the name, so the search must continue until a function with a matching name including argument labels is found (modulo potentially missing labels due to trailing closures and/or default parameter values).
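As an illustration of that caveat (the names and values here are hypothetical): a call site can legally omit labels that are nonetheless part of the function's full name, when an argument has a default value or is passed as a trailing closure:

```swift
// The full name is process(input:completion:), yet a call site can
// mention neither label: 'input' has a default value, and
// 'completion' can be supplied as a trailing closure.
func process(input: Int = 0, completion: (Int) -> Void) {
    completion(input)
}

process { value in
    print(value)  // prints 0
}
```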


Not so much unsure that it should count as a bug, as that it has behaved like this for so long that at this point it's on the cusp of being de facto intended behavior.


I don’t quite understand the logic here. Is the current behavior something that a programmer could “depend on”? At first glance this looks to me like a purely additive change with no discernible downsides (other than the possible complexity of implementing the fix, which I can’t speak to).


Overload resolution in Swift is underspecified; Hyrum’s law applies, and any nontrivial change in this area can and must be presumed to break existing code until evidence can be produced to the contrary, which is a difficult and expensive task. That these very serious downsides are indiscernible beforehand makes it even worse.


Overloading involves functions with the same name. The functions involved here have different names.

The Swift compiler today incorrectly treats functions with different names as if they were overloads. This is a clear-cut bug that needs to be fixed.

I fail to see how taking something that does not compile, and making it compile successfully with the obviously correct meaning, could possibly affect existing code.


Your phrasing cannot, of course, but changing this behavior can:

func foo() {}

struct Wrapper {
  func foo(i: Int = 1) {
    print("method \(i)")
  }

  func test() {
    foo()
  }
}
Today this does compile, even though the full name of the method is foo(i:) rather than foo(). If outer functions were naively added to the overload set, this code would change meaning; if foo(i:) continues to be the selected overload, the rule has become even more complicated than just “nearest base name wins”. I haven’t worked out what it would take, but someone would have to in order to even propose a change here.


I am not sure what you mean by “naively”, but that code absolutely should not change behavior. As soon as the compiler sees a function whose name is capable of matching the call site, it should stop expanding the search just like it does today.

The rule is, “Can the function name match the call-site?”


This is, unfortunately, not how the compiler collects overload candidates, at least partially for better diagnostics where the caller has left out arguments, or made typos, or has contextual type information that suggests they may have meant to call a different overload-by-name. As Xiaodi said, this area is underspecified, and changing it can have broad implications.


Yes, obviously that’s not how it works today. That’s the problem. That’s what causes the bug we’re talking about.

I am under the impression that the compiler currently starts at the inner-most scope, looks for candidate functions with the same base-name as the call-site, and if it does not find any then it expands its search to the next-larger scope, and so forth.

The fact that it only considers base-names, and not full names, is what creates the spurious errors that prevent code that should be valid from compiling when, for example, “max(a, b)” is written in a Collection algorithm.
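That `max` pitfall can be reproduced in a few lines (the extension and its method name are hypothetical; only the global `max(_:_:)` and `Sequence.max()` are real standard-library declarations):

```swift
extension Collection where Element: Comparable {
    // Hypothetical helper: the largest of 'seed' and all elements.
    func runningMax(startingAt seed: Element) -> Element {
        var result = seed
        for element in self {
            // Plain 'max(result, element)' fails to compile here:
            // Sequence's 'max()' instance method shadows the global
            // 'max(_:_:)', so the module-qualified name is required.
            result = Swift.max(result, element)
        }
        return result
    }
}
```

Writing the unqualified call inside the loop produces the same “refers to instance method rather than global function” error discussed above.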

Changing “Does the base-name of the function match the call-site?” to “Can the name of the function match the call-site?” can only reduce the number of candidates found in any scope.

If for diagnostic purposes we want the compiler to also make a list of “near-miss candidates”, that’s perfectly fine. But it should not affect the actual list of functions whose names are capable of matching the call-site.


I think--broadly--everyone agrees that this should work. Given that, and given that it hasn't changed in the better part of a decade despite causing pretty frequent frustration for a number of high-profile contributors, this suggests that fixing it isn't quite as straightforward as it might seem. The number of people who deeply understand these pieces of the type-checker is pretty small, and they're all busy with other stuff too. The challenge here isn't so much convincing people that the current behavior is undesirable, it's actually doing the work of fixing it, and convincing ourselves that in the course of fixing it we haven't inadvertently broken something else (type-checking is full of spooky actions at a distance).


It’s possible this is good enough. It’s also possible that this will still result in worse diagnostics in more complicated expressions, where the choice of overload in one sub-expression affects the type constraints for another sub-expression. (Even Rust’s much simpler type system and overloading rules are not immune to this.)

It’s also possible that your proposed change will result in better diagnostics in such cases. Or a mix. Whole-statement type-checking plus overloading plus subtyping and implicit conversions makes for a really complicated, interconnected system.

So this hasn’t all been “your idea is bad and you should feel bad”; it’s “your idea is reasonable but requires a lot of work to validate (after whatever work there is to implement it)”. Which you can take as an indictment of the complexity of the Swift type checker if you want to, but it’s true nonetheless.


I’m a little surprised to see mentions of type-checking here.

I was under the impression that finding candidate functions proceeds on a purely lexical basis. Essentially string-matching, with no type-checking involved until after the candidates are gathered.

If the tradeoff is between “Evidently valid code fails to compile” and “Some diagnostics might be degraded, but then again they might not”, the balance seems pretty decisive to me.

I’m not currently knowledgeable about the internals of the compiler, nor set up to work on it, but I’m willing to learn. If someone could point me toward the code which identifies candidate functions, I’d like to take a look.


Regarding the possibly quite difficult task of validating @Nevin’s proposed approach - is it possibly acceptable to approach this in an empirical way, as opposed to looking for a guaranteed logical proof?

What I’m imagining is introducing the change as an experimental feature and then asking the community to, at their leisure, flip the feature on and see if it breaks anything. If after (let’s say) two years we have collected a huge number of testimonials from Swift developers far and wide indicating that in this case there was no lurking, unexpected consequence, then we can make this change official.

This came to mind because it seemed like it would be a shame to not make what looks like an obvious improvement only because we can’t figure out how to completely prove that it is logically sound 0%-source-breaking.


The folks who work on the type checker understand how to fix shadowing in the implementation (@xedin in particular). Overhauling the implementation of shadowing in the constraint system is a nontrivial task, and we need to do that in order to evaluate the source compatibility impact.


Name lookup is part of semantic analysis. A lot of people use the term "type checking" and "semantic analysis" interchangeably. We generally call the overarching compiler subsystem that performs semantic analysis "the type checker".


Currently, name lookup is done early, during the expression "pre-checking" phase - swift/PreCheckExpr.cpp at main · apple/swift · GitHub. In order to change the shadowing behavior, we need to move this logic into the constraint solver and produce a special constraint (similar to a member constraint) to be able to control solving and re-run lookup if the set of overloads for one context didn't match. (Note that we have this hack at the moment: NameLookupFlags::IncludeOuterResults.) We also need to make sure that diagnostics for such cases are reasonable, which might mean some special "fix" logic with different impacts depending on which declaration context the overload was found in.

A change like this would also have major performance implications, because there are (currently) quite a few performance optimizations that depend on declaration references being resolved early (see swift/CSGen.cpp at main · apple/swift · GitHub, called via optimizeConstraints).


Are you suggesting that we might want to always include outer-scope functions as candidates, even when an inner-scope function with matching name is found? I had been under the impression that such a change was a non-starter on account of it being source-breaking.

For example, do you envision the following becoming valid?

func foo(_ n: Int) {}

struct Bar {
  func foo(_ s: String) {}
  func test() { foo(2) }
}

And do you envision the following starting to print “Outer” if Bar().test() is called?

func foo(_ n: Int) { print("Outer") }

struct Bar {
  func foo<T: Numeric>(_ t: T) { print("Inner") }
  func test() { foo(2) }
}

In contrast, the approach I described would not affect either of these examples. The first would fail to compile, and the second would print “Inner”, just as they do today.
