Type nuance

Hi all,
Is there any significant difference between defining a function in these two ways?

func foo(_ bar: any MyProtocol) -> String

func foo<T: MyProtocol>(_ bar: T) -> String

Yes, those are very different. However, these are the same:

func foo(_ bar: some MyProtocol) -> String

func foo<T: MyProtocol>(_ bar: T) -> String

(In general, the rule of thumb is: you want some, not any. There are some specific cases where you may need any, but most of the time it's not what you want)
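To make the difference concrete, here's a minimal sketch (the protocol and the `A`/`B` types are hypothetical, just for illustration):

```swift
protocol MyProtocol { var label: String { get } }
struct A: MyProtocol { var label: String { "A" } }
struct B: MyProtocol { var label: String { "B" } }

// `any` erases the concrete type: `bar` is a boxed existential value,
// and every call goes through the box.
func fooAny(_ bar: any MyProtocol) -> String { bar.label }

// `some` is sugar for <T: MyProtocol>: one concrete type per call site,
// which the compiler can specialize without boxing.
func fooSome(_ bar: some MyProtocol) -> String { bar.label }
```

Both accept `A()` or `B()` at a call site; the difference is in how the value is represented and dispatched, not in what you can pass.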


Could it be that existential "any" types were used more in the past but are no longer used as much? Could they end up being removed from the language, like a failed experiment?

Swift has a strict policy of not breaking source compatibility, so even if they were a failed experiment (which I think is putting it too strongly; they're a niche tool, but they do have uses), we still probably couldn't remove them.

To put it a bit more strongly than David, existential types can't be removed from the language because they are inherently the only way to represent certain patterns in code (e.g., storing one of several possibly-unknown types conforming to a protocol in a property).
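For example (a hypothetical `Shape` hierarchy, just to illustrate the pattern):

```swift
protocol Shape { func area() -> Double }
struct Circle: Shape { let radius: Double; func area() -> Double { .pi * radius * radius } }
struct Square: Shape { let side: Double; func area() -> Double { side * side } }

// Only an existential can hold *different* conforming types in one
// stored property; a generic parameter would pin every element to a
// single concrete type.
struct Canvas {
    var shapes: [any Shape] = []
}

var canvas = Canvas()
canvas.shapes = [Circle(radius: 1), Square(side: 2)]
let total = canvas.shapes.reduce(0.0) { $0 + $1.area() }
```

There's no way to write `Canvas` with `some Shape` or `[T]` and still mix circles and squares in the same array.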

It's conceivable that in certain use cases where any wasn't providing any benefit, the compiler could warn (e.g., with implicitly opened existentials, having a method take any Protocol has significantly less benefit than taking some Protocol), and, if ABI stability weren't a concern, it could even compile certain usages of any Protocol into some Protocol. But they are useful on their own.
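The implicit opening mentioned above (SE-0352, Swift 5.7) looks like this in practice (hypothetical `P`/`X` names):

```swift
protocol P { func id() -> String }
struct X: P { func id() -> String { "X" } }

func generic(_ value: some P) -> String { value.id() }

let boxed: any P = X()
// Since Swift 5.7, the existential is implicitly "opened" at the call,
// so `generic` runs against the concrete type X rather than the box.
let result = generic(boxed)
```

This is a large part of why a method taking `any Protocol` buys you little: you can already pass an existential to a `some`/generic parameter and let the compiler unbox it at the boundary.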


In fact, unless I’m misremembering, this sort of transformation can indeed take place today as an optimization when not compiling in library evolution mode.


I wonder if there's a risk of any getting confused with some. As the language becomes more baroque, this one nuance may be forgotten due to the burden of learning other nuances.


It depends on what you mean by "risk": where any and some are interchangeable, there is only a mild performance degradation to using any; where they are not interchangeable, the compiler won't let you get mixed up because the code won't compile.
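A sketch of the second case, where the compiler keeps you honest (hypothetical `P`/`A`/`B` types):

```swift
protocol P {}
struct A: P {}
struct B: P {}

var anyValue: any P = A()
anyValue = B()          // fine: the box can hold any conforming type

let someValue: some P = A()
// someValue = B()      // would not compile: the underlying type of a
//                      // `some P` binding is fixed at initialization
_ = someValue
```

So mixing the two up tends to surface as a compile error rather than a silent behavior change.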


to go a bit further, i’m actually not aware of any rigorous study into the performance differences between unspecialized some and any. with each passing day i grow more suspicious that my own personal aversion to any comes from cargo-culted conclusions drawn from encountering well known performance bottlenecks like Codable that happen to use existentials.


I suspect that much of the general aversion to existential types comes from repeated claims of unspecified slowness relative to generic types. I know you know this, but for others, there are two benefits that generics can provide:

  1. IIRC, when unspecialized, generic types may avoid the cost of an additional pointer indirection that existential types have
  2. When specialized, code can be optimized for the specific type passed in

The benefit ranges from "likely completely insignificant on modern hardware compared to anything else you're doing" to "possibly quite meaningful", but my feeling is that most claims lose this nuance entirely, boiling down to "existentials bad, generics good" or "avoid existentials at all costs".


i had always assumed that unspecialized generics are boxed just like existentials, perhaps @Slava_Pestov could shed some light?

I had originally qualified this claim, and lost that during editing — I could very well be wrong on this. Let me see if I can dig up some documentation.

Unspecialized generics are not boxed, but they are generally stored in memory rather than registers, and any operation on them has to be indirected through the type's value witnesses.


In my experience, when paid professionals are creating software, they're given near-top-of-the-line computers that don't experience slowness doing much of anything.

If you want to measure slowness, you can run Swift programs on a more basic computer like a Raspberry Pi. There Swift in general is visibly slower than C/C++.


At the risk of going off on a tangent, I've heard this sort of argument before and I don't really buy it. I've used both slow computers and fast computers to develop code. Developing on a slow computer didn't encourage me to write better-optimized code; it just meant I had to wait longer for my code to compile.


You may be unaware that a lot of the regular public is not using high-end computers. To be a good software architect, you have to understand their experience and have empathy for the suffering caused by your software. It sounds like they are not on your radar at all.

Given that many popular languages use existentials extensively, such as Objective-C, and some are very performant, it's quite clear that they're not inherently slow, at least in the big-picture sense.

Missing existential optimisations in Swift?

Swift does lack the means to optimise existentials to the same degree as e.g. Objective-C. To my knowledge, there's no way to do IMP caching, for example.

IMP caching, for those unfamiliar with the terminology, is where you fetch & cache the actual function pointer for an otherwise "virtual" or "dynamically dispatched" method call, so that you can do a direct, vanilla, C-style function call instead of the full overhead of witness table lookups (or worse).
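The closest Swift analogue I'm aware of is hoisting the bound method reference out of the hot loop, so the existential is consulted once rather than per call. This is a sketch under my own assumptions (hypothetical `DataSource`/`Ramp` types), not a documented optimization guarantee, and whether it actually skips repeated witness-table lookups is an implementation detail:

```swift
protocol DataSource { func point(at index: Int) -> Double }

struct Ramp: DataSource {
    func point(at index: Int) -> Double { Double(index) }
}

func sum(_ source: any DataSource, count: Int) -> Double {
    // Bind the method reference once, outside the loop, rather than
    // dispatching through the existential on every iteration.
    let point = source.point(at:)
    var total = 0.0
    for i in 0..<count { total += point(i) }
    return total
}
```

Unlike an ObjC IMP, this gives you a closure rather than a raw C function pointer, so it's at best a partial equivalent.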

A billion years ago I worked on a charting framework (as used in Shark) that used the normal Objective-C data source pattern, so in principle fetching every point in the data series was a full message send, and therefore surely there's no way you could plot hundreds of millions of data points in (effectively) real time (on a single-core G5 - that's right kids, computers used to have just one core!). Ten minutes of basic optimisation later, using IMP caching, and you could.

The point of this nostalgic tangent is that in Swift I'm hesitant to use existentials because I fear I'll unwittingly end up in a performance quagmire that the language doesn't provide me the tools to optimise out of. But I too get the feeling I'm way off on how likely this really is, and I find Swift's obsession with generics over existentials to be pretty frustrating to development speed, sometimes.


Existentials in Objective-C can only abstract over class references, so the value is always just a single pointer. The Swift equivalent is an existential that's AnyObject-constrained.

A witness method call loads a function pointer from a fixed offset in an array and performs an indirect call. IMP caching is more profitable in Objective-C, where the lookup operation is comparatively more expensive.
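In Swift terms, the ObjC-equivalent case looks like this (hypothetical `Renderer` names):

```swift
// A class-constrained existential is a single object reference, like
// id<Protocol> in Objective-C; no inline value buffer is needed.
protocol Renderer: AnyObject { func render() -> String }

final class GPURenderer: Renderer { func render() -> String { "rendered" } }

let r: any Renderer = GPURenderer()
```

Non-class-constrained existentials are the ones that add the value buffer and possible out-of-line storage on top of this.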

I miss the jet engine sound of booting up my quad-core PowerMac G5.


This is not a very charitable reading of Slava's post. Saying that using worse tools while making tools is not a good way to get people to care about making better tools does not imply that making good tools is unimportant.


Maybe Slava isn't personally at fault, and yes it's better to write code using faster technology (although I've also done the opposite and been fine).

But my concern is that there is a real problem that the software industry is ignoring: computers can't handle the burden of inefficient, bloated code. This is especially pronounced with Javascript frameworks like Electron that make some web apps simply unusable on older hardware. Swift is slower than Objective-C, C++, and C, and I suspect Apple's software performance team (whatever it's called) has mentioned the problem.

It's not OK to push the problem onto poor users, saying hey just upgrade with money you don't have.