Type nuance

It depends on what you mean by "risk": where any and some are interchangeable, there is only a mild performance degradation to using any; where they are not interchangeable, the compiler won't let you get mixed up because the code won't compile.

to go a bit further, i’m actually not aware of any rigorous study into the performance differences between unspecialized some and any. with each passing day i grow more suspicious that my own personal aversion to any comes from cargo-culted conclusions drawn from encountering well-known performance bottlenecks like Codable that happen to use existentials.


I suspect that much of the general aversion to existential types comes from repeated claims of unspecified slowness relative to generic types. I know you know this, but for others, there are two benefits that generics can provide:

  1. IIRC, when unspecialized, generic types may avoid the cost of an additional pointer indirection that existential types have
  2. When specialized, code can be optimized for the specific type passed in

The benefit ranges from "likely completely insignificant on modern hardware compared to anything else you're doing" to "possibly quite meaningful", but my feeling is that most claims lose this nuance entirely, boiling down to "existentials bad, generics good" or "avoid existentials at all costs".
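To make the distinction concrete, here's a minimal sketch (the `Describable` protocol and `Point` type are made up for illustration): the `some` version takes a generic parameter the compiler can specialize, while the `any` version takes an existential box dispatched through a witness table.

```swift
protocol Describable {
    func describe() -> String
}

struct Point: Describable {
    func describe() -> String { "point" }
}

// `some Describable` is a generic (opaque) parameter: the concrete type is
// known at each call site, so the compiler may specialize and devirtualize.
func label(_ value: some Describable) -> String {
    value.describe()
}

// `any Describable` is an existential box: the value is stored behind a
// uniform layout and the method call goes through its witness table.
func labelExistential(_ value: any Describable) -> String {
    value.describe()
}

print(label(Point()))            // generic call, eligible for specialization
print(labelExistential(Point())) // witness-table call through the box
```

Both produce the same result; only the dispatch strategy differs, which is exactly where the (possibly negligible) cost lives.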


i had always assumed that unspecialized generics are boxed just like existentials, perhaps @Slava_Pestov could shed some light?

I had originally qualified this claim, and lost that during editing — I could very well be wrong on this. Let me see if I can dig up some documentation.

Unspecialized generics are not boxed, but they are generally stored in memory rather than registers, and any operation on them has to be indirected through the type's value witnesses.


In my experience, when paid professionals are creating software, they're given near-top of the line computers that don't experience slowness doing much of anything.

If you want to measure slowness, you can run Swift programs on a more basic computer like a Raspberry Pi. There, Swift in general is visibly slower than C/C++.

At the risk of going off on a tangent, I've heard this sort of argument before and I don't really buy it. I've used both slow computers and fast computers to develop code. Developing on a slow computer didn't encourage me to write better-optimized code; it just meant I had to wait longer for my code to compile.


You may be unaware that a lot of the regular public is not using high-end computers. To be a good software architect, one has to understand their experience and have empathy for the suffering caused by one's software. It sounds like they are not on your radar at all.

Given that many popular languages, such as Objective-C, use existentials extensively, and some are very performant, it's quite clear that existentials are not inherently slow, at least in the big-picture sense.

Missing existential optimisations in Swift?

Swift does lack the means to optimise existentials to the same degree as e.g. Objective-C. To my knowledge, there's no way to do IMP caching, for example.

IMP caching, for those unfamiliar with the terminology, is where you fetch & cache the actual function pointer for an otherwise "virtual" or "dynamically dispatched" method call, so that you can do a direct, vanilla, C-style function call instead of the full overhead of witness table lookups (or worse).

A billion years ago I worked on a charting framework (as used in Shark) that used the normal Objective-C data source pattern, so in principle fetching every point in the data series was a full message send, and therefore surely there's no way you could plot hundreds of millions of data points in (effectively) real time (on a single-core G5 - that's right kids, computers used to have just one core!). Ten minutes of basic optimisation later, using IMP caching, and you could.
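A sketch of the same trick in Swift, under made-up names (`ChartDataSource`, `SineSource`): capture the bound method as a closure once, outside the hot loop, so the loop body calls a plain closure instead of re-doing the existential dispatch on every iteration. Whether this actually beats letting the optimizer devirtualize the call will depend on the situation.

```swift
// Hypothetical data-source protocol, loosely modeled on the Objective-C pattern.
protocol ChartDataSource {
    func point(at index: Int) -> Double
}

struct SineSource: ChartDataSource {
    func point(at index: Int) -> Double { Double(index) * 0.5 }
}

func plot(_ source: any ChartDataSource, count: Int) -> Double {
    // The rough Swift analogue of IMP caching: grab the bound method once,
    // so the loop calls a closure rather than repeating the witness-table
    // lookup on the existential for every point.
    let point = source.point(at:)
    var sum = 0.0
    for i in 0..<count {
        sum += point(i)
    }
    return sum
}
```

As with IMP caching, this is only safe to reason about because the "cached" method can't change out from under you, which Swift guarantees and Objective-C didn't.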

The point of this nostalgic tangent is that in Swift I'm hesitant to use existentials because I fear I'll unwittingly end up in a performance quagmire that the language doesn't provide me the tools to optimise out of. But I too get the feeling I'm way off on how likely this really is, and I find Swift's obsession with generics over existentials to be pretty frustrating for development speed, sometimes.


Existentials in Objective-C can only abstract over class references, so the value is always just a single pointer. The Swift equivalent is an existential that's AnyObject-constrained.

A witness method call loads a function pointer from a fixed offset in an array and performs an indirect call. IMP caching is more profitable in Objective-C, where the lookup operation is comparatively more expensive.

I miss the jet engine sound of booting up my quad-core PowerMac G5.


This is not a very charitable reading of Slava’s post. That using worse tools while making tools is not a good way to get people to care about making better tools need not imply that making good tools is unimportant.


Maybe Slava isn't personally at fault, and yes it's better to write code using faster technology (although I've also done the opposite and been fine).

But my concern is that there is a serious problem the software industry is ignoring: computers can't handle the burden of inefficient, bloated code. This is especially pronounced with JavaScript frameworks like Electron that make some web apps simply unusable on older hardware. Swift is slower than Objective-C, C++, and C, a problem I suspect Apple's software performance team (whatever it's called) has mentioned.

It's not OK to push the problem onto poor users, saying hey just upgrade with money you don't have.

FWIW this is not what we've found as we rewrite pieces of Foundation from ObjC to Swift (as described in Swift.org - Foundation Package Preview Now Available). Individual cases vary of course, and it's not always simple to work out why, but in general we've had quite good results perf-wise.


i once had a conversation with a colleague about why we didn’t have wider (server-side) adoption of swift at our company. his answer was that the swift code ran slower than the corresponding C++ code.

at first, i was befuddled, because i have been writing fast swift code for many years. but then he explained that it was hard to recruit experienced swift developers who don’t write awfully inefficient code — not because the average swift programmer is dumber than the average C++ programmer, but because there are fewer of us than there are C++ developers in the world.

it really just comes down to relative market share.


That makes sense. Performance predictability and ease of diagnosing performance issues are also known issues as discussed in this post about Swift 6 design priorities: Design Priorities for the Swift 6 Language Mode

I've certainly found myself having to look at disassembly to determine what the optimizer did with my code more often than I would personally hope for, which is why work like @Michael_Gottesman is doing on optimization remarks/"assembly vision" is so exciting: PSA: Compiler optimisation remarks - #5 by Michael_Gottesman


I think this hits the nail on the head. Even as an experienced swift developer, it is hard to know when you've hit a performance cliff. Ownership features are one part of it, but we definitely need more tooling in this area. One thing I would personally use is a "no implicit copy/copyable" mode for performance sensitive modules.


It can go both ways. Which probably shouldn't surprise anyone. Objective-C is (essentially) a superset of C, which means you can drop down to basically writing glorified assembly if you want (or literally assembly, inline, in fact). Swift doesn't really let you do that. Even things that are nominally C-like, such as Unsafe*Pointer, don't necessarily give you C-level performance (and they create code that is way more difficult to read than the C equivalent).
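For what it's worth, this is roughly the kind of pointer-level code in question, a sketch rather than a claim about what the optimizer will actually do with it: walking an array's contiguous storage through a raw pointer instead of through subscripting.

```swift
func sum(_ values: [Int32]) -> Int32 {
    values.withUnsafeBufferPointer { (buffer) -> Int32 in
        // Touch the raw contiguous storage directly, sidestepping the
        // array subscript. Whether this actually beats the plain loop
        // depends on what the optimizer already did with bounds checks.
        guard var p = buffer.baseAddress else { return 0 }
        var total: Int32 = 0
        for _ in 0..<buffer.count {
            total &+= p.pointee   // wrapping add, like C's int arithmetic
            p += 1
        }
        return total
    }
}
```

Compare the readability of that to the three-line C equivalent and the complaint above should be clear enough.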

On the other hand, one of the first substantial programs I rewrote in Swift a bajillion years ago relied heavily on mapping strings to strings and was about a thousand times faster in Swift, because even back in Swift 2 Dictionary & String had some advantages over NSDictionary & NSString.

On the other other hand, to this day there's still plenty of basic String methods that are slower than their NSString equivalents (especially if you use the true String methods rather than the replacements you get from importing Foundation, e.g. for contains) and/or otherwise have terribly inefficient implementations (e.g. contains again).

And that's where I think there might be a disconnect - I suspect when some people say "Swift is fast like C++!" they're thinking about the naked language, not accounting for the libraries.

That's pretty much my feeling, too. I use Swift for a lot of stuff - happily - but for anything that I anticipate being performance-sensitive, I'm hesitant. For instance, lately I've been discovering that string processing in Swift is full of landmines (relating to both functionality and performance) and I've been resorting to workarounds as gross as rewriting Swift CLI tools into shell scripts.

The recent improvements to C++ interoperability make me less concerned about Swift's performance, in a way, but also worry me as potentially leading to a Python-like "answer" to performance concerns - essentially "don't use Python". :confused:

Though I'm somewhat reassured by much experience using Objective-C++, which was mostly the best of both worlds, so maybe Swift++ will be too.


You probably shouldn't extol the performance virtues of ObjC IMP caching without mentioning that it has to be done manually by users, since it's technically semantics-breaking: ObjC lets you add, remove, and replace method implementations on any class at any time.


True, but practically never a problem. I vaguely recall once having to write "make sure to call setDataSource: again after you swizzle". An easy workaround, and IIRC it was common practice to ensure IMP caches were either very transient (e.g. only for the duration of a core loop) or could be reset in obvious ways (e.g. re-setting the relevant object property).

Incidentally, I always wished Objective-C would add an "auto-IMP-cache" feature, although I also always suspected it just wasn't possible given the fluidity of the runtime environment. But maybe Swift could? Granted witness table lookups are fairly cheap anyway, but there's plenty of other situations involving slow 'dynamic' lookups (e.g. key paths). Does Swift (today) do any optimisations of those, e.g. hoisting? I haven't come across any - but then, maybe I have to encourage it somehow? Or does Swift achieve this only incidentally through inlining?
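For a concrete (hypothetical) version of the hoisting question, here is the manual form: build the key path once outside the loop, since I don't know whether the compiler would hoist it on its own. The `Sample` type and `totalValue` function are made up for illustration.

```swift
struct Sample {
    var value: Double
}

func totalValue(of samples: [Sample]) -> Double {
    // Hoist the key path out of the loop so it is constructed once rather
    // than on every iteration. Whether Swift's optimizer would do this
    // itself is exactly the open question; doing it by hand at least
    // makes the intent explicit.
    let kp = \Sample.value
    var total = 0.0
    for s in samples {
        total += s[keyPath: kp]
    }
    return total
}
```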