Swift Performance

Benchmarks are flawed, but we need something to go on. Besides, since Apple has SwiftNIO on TechEmpower, they must think it has some value, no? Not to mention that nearly every framework known to man is on there.
I think a lot of this "don't trust benchmarks" talk is an excuse for poor performance. Sure, if your framework is close in performance to another one on TechEmpower, we can call it a draw. But if your framework is near the bottom, guess what: your framework is slow, period.


Swift 5.3 has been released. Has it been improved?

Which thing are you referring to as "it"? Swift performance in general? There were definitely some performance improvements in Swift 5.3, though whether they apply depends on what you're measuring.


We get great performance from Swift. We built a new memory subsystem that allows concurrent, high-frequency memory allocation without touching ARC. We would argue that doing so is very similar to generational GC. ARC is much harder on performance than modern GC, despite the mythology, so these concurrency-aware memory allocation methods are even more important.



We built something like a generational heap, as in Java. We've found that operations with heavy workloads need ultra-high-frequency allocations, and that the operation semantically knows how to manage its allocations safely. We provide facilities for that operation to allocate without ARC, and to graduate select allocations to ARC land, where other "cells" can safely interact with them. What's great is that we can do this on n concurrent threads.

Think of a generational heap as a pointer to memory, some pooling, some allocators into that memory, and some syntactic sugar on top of the allocators.
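A minimal sketch of that idea, making no assumptions about Brighten's actual implementation: one raw buffer, a bump cursor, and typed helpers on top. The `Arena`, `allocate`, and `graduate` names here are hypothetical; in a real system you would keep one arena per thread to get the n-concurrent-threads property.

```swift
import Foundation

// Hypothetical bump ("arena") allocator: one big raw buffer, a cursor,
// and typed helpers on top. Allocations inside the arena never touch
// ARC; the whole region is recycled at once when the operation ends.
final class Arena {
    private let base: UnsafeMutableRawPointer
    private let capacity: Int
    private var offset = 0

    init(capacity: Int) {
        self.capacity = capacity
        self.base = UnsafeMutableRawPointer.allocate(
            byteCount: capacity, alignment: MemoryLayout<Int>.alignment)
    }

    deinit { base.deallocate() }

    /// Bump-allocate space for `count` values of `T` (trivial types only
    /// in this sketch). No retain/release traffic is generated.
    func allocate<T>(_ type: T.Type, count: Int = 1) -> UnsafeMutablePointer<T> {
        let align = MemoryLayout<T>.alignment
        let start = (offset + align - 1) & ~(align - 1)  // align the cursor
        let end = start + MemoryLayout<T>.stride * count
        precondition(end <= capacity, "arena exhausted")
        offset = end
        return (base + start).bindMemory(to: T.self, capacity: count)
    }

    /// "Graduate" a value out of the arena into ordinary ARC-managed storage.
    func graduate<T>(_ pointer: UnsafeMutablePointer<T>) -> T {
        return pointer.pointee  // copied into normal Swift storage
    }

    /// Reset the cursor; every arena allocation is invalidated at once.
    func reset() { offset = 0 }
}

let arena = Arena(capacity: 1 << 16)
let xs = arena.allocate(Int.self, count: 4)
for i in 0..<4 { xs[i] = i * i }
print(xs[3])  // 9
arena.reset()
```

The "syntactic sugar" mentioned above would sit on top of `allocate`, and `graduate` is one way to let other cells interact with a result through normal ARC-managed memory.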


The key is to iterate and get more performance and scalability into the language, and give Swift developers the tools to build bigger, more performant systems, just as we did with Java 10-15 years ago. ARC is much worse for performance than generational GC ever was. With our work, we manage method invocation rates in the hundreds of millions. Without it, ARC dominates our profiles, and the code can't sing. We prefer singing.


Separately, we built an Actor model for coarse-grained concurrency. You can see a snapshot of an actor definition here
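The linked definition isn't reproduced here, but for readers unfamiliar with the pattern: Swift later gained a built-in `actor` keyword (Swift 5.5) that serializes access to mutable state, which is one concrete form of coarse-grained concurrency. The `ChatRoom` type below is a hypothetical illustration, not the poster's actual actor:

```swift
import Foundation

// Hypothetical sketch: an actor serializes all access to its mutable
// state, so callers on any thread interact with it safely without locks.
actor ChatRoom {
    private var messages: [String] = []

    func post(_ message: String) -> Int {
        messages.append(message)  // mutation is serialized by the actor
        return messages.count
    }
}

let room = ChatRoom()
let done = DispatchSemaphore(value: 0)
Task {
    _ = await room.post("hello")
    let count = await room.post("world")
    print(count)  // 2
    done.signal()
}
done.wait()
```

Callers `await` the actor's methods; the actor runtime queues the calls, which is why this style is "coarse-grained": one hop per message, not per memory access.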

Why not use Go?

I am thinking about Java, Go, and Swift. Is Swift comparable to Go?

Your question is quite vague; I think the most one can realistically suggest is to benchmark it yourself (assuming you're even talking about performance).


Yes, I need to create a chat room and I need good performance. Thank you, I will try benchmarking it.

When creating an app like a chat app, the performance problems usually aren't in your backend unless your backend code is written poorly. They're more likely in transport and the database itself.

If you ever run into a performance problem with any of the mentioned languages, my first recommendation is to scale the number of servers handling requests. Your server should always be architected so that you can expand horizontally; if that's not the case, you're destined to run into problems anyway. Horizontal scaling also ensures that if one instance is down (even for maintenance), the rest can keep serving customers.


I've written a small chat system using SwiftNIO (it has it all: web clients, SwiftUI clients, server, chatbots); you can find it over here: GitHub - NozeIO/swift-nio-irc-server: A Internet Relay Chat (IRC) server for SwiftNIO. I demo it in this video: SwiftNIO on the Raspberry PI - Helge Heß - YouTube. Thought it might be interesting :woman_shrugging:

P.S.: With chat systems you often need to address scalability rather than raw performance (how many connections you can handle, not how fast you handle a single one). I think Go's default model (green threads) is a little worse here, while you will hardly run into issues with something like SwiftNIO (or Netty on Java, for that matter).

Just curious: why is your system doing so much allocation? Heavy use of classes, perhaps?


Can we please try to put a cap on these abstract discussions of performance?

"I heard Go was fast ... what about Java? or Rust? or Swift?"

These discussions are near meaningless without a more grounded discussion of the system, its requirements, its constraints, and its bottlenecks.


Heavy use of classes would kill us ;-). Imagine a system with no heap allocation per compute step, where we engage all available cores on a host for a whiff of 200 ms and asynchronously recognize tens to hundreds of millions of possible inputs in a dynamic "parse". We are probably doing four orders of magnitude more compute per second than you are imagining.

I’m not really imagining anything. Why so much allocation then?


It's worth remembering Swift's implementation of ARC is still far from optimal. Swift 5.3's optimizer significantly reduces the number of ARC calls in optimized code; we've seen up to 2x improvements in hot parts of SwiftUI and Combine without code changes. There will continue to be ARC optimizer improvements as we propagate OSSA SIL through the optimizer pipeline as well. (However, the main benefit of ARC over other forms of GC will always be lower-overhead interop with non-GC-managed resources, such as the large volumes of C, C++, and growing amount of value-oriented Swift code that implements the lower level parts of the OS, rather than the performance of the heap management itself.)
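The ARC calls being optimized away here arise from reference counting on class instances; reference-free value types sidestep them entirely. A hypothetical micro-comparison (the `PointClass`/`PointStruct` names are illustrative, not from any real codebase):

```swift
// Reference type: copies may incur retain/release calls that the
// optimizer must prove away.
final class PointClass {
    var x, y: Double
    init(x: Double, y: Double) { self.x = x; self.y = y }
}

// Value type with no reference fields: copies are plain bit copies,
// with no ARC calls at all.
struct PointStruct {
    var x, y: Double
}

func sumClass(_ points: [PointClass]) -> Double {
    var total = 0.0
    for p in points { total += p.x + p.y }  // element access can retain/release
    return total
}

func sumStruct(_ points: [PointStruct]) -> Double {
    var total = 0.0
    for p in points { total += p.x + p.y }  // no ARC traffic in this loop
    return total
}

let cs = (0..<4).map { PointClass(x: Double($0), y: 0) }
let ss = (0..<4).map { PointStruct(x: Double($0), y: 0) }
print(sumClass(cs), sumStruct(ss))  // 6.0 6.0
```

Both loops compute the same result; the difference is only in the retain/release traffic the optimizer has to eliminate in the class version, which is the kind of work the Swift 5.3 ARC optimizer improvements target.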


Joe, happy to look at Instruments profiles with you sometime.

Think of the SwiftUI/Combine case you mentioned as an important, lock-bound, high-message-count-overhead case, with not much actual compute. I was the chief architect of JavaFX, SwiftUI's great-granddad, and am aware of the case you are dealing with.

Think of Brighten AI, our system, as a wide-open concurrent compute system: all cores engaged for 200 ms, and, unlike SwiftUI/Combine, without a bunch of threading constraints (SwiftUI and main), so compute can go wide open. What we found building it is that ARC tends to get in the way insidiously, whether because of variable-sized structs, collection use, or ownership through stack pops. To be clear, we also built new storage subsystems to manage our system's streaming inputs; SQL/CoreData/Realm/etc. all had the same stomp-on-the-allocator-and-locks performance issues that would have killed us.
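One way ARC gets in the way "insidiously", as described above: a struct looks like a value type, but if any field is a reference (a class, or a collection's backing storage), copying the struct still retains that field. A hypothetical illustration (`Sample`, `FlatSample`, and `Buffer` are made-up names):

```swift
final class Buffer {
    var bytes: [UInt8] = []
}

// Looks like a value type, but copying it retains `storage`, so ARC
// traffic follows this struct everywhere it is passed or stored.
struct Sample {
    var timestamp: Double
    var storage: Buffer       // reference field: retained on every copy
}

// Fully flat struct: no reference fields, so copies never touch ARC.
struct FlatSample {
    var timestamp: Double
    var value: Double
}

let a = Sample(timestamp: 0, storage: Buffer())
let b = a                      // retains a.storage under the hood
print(b.storage === a.storage) // true: both copies share one Buffer
```

In a hot loop, the `Sample` copies generate retain/release pairs per iteration while `FlatSample` copies are free, which is why flattening hot data structures (or allocating outside ARC) matters so much in compute-heavy code.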

We use actors for coarse-grained concurrency, and the ultra-high-performance stuff for the streaming recognition inside one of the actors.
