Swift Performance

Correct. This is enforced by the runtime.

What is this? I can’t find any reference to it elsewhere. Is this just a memory leak caused by too many retains?

Read up the thread.

Thanks, I had read that, but it was two weeks ago and the on-page search didn’t find it. I still have a question: it seems theoretical when @Joe_Groff brings it up, but then @David_Smith says it is currently used by empty collections. Is this a concept that already exists in the language, or a theoretical one?

I'm not aware of it already being in place, but from what I gather in this thread, it seems hypothetical.

The runtime support is real and actively used by the Swift standard library implementation. The hypothetical part would be exposing it as API to user code, and/or adding optimizer rules to automatically immortalize object graphs when they're stored to immutable globals.
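For the curious, here is a minimal sketch of the empty-collection trick (the EmptyStorage and TinyArray names are invented for illustration; the real standard-library storage classes and the immortal marking itself are internal to the runtime):

    // Every empty collection shares one process-wide storage object. If
    // the runtime marks that object immortal, the retain/release traffic
    // that empty copies would otherwise generate becomes a no-op.
    final class EmptyStorage {
        static let shared = EmptyStorage()
        private init() {}
    }

    struct TinyArray {
        // Creating, copying, and destroying empty instances never
        // allocates: they all point at the same shared storage.
        var storage: AnyObject = EmptyStorage.shared
    }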

5 Likes

I added immortal object support in Allow Swift objects to mark themselves as ignoring refcounting by Catfish-Man · Pull Request #22569 · apple/swift · GitHub

9 Likes

Sidenote: the CoreFoundation & Objective-C runtimes have supported immortal objects essentially forever. It’s a well-proven mechanism with many practical applications (and some crutches, e.g. abusing it for singletons so you can ignore reference-counting errors, though ARC largely obsoleted that particular misuse).

The only ‘difference’ in this case would be that, unlike with CF & Objective-C, where immortality was granted at runtime and in any case had zero effect on compiler output, in Swift it does affect compiler output, and is (or is proposed to be?) propagated to referenced objects too. Sounds like a fine optimisation to me, as long as it’s implemented correctly, including w.r.t. how memory debugging & profiling tools deal with immortals. In CF / ObjC they’d show the reference count as INT32_MAX, IIRC, which was a clear signal in practice, and infallible, since you could achieve immortality ‘the hard way’ by calling retain INT32_MAX times: a retain count of INT32_MAX was the precise condition that granted immortality, and the retain/release/etc. methods refused to modify a retain count with that value.

Maybe that was the case on the old 32-bit runtime, but with the 64-bit runtime, immortality is granted by setting all the RC bitfield flags to 0:

__CFHighRCFromInfo(info) == 0; // This is a constant object

And CFGetRetainCount is special-cased for constant objects to return 0x0fffffffffffffffULL.

Calling retain INT32_MAX times will just raise the retain count high enough that there is little chance your object will ever be deallocated, but it does not make the object immortal.
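You can observe this from Swift on Apple platforms; a quick sketch (the sentinel value is an implementation detail and may vary by OS release):

    import CoreFoundation

    // kCFBooleanTrue is a constant CF object; per the behaviour described
    // above, CFGetRetainCount is special-cased for it and returns a
    // sentinel value rather than a live count.
    print(String(CFGetRetainCount(kCFBooleanTrue), radix: 16))

    // A freshly created, ordinary (mortal) CF object reports a normal
    // retain count instead.
    let array = CFArrayCreateMutable(kCFAllocatorDefault, 0, nil)
    print(CFGetRetainCount(array)) // 1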

1 Like

With a well-funded and technically strong company like Apple behind it, I'm quite certain it will improve over time. I also think that now that they own the CPU, some optimizations there will happen as well.

However, at the current time there is no doubt that Swift is not as strong performance-wise as some competitors.

I would take the results from this site with a salt mine.

One example:

The Go-Swift comparison page lists k-nucleotide as a benchmark. The bottom of the page claims the Swift compiler used was: Swift version 5.2.4 (swift-5.2.4-RELEASE).

However, if you follow the link for that benchmark, you find that the Swift program wasn't built with the same compiler. So which compiler was really used for the Go-Swift comparison?

4 Likes

I'm cautious about these web-server benchmarks, because they reward senselessly simple, fast web servers (obviously).

There's a bunch of "fat" you can cut in the pursuit of optimizing throughput on a simple metric (e.g. JSON objects served per second). For example, you can:

  • Not use any dynamic dispatch (see the sketch below)
  • Not modularize any code with expensive abstractions: interfaces which introduce indirection increase CPU cache misses and slow performance
  • Not use any aspect-oriented programming
  • Not use any reactive streams
  • Not use a dependency injection container
  • Not have any logging/tracing
  • Not use any exceptions (the kind that involve stack unwinding, as distinct from Swift's errors)

Of course you'll get a fast result... but who in their right mind would program real applications in such a restricted environment?
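As a minimal sketch of the first point (the Handler and AddOne types are hypothetical, not from any benchmark): calling through a protocol-typed value forces dynamic dispatch through a witness table, while a generic parameter lets the compiler specialize and inline the same call away.

    protocol Handler {
        func handle(_ value: Int) -> Int
    }

    struct AddOne: Handler {
        func handle(_ value: Int) -> Int { value + 1 }
    }

    // Existential parameter: every call goes through a witness table at
    // runtime (dynamic dispatch), and the value may be boxed.
    func runDynamic(_ handler: Handler, over values: [Int]) -> Int {
        values.reduce(0) { $0 + handler.handle($1) }
    }

    // Generic parameter: the compiler can specialize this for AddOne and
    // inline the call, removing the indirection entirely.
    func runStatic<H: Handler>(_ handler: H, over values: [Int]) -> Int {
        values.reduce(0) { $0 + handler.handle($1) }
    }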

1 Like

I didn't take a look at the results, but Swift claims to be fast, so it has to be fast, especially on simple, naive applications.

Whether we like it or not, those are what people will look at when choosing between one language and another.

There's nothing particularly simple and naive about a web server framework.

There are better performance benchmarks that test the languages more specifically (e.g. implementing RSA encryption from scratch). These web server benchmarks are testing frameworks like Vapor/Kitura, not just Swift in isolation.

Wow, some of the Swift code they're comparing here is extremely non-optimal. The worst Swift result on the Swift/Go comparison is the regex-redux test. The implementation there is performing the work on a background Dispatch queue, which is the lowest possible priority for work. It's not at all surprising that it would be slow.

It's also calculating the full Unicode grapheme count of the entire (very large) input by using input.count, and then passing that to NSRange as if it were a UTF-16 code unit count, which is incorrect behavior that happens to work here because the input is all ASCII data. And it's unnecessary work anyway.
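For reference, a small sketch of the fix (input is a stand-in for the benchmark's data; the pattern is one of regex-redux's): Foundation's NSRange(_:in:) converts a String range to a UTF-16-based NSRange directly, with no O(n) grapheme count.

    import Foundation

    let input = "GGGATACGT" // stand-in for the benchmark's large input

    // Incorrect: input.count walks the string counting grapheme clusters,
    // and the result is not a UTF-16 length anyway.
    let badRange = NSRange(location: 0, length: input.count)

    // Correct: convert the String's full range to a UTF-16-based NSRange.
    let goodRange = NSRange(input.startIndex..., in: input)

    let regex = try! NSRegularExpression(pattern: "agggtaaa|tttaccct",
                                         options: [.caseInsensitive])
    print(regex.numberOfMatches(in: input, options: [], range: goodRange))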

I'm always happy to see Swift performance improve, but the particular benchmarks here are nothing like a fair comparison.

17 Likes

regex-redux is also going to remain a worst case scenario for Swift unless and until we do some sort of native regular expression support. ICU's regex support (which underpins NSRegularExpression) isn't the fastest, but more importantly it operates on UTF16, which means bridging and transcoding when starting from Swift*.

But yeah, putting it on a background queue is baffling.

*though hopefully we're hitting all the ASCII fast paths in the bridging code. If not, that's a bug I should fix.

7 Likes

I used the background queue because when I didn't, the performance was similar to the last Swift program on the benchmark list (I don't understand why). As a Java user looking to move to Swift, I am desperate for someone to submit a much faster implementation (so I can learn from it). But I can't move to Swift if my text-processing routines don't at least match Java's performance. I'm glad to know that people on the team are thinking about the problem, and I'm certain that when it becomes a priority (I know text processing isn't as important as iOS development) I'll be able to reap the benefits.

Apart from the potential correctness issues here (full Unicode text processing vs. UTF-16 processing, required or not): micro-benchmarks involving GC languages are always a little weird, because they usually won't run long enough to hit the GC.

P.S.: FileHandle.standardInput.readDataToEndOfFile(): maybe just change that to Data(contentsOf:), which is probably a little faster.
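A sketch of that swap (assumes the benchmark's input is redirected from a real file; using /dev/stdin as a path is a Unix-ism):

    import Foundation

    // Original approach: read standard input incrementally via FileHandle.
    // let data = FileHandle.standardInput.readDataToEndOfFile()

    // Suggested alternative: a one-shot read of the whole input.
    let data = try! Data(contentsOf: URL(fileURLWithPath: "/dev/stdin"))
    print(data.count)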

3 Likes

I considered submitting an improved implementation, but the documentation on the website seems to suggest it isn’t regularly being updated anymore. Is that incorrect?

They are still active. Submissions are accepted on their GitLab.