Can my technology company use only Swift?

Without Apple's support, what would you choose?

Take this simple C++ ring buffer implementation:

#define count (10*512)   // parenthesized so "pos % count" doesn't parse as "(pos % 10) * 512"

class CRingBuffer {
    float samples[count];
    int readPos = 0;
    int writePos = 0;
    
public:
    void put(float sample) {
        samples[writePos++ % count] = sample;
    }
    float get() {
        return samples[readPos++ % count];
    }
};

If you naïvely convert it to Swift:

class RingBuffer {
    static let count = 10*512
    var buffer: [Float]
    var writePos = 0
    var readPos = 0
    
    init() {
        buffer = [Float](repeating: 0, count: Self.count)
    }
    func put(_ v: Float) {
        buffer[writePos % Self.count] = v
        writePos += 1
    }
    func get() -> Float {
        let v = buffer[readPos % Self.count]
        readPos += 1
        return v
    }
}

you will get an implementation with retain/release calls in its get/put methods (and those can take a mutex lock) -- not good for real-time code; see these old but still good links: link1 (49:12 -- 50:50), link2 and link3. Those are the kinds of minefields I'm talking about. If you are extra careful (switch to unsafe pointers and verify the resulting asm, which can change from one Swift version to another), then yes, you can: link4 (29:00 -- 32:40, 37:50 -- 42:25).

Relevant thread from the past.


I non-naïvely converted the buffer to Swift, for reference; a sketch is below. It is translated as literally as possible from the C++. The first problem is allocating a Swift Array inside a class, which is already reference counted: either change RingBuffer to a struct, or change buffer to a raw pointer that is deallocated in RingBuffer.deinit. I chose the latter. Second, force-inline put and get if you're exporting this type as part of a public library; you don't want the overhead of extra function calls when the assembly of RingBuffer.put and RingBuffer.get is obviously going to be tiny.
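
A sketch along those lines (the 32-bit positions mirror C's int from the literal translation, and @inline(__always) is one way to express the force-inlining; details may vary):

final class RingBuffer {
    static let count: Int32 = 10 * 512
    // Raw storage instead of [Float]: get/put compile to plain loads and stores,
    // with no retain/release traffic on the audio thread.
    private let buffer: UnsafeMutablePointer<Float>
    private var writePos: Int32 = 0
    private var readPos: Int32 = 0
    
    init() {
        buffer = .allocate(capacity: Int(Self.count))
        buffer.initialize(repeating: 0, count: Int(Self.count))
    }
    deinit {
        buffer.deallocate()
    }
    
    @inline(__always)
    func put(_ v: Float) {
        buffer[Int(writePos % Self.count)] = v
        writePos += 1
    }
    @inline(__always)
    func get() -> Float {
        let v = buffer[Int(readPos % Self.count)]
        readPos += 1
        return v
    }
}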

Putting that aside, the main point is that you should avoid ARC-managed Swift types like Array and Dictionary inside an audio kernel function. The 2016 WWDC video (37:50 - 42:25) described a render function written entirely in Swift, using just pointers. That function performs only RAM reads/writes and ALU instructions, with all memory pre-allocated and no calls into swift_retain. It is easy to restrict yourself to those kinds of instructions in plain C, which is probably why Apple recommended using exclusively C/C++ for audio until 2016.
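
As a rough illustration of that style (the function name and gain parameter are mine, not from the WWDC session), a kernel restricted to pointers can look like this:

// Illustrative only: all buffers are pre-allocated by the caller; the body is
// just loads, multiplies, and stores -- no allocation, no swift_retain calls.
func renderGain(_ output: UnsafeMutablePointer<Float>,
                _ input: UnsafePointer<Float>,
                frameCount: Int,
                gain: Float) {
    for i in 0..<frameCount {
        output[i] = input[i] * gain
    }
}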

But to be helpful to someone trying to learn Swift, we can teach them these performance characteristics of the language; they might not find them all that problematic. For audio code, you need to think like a C programmer whether you're writing the kernel in Swift or C++. The statement above implies that you, the reader, can't learn to work effectively within this mindset, and that even if you do, it isn't sustainable in the long term.

To restate the concerns from that paragraph:

  1. When working with audio, avoid synchronization primitives. This can be managed by pre-allocating your memory into raw pointers and, in real-time code, only touching memory backed by those pointers (see the sketch after this list).
  2. Experimental, custom code will bring glitches. This is something all programmers face, in every context. Bugs are no fun, but there are many ways to search for bugs or performance regressions. Swift package tests make it easy to test whether a code base runs correctly, but other options are also available.
  3. The compiler's internals can change at any time. In general, the compiler gets faster over time, and if a section of code runs fast now, it will probably run fast in the future. New features like explicit stack allocations are subject to change, but the documentation and forums are good at explaining those changes when they appear.
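
Here is a hypothetical sketch of point 1 (the AudioState name and the 0.5 gain are made up): allocate once during setup, then let the real-time path touch only that memory.

// Setup allocates everything up front; the real-time path below it allocates
// nothing and takes no locks.
struct AudioState {
    var scratch: UnsafeMutablePointer<Float>
    var capacity: Int
}

// Setup, off the real-time thread: pre-allocate the raw memory.
func makeAudioState(capacity: Int) -> AudioState {
    let scratch = UnsafeMutablePointer<Float>.allocate(capacity: capacity)
    scratch.initialize(repeating: 0, count: capacity)
    return AudioState(scratch: scratch, capacity: capacity)
}

// Real-time path: only reads and writes through the pre-allocated pointer.
func process(_ state: AudioState, frames: Int) {
    let n = min(frames, state.capacity)
    for i in 0..<n {
        state.scratch[i] *= 0.5
    }
}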

Nice. You may want to change from Int32 to a bigger type or to UInt32, otherwise the counter will overflow in a few hours and the array index will go negative and crash. Also worth changing += to &+=.
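
Something along these lines, sketched with assumed names:

// Unsigned positions plus wrapping addition, so the counter wraps at
// UInt32.max instead of trapping or going negative.
struct Positions {
    static let count: UInt32 = 10 * 512
    var writePos: UInt32 = 0
    
    mutating func nextWriteIndex() -> Int {
        let index = Int(writePos % Self.count)
        writePos &+= 1
        return index
    }
}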

I don't think that per se will fly: you need the same struct instance for the reader and the writer (which are on different threads), meaning it has to live either inside a class or in a global variable. In both of those cases get/put will have retain/release if the underlying storage is left as a Swift Array.

Very true.

That was the question I had at one point: what materially changed between WWDC 2015 and WWDC 2016 in this regard... I never found an authoritative answer.

I brought up realtime audio as an answer to "can I do everything in Swift". The answer I was conveying was: "perhaps you could, it's just that there are better tools for specific jobs".

A few things for your third bullet point:

  • A peculiarity of realtime programming is that it's not about overall speed or average latency -- it's about worst-case behaviour. Oftentimes worst-case latency is sacrificed for speed: code can become much faster on average, but if it gains an extra lock (where there was no lock previously), or if its worst-case behaviour changes from O(N) to O(N²) (even while the average improves from O(N) to O(log₂N)), that is a problem for realtime code.
  • Just recently we were discussing a speed regression reproduced in SwiftUI (Seven seconds to render SwiftUI - #33 by tera), observed on newer macOS versions. Perhaps there was a good reason for that slowdown; just note that newer is not necessarily faster.
  • "The compiler's internals can change at any time" - that's why the idea of @realtime keyword discussed in the other mentioned thread is so appealing, it would remove any guesswork or trial and error. With this keyword swift could become better than C IRT realtime programming.

You started to sound reasonable, good job.

This is an unwarranted, sarcastic remark about my character, not a refutation of my argument. Please make a best effort to stick by this document. It's okay to feel strongly about a particular stance, but that statement didn't contribute anything of value to this discussion.

I would have used Int, but I stuck to literally translating the C++ code; in Java int is a 32-bit signed integer, and in C and C++ it is 32 bits on the platforms this code targets. That was an intentional choice for the sake of demonstration. Good catch on &+= though, which skips the overflow check and should save a clock cycle. To prevent overflows, I prefer to reset the counter of a ring buffer whenever it reaches the maximum.
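
For example (a sketch, names assumed), the increment becomes a compare-and-reset instead of unbounded growth:

// Reset-on-wrap: bump the position, then pull it back to zero at the capacity,
// so it can never overflow its integer type.
@inline(__always)
func advance(_ position: inout Int32, capacity: Int32) {
    position &+= 1
    if position == capacity { position = 0 }
}

With the counter bounded this way, put/get can index the buffer directly and skip the modulo.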

This is my fault for being a bit vague. I often use "fast" to describe latency, throughput, and worst-case complexity collectively. When working with just C-style pointers in audio kernels, there are no operations that can block, so worst-case behavior should be immune to compiler changes around synchronization -- no locks are present to begin with.

Setting that aside, in real-time computing environments where you can expect synchronization locks, absolutely optimize for worst-case performance. I did a ton of that in ARHeadsetKit and AR MultiPendulum, even making a frame take longer on average by offloading certain work to the GPU. In the end product, the only time my framework/app went under 60 fps was when the OS froze the run loop.

I think this tangent has strayed far enough that it's no longer contributing to the main purpose of the forum thread. I also don't think the unique performance characteristics of real-time computing are even a concern for the original poster, unless I'm mistaken.


Perhaps this is a productive talking point to restart the discussion from. Swift happens not to be used much outside the Apple ecosystem, but that doesn't mean it should be avoided. Saying so is an appeal to popularity (argumentum ad populum) -- the idea that because everyone does something, it must be the best option. Here it appears in reverse form: Swift is "unpopular", therefore it is the "worst" option.

The statement characterizes everyone who deviates from the norm -- by using Swift on Linux/Windows/Android -- as "hav[ing] too much free time on their hands". I happen to have lots of free time (this forum thread is proof) and use Swift extensively on Linux/Colab, but not everyone is in that position. Perhaps the discussion here could be: could someone use Swift for X specific project, given a finite amount of time?
