Realtime threads with Swift

May we see some sample code of how you use unsafe memory allocation?

Just bumping this thread because I happened upon this:

/*!	@property	renderContextObserver
	@brief		Block called by the OS when the rendering context changes.
	
	Audio Units which create auxiliary realtime rendering threads should implement this property to
	return a block which will be called by the OS when the render context changes. Audio Unit hosts
	must not attempt to interact with the AudioUnit through this block; it is for the exclusive use
	of the OS. See <AudioToolbox/AudioWorkInterval.h> for more information.
*/
@property (nonatomic, readonly) AURenderContextObserver renderContextObserver
	API_AVAILABLE(macos(11.0), ios(14.0), watchos(7.0), tvos(14.0))
	__SWIFT_UNAVAILABLE_MSG("Swift is not supported for use with audio realtime threads");

Here we are 2+ years later. Still can't do realtime audio in Swift. Sad.

4 Likes

We do also want to add fixed-size arrays at some point (and there have been discussions about the best approach). Which one is "normal" depends on your perspective, but AFAIK there has been no rejection of fixed-size arrays as a concept, even if that requires a new type.

Range is a struct, so its values are stored directly. Most of the time, it will just get inlined and vanish. It is safe to use in realtime code. I would expect integer literals, as you've shown here, to compile to exactly the same code as C. For non-literals, I would expect the differences to be limited to range correctness checks (checking that lower <= upper when you write for x in lower..<upper) and integer overflow checks.
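
To illustrate the point about `Range` vanishing after inlining, here is a minimal sketch (the function name and buffer size are hypothetical, not from the thread). With optimizations on, the `0..<512` literal range is expected to lower to a plain counted loop, the same as the equivalent C:

```swift
// A loop over an integer-literal Range. The Range<Int> value is a plain
// struct and is expected to be inlined away by the optimizer, leaving
// the same code as a C for-loop. No allocation, no runtime calls.
func sumSamples(_ buffer: UnsafePointer<Float>) -> Float {
    var total: Float = 0
    for i in 0..<512 {
        total += buffer[i]
    }
    return total
}
```

For non-literal bounds, the extra work would be the `lower <= upper` precondition and overflow checks mentioned above, not any heap traffic.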

3 Likes

Neither does C on systems that overcommit memory like Linux. Even under extreme memory contention, malloc() will happily return a non-null pointer and then the VM system will kill your process when you try to wire those pages by writing to them.

Practically speaking, your opportunities to handle out-of-memory conditions are restricted to direct interactions with the VM system. Swift could definitely make it easier to allocate instances out of manually managed memory, but a system that was completely safe in the presence of VM overcommit would require handling OOM on every memory access.

1 Like

How realistic is it to use Swift for simple real-time audio processing today?

I see that AVFoundation's AVAudioSourceNode and AVAudioSinkNode, released in 2019, both mention realtime audio processing applications with a caveat about avoiding mutexes and libdispatch. But there is no longer any explicit mention of avoiding Swift.

In fact, the video that mentions avoiding Swift seems to have been removed from the archives (Session 501 WWDC 2017) and is quite hard to find now.

With Swift Atomics now available, would it be reasonable to use ManagedBuffer to preallocate a buffer upfront to implement various producer-consumer queues suitable for realtime applications? If not statically, is there a simple way to instrument our realtime sections to ensure the compiler isn't adding something that may obstruct progress of our real time threads?

EDIT: And also, how much of an impact will the realisation of Swift's ownership manifesto have on using Swift for realtime applications?

I don't know the current official word on that; last time I paid attention it was "don't do this" in 2015 and "yes you can" in 2016 (see the proof links in the message above).

I think, practically, you can. Just remember that some audio glitches are unavoidable even if you do it in C (e.g. the data or code you are touching happens to be paged out and needs to be loaded from disk; that's I/O, which can take unbounded time, or just enough time to miss the deadline). A good idea is to count the number of glitches per time interval to see how well or badly it works in practice.
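
Glitch accounting like that could be sketched as follows (illustrative names, assuming the Swift Atomics package; a plain `OSAtomic` counter would work the same way). The render thread only bumps an atomic counter; a non-realtime housekeeping thread drains and reports it:

```swift
import Atomics

// Sketch: count missed deadlines / underruns on the render thread and
// read them out periodically elsewhere. All names here are illustrative.
let glitchCount = UnsafeAtomic<Int>.create(0)

// Called on the render thread when an underrun is detected.
// Relaxed ordering is fine: this is a statistic, not a synchronization point.
func noteGlitch() {
    glitchCount.wrappingIncrement(ordering: .relaxed)
}

// Called from a housekeeping thread, e.g. once per second:
// returns the glitches observed since the last call and resets the counter.
func drainGlitches() -> Int {
    glitchCount.exchange(0, ordering: .relaxed)
}
```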

1 Like

Yeah, I suppose if you program like you are programming in C, then it should be fine. After all, why would the compiler need to generate runtime calls?

So you can't use Swift arrays (you've got to preallocate everything and use UnsafePointer), no escaping closures, etc. It's fairly restrictive, but I can get quite a bit done.
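
For the record, preallocation along those lines might look like this (a sketch with hypothetical names, not code from AudioKit): allocate on a non-realtime thread up front, and only touch the raw pointer from the render thread.

```swift
// Sketch: all allocation happens in init, outside the realtime context.
// The render thread only reads/writes through `samples`, which involves
// no Swift runtime calls.
final class ScratchBuffer {
    let samples: UnsafeMutablePointer<Float>
    let capacity: Int

    init(capacity: Int) {
        self.capacity = capacity
        samples = UnsafeMutablePointer<Float>.allocate(capacity: capacity)
        samples.initialize(repeating: 0, count: capacity)
    }

    deinit {
        samples.deallocate()
    }
}
```

The remaining care point is lifetime: the object must be kept alive (and released) off the realtime thread.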

Still, Apple blocked us from trying with that __SWIFT_UNAVAILABLE_MSG. I've gotten around it using class_replaceMethod: AudioKit/EngineAudioUnit.swift at 3e5003bde0ea9d36e48e8512778cbfbb9d3ae243 · AudioKit/AudioKit · GitHub

I'm sick of waiting around for this. We're doing AudioKit v6's DSP with Swift.

4 Likes

Yeah doable. We wanted to use Swift Atomics but they use a C shim and we wanted compatibility with the Swift Playgrounds app since AudioKit is beginner-friendly (Swift Atomics can't be used in Swift Playgrounds.app · Issue #62 · apple/swift-atomics · GitHub)

Instead I used the (deprecated) OSAtomic APIs: https://github.com/AudioKit/AudioKit/blob/v6/Sources/AudioKit/Internals/Engine/RingBuffer.swift

Swift's Array is not normal :slight_smile:?

I implemented a few stack-based fixed-size arrays via tuples. It's far from convenient:

struct Array16<T> {
    var elements: (T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T)
}

and then you define your own subscript operations, etc. But what if I need an array of 1234 elements? I guess I could define it as Array64<Array32> – quite a hassle to maintain, and lots of wasted space.
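
The hand-written subscript mentioned above might be sketched like this (assuming T is a trivial, bit-copyable type; bounds checking omitted for brevity):

```swift
// Sketch: indexing into the tuple storage by reinterpreting its bytes.
// Only valid for trivial element types; no bounds checks shown.
extension Array16 {
    subscript(index: Int) -> T {
        get {
            withUnsafeBytes(of: elements) { raw in
                raw.load(fromByteOffset: index * MemoryLayout<T>.stride, as: T.self)
            }
        }
        set {
            withUnsafeMutableBytes(of: &elements) { raw in
                raw.storeBytes(of: newValue,
                               toByteOffset: index * MemoryLayout<T>.stride,
                               as: T.self)
            }
        }
    }
}
```

which makes the inconvenience concrete: every size needs its own tuple type, and the element access goes through raw-byte reinterpretation.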

I assume you only have a single writer (e.g. a microphone) and a single reader (e.g. a speaker), hence it is OK to not protect the head/tail variables, only the fillCount variable. However, it looks like your code is missing a load barrier when reading the fillCount variable. For example this (a bit overkill, but it should work): "OSAtomicAdd32(0, &fillCount)".

OTOH, with that barrier in place, or with all barriers (atomics) removed completely, it won't sound too different, I guess. What happens with proper overflow/underflow checks when an overflow/underflow happens? A glitch. What happens on overflow/underflow without checks? Also a glitch :slight_smile:

That code is just adapted from TPCircularBuffer, which is used all over the place: TPCircularBuffer/TPCircularBuffer.h at master · michaeltyson/TPCircularBuffer · GitHub

That's really useful info, thank you.

Some further research on ManagedBuffer kicked up this issue by @Karl, which shows that unless you access ManagedBuffer's header through withUnsafeMutablePointerToHeader, the Swift runtime is used to enforce exclusivity.

So maybe it's better for me to do as you have done in your RingBuffer and use UnsafeMutableBufferPointer directly as I'm not confident that the implementation of ManagedBuffer won't change and suddenly rely on the runtime.
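
If one did stick with ManagedBuffer, the header access the linked issue recommends could be sketched like this (illustrative names; the claim that this avoids the exclusivity-checking path is per that issue, not something verified here):

```swift
// Sketch: a ManagedBuffer subclass whose header (here, an element count)
// is read via withUnsafeMutablePointerToHeader rather than the `header`
// property, which per the linked issue can invoke runtime exclusivity
// enforcement.
final class SampleStorage: ManagedBuffer<Int, Float> {
    static func make(capacity: Int) -> SampleStorage {
        // create(minimumCapacity:makingHeaderWith:) returns `Self`
        // dynamically, so the downcast is safe here.
        create(minimumCapacity: capacity) { _ in 0 } as! SampleStorage
    }

    // Reads the header through the pointer-based accessor.
    var count: Int {
        withUnsafeMutablePointerToHeader { $0.pointee }
    }
}
```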

Looks like I'll be copying @Karl 's trick of pasting my real time code into Godbolt and checking for emission of Swift runtime calls to verify realtime safety.

Wish there was a better way!

There's this: Performance annotations - #64 by tera

I haven't gotten it to work yet.

1 Like

Yeah, that would be great. Looks like a pre-pitch though.

Any chance we might see the performance annotations mentioned here, coming to fruition in 2023, @Erik_Eckstein ? Lots of appetite for them here!

If anyone is curious, or wants to tell me the hundreds of ways I've violated realtime safety via the Swift runtime, my WIP code is here (new engine for a popular open source framework): AudioKit/Sources/AudioKit/Internals/Engine at v6 · AudioKit/AudioKit · GitHub

5 Likes

These are already available as underscored attributes, aren't they?

2 Likes

Oh, interesting. Thanks! Will give these a try later this week.

Something tells me it doesn't yet work :rofl:

1 Like

Perhaps the error was always there and propagated from source to source over the years. Lock-free algorithms are very easy to get wrong, and bugs reproducible on weaker-memory-model architectures can go unnoticed on stronger-memory-model architectures (e.g. x86, or the first iPhones, which weren't multicore at all). In the case of audio, bugs (audio glitches) can go unnoticed when they happen rarely.

It's easy to google for results like here or here showing the need for a load barrier on the reading side. By no means am I an expert on this huge topic, so I have to trust experts and test results.

I see you've changed from the OSAtomicAdd32 that I saw in a previous version of your code to Atomics. Does it work OK in Playgrounds, or did you drop that support?

Looking at the RingBuffer file (and only that file), it feels almost alright to me (but I'd like to hear from real subject experts!), other than these two things plus one extra:

  • you are using "relaxed" for both reading and writing; it should be acquiring on read and releasing on write.
  • classes are a huge no-no for realtime algorithms (retain/release are outlawed), and ManagedAtomic is a class. I'd use UnsafeAtomic, which is a struct (the nomenclature in the atomics library could've been better). I'll leave it to you to check the implications of having the RingBuffer itself as a class (does it cause ARC traffic when you call its instance methods? Worth checking the asm).
  • something from my experience, but maybe not important in your case: for audio I prefer APIs that give me the ability to write audio frames (one or more channels), audio samples, or just bytes, as sometimes the blocks of audio data I have to write are variable length (useful for things like the Varispeed component, or codecs like mp3 or opus with a variable input-to-output ratio expressed in bytes). Your API enforces a fixed size, unless the idea is to use T = UInt8 with "pop(to ptr: UnsafeMutableBufferPointer)" / "push(from ptr: UnsafeBufferPointer)" – which is not convenient (regarding variable sizes: it would still be quite hard to vary the count of an existing UnsafeMutableBufferPointer).
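
The ordering fix in the first point could be sketched like this with the Swift Atomics package (hypothetical names, single producer / single consumer assumed): release when the writer publishes new frames, acquire when the reader observes them, so the data writes are guaranteed visible before the count is.

```swift
import Atomics

// Sketch of acquire/release ordering on a SPSC ring buffer's fill count.
// UnsafeAtomic is a struct (no retain/release), suitable for realtime use.
struct FillCounter {
    let count = UnsafeAtomic<Int>.create(0)

    // Producer side, called AFTER the audio data is written to the buffer:
    // the releasing store makes those writes visible to the acquiring load.
    func didWrite(_ frames: Int) {
        count.wrappingIncrement(by: frames, ordering: .releasing)
    }

    // Consumer side, called BEFORE reading audio data out of the buffer.
    func availableFrames() -> Int {
        count.load(ordering: .acquiring)
    }

    // UnsafeAtomic storage must be destroyed manually (off the RT thread).
    func destroy() { count.destroy() }
}
```
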
1 Like

Yeah, I think you're right about that. I've been updating my code to take into account memory ordering.

Yeah, I've got a new version that I think is correct.

Oh sheesh. That's rough, since I'm using classes in various places. I can use std::shared_ptr in C++ (also reference counted), so why are retain/release a problem? In C++ I just need to ensure the object isn't deallocated on the RT thread (there are various approaches to that).

Good points. Here's the C++ version which I'm going to port over: https://github.com/AudioKit/AudioKitEX/blob/main/Sources/CAudioKitEX/Internals/RingBuffer.h

No, we dropped that support for now. I wanted to get it passing TSAN and thought maybe the OSAtomic calls weren't good enough.

1 Like

Hold up. Retain and release are not “outlawed”. The normal path for retain/release atomically increments/decrements the inline refcount. There are circumstances where retain will cause a side allocation (overflowing retain count), and allocations can take the malloc lock. But as we already know, if your program touches memory at all, it’s impossible to avoid every single thing that can cause a hitch.

(I’d be more concerned about the overhead of dynamic exclusivity checking.)

3 Likes