Realtime threads with Swift

Yep, that is an advantage. It’s been a while since I worked on SIL, so that might only be true early in the pipeline, but there are already diagnostic passes that work on SIL even in no-debug-info modes, so your pass should be able to work as well.

2 Likes

Ok so if I have var ptr = UnsafeMutablePointer<Int>.allocate(capacity: 1), then the SIL is:

// function_ref static UnsafeMutablePointer.allocate(capacity:)
  %9 = function_ref @$sSp8allocate8capacitySpyxGSi_tFZ : $@convention(method) <τ_0_0> (Int, @thin UnsafeMutablePointer<τ_0_0>.Type) -> UnsafeMutablePointer<τ_0_0> // user: %10
  %10 = apply %9<Int>(%8, %4) : $@convention(method) <τ_0_0> (Int, @thin UnsafeMutablePointer<τ_0_0>.Type) -> UnsafeMutablePointer<τ_0_0> // user: %11
  store %10 to [trivial] %3 : $*UnsafeMutablePointer<Int> // id: %11

and the IR is:

%8 = call noalias i8* @swift_slowAlloc(i64 8, i64 %7) #1
  store i8* %8, i8** getelementptr inbounds (%TSp, %TSp* @"$s8realtime3ptrSpySiGvp", i32 0, i32 0), align 8

Seems straightforward to recognize swift_slowAlloc. I'm not sure what to do in the case of SIL. Recognizing something like UnsafeMutablePointer.allocate would mean that I would have to recognize calls outside of the Swift runtime, which would mean a potentially long (and incomplete) list of realtime-violating functions, no?

EDIT:

Ok I ran with the SIL passes (swiftc -emit-sil -O) and I get:

%15 = builtin "allocRaw"(%14 : $Builtin.Word, %12 : $Builtin.Word) : $Builtin.RawPointer

so presumably the set of builtin functions could be categorized as realtime-safe/unsafe. Would that be sufficient?
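As a concrete sketch of what such a categorization might look like (the builtin names below are illustrative guesses, not an audited list):

```swift
// Hypothetical sketch: a denylist of SIL builtins that may allocate or lock.
// Only "allocRaw" is confirmed (seen in the -O SIL above); the rest are guesses.
let realtimeUnsafeBuiltins: Set<String> = [
    "allocRaw",
    "allocWithTailElems_1",
    "deallocRaw",
]

func isRealtimeSafe(builtinName: String) -> Bool {
    return !realtimeUnsafeBuiltins.contains(builtinName)
}
```

A diagnostic pass could then walk each function's builtin instructions and emit an error at any denylisted name.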

I think you have to assume any non-inlined function is realtime-unsafe (or do an analysis of any called functions), so I’m not sure this case is worse. But yeah, some SIL builtins and possibly some primitive operations might be realtime-unsafe as well.

I was made aware of this thread's existence during WWDC, and I am also interested in seeing Swift progress forwards as a more realtime-friendly (or "truly systems level") language in the future.

Unfortunately, for an AudioUnit that's written in Swift right now, any progress made via the above work will be hampered by the fact that we're going to get additional swift_allocObject calls inserted into the code that's executed in the audio unit's internalRenderBlock. More details here: https://bugs.swift.org/browse/SR-9662

In addition to your checks that guard against allocations, we'll also want checks to ensure that there are no surprise locks taken behind the scenes in the runtime when we interact with certain reference types. Especially in cases where someone might (as an example) accidentally try using an AVAudioPCMBuffer instance, which is backed by an Objective-C implementation and is hence unsafe.

Perhaps what we need is a "runtimeless" subset (superset?) of Swift that would allow us to build out libraries/packages that don't have access to the standard library (or an alternate version/subset of it) before we could achieve something like this.

But that'd mean there's no heap-allocated types, no ARC, and I'm not sure that such a thing would even closely resemble the Swift that we know today. :smile:

Anyway, count me as a :heavy_plus_sign: for cheering on any efforts in this direction. I've got a ton of Swift-powered DSP that I use in a non-realtime context, and would love to get some more mileage out of it, and stop maintaining parallel C++ and/or Rust implementations…

7 Likes

I presented some of my investigation/progress in the AudioKit office hours:

5 Likes

Was just looking at https://bugs.swift.org/browse/SR-9662... I wonder if using an UnsafeMutablePointer to the DSP kernel gets around the issues you were seeing? It seemed to work, if you look at the code in my presentation above.

EDIT: ok, I get it. The getter is returning a function pointer to a "reabstraction thunk helper" which allocates. Fixing that would have to be part of this work. As a workaround, could we use indirection to call a @convention(swift) render block from an ObjC block?
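For reference, the pointer-based pattern I mean looks roughly like this (SineKernel and the render function are made-up names for illustration; the real code is in the presentation):

```swift
import Foundation

// Sketch of the workaround: keep DSP state behind an UnsafeMutablePointer so the
// render function touches only a trivial pointer, not a class reference that ARC
// would retain/release on the audio thread.
struct SineKernel {
    var phase: Double = 0
    var increment: Double   // cycles per sample
}

let kernel = UnsafeMutablePointer<SineKernel>.allocate(capacity: 1)
kernel.initialize(to: SineKernel(increment: 440.0 / 48_000.0))

// Allocation-free render function; the body is just pointer reads/writes and math.
func render(into buffer: UnsafeMutablePointer<Float>, frameCount: Int) {
    for i in 0..<frameCount {
        buffer[i] = Float(sin(2 * Double.pi * kernel.pointee.phase))
        kernel.pointee.phase += kernel.pointee.increment
        if kernel.pointee.phase >= 1 { kernel.pointee.phase -= 1 }
    }
}
```

The kernel is allocated once, off the realtime thread; the render path never allocates or retains.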

1 Like

Not to overwhelm you with choices, but there might also be a middle approach here—if we emitted the diagnostics during IRGen, at the point we try to emit a call to a realtime-unsafe runtime function, then we should be close enough to the SIL instruction that triggered the emission to get diagnostic location info from it at that point.

2 Likes

@liscio ... coming late to the party and not adding much except a +1 to your comment. I use Swift to program microcontrollers (Swift for Arduino) and have experienced exactly this pain. MCUs are basically hard-realtime environments. The 'microswift' I made has a super-trimmed stdlib and almost no runtime; it has (almost) no heap-allocated types, no ARC, no classes, no closures (except convention(c) function callbacks). It was the only way I could get it to work, really. I still struggle with Swift unexpectedly emitting loads of unwanted RTTI-style 'metadata'. All that said, it's been rewarding and it's usable... but the complaint I'm always hearing is basically a polite version of what you said: 'this doesn't even closely resemble the Swift I know!'

1 Like

Actually, that's not true now; that was official Apple policy until 2016.

Compare this: WWDC 2015 Session 508 - Audio Unit Extensions - ASCIIwwdc
to this: WWDC 2016 Session 507 - Delivering an Exceptional Audio Experience - ASCIIwwdc

I don't know if anything material changed between 2015 and 2016, but my understanding is that for the last five years Apple has no longer been recommending against Swift for realtime audio. (Just remember not to do anything beyond reading memory, writing memory, and math.)

Creating a new AU target with the latest version of Xcode uses C++ for the DSP code.

Anyway, I think we're all in agreement that Swift isn't currently good for realtime because of its dynamic allocation behavior.

Really, watch that WWDC 2016 Session 507 video if you can still find it (or ask someone to send you the relevant snapshot). I remember it very well: a square wave generator with a render proc written in pure Swift. If Apple's Doug Wyatt says it's good, that sounds good to me.

Also this: Apple systems are not hard realtime. If a page fault occurs, nothing will stop an audio glitch, and that can happen regardless of whether you use C or Swift.

You can measure how many glitches (say, per hour) you are actually getting: the inputProc/renderProc has a timestamp parameter, and if the previous timestamp's mSampleTime + numSamples doesn't match the current mSampleTime, that's a glitch. Do two bare-bones implementations of, say, a square wave generator, one in C and another in Swift, and compare the actual results on a couple of platforms. I wouldn't be surprised if you get no glitches in both implementations, or a few glitches per hour, again in both implementations.
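That continuity check could be sketched like this (using the AudioTimeStamp's mSampleTime field; the struct name here is made up):

```swift
// Sketch: count dropouts by checking sample-time continuity across render callbacks.
struct GlitchCounter {
    private var expectedSampleTime: Double?
    private(set) var glitches = 0

    // Call once per callback with the timestamp's mSampleTime and the frame count.
    mutating func record(sampleTime: Double, frameCount: Int) {
        if let expected = expectedSampleTime, expected != sampleTime {
            glitches += 1   // discontinuity: frames were skipped
        }
        expectedSampleTime = sampleTime + Double(frameCount)
    }
}
```

Run one instance per stream for an hour in each implementation and compare the counts.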

Take Apple's advice of 6-10 years ago with a grain of salt, especially given that they stopped recommending against Swift for realtime audio five years ago. Obviously I am not saying it's good to use Swift containers, or async dispatch, or mutexes, etc... just read memory / write memory and math.

Swift would be fine for your single oscillator example which will not put much stress on the system. In my app, for example, users are often pushing it to the limit, and if some dynamic allocations snuck into the audio thread, that would increase the probability of a glitch.

So, in those bare-bones tests, just add extra work to push processing to its limits:

func renderProc(...) {
	generateSquareWave()

	for _ in 0..<N {
		// some silly no-op here
	}
}

and choose N so you spend, say, 70% of the allowed time, which is IOSize / sampleRate.

measure!
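The time budget works out like this (the numbers are just an example):

```swift
// Sketch: the render deadline is the buffer duration, IOSize / sampleRate.
let sampleRate = 48_000.0
let ioSize = 512.0                  // frames per render callback
let deadline = ioSize / sampleRate  // ~10.7 ms to produce each buffer
let budget = 0.7 * deadline         // target ~70% utilization for the stress test
```

Tune N until your render proc's measured time sits near `budget`, then count glitches.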

That's also not a good test, because doing that no-op will not allocate. Part of the problem here is that it's harder to predict in Swift what will allocate, hence the impetus to have some validation.

Allocations are easy to avoid: just do not call anything but memmove and math.
And it is equally easy to "validate" by counting those "glitches per hour", if any.

One practical problem you may encounter: it looks like you already have a massive amount of kernel code in C... Yes, you can call that code from Swift's input/render proc, but... is that that important?

Found the link; the interesting bits are around 0:38: https://devstreaming-cdn.apple.com/videos/wwdc/2016/507n0zrhzxdzmg20zcl/507/507_hd_delivering_an_exceptional_audio_experience.mp4

Compare and contrast with the 2015 video, around 0:49: Audio Unit Extensions - WWDC15 - Videos - Apple Developer
