I'm currently trying to get into audio programming and have a very simple synthesizer setup that uses AVAudioSourceNode to generate simple sounds.
My Synth class, where the sound generation happens, is structured like this:
import AVFoundation

class Synth {
    private let audioEngine: AVAudioEngine
    private let sampleRate: Double
    private(set) var phase: Float = 0
    private var sampleSource: SampleSource

    // Render closure: pulls one sample per frame from the current
    // sample source and writes it into every channel buffer.
    private lazy var sourceNode = AVAudioSourceNode.init { (_, _, frameCount, audioBufferList) -> OSStatus in
        let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
        for frame in 0..<Int(frameCount) {
            let sampleVal = self.sampleSource.sample(at: &self.phase, withSampleRate: Float(self.sampleRate))
            for buffer in ablPointer {
                let buf = UnsafeMutableBufferPointer<Float>(buffer)
                buf[frame] = sampleVal
            }
        }
        return noErr
    }

    init(withSampleSource sampleSource: SampleSource = Oscillator(withWaveform: .sine, frequency: 440)) {
        self.audioEngine = AVAudioEngine()
        let mainMixer = self.audioEngine.mainMixerNode
        let outputNode = self.audioEngine.outputNode
        let format = outputNode.inputFormat(forBus: 0)
        self.sampleRate = format.sampleRate
        self.sampleSource = sampleSource

        // Mono format at the hardware sample rate for the source node.
        let inputFormat = AVAudioFormat(commonFormat: format.commonFormat,
                                        sampleRate: self.sampleRate,
                                        channels: 1,
                                        interleaved: format.isInterleaved)

        self.audioEngine.attach(self.sourceNode)
        self.audioEngine.connect(self.sourceNode, to: mainMixer, format: inputFormat)
        self.audioEngine.connect(mainMixer, to: outputNode, format: nil)
        mainMixer.outputVolume = 0.5
    }

    public func setSampleSource(to sampleSource: SampleSource) {
        self.sampleSource = sampleSource
    }
}
Of course I only included the most important bits; in the real class there is a lot more going on. That brings me to my question: as far as I understand it, this class is not thread-safe, because the AVAudioSourceNode render closure reads the sampleSource variable, which could change at any time when setSampleSource(to:) is called, possibly even from another thread. How should I model this to be thread-safe? Is an actor from the new concurrency features in Swift 5.5 a useful approach here? Would it have to run on the same thread as the closure to actually be useful? Maybe someone can clarify these questions for me, as I always have a hard time wrapping my head around multithreaded programming.
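To make the question more concrete, here is a minimal sketch of one direction I considered. SampleSourceBox and its methods are names I made up, not an existing API, and I suspect taking a lock inside the render closure is problematic for real-time audio, which is part of why I'm asking:

import Foundation

// Sketch only: serializes access to the sample source with an NSLock so
// the render closure and setSampleSource(to:) never race. Locking on the
// render thread risks priority inversion, so this is probably not truly
// real-time safe.
final class SampleSourceBox {
    private let lock = NSLock()
    private var source: SampleSource

    init(_ source: SampleSource) {
        self.source = source
    }

    // Called from the render thread: runs body with the current source
    // while the lock is held.
    func withSource<T>(_ body: (SampleSource) -> T) -> T {
        lock.lock()
        defer { lock.unlock() }
        return body(source)
    }

    // Called from any other thread to swap in a new source.
    func replace(with newSource: SampleSource) {
        lock.lock()
        defer { lock.unlock() }
        source = newSource
    }
}

Inside the render closure I would then write something like let sampleVal = box.withSource { $0.sample(at: &self.phase, withSampleRate: Float(self.sampleRate)) }, and setSampleSource(to:) would call box.replace(with:). Is that a reasonable pattern, or is there a better lock-free way? As far as I can tell, an actor would not help directly, because the render closure is synchronous and cannot await.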
shapsuk (Shaps Benkau) replied:
I’m working on something similar, actually, and would love to understand this too. In addition, I’d appreciate any guidance on best practices for architecting and designing audio nodes in general, especially in relation to audio units.