Dear community,
I'm new to Swift and wrote my first app last weekend. I'm an experienced software engineer, so I adapted quickly. I wrote some C code (a low-level audio-visualization implementation, an anachronistic reimplementation of the famous Geiss screensaver/Winamp plugin from 1998) that renders into an RGBA framebuffer. I wanted to render it natively in a macOS app and therefore decided to use Swift. At the moment I somewhat regret that choice, as I'm really struggling to win the fight for control over parallelization.
I basically have everything working: app audio capture via an aggregate audio device and an audio tap with Core Audio; the SwiftUI UI with its input fields; and the C integration of my code via FFI and pointer arithmetic. I don't think I did a bad job of learning Swift in those 2-3 days...
I even implemented the data type conversions to C types and back, the FFT and all that. But when it comes to making Swift decouple two tasks that have to run truly in parallel, I'm almost breaking my hands ;)
Core Audio shows some insane behaviour when delivering the audio buffers. Sometimes they arrive at a rate of 2 FPS and sometimes at 60 FPS; only god knows when they will arrive. But due to the apparently strict synchronization in SwiftUI, my C code's render function is only called when audio data has arrived, pushing the rendering down to 2 FPS... or up to 60 FPS... depends on witchcraft, I guess ;)
Well, I understand that this description sounds weird, which is why I prepared a code repo for you to check out. Simply by cloning it, you'll find a beautiful new open-source music visualization... that runs at a snail's pace... and you'll be able to reproduce the issue in a matter of seconds.
Issue details: the audioQueue receives data at a pace that I haven't found a way to control. However, only when data is received does the renderQueue actually call updateData() and let the detached rendering window call my C code and re-render.
A) The primary fix I need is to get Swift to decouple the two queues. I simply want the renderQueue to re-render at the configured frequency and not wait for the audioQueue at all: it should always call updateData() at that pace, pick up the latest audio data, and that's it (there's a sketch of what I mean right after this list).
B) Once that's fixed, my C code would finally render at the highest speed it can. I know this is possible, because sometimes Core Audio delivers at 60 FPS and then everything is fine; it's a true Heisenbug. To get back control over that, I'd like to force Core Audio to hand me the buffers at the fastest pace possible, so that the waveform syncs better with the rendering visually (second sketch below)...
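To make A) more concrete, here is a minimal sketch of what I'm trying to achieve; the type and function names (VisualizerPipeline, publish, renderFrame) are placeholders I made up for this post, not the actual code from the repo. The idea: the audio tap only publishes the newest samples, and an independent timer renders at a fixed rate with whatever is currently there.

```swift
import Foundation

/// Minimal sketch of the decoupling I'm after (placeholder names, not the repo's API):
/// the audio callback only publishes the newest samples; an independent timer
/// renders at a fixed rate using whatever samples are currently available.
final class VisualizerPipeline {
    private let lock = NSLock()
    private var latestSamples: [Float] = []
    private var renderTimer: DispatchSourceTimer?

    /// Called from the audio tap / Core Audio callback. Never waits for rendering.
    func publish(samples: [Float]) {
        lock.lock()
        latestSamples = samples
        lock.unlock()
    }

    /// Fires at `fps`, no matter how often audio actually arrives.
    func startRendering(fps: Double, renderFrame: @escaping ([Float]) -> Void) {
        let timer = DispatchSource.makeTimerSource(queue: DispatchQueue(label: "renderQueue"))
        timer.schedule(deadline: .now(), repeating: 1.0 / fps)
        timer.setEventHandler { [weak self] in
            guard let self = self else { return }
            self.lock.lock()
            let samples = self.latestSamples   // pick up whatever is newest
            self.lock.unlock()
            renderFrame(samples)               // e.g. hand the samples to the C renderer here
        }
        timer.resume()
        renderTimer = timer
    }
}
```

Is that roughly how one is supposed to break the coupling, or does SwiftUI/structured concurrency force a different pattern on me?

For B), my best guess so far (untested, so please correct me) is to ask the HAL for a smaller I/O buffer on the aggregate device via kAudioDevicePropertyBufferFrameSize, so that the tap callback fires more often; something along these lines:

```swift
import CoreAudio

/// Sketch: request a smaller I/O buffer from the (aggregate) device so the HAL
/// delivers audio callbacks more often. `deviceID` is assumed to be the
/// AudioObjectID of my aggregate device; 256 frames is just an example value
/// (~5.8 ms at 44.1 kHz). This is an untested assumption on my side.
func requestIOBufferFrames(_ frames: UInt32, on deviceID: AudioObjectID) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyBufferFrameSize,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var value = frames
    return AudioObjectSetPropertyData(
        deviceID, &address, 0, nil,
        UInt32(MemoryLayout<UInt32>.size), &value
    )
}
```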
The blocking is probably happening here:
Could you please help me with this? I really did work hard to get all of this done on my own. This project will be open source; I also have a WebAssembly version working. It's a bit frustrating to struggle so much with native performance when the thing renders at 25 FPS at high resolution in a browser...
https://kyr0.github.io/Milky.js/
What does that make Swift look like? ;)
Thank you in advance!
kyr0