Is Swift really not performant enough for realtime audio?

It's not about performance, it's about latency (delay) in the worst-case scenario. For example, if you were to use the quicksort algorithm in a realtime system, you'd have to budget for quicksort's worst-case time complexity, which is O(N^2), even though on average quicksort runs in O(N log N).
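To make that budget concrete (illustrative numbers, not from the original post): a render callback gets a fixed time window per buffer, and it's the worst-case step count that has to fit in that window, not the average.

```swift
import Foundation

// Rough deadline math for one audio render callback (illustrative numbers).
let sampleRate = 48_000.0
let bufferFrames = 512.0
let deadlineMs = bufferFrames / sampleRate * 1000     // ≈ 10.7 ms to produce the buffer

// Step counts if you sorted those 512 frames inside the callback:
let averageSteps = bufferFrames * log2(bufferFrames)  // ≈ 4.6k, quicksort's O(N log N) average
let worstSteps   = bufferFrames * bufferFrames        // ≈ 262k, quicksort's O(N^2) worst case
// The worst case is roughly 57x the average, and that's what must fit in the deadline.
```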

Strictly speaking, true realtime can't be achieved on a system with virtual memory or caches unless you assume the worst-case scenario for those subsystems, effectively assuming a cache hit never happens. Thankfully, for audio strict realtime is not needed: if you get a glitch once in a while, e.g. because you've launched too many apps and the VM subsystem starts struggling, that's typically acceptable.

You can use Swift in realtime audio: Delivering an Exceptional Audio Experience, WWDC 2016 (relevant bits: 29:00 - 32:40, 37:50 - 42:20), but you are very limited in what you can do and it's a bit like walking through a minefield. More details in this thread.
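Below is a minimal sketch (my own example, not code from the WWDC talk) of the kind of render code that stays on the safe side of that minefield: a sine generator whose AVAudioSourceNode render block does no allocation, takes no locks, and only does arithmetic and buffer writes. It assumes the engine's standard deinterleaved Float32 format.

```swift
import AVFoundation

final class SineSource {
    let node: AVAudioSourceNode

    init(sampleRate: Double, frequency: Float = 440) {
        let phaseIncrement = Float(2 * Double.pi * Double(frequency) / sampleRate)
        var phase: Float = 0
        node = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
            // This block runs on the realtime audio thread: no allocation,
            // no locks, no Objective-C messaging -- just math and buffer writes.
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            for frame in 0..<Int(frameCount) {
                let sample = sin(phase)
                phase += phaseIncrement
                if phase > 2 * Float.pi { phase -= 2 * Float.pi }
                // Write the same sample to every (deinterleaved Float32) channel.
                for buffer in buffers {
                    let samples = buffer.mData!.assumingMemoryBound(to: Float.self)
                    samples[frame] = sample
                }
            }
            return noErr
        }
    }
}
```

Attach `node` to an AVAudioEngine and connect it to the main mixer as usual; the point is only that everything inside the render block is worst-case bounded.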
