Currently, we have to write all our realtime audio processing code in C++. Swift can't be used on the realtime thread because of its dynamic allocation behavior (this is Apple's guidance, not just my opinion). There's quite a bit of boilerplate involved in bridging between the two languages, which is a bit of a bummer. Plus, you're back in footgun C++ land.
I recently learned about region-based memory management and wondered if it could provide a way to use Swift on realtime threads. (The Real-Time Specification for Java takes a similar approach, using scoped memory regions to make Java usable in realtime contexts.)
So, in the audio case, a simple version might linearly allocate all objects for a given audio render cycle, while imposing some constraints to ensure safety. At the end of the render cycle, the region is freed.
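To make that concrete, here's a minimal sketch of what such a region might look like under the hood. Everything here is hypothetical (`RenderRegion` is an invented name, not a real API); it's just a bump allocator wrapped in a class:

```swift
// A minimal sketch (all names hypothetical). Memory is grabbed once, off
// the realtime thread; per-cycle allocations just bump an offset, and
// reset() "frees" everything in O(1) at the end of the render cycle.
final class RenderRegion {
    private let base: UnsafeMutableRawPointer
    private let capacity: Int
    private var offset = 0

    init(capacity: Int) {
        self.capacity = capacity
        // One up-front allocation; never touch malloc on the render thread.
        base = UnsafeMutableRawPointer.allocate(byteCount: capacity, alignment: 16)
    }

    deinit { base.deallocate() }

    /// Reserve space for `count` values of `T`. Lock-free, constant time,
    /// so it's safe to call from the render thread. The caller is
    /// responsible for initializing the returned memory.
    func allocate<T>(_ type: T.Type, count: Int = 1) -> UnsafeMutablePointer<T>? {
        let alignment = MemoryLayout<T>.alignment
        let start = (offset + alignment - 1) & ~(alignment - 1)
        let end = start + MemoryLayout<T>.stride * count
        guard end <= capacity else { return nil } // region exhausted
        offset = end
        return (base + start).bindMemory(to: T.self, capacity: count)
    }

    /// Called at the end of a render cycle: frees every allocation at once.
    func reset() { offset = 0 }
}
```

The render callback would call `allocate` during the cycle and `reset` at the end; the safety constraints would then amount to proving that nothing allocated inside the cycle escapes it.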
I'm not sure how best to express this syntactically, but perhaps some sort of `region` block (inspired by `autoreleasepool`) would do it.
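Something along these lines, purely hypothetical (none of this compiles today; `region`, `Oscillator`, and `render(into:)` are invented names):

```swift
// Hypothetical syntax, modeled on autoreleasepool { ... }.
func renderCycle(into buffer: UnsafeMutableBufferPointer<Float>) {
    region {
        // Inside the block, allocations come from the region rather than
        // the global heap, and the compiler would reject anything that
        // lets a region-allocated value escape the block.
        let osc = Oscillator(frequency: 440)
        osc.render(into: buffer)
    } // region freed here, in O(1), when the render cycle ends
}
```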
Has anyone thought about this? Being able to write realtime audio code in Swift would be really great.