I'm researching ways to do memory recycling in Swift by allocating memory pools of fixed-size blocks. I built a skeleton for it, but I'm definitely missing some core knowledge to model it properly.
Is there a standard way to handle the allocation and deallocation (with reuse) of contiguous fixed-size blocks in Swift, or is it a better investment to pull in a C or C++ library to handle this?
Fixed size is easy. Usually you put deleted nodes on a singly linked list of nodes, and dequeue from that list when you need a new node. Once the list is empty, allocate a new area (e.g. with malloc) and split it into nodes. For simplicity, do not return the areas to the OS. If you don't need thread safety (e.g. single thread only, or one allocator per thread) you don't need atomics, mutexes, etc.
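A minimal sketch of that scheme in Swift might look like this. All names here are mine, and it's an untested illustration rather than a drop-in implementation: freed blocks go onto an intrusive singly linked free list (the first word of a free block stores the next-pointer), and when the list runs dry a new chunk is allocated and split into blocks. Chunks are never returned to the OS, and nothing is thread-safe.

```swift
/// Minimal sketch of a fixed-size block pool. Hypothetical names; not production code.
final class FixedBlockPool {
    private let blockSize: Int
    private let blocksPerChunk: Int
    private var freeList: UnsafeMutableRawPointer?        // head of the free list
    private var chunks: [UnsafeMutableRawPointer] = []    // every chunk ever allocated

    init(blockSize: Int, blocksPerChunk: Int = 1024) {
        // A free block must be able to hold a next-pointer, and every block
        // in a chunk should stay pointer-aligned, so round the size up.
        let minSize = max(blockSize, MemoryLayout<UnsafeMutableRawPointer>.stride)
        let a = MemoryLayout<UnsafeMutableRawPointer>.alignment
        self.blockSize = (minSize + a - 1) / a * a
        self.blocksPerChunk = blocksPerChunk
    }

    func allocate() -> UnsafeMutableRawPointer {
        if freeList == nil { grow() }
        let block = freeList!
        // The first word of a free block holds the pointer to the next free block.
        freeList = block.load(as: UnsafeMutableRawPointer?.self)
        return block
    }

    func deallocate(_ block: UnsafeMutableRawPointer) {
        // Push onto the free list: store the old head inside the block itself.
        block.storeBytes(of: freeList, as: UnsafeMutableRawPointer?.self)
        freeList = block
    }

    private func grow() {
        let chunk = UnsafeMutableRawPointer.allocate(
            byteCount: blockSize * blocksPerChunk,
            alignment: MemoryLayout<UnsafeMutableRawPointer>.alignment)
        chunks.append(chunk)
        // Split the fresh chunk into blocks and thread them onto the free list.
        for i in 0..<blocksPerChunk {
            deallocate(chunk + i * blockSize)
        }
    }

    deinit {
        for chunk in chunks { chunk.deallocate() }
    }
}
```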
Awesome, one question though: is there a strategy for deallocating the pooled memory at some point, or is it ever-present for the rest of the program's lifetime?
You wouldn't have ARC. ARC allows objects' lifetimes to be extended arbitrarily, even across threads, while remaining deterministic, and it can detect uniquely-referenced allocations. A lot of the point of manually managing memory is that you have particular operations which don't need the full flexibility of ARC, and would prefer the performance of something more restrictive and tailored to your operation's allocation pattern.
That bespoke system won't have the compiler integration that ARC has, and what you need to do instead depends on how granularly you need to track the lifetime of the allocations you dispense. If you want to track individual values, you might want some kind of retain count, with retain and release operations. Or the lifetime might be simple enough that alloc and free are all you need.
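For illustration, here is a hedged sketch of the retain-count route, assuming a hypothetical layout where the first word of a live block holds its count (a free block reuses that same word for the free-list link, so the two uses never overlap) and the payload starts after it. This builds on the FixedBlockPool sketch above and is not thread-safe.

```swift
// Hypothetical per-value reference counting on top of a fixed-size pool.
func retain(_ block: UnsafeMutableRawPointer) {
    // Bump the count stored in the block's first word.
    block.storeBytes(of: block.load(as: Int.self) + 1, as: Int.self)
}

func release(_ block: UnsafeMutableRawPointer, into pool: FixedBlockPool) {
    let count = block.load(as: Int.self) - 1
    if count == 0 {
        pool.deallocate(block)   // last reference gone: recycle the block
    } else {
        block.storeBytes(of: count, as: Int.self)
    }
}
```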
Or (more likely), perhaps you don't want to track individual values at all; you know that by the time the operation is finished, all allocated memory is no longer being used (we can't guarantee that at the language level, but you can use programming conventions, etc). In that case, you can just wipe the entire arena.
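With the pool sketched earlier, "wiping the arena" could be a hypothetical reset() that rebuilds the free list over every chunk at once, invalidating anything still pointing into the arena. (As a same-file extension it can reach the pool's private state; this is illustrative only.)

```swift
extension FixedBlockPool {
    /// Hypothetical bulk reset: every block in every chunk becomes free again.
    /// Any outstanding pointer into the arena is dangling after this.
    func reset() {
        freeList = nil
        for chunk in chunks {
            for i in 0..<blocksPerChunk {
                deallocate(chunk + i * blockSize)
            }
        }
    }
}
```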
Or, as you say, maybe just leak the memory. That can be fine; perhaps you just need to allocate a huge amount of memory to calculate a result, then the program will quit immediately anyway. If you want to build a bespoke system, all options are available.
Regarding deallocation of arena chunks, there are a few options:
- Don't deallocate arena blocks at all: it's not technically a memory leak as long as you keep using your allocator, since all the arena memory is still available for new allocations from it.
- Perhaps you use the allocated data structures only within a certain scene or mode; once you leave that scene/mode you can deallocate the whole allocator.
- Micromanage how much of each arena block is in use, and once a block's usage drops to zero, deallocate that one arena block (see the sketch after this list).
- Provided you can clone your data structures, and the clone is as good as the original, track the fraction of memory currently in use versus the total available across all arena blocks; once that fraction drops below, say, 1/10, clone the data structures into a brand-new allocator (giving you compact allocation) and get rid of the old allocator altogether.
- Or some combination of the above.
I'd start with the first option (do nothing) and reach for the others only if and when profiling proves it necessary.
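If option 3 does turn out to be needed, here's an illustrative sketch of per-chunk live counting. Names and layout are hypothetical; the linear scan to find the owning chunk keeps the sketch short, and a real version would use a per-chunk header or a sorted map, and would also have to unlink the dying chunk's free blocks from the free list.

```swift
/// Sketch of option 3: free a chunk back to the OS once none of its blocks are live.
final class CountingChunkPool {
    private struct Chunk {
        let base: UnsafeMutableRawPointer
        let byteCount: Int
        var liveBlocks: Int
        func contains(_ p: UnsafeMutableRawPointer) -> Bool {
            p >= base && p < base + byteCount
        }
    }
    private var chunks: [Chunk] = []

    // allocate() would look like FixedBlockPool.allocate(), additionally
    // incrementing liveBlocks on the owning chunk (omitted for brevity).

    func deallocate(_ block: UnsafeMutableRawPointer) {
        guard let i = chunks.firstIndex(where: { $0.contains(block) }) else { return }
        chunks[i].liveBlocks -= 1
        if chunks[i].liveBlocks == 0 {
            chunks[i].base.deallocate()   // whole chunk is unused: return it
            chunks.remove(at: i)
        }
    }
}
```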
Whatever you do, it's worth checking the performance against plain malloc/free: if your implementation is slower, or only insignificantly faster¹, there's no point in doing it. A quick harness sketch follows the footnote.
¹ "significantly faster" is subjective and hard to define. In some projects < 10x speedup is insignificant, in others even 5% speedup is significant.
PS. I remember once implementing my own malloc (not fixed-size) in the hope of speeding things up; when I finished, I compared it with malloc/free: mine was slower.