I may have misremembered; however, testing that old project again (an earlier experiments project, not published) and profiling it, I end up with instances of `outlined init with take of UInt?`, themselves calling `__swift_instantiateConcreteTypeFromMangledName`. You're probably right: since those don't call anything, much less any allocation function, this is not actual memory allocation. Nevertheless, they show that the penalty is not just in the array itself: that array was also preallocated at start, so those inits were of something other than array elements.
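For concreteness, here is a hypothetical reduction of the shape of code I mean (names and sizes are made up, since that project isn't published): a table preallocated at start whose elements are `UInt?`, written into as the algorithm runs, so every store goes through `Optional<UInt>` machinery.

```swift
// Hypothetical reduction (the real project is not published): parent links
// stored as UInt?, preallocated up front and overwritten during the algorithm.
// Every store wraps the value in an Optional, which is where profile entries
// like "outlined init with take of UInt?" can appear even though nothing is
// being allocated.
var parents = [UInt?](repeating: nil, count: 1_000_000)

func link(child: Int, to node: UInt) {
    parents[child] = node        // wraps the value in an Optional on every store
}

func hasParent(_ child: Int) -> Bool {
    parents[child] != nil        // unwraps/compares the Optional on every read
}
```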
This is a moot point anyway: there is no doubt that an, ahem, particularized sentinel value is preferable. Profiling the following revision of that unpublished project, where I switched to using one, confirmed that performance improved: by 40%…
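Still in hypothetical form, the sentinel-value variant of the same sketch looks like this; I'm using `UInt.max` purely for illustration, the actual sentinel in that project may differ:

```swift
// Sketch of the sentinel-value version, assuming UInt.max can never be a
// valid node index (the actual sentinel used in the project is not published).
let noParent = UInt.max
var parents = [UInt](repeating: noParent, count: 1_000_000)

func link(child: Int, to node: UInt) {
    parents[child] = node          // plain UInt store, no Optional wrapping
}

func hasParent(_ child: Int) -> Bool {
    parents[child] != noParent     // comparison against the sentinel replaces `!= nil`
}
```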
This is a bit of an issue, as algorithms involving trees are where I'm most likely to want to extract performance out of Swift. Indeed, if I'm looking to optimize algorithms with more predictable execution paths, I might write the first version in Swift, but then I'm going to want to target a programmable GPU, since everyone has one of those and they do well at branchless number crunching; and Swift is not going to help me with that (even counting the differentiable Swift experiments, since those were aimed at gradient descent, i.e. ML, rather than at general-purpose computation).