I’m attempting to make some optimizations to the Swift standard library that should enable more elision of bounds-checking. In the interest of not wasting everyone’s time, I’ve been trying to run benchmark comparisons on these changes before submitting a pull request.
Ideally, I’d like to replicate @swift-ci’s “smoke benchmark” preset exactly. Unfortunately, I couldn’t figure out which preset that actually is, so I’ve defaulted to using
This has proven less than helpful: the benchmark inexplicably reports (consistent!) improvements and regressions relative to the main branch even when no changes have been made. That is, running the benchmark on an unmodified main branch and then simply running it again consistently shows the same “changes”.
I’m obviously doing something wrong here, but I have no idea what it could be. Would anyone care to shed some light on the issue? I can provide more information if needed.