I’m getting close to opening a pull request for improvements to the floating-point random
methods, and I’d like to add a benchmark to measure the performance change. I haven’t worked with benchmarks on the Swift project before, so I wanted to ask about it here first.
The benchmark file RandomValues.swift currently measures Double.random(in: -1000...1000) using both the system RNG and an LCG. I would like to add similar benchmarks for the range 0..<1.
My implementation fixes several bugs in the existing floating-point random methods, but in order to do so it must perform additional work, and thus runs somewhat slower than the current versions. However, for certain ranges like 0..<1, it has a fast path which nearly matches the existing implementation’s speed.
Since 0..<1 is likely to be among the most common floating-point ranges for random number generation, I think it is worthwhile to include a benchmark for that range as well. By introducing this benchmark before changing the random methods, the performance delta for the fast path will be directly observable.
Does this seem reasonable?
If so, should I use the same legacyFactor as the existing benchmarks on -1000...1000, so that the measurements on 0..<1 are directly comparable to them?
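For reference, here is roughly how I imagine registering the new benchmarks, mirroring the pattern of the existing entries (this is just a fragment against the suite’s TestsUtils; the names, tags, and legacyFactor value are placeholders, not final):

```swift
// Hypothetical registration, following the existing RandomValues style;
// every identifier and number here is illustrative only.
public let RandomDoubleUnit = [
  BenchmarkInfo(name: "RandomDoubleUnitDef",
                runFunction: run_RandomDoubleUnitDef,
                tags: [.api], legacyFactor: 100),
  BenchmarkInfo(name: "RandomDoubleUnitLCG",
                runFunction: run_RandomDoubleUnitLCG,
                tags: [.api], legacyFactor: 100),
]
```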
Also, are historical results of the Swift benchmark suite (from some reference system) published anywhere for perusal?