import RandomFloat
import XCTest

final class RandomFloatTests: XCTestCase {
    override class var defaultMetrics: [XCTMetric] {
        [
            XCTClockMetric(),
            XCTCPUMetric(limitingToCurrentThread: true),
            XCTMemoryMetric(),
            XCTStorageMetric(),
        ]
    }

    func test() {
        let floatsPerRun = 10_000_000
        var meanSum = 0.0
        var numRuns = 0
        var prng = WyRand()
        measure(metrics: Self.defaultMetrics) {
            var sum = 0.0
            for _ in 0 ..< floatsPerRun {
                let v = pseudorandomFloatInClosedUnitRange(using: &prng)
                sum += Double(v)
            }
            meanSum += sum
            numRuns += 1
        }
        print("Total mean float value:",
              meanSum / Double(floatsPerRun * numRuns))
    }
}
It depends on what you mean by simple. I'd say using XCTest APIs is not as simple as:
1. Write/download/copy-paste a single-source-file (command line) program
2. Compile (from the command line, according to instructions within the file)
3. Run
And XCTest is macOS (and Xcode?) only.
Also, in my experience, testing for performance almost always means bumping into various context-dependent optimizer glitches etc., so it's important to be able to measure and profile a piece of code in many different contexts. XCTest forces your code into one specific context and involves a lot of boilerplate.
The choice of ProcessInfo.processInfo.systemUptime is discussed in apple/swift-corelibs-xctest#109 (search for mach_absolute_time in the conversation).
For a single source benchmark, you could still have a separate measure function, to contain the boilerplate code.
You could also put assert(false, "Compile with optimizations") at the top of the file, if you wanted to stop accidental -Onone results.
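Putting those two suggestions together, a single-file measure helper might look something like this. This is only a sketch, not the XCTest API: the helper name, the iteration count, and the example workload are all made up here, and the timing source follows the systemUptime choice discussed above.

```swift
import Foundation

// To stop accidental -Onone results, uncomment the next line
// (asserts are compiled out under -O, so it only fires in debug builds):
// assert(false, "Compile with optimizations, eg: swiftc -O benchmark.swift")

/// Runs `body` `iterations` times and prints the minimum and mean
/// wall-clock time in seconds, using a monotonic clock.
func measure(label: String, iterations: Int = 5, _ body: () -> Void) {
    var times = [Double]()
    for _ in 0 ..< iterations {
        let start = ProcessInfo.processInfo.systemUptime
        body()
        let end = ProcessInfo.processInfo.systemUptime
        times.append(end - start)
    }
    print(label,
          "min:", times.min()!,
          "mean:", times.reduce(0, +) / Double(iterations))
}

// Hypothetical workload, just to show usage:
measure(label: "sum of squares") {
    var sum = 0.0
    for i in 0 ..< 1_000_000 { sum += Double(i) * Double(i) }
    precondition(sum > 0) // keep the loop from being optimized away entirely
}
```

Reporting the minimum as well as the mean is a common choice for micro-benchmarks, since the minimum is the run least disturbed by other system activity.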
I am no expert in benchmarking, so I am not especially confident in the optimal mechanism for measuring the passage of time for this purpose. However, after some cursory research, I settled on using NSProcessInfo's systemUptime property, which ultimately calls down to mach_absolute_time on Darwin, and clock_gettime with CLOCK_MONOTONIC on Linux. As far as I can tell, these are the appropriate primitives to use for this purpose in those environments. One potential source of error is that the time values are being converted to Double, but I'm unsure of the practical impact of that.
I'm assuming that this is the answer, because (1) ProcessInfo is part of the Foundation framework, and thus is the solution closest to being fully available on all platforms, and (2) its use by the XCTest framework seems to validate its reliability for the benchmarking use case.
Tests are run in whatever mode you tell the build system to run the tests. It’s certainly possible to write tests that are intended to be run in release mode.
@AlexanderM You can use swift test --configuration release on the command line; or edit the scheme in Xcode, so the Test action uses the Release build configuration.
And as I mentioned earlier, you can add an assert(false, "Compile with optimizations") statement to enforce this.
- the GetSystemTime function on Windows, where the SYSTEMTIME structure has a resolution of 1 millisecond.
- the "obsolescent" gettimeofday function on all other platforms, where the timeval structure has a resolution of 1 microsecond.
On macOS:
import Darwin
// Resolution of `gettimeofday`:
var rt = timespec()
clock_getres(CLOCK_REALTIME, &rt)
print(rt) //> timespec(tv_sec: 0, tv_nsec: 1000)
// Resolution of `mach_absolute_time`:
var ut = timespec()
clock_getres(CLOCK_UPTIME_RAW, &ut)
print(ut) //> timespec(tv_sec: 0, tv_nsec: 1)
I've opened SR-12124 to improve ProcessInfo.processInfo.systemUptime for BSD and Windows.
But I think DispatchTime.now().uptimeNanoseconds is the only cross-platform API (from the original list) which seemingly guarantees "nanosecond precision".
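For example, a minimal timing sketch built on that API might look like this (the workload and the nanoseconds-to-seconds conversion are illustrative assumptions, not a vetted harness):

```swift
import Dispatch

// DispatchTime.now().uptimeNanoseconds is a monotonic counter with
// nanosecond precision on every platform that ships libdispatch.
let start = DispatchTime.now().uptimeNanoseconds

// Hypothetical workload: sum the integers 1 ... 1_000_000.
var sum = 0
for i in 1 ... 1_000_000 { sum &+= i }

let end = DispatchTime.now().uptimeNanoseconds
let elapsedSeconds = Double(end - start) / 1e9
print("sum =", sum, "took", elapsedSeconds, "s")
```

Note that uptimeNanoseconds is a UInt64, so the subtraction stays exact; precision is only lost in the final conversion to Double for display.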