Recommended way to measure time in Swift?

I've never seen a clear answer, despite numerous related blog posts and conversations.

Here's a recent one (that can continue in this thread).

It mentions the following alternatives:

  • CACurrentMediaTime()

  • Foundation.clock()

  • mach_absolute_time() together with mach_timebase_info(…)

  • CFAbsoluteTimeGetCurrent()

  • clock_gettime_nsec_np(CLOCK_UPTIME_RAW)

  • DispatchTime.now().uptimeNanoseconds

  • ProcessInfo.processInfo.systemUptime


Is there a way that is accurate, discoverable, non-verbose and works reliably across most common platforms without specific imports etc?

If not, should/could one be added to the Standard Library? If so, how?



Edit: The accepted answer (as of 2020-02-05) is:

  • ProcessInfo.processInfo.systemUptime

It is used in XCTest and it …

… ultimately calls down to mach_absolute_time on Darwin, and clock_gettime with CLOCK_MONOTONIC on Linux.

  1. Imprecision w.r.t. conversion to Double is a non-issue for this level of API.
  2. Using mach_absolute_time is the right answer on OS X, via a wrapper in Foundation seems reasonable to me.

(source)
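For reference, here's a minimal sketch of timing a block of code with the accepted answer. The helper name `measureElapsed` is my own, not something from Foundation:

```swift
import Foundation

// Hypothetical helper: times a closure using the monotonic
// system-uptime clock, which (per the quote above) is backed by
// mach_absolute_time on Darwin and CLOCK_MONOTONIC on Linux.
func measureElapsed(_ body: () -> Void) -> TimeInterval {
    let start = ProcessInfo.processInfo.systemUptime
    body()
    return ProcessInfo.processInfo.systemUptime - start
}

var sum = 0.0
let elapsed = measureElapsed {
    for i in 1 ... 1_000_000 { sum += Double(i) }
}
print("Elapsed:", elapsed, "seconds")
```

Since systemUptime is monotonic, the difference of two readings is safe against wall-clock adjustments (unlike Date or CFAbsoluteTimeGetCurrent).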

And also:


macOS 10.15 has new XCTest APIs for measuring time and other performance metrics.

It makes the test method from your Improving Float.random(in:using:) post a bit simpler.

import RandomFloat
import XCTest

final class RandomFloatTests: XCTestCase {

  override class var defaultMetrics: [XCTMetric] {
    [
      XCTClockMetric(),
      XCTCPUMetric(limitingToCurrentThread: true),
      XCTMemoryMetric(),
      XCTStorageMetric(),
    ]
  }

  func test() {
    let floatsPerRun = 10_000_000
    var meanSum = 0.0
    var numRuns = 0
    var prng = WyRand()

    measure(metrics: Self.defaultMetrics) {
      var sum = 0.0
      for _ in 0 ..< floatsPerRun {
        let v = pseudorandomFloatInClosedUnitRange(using: &prng)
        sum += Double(v)
      }
      meanSum += sum
      numRuns += 1
    }

    print("Total mean float value:",
          meanSum / Double(floatsPerRun * numRuns))
  }
}

It depends on what you mean by simple. I'd say using XCTest APIs is not as simple as:

  1. Write/download/copy-paste a single source file (command line) program
  2. Compile (from command line according to instructions within the file)
  3. Run

And XCTest is macOS (and Xcode?) only.


Also, in my experience, testing for performance almost always means bumping into various context-dependent optimizer glitches etc., so it's important to be able to measure and profile a piece of code in many different contexts. XCTest forces your code into a specific context and requires a lot of boilerplate.


XCTest seems to use Foundation.ProcessInfo.processInfo.systemUptime to measure time.

The choice of ProcessInfo.processInfo.systemUptime is discussed in apple/swift-corelibs-xctest#109 (search for mach_absolute_time in the conversation).

For a single source benchmark, you could still have a separate measure function, to contain the boilerplate code.

You could also put assert(false, "Compile with optimizations") at the top of the file, if you wanted to stop accidental -Onone results.
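The two tips above could be combined into a single-file benchmark along these lines. This is just a sketch; the `measure(runs:_:)` helper is my own invention, not the XCTest API:

```swift
import Foundation

// Guard against accidental -Onone results: assert(_:_:) is only
// compiled into unoptimized builds, so this traps unless you
// build with optimizations (e.g. `swiftc -O bench.swift`).
assert(false, "Compile with optimizations")

// Hypothetical measure function containing the boilerplate:
// runs the body several times and reports the best (minimum) time.
func measure(runs: Int = 5, _ body: () -> Void) -> TimeInterval {
    var best = TimeInterval.infinity
    for _ in 0 ..< runs {
        let start = ProcessInfo.processInfo.systemUptime
        body()
        best = min(best, ProcessInfo.processInfo.systemUptime - start)
    }
    return best
}

var checksum = 0.0
let time = measure {
    checksum = 0
    for i in 1 ... 1_000_000 { checksum += Double(i) }
}
// Printing the checksum also discourages the optimizer from
// eliminating the loop as dead code.
print("Best of 5 runs:", time, "s (checksum: \(checksum))")
```

Reporting the minimum rather than the mean is one common choice for microbenchmarks, since it's less sensitive to background noise.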


I quote the relevant part here:

I am no expert in benchmarking, so I am not especially confident in the optimal mechanism for measuring the passage of time for this purpose, however after some cursory research, I settled on using NSProcessInfo 's systemUptime property, which ultimately calls down to mach_absolute_time on Darwin, and clock_gettime with CLOCK_MONOTONIC on Linux. As far as I can tell, these are the appropriate primitives to use for this purpose in these environments. One potential source of error is introduced in that the time values are being converted to Double , but I'm unsure of the practical impact of that.

And the reply from Daniel Dunbar:

The time measurement parts sound reasonable to me for an initial implementation:

  1. Imprecision w.r.t. conversion to Double is a non-issue for this level of API.
  2. Using mach_absolute_time is the right answer on OS X, via a wrapper in Foundation seems reasonable to me.

So it looks like ProcessInfo.processInfo.systemUptime is the answer to the question of this thread then, thanks!


I'm assuming that this is the answer, because (1) ProcessInfo is part of the Foundation framework, and thus is the solution closest to being fully available on all platforms, and (2) its use by the XCTest framework seems to validate its reliability for the benchmarking use case.


Aren't tests run in debug mode, with minimal optimizations? How would you ensure you're measuring "real" (prod-build) performance using these APIs?

Tests are run in whatever mode you tell the build system to run the tests. It’s certainly possible to write tests that are intended to be run in release mode.

@AlexanderM You can use swift test --configuration release on the command line; or edit the scheme in Xcode, so the Test action uses the Release build configuration.

And as I mentioned earlier, you can add an assert(false, "Compile with optimizations") statement to enforce this.


Ooooo that's clever!

CFAbsoluteTimeGetCurrent() doesn't give any guarantee about its resolution (and doesn't use a high-resolution clock under the hood, AFAIK).

See edit of OP, I added the (currently) accepted answer, which is not CFAbsoluteTimeGetCurrent().

The documentation doesn't mention resolution. The implementation uses:

  • the GetSystemTime function on Windows, where the SYSTEMTIME structure has a resolution of 1 millisecond.

  • the "obsolescent" gettimeofday function on all other platforms, where the timeval structure has a resolution of 1 microsecond.


On macOS:

import Darwin

// Resolution of `gettimeofday`:
var rt = timespec()
clock_getres(CLOCK_REALTIME, &rt)
print(rt) //> timespec(tv_sec: 0, tv_nsec: 1000)

// Resolution of `mach_absolute_time`:
var ut = timespec()
clock_getres(CLOCK_UPTIME_RAW, &ut)
print(ut) //> timespec(tv_sec: 0, tv_nsec: 1)

I've opened SR-12124 to improve ProcessInfo.processInfo.systemUptime for BSD and Windows.

But I think DispatchTime.now().uptimeNanoseconds is the only cross-platform API (from the original list) which seemingly guarantees "nanosecond precision".
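A minimal sketch of that DispatchTime approach, for comparison with the systemUptime one:

```swift
import Dispatch

// DispatchTime.now().uptimeNanoseconds is a monotonic tick count
// in nanoseconds; subtracting two readings gives the elapsed time
// with (nominal) nanosecond precision.
let start = DispatchTime.now().uptimeNanoseconds
var sum = 0.0
for i in 1 ... 1_000_000 { sum += Double(i) }
let end = DispatchTime.now().uptimeNanoseconds
let elapsedSeconds = Double(end - start) / 1e9
print("Elapsed:", elapsedSeconds, "seconds (sum: \(sum))")
```

Note that because uptimeNanoseconds is a UInt64 tick count rather than a Double, it avoids the Double-conversion imprecision mentioned earlier, at the cost of a conversion step when you want seconds.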
