Exit tests ("death tests") and you

Hello fellow test authors!

I've just merged a PR that adds an experimental feature to swift-testing enabling what we're calling "exit tests." You might know them as "death tests", "death assertions", or a number of other aliases: they're tests that verify a process terminates under certain conditions.

Using swift-testing, you can now specify such a test on macOS, Linux, and Windows:

@_spi(Experimental) import Testing

#if os(macOS) || os(Linux) || os(Windows)
@Test func fatalErrorWorks() async {
  await #expect(exitsWith: .failure) {
    fatalError("🧨 Kablooey!")
  }
}
#endif

There are a number of possible future enhancements to this feature, such as support for passing Codable arguments to the exit test body or inspecting the contents of stdout. Again, this is an experimental feature, so not all possible functionality is implemented yet. :slight_smile:

While we're not prepared to promote exit tests to a supported feature just yet, I have a draft API proposal here that goes into more detail about the design of the feature, how to use it, and constraints that apply.


Very cool! Does this also work with assertionFailure and related actor assertion/precondition methods?


Anything that causes the process to terminate should "just work", including assertionFailure() and actor isolation checks.
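For instance, a sketch using the experimental SPI (the test name here is made up for illustration): an assertionFailure() in the child process should be observable the same way as fatalError(). Note that assertionFailure() only stops the process in debug builds.

```swift
@_spi(Experimental) import Testing

#if os(macOS) || os(Linux) || os(Windows)
@Test func assertionFailureTerminates() async {
  await #expect(exitsWith: .failure) {
    // In a debug build, assertionFailure() terminates the child process,
    // so the exit test observes the expected failing termination.
    assertionFailure("💥 boom")
  }
}
#endif
```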


This is great news! And it has long been a requested feature for testing in Swift, so I'm really excited about this (it will help me achieve 100% code coverage in Sargon :nerd_face: ).

Question: I did not find any example that continues executing the test after an #expect(exitsWith:). Is that supported? For example:

@Test func fatalErrorWorks() async {
  await #expect(exitsWith: .failure) {
    fatalError("🧨 Kablooey!")
  }
  // Can I continue the test case here?
  #expect(1 == 1) // continue with important expectations.
}

I guess that works, right? Neither #expect nor #require returns Never, and #expect(exitsWith:) spawns a new process (which might exit), but that should not exit the "parent" test process, right?


Yes, your example should work as intended. The closure passed to #expect(exitsWith:) is the only part of the test that executes in another process, and the parent process is not terminated as part of exit testing (if it were, that'd kind of defeat the purpose of spawning another process.)


Nice!

Another question:
(I did not see it under "Future directions".) Are there any plans to support "simulating" whether tests are running in DEBUG mode or not? Sometimes we want to run unit tests with optimizations enabled, where DEBUG is not set, but we might still want to unit test that the DEBUG-only "exits" (all the assert/assertionFailure ones) are triggered. It would be great if #expect(exitsWith:) could behave the same for both DEBUG and non-DEBUG builds.

Debug mode is a completely different compilation context; simulating it would require recompiling your binary. If you want to run your tests in both debug and release modes, you need to compile and run them twice. To do so, just call swift test and pass --configuration as appropriate. :slight_smile: I hope that makes sense!
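Concretely, that amounts to invoking the test suite twice, once per configuration:

```shell
# Debug build: assert() and assertionFailure() are active.
swift test --configuration debug

# Release build: optimizations on, debug-only assertions compiled out.
swift test --configuration release
```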


Right, I naively thought it would somehow be possible for swift-testing to affect the rules under which assertionFailure is evaluated, but on second thought that is a stretch and not something a package can do. How silly of me.

So, given this function

func frobnicate(_ a: Int, _ b: Int) {
  assert(a != 42)
  precondition(b > 0)
  // important frobnicate logic
}

to test it, we would write something like:

@Test
func test_frobnicate() async {
#if DEBUG
  await #expect(exitsWith: .failure) { frobnicate(42, 3) } // assert fires in DEBUG only
#endif

  await #expect(exitsWith: .failure) { frobnicate(5, 0) } // precondition fires in DEBUG and RELEASE
}

That way we exhaustively test frobnicate as much as possible when tests run under both the DEBUG and RELEASE configurations.

I wonder what the llvm-cov tool will think of the assert(a != 42) line when run under the release configuration? It might regard it as an untested line, preventing me from reaching 100% coverage... :cry: because AFAIK there is unfortunately no way to ignore a line or block of code.

I guess code coverage is too low-level an area for swift-testing to work with, as in, it requires low-level tools like llvm-cov?

As far as I'm aware (and I may be wrong here!), llvm-cov is able to combine code coverage statistics across multiple processes. While Swift Package Manager does not currently have the ability to run your code under multiple configurations in one go (and thus doesn't know how to tell llvm-cov to combine statistics from both runs), that's conceivably something that could be added in the future.

I would suggest opening a GitHub issue against Swift Package Manager. :slight_smile:
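For the curious, here is a hand-rolled sketch of merging coverage from two runs with the LLVM tools. The paths are assumptions (they vary by platform and package name), and whether llvm-cov reports sensibly against profiles produced from two differently-compiled binaries is exactly the open question above:

```shell
# Run the suite twice with coverage instrumentation enabled,
# saving each run's merged profile under a distinct name.
swift test --configuration debug --enable-code-coverage
cp .build/debug/codecov/default.profdata debug.profdata

swift test --configuration release --enable-code-coverage
cp .build/release/codecov/default.profdata release.profdata

# Merge the per-run profiles into a single combined profile...
llvm-profdata merge -sparse debug.profdata release.profdata -o combined.profdata

# ...then report against one of the test binaries (path is illustrative).
llvm-cov report .build/debug/MyPackageTests.xctest/Contents/MacOS/MyPackageTests \
  -instr-profile=combined.profdata
```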


@grynspan thx, I created an SPM issue: a feature request to add support for displaying coverage in SPM.
