[Accepted] A New Direction for Testing in Swift

OK, I get it. I'm late to this party. No DSL-based unit test framework will be written. IMO it's unfortunate that a test framework like Spock won't (maybe can't, IDK) be made available for Swift.
It would be good if the vision document listed all the things that can be done with the new syntax that can't be done with XCTest, or can be done more easily.

And they make compile times pretty abysmal, which hurts unit tests worst of all, since ideally they run immediately and automatically while you edit code, just like auto-complete and compiler diagnostics.

I worry about the potential similarities with SwiftUI previews, which have to be fast in order to be useful. But I buy a lottery ticket anytime I can get a SwiftUI preview - of a real-world use-case - to update faster than just running the whole app. I sure don't want unit testing to become like that.

I'm surprised there hasn't been more consternation about depending on Swift macros for such essential, performance-sensitive functionality.

It's not very elegant, though, is it? Lots of boilerplate.

In reality, what I [for one] am going to almost always write instead is:

func testTimesTen() {
    for (i, result) in [(2, 20),
                        (3, 30),
                        (50, 500)] {
        #expect(i * 10 == result)
    }
}

How will this new testing framework treat that? Will it deduce the parameterised nature of the test and run all cases rather than failing at the first; will it present the results in a suitable form (e.g. a table) to help make clear which specific test cases are problematic?
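(For reference, the explicit spelling in the vision document would presumably be something like the following. This is just my sketch; I'm assuming `@Test(arguments:)` accepts a collection of input/expected pairs.)

```swift
import Testing

// Each (input, expected) pair becomes a separately reported test case,
// so one failing pair does not stop the remaining pairs from running.
@Test(arguments: [(2, 20), (3, 30), (50, 500)])
func timesTen(i: Int, expected: Int) {
    #expect(i * 10 == expected)
}
```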


From my perspective, that is nothing like Java code (I can't speak for Scala). And, as the discussion goes on to note, there are also transformations happening under the hood to make it possible.

Macros in Swift are still new, so we haven't gotten used to them yet, but I'm pretty sure nobody will be confused about using them. Their nicest property is the ability to expand a macro, so you can see what's happening in the checks if needed.

But compared to the examples in the thread, I'd say the Swift version looks a lot more like regular code; you only need to add a marker that this is a test.

I’d argue that this is a matter of what we are used to. Having inputs be function parameters actually gives a good separation of inputs and assertions, made not at the level of the function body but at the declaration.

I hope this dependency will have an impact on addressing that issue, as we see more and more frameworks depend on macros.

I believe SwiftUI previews aren’t the correct comparison here, as they are a mix of somewhat dirty hacks and Xcode’s long history of issues, not just reliance on a hard-to-compile dependency.


Looking forward to these changes, especially parameterized tests. I’ve worked with pytest, GoogleTest, Boost.Test, and others which all have this feature, but it’s always a little awkward. I think using macros here is a good idea. In fact I wrote a macro to do just this when Swift macros came out but it didn’t fully work with Xcode.


  • Will tests be skippable based on a runtime condition, for example OS version?
  • Can suites have setup and tear-down? I’ve found some tests can get significant speed-ups by doing setup once per suite, and XCTest supports this well.

I share this sentiment (and not only with regard to testing).

FWIW in C the use sites of macros are not polluted with those @ or #, only the declaration sites. The above would look, say:

void squareAcceptsZero() {
  expect(square(0) == 0);
}

with Test and expect macros appropriately coloured in the IDE.


In the Benchmark package (which has an analogous requirement to be able to run many different permutations of benchmarks that are treated as separate runs), we ended up with parameterization on the outside too;

let parameterization = (0...15).map { 1 << $0 } // 1, 2, 4, ...

parameterization.forEach { count in
    Benchmark("IterationsParameterizedWith\(count)") { benchmark in
        for _ in 0 ..< count {
            blackHole(Int.random(in: benchmark.scaledIterations))
        }
    }
}

It feels nice to have the parameterization on the outside, but I agree that the contrived test example was a bit hard to read. Now that the tests are top-level functions driven by macros, I’m not sure what’s practically possible to make it a bit more readable.

I'm used to explicitly parameterised unit tests in other environments that are very similar to what's proposed in this vision document, e.g. pytest. I have many years of experience with them - enough to know that their syntax is awkward and error-prone.

I don't feel strongly about it because I don't use unit tests much, I was just hoping to sway Swift's testing infrastructure to a new, better direction.

Consider also something like:

@Test(0 -> 0)
@Test(1 -> 10)
@Test(10 -> 100)
func timesTen(i: Int) -> Int {
    i * 10
}

That's become a common refrain in the forums anytime macros come up as a dependency. We've yet to see any progress in that direction, though. It's at the point now where it's a "fool me thirty-seven times…" situation.


pytest, in my opinion, is a two-sided example. I like its way of passing parameters, and that’s why I like this approach in Swift as well. The source of awkwardness in pytest is the dynamic nature of the language, which makes it hard to track all these dependencies and parameters unambiguously. I agree that it is easy to turn such tests into a mess with mistakes in them, but I still attribute that more to language nuances than to the testing approach. As an example, take JavaScript (merely due to its dynamicity): there are several approaches I have seen in it, yet all of them suffer from the dynamic nature of the language, not from some flaw in the testing environment.

Which is a point of argument so far (not for Swift, but in general), isn’t it? Take BDD, for instance: it has gained popularity, yet I haven’t seen it become the go-to strategy for new languages. Quite often it is more a complication than a solution.

That looks good, no doubt, but how will it transfer to other cases and scale as test complexity increases? What if there are several parameters? What if there are several equality checks? What about other assertions?


I don't know, but I think the better question is: does it have to?

It's okay to have multiple ways to write unit tests, where one is tailored for simpler, common cases, and the other caters to more complex situations.

Being able to neatly 'merge' some of the unit tests into the code¹, as lightweight and unobtrusive assertions / annotations / similar, can be helpful. Not just because it tackles problems of code separation (between tests and their targets), but because well-defined test cases can help document the function (not just to users, but also to modifiers - it's very handy to have the functionality legislated right there in the code you're editing, rather than off nebulously in some random other file).

There are also secondary but potentially major benefits, like subtly encouraging pure[r] functions by making it [even] more convenient to test them.

I also think it's naturally on the path to more powerful "tests", such as formally defining invariants and letting the test infrastructure prove them (either logically or empirically, such as by fuzzing or gradient-descent approaches). e.g.:

@Invariant(i.magnitude <= result.magnitude)
@Invariant(i.signum() == result.signum())
func timesTen(i: Int) -> Int {
    i * 10
}

¹ To be clear, I don't mean just moving test code as-is into the same file as its target. That largely just pollutes the real code with boilerplatey test code that rarely needs to be looked at; it's not the right trade-off.
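(The `@Invariant` attribute is hypothetical, but the empirical half can be roughly approximated today with a randomized parameterized test. This is just a sketch: the `@Test(arguments:)` spelling is from the proposal, while the sample count and input range are arbitrary choices of mine.)

```swift
import Testing

// 100 pseudo-random inputs stand in for real fuzzing support.
@Test(arguments: (0..<100).map { _ in Int.random(in: -10_000...10_000) })
func timesTenInvariants(i: Int) {
    let result = i * 10
    #expect(i.magnitude <= result.magnitude)
    #expect(i.signum() == result.signum())
}
```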


being “lightweight” is a secondary concern to me. the predominant concern is the simple fact that assertions, well, crash the process. i understand this is annoying but not catastrophic for client-side applications. therefore, when developing those sorts of applications, assert early and assert often is good practice. but the game is very different when you’re developing on the server. crashing on precondition failure is highly damaging to server-side applications, even in the best case scenario where the daemon comes back online promptly, because if you crashed while a request from say, Googlebot, was in flight, well you’ve got a really tough month ahead of you.

this isn’t as tangential as it might seem. one of the things that Swift lacks compared to other languages is a mechanism for recovering from an assertion failure. throws is not sufficient for this use case, because there are a lot of constructs in the language (properties, subscripts) that don’t support throws. this feels pretty closely related to testing fatal errors which is something that would fill a huge gap in our tooling.


I did not mean assertions in the sense of assert or precondition. This is all still test code that we're talking about, that doesn't exist at all in the real (non-test) binaries.

I used the word "assertions" in the more general sense of basically all the checks you write in unit tests. Those all boil down to an assertion that some observable behaviour is something specific.

Tangentially, if assert, precondition, et al could [also] be used by the test infrastructure to search for violations, I'm not sure that'd actually be a good idea, because it might encourage further use of them in libraries, which is very often wrong. But then, if the compiler & test infrastructure do a particularly good job of proving that they can't actually be hit, then it could conceivably still be a net win. :thinking:


It was very hard not to reply to everyone's comments over the weekend! :sweat_smile: You all had a lot of thoughts to share, and I've tried to answer a selection of points/questions here.

I don't think there's more boilerplate than necessary. This contrived example just shows a specific possible use case, and has no real test content, so the structure of the test dominates. Real-world tests using swift-testing have a better boilerplate-to-content ratio.

No, the testing library is not going to try to turn a test function into a parameterized test just because it has a for-loop in it, nor is that a direction we'd likely pursue in the future. Imagine a test like this one that is looking for a non-ASCII character in a string:

@Test func stringIsAllASCII() {
  for c in someString {
    #expect(c.isASCII)
  }
}

It would likely be a mistake to turn this into a parameterized test over the collection someString. This example is, again, contrived, so the danger may not be apparent: imagine that the for loop is over some non-sendable generated sequence rather than over a constant sequence. Because a parameterized test can run its individual test cases in parallel, automatically turning this for loop into a parameterized test could produce unexpected downstream effects.

Yes! Use the .disabled(), .disabled(if:), or .enabled(if:) traits to conditionalize test execution.
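For instance (a sketch; the trait spellings are the ones named above, while the specific conditions are only illustrative):

```swift
import Foundation
import Testing

// Skip unless the runtime condition holds; the version check is illustrative.
@Test(.enabled(if: ProcessInfo.processInfo.operatingSystemVersion.majorVersion >= 15))
func usesNewOSFeature() {
    // ...
}

// Unconditionally skip, with a comment that appears in the results.
@Test(.disabled("Blocked on a known issue"))
func flakyScenario() {
    // ...
}
```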

We're leveraging the features of the Swift language to support setup and teardown with init/deinit as well as with static let. We recognize there are usage patterns that may not be covered here, and we're still looking at how we can cover those patterns in a way that isn't ersatz. @smontgomery and I would be happy to discuss this more with you in a dedicated thread if you'd like.
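As a sketch of those patterns (the scratch-directory fixture here is my own illustrative example, not something from the vision document):

```swift
import Foundation
import Testing

final class ScratchDirectoryTests {
    // One-time setup: a static let is initialized lazily, once for the suite.
    static let sharedPayload = Data(repeating: 0xFF, count: 1_024)

    // Per-test setup: each test gets a fresh instance, and so a fresh directory.
    let directory: URL

    init() throws {
        directory = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
        try FileManager.default.createDirectory(at: directory,
                                                withIntermediateDirectories: true)
    }

    // Per-test teardown.
    deinit {
        try? FileManager.default.removeItem(at: directory)
    }

    @Test func writesPayload() throws {
        let file = directory.appendingPathComponent("payload.bin")
        try Self.sharedPayload.write(to: file)
        #expect(try Data(contentsOf: file) == Self.sharedPayload)
    }
}
```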

We've discussed this particular topic with the core Swift team at length. :slight_smile: I don't presume to speak for them (@hborla or @Douglas_Gregor might be able to shed more light than I can) but in a nutshell, it's important that Swift macro invocations be visible at their usage sites rather than looking like function calls. This is why you have to write #expect() instead of expect(). As for @, well, that's how Swift spells attributes anyway like @Sendable or @MainActor or @TaskLocal, and we're consistent with the (rest of the) language here.

The macros used by swift-testing are not just wrappers around function calls. Having @Test and #expect() be spelled Test and/or expect() would require changes to the Swift compiler and the introduction of new language- and compiler-level features just to support test compilation, which is a non-goal for the Swift project at large.

If it helps, you can use the same sort of pattern with swift-testing:

let parameterization = (0...15).map { 1 << $0 } // 1, 2, 4, ...

@Test(arguments: parameterization)
func iterations(count: Int) {
    // ...
}

You could go further and have the inner loop be part of the parameterization as well:

let parameterization = (0...15)
  .map { 1 << $0 } // 1, 2, 4, ...
  .map { 0 ..< $0 } // 0..<1, 0..<2, 0..<4, ...
@Test(arguments: parameterization)
func iterations(range: Range<Int>) {
    // ...
}

I think that -> specifically is reserved by the language, but I would be happy to be wrong there. A pattern like this would be an interesting area to explore. We have intentionally not added any sort of support for return types other than Void so far, but that doesn't mean we couldn't in the future. (I imagine in this case that the custom -> operator would need its right-hand operand to conform to Equatable.) :slight_smile: We can start a separate thread to discuss further if you'd like.

This is something we're exploring for a future release. We call these sorts of tests "exit tests" and we have an experimental implementation already. See my post here for more info.


I'm very excited this is finally being worked toward! Testing that feels/is native to the language is IMO one of the biggest missing features for Swift.

As someone who likes Nimble, I really like the direction of a simple expect being all-inclusive for different test assertions.

Like others, I'm hesitant about the reliance on macros. Not so much around @Test, but more around #expect. To me, the likely expected answer to "how do you write a test in Swift?" would be function syntax, expect(/* expression */). I'm curious whether choosing a macro adds or removes complexity in the long run. While lots of different expect functions add to the maintenance burden, it's relatively easy to change or add a specific implementation. Macros, on the other hand, are significantly more complex, making even relatively simple changes require a larger base-level understanding of both the language and the testing library.

However, the capabilities that come with macros are great. Being able to easily inspect the passed expression for better test messages, and supporting the many use cases with one entry point, all while most (all?) testing features live in a basic package (and are backwards compatible?), is really great. I think the only other way to support these features would require significant additional work at the language/compiler level.

So at least at first glance the advantages of Macros seem worth that slightly-less-ideal (IMO) syntax. Regardless, I'm excited to start adopting this!

And even more excited for the possibility that XCUITest might see some related improvements soon :grin:


IMO the language (and its 1st party libraries) shouldn't support mocking.

The biggest reason being "Mocking" is a vast and varied space with no obvious preferred solution.

What-and-how things are Mocked depend heavily on the details of the codebase. There's a lot of variety based on different architectures, design patterns, and codebase complexity. IME there isn't one implementation that would naturally support all or even most current approaches to Mocking.

But regardless of the approach, supporting a specific way of Mocking in Swift would mean implicitly endorsing both mocking in general, and the specific chosen approach. This would unavoidably lead to less diversity in testing approaches and in 3rd party solutions.

That effect of a 1st party solution isn't inherently bad. However, especially with this new Testing direction following closely behind Macros and Structured Concurrency, this seems like a bad time to define a language-supported approach to Mocking.

Instead, this is the time for users and library authors to experiment; building and trying different approaches using the new features available. Usage will naturally evolve and (likely) zero in on 1-3 common/popular approaches. Down the road once things have settled would be the time to discuss whether it makes sense to adopt an existing implementation, build a 1st party version, or continue leaving the space to 3rd party libraries.


Mocking is only really feasible in late-bound languages like Objective-C and Python. In languages with early binding, you wind up having to contort your programming style to always use late binding whenever you might want to mock an object. In Swift, this means maintaining a protocol in addition to your concrete types, which is understandably annoying and has performance impacts.
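For concreteness, the contortion being described looks something like this (a sketch; all the names here are illustrative):

```swift
import Foundation

// The protocol exists purely so tests can substitute a double.
protocol Clock {
    func now() -> Date
}

// The concrete type you actually ship.
struct SystemClock: Clock {
    func now() -> Date { Date() }
}

// The mock, used only in tests.
struct FixedClock: Clock {
    var fixed: Date
    func now() -> Date { fixed }
}

// Code under test must accept the protocol rather than the concrete type,
// trading static dispatch for the late binding that mocking requires.
func greeting(clock: Clock) -> String {
    Calendar.current.component(.hour, from: clock.now()) < 12 ? "Good morning" : "Hello"
}
```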

Some folks have suggested Swift could have a mode that effectively turns off early binding, perhaps by making every data type implicitly a protocol. Enabling such a mode only for testing would result in testing something other than what you ship, which seems dubious.


How or whether anyone should use mocking isn't really the question here. The reality is plenty of codebases do, including those at my last 3 employers.

Even so, it should now be possible to achieve the codebase ergonomics of an implicit protocol (or something like a generated MockMyObjectType) pretty easily using macros. I'm excited to see what people come up with as that ecosystem grows and its performance improves.

Really happy for this step in Swift testability. And of course this is not the end state. Testing fatalError and friends is still high up my wish list :wink:

BTW the integration of Swift-Testing in Xcode looked very slick in the WWDC videos. :star_struck:


I love the direction of swift-testing already, especially the expect API, it's a fantastic improvement on what we currently have access to.

One thing I would love to see with the new tooling is better support around test metadata/output control. Things like:

  • Options for showing only test failures, rather than all output (the CLI output from the WWDC videos already looks 10x better on this front)
  • Better exporting of test results to standard result formats like junit, with good information about failures for CI purposes
  • Better flags/output around test execution time to keep an eye on long running tests
  • Better ways to track test results over time/revisions

Of course those are very generalized ideas, but I think adding some of these extra output options/tooling would be super helpful - especially with all of the awesome integrations shown off in WWDC


I think this is an important point that has gone unanswered. Hasn’t there been a general problem with the performance of macros, and has it already been solved to a reasonable extent?

The vision document has a Distribution section that addresses this: