A New Approach to Testing in Swift

Hi everyone,

I’m excited to announce a new open source project exploring improvements to the testing experience for Swift. My colleagues @briancroom, @grynspan, @chefski, @Dennis and I have been working on this in recent months and have some early progress we're excited to share.

Inspired by what’s possible with Swift macros, we’ve built a testing library API that can:

  • Provide granular details about individual tests using an attached macro named @Test. This enables many new features like expressing requirements, passing arguments, or adding custom tags, all directly in code instead of separate configuration files.
  • Validate expected conditions in tests with detailed and actionable failure information using an expression macro spelled #expect(...). This works by capturing the values of passed-in expressions and their source code automatically to inform failure messages, and is also easier to learn than specialized assertion functions since it accepts built-in operator expressions like #expect(a == b).
  • Easily repeat a test multiple times with different inputs by adding a parameter to the function and specifying its arguments in the @Test attribute.

Here's an example. It shows one test function, denoted using @Test, which includes two traits: a custom display name and a condition that decides whether the test should run. The test creates a food truck, stocks it with food, then uses #expect to check whether the quantity of food is equal to the value we expect:

@Test("The Food Truck has enough burritos",
      .enabled(if: FoodTruck.isAvailable))
func foodAvailable() async throws {
    let foodTruck = FoodTruck()
    try await foodTruck.stock(.burrito, quantity: 15)
    #expect(foodTruck.quantity(of: .burrito) == 20)
}

If the above test were to fail, #expect would capture the values of sub-expressions like quantity(of: .burrito) as well as the source code text. This allows rich diagnostic information to be included in the output:

✘ Test "The Food Truck has enough burritos" recorded an issue at FoodTruckTests.swift:8:6:
Expectation failed: (foodTruck.quantity(of: .burrito) → 15) == 20

Repeating a test multiple times with different inputs—known as parameterized testing—is also simple using this approach. A @Test attribute may contain arguments, and the function will be called repeatedly and passed each argument:

@Test(arguments: [Food.burrito, .taco, .iceCream])
func foodAvailable(food: Food) {
    let foodTruck = FoodTruck()
    #expect(foodTruck.quantity(of: food) == 0)
}

A New API Direction for Testing in Swift provides an in-depth look at our vision, describes the project's goals, and shows more examples of our proposed approach.

These ideas have been prototyped in a new package named swift-testing, which is currently considered experimental and not yet recommended for general production use. If you’re interested, we encourage you to clone it, explore its implementation, and try using it to write tests for your project. See Getting Started for instructions.

We would love to hear your feedback about these ideas or your experience using the experimental swift-testing package. Feel free to reply here, or create topics in the newly-created swift-testing forum category with your thoughts about this new approach. Some questions to consider when providing feedback:

  • What do you find difficult about testing in Swift today?
  • Does this address those challenges?
  • Are there additional features or improvements you'd like to see?

With this new experimental swift-testing package now open source, we plan to begin working on it in the open and are exploring work in closely-related components, such as the compiler, swift-syntax, and the package manager. Over time, we expect these efforts to help the project mature, with the hope of providing a superior alternative to XCTest.

If you're passionate about software quality, we invite you to join this exciting journey with us and help shape the future of testing in Swift!



This is so exciting! Congrats to everyone on the testing team for getting this off the ground, and I'm eager to see how it evolves.

One question that comes to mind is around the arguments parameter, where you can pass a bunch of possible inputs to run through the test. My question is, what does a failure message look like if one of the inputs fails? I have a library called TestCleaner that addresses this by wrapping each pair of input and expected output in a function that captures file/line information, so when a test fails, it can point at the specific guilty input. I'm curious how swift-testing handles this.


Hi Zev! Thanks for asking. Let's say you have this parameterized test function:

@Test(arguments: 0 ... 10) func f(i: Int) {
  #expect(i < 10)
}

As of right now, this is what will be emitted when you run swift test:

◇ Test run started.
↳ Swift Version: 5.9.0
↳ Testing Library Version: unknown
↳ OS Version: 5.15.0-83-generic (#92-Ubuntu SMP Mon Aug 14 09:34:05 UTC 2023)
◇ Test f(i:) started.
◇ Passing 1 argument i → 0 to f(i:)
◇ Passing 1 argument i → 1 to f(i:)
◇ Passing 1 argument i → 2 to f(i:)
◇ Passing 1 argument i → 3 to f(i:)
◇ Passing 1 argument i → 4 to f(i:)
◇ Passing 1 argument i → 5 to f(i:)
◇ Passing 1 argument i → 6 to f(i:)
◇ Passing 1 argument i → 7 to f(i:)
◇ Passing 1 argument i → 8 to f(i:)
◇ Passing 1 argument i → 9 to f(i:)
◇ Passing 1 argument i → 10 to f(i:)
✘ Test f(i:) recorded an issue with argument i → 10 at fooTests.swift:13:3: Expectation failed: (i → 10) < 10
✘ Test f(i:) failed after 0.001 seconds with 1 issue.
✘ Test run with 1 test failed after 0.001 seconds with 1 issue.

So far, this output captures the value of each argument passed to the function. We also imagine we could go further over time and use the macro to capture the source location of arguments, for example when the arguments are a set of literals that each have a distinct source location. We're interested to know more specifics about what would be useful to you.


Nice! In terms of the test output, the XCTest output is pretty hard to scan/grep for failures (and parallel testing interleaves the output, which makes it basically impossible to figure out what actually failed/crashed). I like "error: ", but "Expectation failed: " will work too. Are there plans to include a test summary that lists just what failed, and/or a quiet mode that only outputs tests that failed? That would help improve the CI log readability immensely, since the only output would be items that need me to take action.

I look forward to seeing the standalone test runner instead of having to wrap it in the XCTest scaffolding too. I think my biggest gripes with testing Swift with XCTest today revolve mostly around how XCTest handles tests that crash (disappearing without flushing stderr or stdout, or giving much indication at all) and attaching a debugger to the test on the command line (the dual-terminal lldb --wait-for --name XCTest followed by swift test dance) to figure out why it failed or crashed. I think I have to wait until there is a standalone test runner to know whether these issues are solved, though.


One reason I choose plain XCTest over third party testing tools when the choice is up to me is IDE integration. Quick doesn’t (can’t?) reliably mark It() blocks with a pass/fail icon in the editor gutter in Xcode.

I know it’s probably Xcode’s problem that it doesn’t expose hooks for third party testing tools to integrate properly with its test runner, but don’t discount what you lose when the tooling doesn’t work as it should.


Will definitely try this out. But before that, thank you in advance for taking the time to improve the testing experience in Swift! :pray:t2:


This is very exciting. I’ve been playing with Elixir, which also uses macros to enhance test output with compile-time information, and the experience is superior to runtime expectation APIs. Can’t wait to start using it in my projects!

Have you folks considered taking the opportunity to add support for having some sort of documentation testing? Developers could add examples in the documentation and the testing framework would ensure that the examples compile and work as expected.
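For example, something like this (purely hypothetical; no such doc-testing feature exists today, and this simplified FoodTruck is illustrative, not the one from the announcement):

```swift
// Hypothetical illustration: a harness could extract the fenced example
// from the doc comment below, compile it, and run it alongside the tests,
// failing if the documented usage drifts away from the real API.
struct FoodTruck {
    private var inventory: [String: Int] = [:]

    /// Returns the current quantity of `food` in stock.
    ///
    /// ```swift
    /// var truck = FoodTruck()
    /// truck.restock("taco", by: 5)
    /// assert(truck.quantity(of: "taco") == 5)
    /// ```
    func quantity(of food: String) -> Int {
        inventory[food, default: 0]
    }

    /// Adds `amount` units of `food` to the inventory.
    mutating func restock(_ food: String, by amount: Int) {
        inventory[food, default: 0] += amount
    }
}
```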

Great job :clap:


We know that seamless integration with commonly-used tools and IDEs like SwiftPM, Xcode, and VSCode will be critical to the success of a new testing library like the one we're proposing here, and we mentioned that as a specific goal in our vision document. That said, initially our focus is on the core library and its feature set.


Tangential, but is there any plan to make testing easier in Swift with macros, by adding support for macro generation in other targets? That would allow generating mocks via macros into test targets.


Excellent! Very exciting to see this new & improved direction for testing with Swift, and one that's part of Swift's language evolution process.


Here's an example of some test code for a crossword project I worked on. (Blah blah views blah blah employer, this was a side project.)

This code tests arrow-key movement in a crossword grid. Inside each Pair is an input (a board selection state plus a movement direction) and an output (a resulting board selection state). The assertCustom function comes from TestCleaner, and handles some boilerplate around unwrapping the test values and keeping track of what lines they came from so we can forward those to XCTAssert*.

// n.b. 'S' and 'C' are local test utility functions to make Selection and Coordinate values.

let testCases: [TestPair<(Selection?, MovementDirection), Selection?>] = [
    // Invalid state. Jump to top left corner.
    Pair((S(blackSquare, .horizontal), .up), S(upperLeftCorner, .horizontal)),
    Pair((S(blackSquare, .horizontal), .down), S(upperLeftCorner, .horizontal)),
    Pair((S(blackSquare, .horizontal), .left), S(upperLeftCorner, .horizontal)),
    Pair((S(blackSquare, .horizontal), .right), S(upperLeftCorner, .horizontal)),

    /* ✂️ snip about 80 similar lines ✂️ */

    // End of 29 Down

    Pair((S(endOfTwentyNineDown, .vertical), .up), S(C(col: 7, row: 8), .vertical)),
    Pair((S(endOfTwentyNineDown, .vertical), .down), S(C(col: 8, row: 5), .vertical)),
    Pair((S(endOfTwentyNineDown, .vertical), .left), S(endOfTwentyNineDown, .horizontal)),
    Pair((S(endOfTwentyNineDown, .vertical), .right), S(endOfTwentyNineDown, .horizontal)),
]

assertCustom(testCases: testCases) { (pair, file, line) in
    let ((selection, direction), expectedResult) = try (pair.left, pair.right)
    // 👀 This is the actual function being tested
    let next = nextSelection(from: selection, direction: direction, puzzle: initialState.puzzle)
    // ✅ And here's the assertion
    XCTAssertEqual(next, expectedResult, file: file, line: line)
}
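
For comparison, here's a rough sketch of how a table like this might translate to the proposed API, assuming @Test(arguments:) accepts a collection of tuples (untested against the experimental package, and reusing the S helper and types from above):

```swift
// Sketch only: relies on the experimental @Test/#expect macros and on the
// S(_:_:) helper, Selection, and MovementDirection types from the post above.
@Test(arguments: [
    ((S(blackSquare, .horizontal), MovementDirection.up),   S(upperLeftCorner, .horizontal)),
    ((S(blackSquare, .horizontal), MovementDirection.down), S(upperLeftCorner, .horizontal)),
    // …remaining cases…
])
func arrowKeyMovement(input: (Selection?, MovementDirection), expected: Selection?) {
    let next = nextSelection(from: input.0, direction: input.1, puzzle: initialState.puzzle)
    #expect(next == expected)
}
```

If each argument fails independently with its values captured in the output, the file/line bookkeeping that assertCustom does might no longer be necessary.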

This all looks like a hopeful step forward; thank you and the others for your effort so far! Very excited for the renewed focus on testing.

Off the top of my head, the biggest and most obvious difficulty and challenge is designing and controlling dependencies so code can be testable. For example, let's say the FoodTruck has to make a network request or read files from disk to determine quantities of food. How should I design that dependency? How do I control that dependency for this test case? For another example, let's say the FoodTruck tracks a "quantity check" event so my employer can measure which foods are most popular and understand our users better. How should I design the event tracker? How do I control that dependency to verify side effects and the general correctness of the code?

Perhaps another way of saying this is: the difficulty of testing in Swift today is a lack of discipline with program architecture and dependency management. These problems are an obstacle to testing because they tend to be discovered after the fact, when it is very difficult to change the code and increase its testability.

I would say another challenge is Apple. Their APIs are not particularly conducive to stubbing, and most of us here are probably developing for Apple platforms. I cannot easily stub a URLSession, and I cannot stub the Core Location API at all, so an explosion of boilerplate wrapper types ensues. Also, related to the architecture problem, the way Apple encourages people to design apps does not help (e.g.: how do I verify the correctness of my SwiftUI view when it appears?).

I know some of what I've said is a bit higher level than where swift-testing seems to be laying foundations. However, I would wager that most of us are building features or SDKs for users, not writing algorithms. We want the confidence to make changes to our codebases while at the same time being sure our features will continue to work. I hope these foundations have that end in sight and can serve it.

If you and the team working on this have not already, I highly recommend you check out the work of PointFree: swift-dependencies and their Dependencies collection.


At the risk of going off topic, I wrap APIs like these in protocols and then provide various implementations of those protocols, including stubs, spies and fakes (and of course one with the real API implementation underneath). Are there situations where this isn't feasible?
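
Concretely, a minimal version of that pattern for networking might look like this (HTTPFetching, URLSessionFetcher, and StubFetcher are illustrative names, not from any library):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// Wrap the system API behind a protocol...
protocol HTTPFetching {
    func fetchData(from url: URL) async throws -> Data
}

// ...provide a real implementation backed by URLSession...
struct URLSessionFetcher: HTTPFetching {
    func fetchData(from url: URL) async throws -> Data {
        let (data, _) = try await URLSession.shared.data(from: url)
        return data
    }
}

// ...and a test double that returns canned data and records each
// request so tests can verify side effects (a stub that doubles as a spy).
final class StubFetcher: HTTPFetching {
    let cannedData: Data
    private(set) var requestedURLs: [URL] = []

    init(cannedData: Data) { self.cannedData = cannedData }

    func fetchData(from url: URL) async throws -> Data {
        requestedURLs.append(url)
        return cannedData
    }
}
```

Production code takes any HTTPFetching, so tests inject a StubFetcher while the app uses URLSessionFetcher.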


The biggest pain point, for me, is around the build system and tooling. When developing in Java or Kotlin, I can write implementation code in the tests that doesn't compile (for example, a new function that I want to create), and the IDE will immediately recognize that this function doesn't exist in the implementation and provide shortcuts for its creation. The same thing goes for things like new parameters in functions, new properties in data classes/enums, differing parameter types, etc.

When testing in Xcode, live errors and warnings in the editor for test code sometimes don't match the reality of the main target (presumably because caches aren't up to date), and I have to trigger a full build. When "fix this" helpers in Xcode appear (not as often as I'd like), they usually don't have the desired solution. And when it is what I want, for example implementing protocol stubs, there doesn't seem to be a keyboard shortcut to do it quickly.


Thanks for sharing. This looks promising.

Here are a couple areas of opportunity that would be great to see:

  1. I wish the assertion functions were strongly typed to clearly disambiguate between what was expected vs what was the actual observed value. Just having an expectation that x == y gives no indication whether x or y was expected.

  2. (Mentioned earlier in this thread) A mechanism to allow sample code in DocC snippets to be tested. It’s too easy to document an example usage that either doesn’t compile or run, or over time goes stale compared to the real API being documented.

Btw, both of these capabilities exist in C# tooling.
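
For point 1, I'm imagining something like this hypothetical helper (expectEqual is not a swift-testing API, just a sketch of the idea):

```swift
// Hypothetical: a strongly-typed equality check with labeled parameters,
// so the failure message can always say which side was the expectation.
func expectEqual<T: Equatable>(
    actual: T,
    expected: T,
    file: StaticString = #filePath,
    line: UInt = #line
) -> Bool {
    guard actual != expected else { return true }
    print("\(file):\(line): Expectation failed: expected \(expected), got \(actual)")
    return false
}
```

With labeled arguments, failures could always read "expected X, got Y" instead of the symmetric a == b.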


This looks really cool.

Have you considered supporting nested tests like Quick (BDD style)? XCTest supporting only a flat test structure was the number 1 reason we decided to migrate to using Quick/Nimble.

This is really exciting!

One of the primary motivations behind the proposal SE-0385 (custom reflection metadata) was to establish a discovery pattern for XCTest, and you have effectively addressed this by utilizing macros and compiling test cases into a static array.

Could we consider extending this discovery mechanism to a more general context? The motivation section of SE-0385 outlines several use cases that could greatly benefit from similar approaches.

Update: My mistake; it appears that the library uses a function called swift_enumerateAllMetadataSection, exported by the Swift runtime, which enumerates all metadata sections loaded into the current process.

While this approach successfully addresses a specific use-case mentioned in the original proposal, it does so at the library level, bypassing the formal Swift evolution process for wider language-level adoption.

The one thing that’s incredibly painful in Swift testing is:


Find a way to do those at test runtime without generating dumb class files and I will hug you


This is really cool! Thanks to everyone who worked on it—I'm really looking forward to using it to replace XCTest, which is definitely showing its age when using it in Swift.

One thing that I wasn't able to glean from the documentation so far is what kind of capabilities will there be for customizing expectations, in the sense that other testing frameworks can have custom matchers? For example,

  • If I write an #expect statement that compares two multi-line strings, I'd like the failure output to be a diff of the two strings.
  • If I write an #expect statement that compares two protobufs, I'd like the failure output to be a bit more semantic (this field is missing, this field has the wrong value), and perhaps even be able to say "ignore this field but compare everything else".

Anyone can write a boolean function to do those comparisons, but the desirable feature is being able to customize the output to provide structured failure feedback. I'm really curious what your thoughts are about how that kind of functionality could fit into the #expect(...) syntax!
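
For the multi-line string case, I'm imagining the matcher could start out as an ordinary function whose structured output feeds the failure message; lineDiff below is hypothetical, not anything from swift-testing:

```swift
// Hypothetical helper: compares two multi-line strings and reports
// per-line differences, which a custom matcher could surface in the
// failure message instead of dumping both strings whole.
func lineDiff(actual: String, expected: String) -> [String] {
    let a = actual.split(separator: "\n", omittingEmptySubsequences: false)
    let e = expected.split(separator: "\n", omittingEmptySubsequences: false)
    var differences: [String] = []
    for i in 0..<max(a.count, e.count) {
        // Lines present in only one string are reported as missing.
        let got = i < a.count ? String(a[i]) : "<missing>"
        let want = i < e.count ? String(e[i]) : "<missing>"
        if got != want {
            differences.append("line \(i + 1): expected \"\(want)\", got \"\(got)\"")
        }
    }
    return differences
}
```

The interesting design question is how #expect could accept that structured detail and attach it to the recorded issue, rather than just a boolean.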


