Reviews are an important part of the Swift evolution process. All review feedback should be either on this forum thread or, if you would like to keep your feedback private, directly to the review manager. When emailing the review manager directly, please keep the proposal link at the top of the message.
Trying it out
To try this feature out, add a dependency to the main branch of swift-testing to your project:
Finally, import Swift Testing using @_spi(Experimental): instead of import Testing, write @_spi(Experimental) import Testing.
(here is an example repo that has all the setup)
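For reference, the dependency setup might look roughly like this (a sketch: the repository URL and target names are illustrative, not authoritative — check the example repo above for the exact setup):

```swift
// Package.swift (fragment) — depend on the main branch of swift-testing.
// The URL and target names below are illustrative assumptions.
dependencies: [
    .package(url: "https://github.com/apple/swift-testing.git", branch: "main"),
],
targets: [
    .testTarget(
        name: "MyTests",
        dependencies: [.product(name: "Testing", package: "swift-testing")]
    ),
]

// Then, in each test file, import with the experimental SPI:
// @_spi(Experimental) import Testing
```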
What goes into a review?
The goal of the review process is to improve the proposal under review
through constructive criticism and, eventually, determine the direction of
Swift. When writing your review, here are some questions you might want to
answer:
What is your evaluation of the proposal?
Is the problem being addressed significant enough to warrant a
change to Swift?
Does this proposal fit well with the feel and direction of Swift?
If you have used other languages or libraries with a similar
feature, how do you feel that this proposal compares to those?
How much effort did you put into your review? A glance, a quick
reading, or an in-depth study?
More information about the Swift evolution process is available at
From an API-naming perspective, #expect(exit: .failure)—or #expect(exitWith: .failure)—reads more fluently (IMO) without loss of clarity as compared to #expect(exitsWith: .failure).
In particular, “expect” naturally takes a noun as its object, and “with” here (as is often the case for arguments) is implied and generally omissible as vacuous.
But otherwise, this is a hugely useful facility for testing. Excited.
Thanks for the feedback! My thinking here is that there's an elided "this code" in the name of the macro, so that in full it would read "expect [this code] exits with failure". I hope that makes sense!
Edit: Or maybe it's more helpful to imagine "the following closure" instead, so "expect [the following closure] exits with failure"?
Sure. If this information (“this code” or “this closure”) is necessary for clarity, though, adding an “s” probably isn’t enough for that purpose?
However, it doesn’t seem particularly necessary to clarify that the expectation is of the closure—what else could it be?
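For comparison, here is how each candidate spelling reads at the call site (a non-compiling sketch; only exitsWith: is the proposed name, and the other two are the hypothetical alternatives suggested above):

```swift
// Proposed: reads as "expect [this code] exits with failure".
await #expect(exitsWith: .failure) { fatalError() }

// Hypothetical alternative spellings from this post:
await #expect(exitWith: .failure) { fatalError() }  // "expect exit with failure"
await #expect(exit: .failure) { fatalError() }      // "expect exit: failure"
```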
#expectNoError(performing: noThrow)
#expectError(performing: `throw`)
#expect(error: .case, performing: `throw`) // While it's not important to change this, `throws` was not a good choice. It doesn't create sentences like `error` does.
The proposed ExitTest.Condition exists only to work around having yet another #expect overload spelled differently. Please don't continue with this approach. It looks clever, but it's worse to read and more to have to know about.
Is your feedback about the API surface (i.e. using the new exit tests in your tests) or about the specific implementation of it as it currently exists?
As I collect the feedback, it's important for me to understand if feedback is about the API, the implementation or perhaps both.
+1 from my side. I also like the API form as it has been proposed.
The inability to test for assertions has been my main gripe with testing in Swift since coming from Objective-C a few years ago. Testing only happy paths has always felt incomplete and insufficient (what if the assertion is broken or maybe gone?). So I'm really happy to see this finally coming.
Two questions for clarity:
As I understand it, for now, there's really no way to pass any arguments or state to the closure? So we won't be able to use shared setup code or test arguments like so for now:
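A hypothetical sketch of the kind of thing that, as I understand it, isn't supported at review time (the test name, argument, and message are all illustrative):

```swift
// ❌ Hypothetical: the exit test body runs in a child process, so it
// cannot capture state (like `size` below) from the enclosing test.
@Test(arguments: [8, 16])
func rejectsSmallSizes(size: Int) async {
    await #expect(exitsWith: .failure) {
        precondition(size >= 32)  // error: cannot capture `size`
    }
}
```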
To me it's not a problem that we cannot test for exits on iOS yet – better somewhere than nowhere! But how would we go about writing an exit test inside a universal test class? My understanding is that #expect(exitsWith:…) is just unavailable on unsupported platforms. So would this work and test for the exit on non-iOS platforms as expected?
Correct. This is obviously something we want to implement, and I know exactly how we'd do it (I dream in macro expansion code now… that's healthy, right?) but we likely need a new compiler or language feature to do it correctly. See the support for passing state section for more details.
Overall, huge +1 from me, as originally mentioned in the pitch thread.
I do wonder if there is a more ergonomic way to mark the test as skipped because the platform doesn't support exit tests. One could always hide the entry behind an #if, but I imagine knowing the test was skipped by default would be more helpful than the compiler preventing the test from being written (or worse: written, then failing later because the author only had one platform selected, and it was used on something like CI).
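For reference, the #if-wrapping workaround mentioned above might look something like this (a sketch; the exact platform condition to check is an assumption):

```swift
#if !os(iOS)  // exit tests are unavailable on some platforms
@Test func abortsOnInvalidInput() async {
    await #expect(exitsWith: .failure) {
        fatalError("invalid input")
    }
}
#endif
```

The drawback, as noted, is that on unsupported platforms the test silently doesn't exist rather than being reported as skipped.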
One question that came up as I try to determine how I can use this: I'm assuming there is no opportunity for cleanup on failure, correct? In my use case, several tests write to disk (albeit a temporary directory), and although not required, it would be nice to clean up on expected failure.
A big +1 from me - it adds a flexible capability to the testing library that allows for quite a few new testing scenarios that were previously only enabled with external scripting, and hard to integrate into a holistic testing module - especially for functional or integration style testing.
I think the question was more if there's a possibility to expand the .enabled(if:) trait to cover this case - such as .enabled(if: Capabilities.supportsExitTesting)
As that opens up an evolution space of testing conditions that can be a mixture of configuration & runtime constraints on the test execution.
Is the label exitsWith really necessary? I feel that #expect(.failure) is clear enough. In that case, exitCode(_:) could be exit(code:), so at the call site it would be #expect(.exit(code:)).
Otherwise, if #expect must have any label with the word exit, I would suggest using .code(_:) to avoid repetition.
#expect(.failure) {} would be very ambiguous to anyone reading the code and could be easily confused with withKnownIssue {} (i.e. XFAIL).
"Exit code" is an established term of art: it's a numeric value reported by the system for a process after it has exited and is distinct from a signal (which indicates a different kind of termination.)
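To make that distinction concrete, here is a small standalone sketch (using Foundation's Process, not the Swift Testing API) that spawns a child process and inspects both its exit code and its termination reason. The /bin/sh path assumes a POSIX system:

```swift
import Foundation

// Spawn a child shell that exits with a specific code.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/bin/sh")
process.arguments = ["-c", "exit 42"]
try process.run()
process.waitUntilExit()

// terminationStatus is the numeric exit code; terminationReason
// distinguishes a normal exit from death by an uncaught signal.
print(process.terminationStatus)
print(process.terminationReason == .exit)
```

A process killed by a signal would instead report terminationReason == .uncaughtSignal, which is why exit codes and signals are modeled as distinct kinds of termination.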
Something like hasFeature() (but for use by Swift Testing) would be nice to have but is beyond the scope of this proposal and would presumably require compiler changes to implement, because you'd still need to write #if os(...) in the body of the test if we just made it a testing trait.
With Scoping Test Traits, Swift Testing could in theory write the closure that invokes the compiler checks for us, right?
There’s probably some pain in trying to get that to work retrofitted on the disabled trait, so I can understand if it’s not possible - making it moot to try this in another way
Or could the macro expansion do further preamble implementation based off the hasFeature trait?