For what it's worth, that's what I was trying to do with this thread. The example was just to illustrate a larger problem. I don't really want a solution specific to this Notification Center example, but rather a general solution to testing code that uses async-await.
So this thread is a good place I think; we have collected a number of folks who are interested in the problem space.
There are a few distinct areas of focus that are worth noting:
- Testing AsyncSequence types themselves that are sourced fully from concurrency domains (map, debounce, etc.)
- Testing AsyncSequence types themselves that are sourced from external concurrency domains (see NotificationCenter)
- Testing structured concurrency (ala actor, or async let, etc.)
- Testing unstructured concurrency (ala Task)
- Testing unsafe unstructured concurrency (ala things derived from withUnsafeContinuation)
Each of these likely has a somewhat specialized solution, and there are a few wrinkles with each. For example, the nanoseconds-based sleep function is not controlled in the same manner as Task, whereas the Clock-based sleep function is controllable (via custom clocks).
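As a sketch of that difference (the function names here are mine, purely for illustration): a delay expressed through an injected Clock can be driven by a virtual test clock, such as the TestClock in the swift-clocks package, while Task.sleep(nanoseconds:) is always pinned to real time:

```swift
import Foundation

// Hard to control from a test: always sleeps in real time.
func pollAfterDelayUncontrolled() async throws {
    try await Task.sleep(nanoseconds: 1_000_000_000)
}

// Controllable: the clock is injected, so production code passes
// ContinuousClock() while a test passes a virtual clock it can advance.
func pollAfterDelay<C: Clock>(clock: C) async throws where C.Duration == Duration {
    try await clock.sleep(until: clock.now.advanced(by: .seconds(1)), tolerance: nil)
}
```

With the injected version, a test clock's advance(by:) can resolve the sleep immediately and deterministically; the nanoseconds variant offers no such hook.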
It would be good to isolate the problems within each domain area and see if there are specific solutions that can be generalized to more areas. For example the testing of AsyncSequences fully sourced from concurrency domains is something that we have a start to a pretty neat solution for (and that might have reusable bits and bobs to apply to other areas).
Also worth note (which has previously been mentioned here):
There are a couple of axes of testing: we have integration, behavioral, and unit tests, each of which has differing needs.
We should also consider multiple platforms with how they interact with these - as well as interoperation to how the driving runloop or other root driver of concurrency works. Specifically not all apps have a NSRunLoop powering them, sometimes they are dispatch_main powered, other times we have apps that are fully single threaded (i.e. WebAssembly).
@Philippe_Hausler I've extracted the _CAsyncSequenceValidationSupport module and the logic that manages serial execution and jobs to a package that recreates the example test on a branch here:
The extraction is all directly in the tests:
(I've renamed TaskDriver to Serially and incorporated its lifecycle into its own initializer.)
Unfortunately, I don't know enough about all the working parts to get a test to run serially and pass, though.
In particular, when the test hits this line:
We get EXC_BAD_ACCESS on this line:
My assumption is I'm still missing some of the required tools, and that maybe they don't exist. Another issue with the code I'm trying to test is that some of it uses @MainActor, which seems to escape the serial executor.
Hopefully this offers a working example, though!
Why is the Notification Center post async?
await NotificationCenter.default.post(name: UIApplication.userDidTakeScreenshotNotification, object: nil)
Posts by nature from NSNotificationCenter are posted and delivered on the queue/thread/actor they originated from. Is that coming from accessing the property UIApplication.userDidTakeScreenshotNotification?
I think the problem here is that it is running other async things that are outside the enqueue's purview. Those hooks cannot grab things from the main actor, IIRC.
Yeah, I believe so. It goes away if I add @MainActor on the test, but that prevents all of it from running through the serial executor.
Yeah, that was my impression, as well. Is this something that will be improved by future tooling?
My hunch is that most of the kinds of async-await tests I want to write just require some kind of "single-threaded" mode.
I've been thinking the same thing for a while now; not even because of concurrency. Currently, the package manager has tight integration with only one first-party library, XCTest, which was designed for running Obj-C tests in Xcode. It was okay when Swift first launched, but by now it isn't enough. As Swift evolves, it gets pushed a lot further, and it shows that we really need to start thinking about Swift-native testing.
I'd love to see us thinking really big. What about language-integrated testing? Even for synchronous code, developers today struggle with things that should be simple, like creating mocks and stubs, because regular Swift code isn't designed to be picked apart the way you often need to do in tests. Often, developers will just throw protocols at the problem, which is a level of abstraction they don't really need, which makes code harder to maintain, and which can come with a runtime penalty because it's difficult to strip all the dynamism out for release builds.
What about if the compiler could just build test projects in a special way that allows their logic to be more easily verified? So you could do things like temporarily substitute a method of a struct, even though that's not normally a thing you can do in Swift. I know this thread is about concurrency, but that's the level of solution I think we need for a really first-class testing experience.
With concurrency itself, we introduced new keywords and concepts like Sendable and actor isolation, because concurrency is big enough to warrant that. Testing is fundamental to ensuring the code you write even works; it's at least as important as concurrency, and if we can make testing better by that level of action at the language level, I think we should. That is in addition to a better testing support library.
Coming back to the topic: this is yet another gap, similar to mocking/stubbing, where you'll have to hack your way around it. XCTest is not really designed to test async code. It supports async test functions, but that's not enough to really inspect data that evolves over time.
If I could put together a team, similar to concurrency or regexes, for the next big feature in Swift, I'd choose to invest that engineering effort overhauling testing. IMO it should be our top priority.
I filed an issue with Async Algorithms to at least consider extracting the tools it uses for broader use: Tools for testing Swift concurrency · Issue #157 · apple/swift-async-algorithms · GitHub
Ideally there would be some associated proposals to accompany them, demonstrate use, etc.
I really do hope the Swift team focuses on the lack of testability for concurrent code soon, though! As folks adopt async-await more and more, they're going to end up in a pretty messy situation.
I don't think that testing techniques that rely on special scheduling modes would be ideal; in addition to being fundamentally brittle, they take the environment you're testing further away from the actual production environment the code will run in, making the test less effective. The sorts of tools swift-async-algorithms has may have their place, but they shouldn't be necessary for everyday testing. If we can fit synchronization points into the code that the test can connect with, then the test should be robust regardless of the environment it runs in. I don't think you need to radically rewrite your example to do that. For instance, you could use swift-async-algorithms' AsyncChannel to send sentinels at significant points in the ViewModel task's execution:
class ViewModel: ObservableObject {
    @Published var count = 0

    enum Progress {
        case didSubscribeToScreenshots, didRespondToScreenshot
    }

    // set to non-nil when testing
    var progress: AsyncChannel<Progress>?

    @MainActor
    func task() async {
        let screenshots = NotificationCenter.default.notifications(
            named: UIApplication.userDidTakeScreenshotNotification
        )
        await progress?.send(.didSubscribeToScreenshots)
        for await _ in screenshots {
            self.count += 1
            await progress?.send(.didRespondToScreenshot)
        }
    }
}
In production, the channel property can be left as nil, and execution will proceed normally. But in testing, instead of hoping we get lucky with scheduling, the test can install a channel into the view model, and await key points of the ViewModel task's execution, while also asserting that the ViewModel task's progress corresponds to the test's expectations:
@MainActor
func testBasics() async throws {
    let vm = ViewModel()

    // Install a progress channel into the ViewModel we can monitor
    let vmProgress = AsyncChannel<ViewModel.Progress>()
    vm.progress = vmProgress

    // We get `task.cancel(); await task.value` for free with async let
    async let _ = vm.task()
    XCTAssertEqual(vm.count, 0)

    // Give the task an opportunity to start executing its work.
    let firstProgress = await vmProgress.next()
    XCTAssertEqual(firstProgress, .didSubscribeToScreenshots)

    // Simulate a screen shot being taken.
    NotificationCenter.default.post(
        name: UIApplication.userDidTakeScreenshotNotification, object: nil
    )

    // Give the task an opportunity to update the view model.
    let nextProgress = await vmProgress.next()
    XCTAssertEqual(nextProgress, .didRespondToScreenshot)
    XCTAssertEqual(vm.count, 1)
}
Combine is perhaps easier to test because you get these probe points "for free" between each layer of transformation, whereas async-await makes it easier to write lots of straight-line code that mixes concerns, for better or worse. It's another front in the functional-vs-imperative war in that regard.
Are you suggesting this as a workaround for things today, or are you suggesting that this is how async-await code should be written?
It feels really, really unfortunate to need to introduce ad hoc, test-only code and interleave it through a feature's suspension points just to get a reliably passing test. This is also much more onerous than typical dependency injection, which has a minimal footprint on the logical reasoning of code and is useful for more than just testing (Xcode previews, demos, offline testing, etc.). The sole purpose of the code you've added to the feature is testing, and it is only needed because async-await doesn't provide a reliable testing experience by default.
I also don't quite understand how this code guarantees a passing test. The .didSubscribeToScreenshots progress point is sent to the channel before the actual subscription starts. Can you explain what guarantees that this suspension point in the VM doesn't hand off control back to the test and cause it to post the NotificationCenter notification before the subscription to notifications begins?
Shouldn’t async-await aspire to give us these probe points “for free” as well, either with a single-threaded/serial mode, executor overrides, or otherwise? I find it odd that in order to test a simple async-await feature I need to introduce test-only code and thread it between each line that might have an async hop.
I also am not quite sure what you mean by “functional-vs-imperative” or what that has to do with the issue of async-await testability.
As a side note, this line does not currently compile:
async let _ = vm.task()
The variable must be named for some reason, which gives us an unused variable warning, which we can’t even silence:
async let task = vm.task()
_ = task // 🛑 Expression is 'async' but is not marked with 'await'
Should I file a bug?
I agree and couldn't have said it better.
I picked AsyncChannel because it's a readily available tool that didn't require much rearchitecting of the example code you had written, but I think that in general, robustly observing the execution of async code is going to need some explicit coordination. I don't think you'd need to, and you really shouldn't, try to instrument every suspend point, only the points where an event that's interesting to test occurred. And if an event is interesting to test, then it's likely also interesting for logging, analytics, tracing, or other instrumentation too. I may have added the code for testing, but you could use the channel to dependency-inject any other tracing you might want to do for this code, so it doesn't seem so different in nature to using DI in synchronous code to both improve testability and get all the other benefits of doing so like you said. But where traditional DI often requires a lot of restructuring and refactoring to get good results, the impact of the instrumentation I added is relatively minimal to the structure of your original code, on the level of inserting print or os_log statements.
I'll have to defer to @Philippe_Hausler here. If the subscription isn't guaranteed to actually be in effect before notifications(named:) returns, then I agree that being able to get that guarantee seems like something Foundation should provide (and it should be documented in the API docs that the subscription may be delayed).
Trying to test every single suspension point is overkill for most application code. Whether and how often most code suspends is an implementation detail that'll be constantly changing, and even if you try to minimize the nondeterminism in the system with a mock execution environment, you'll be fighting library, compiler, and runtime changes that move otherwise uninteresting suspension points around.
Ah, that's too bad async let _ gets rejected; being able to fire-and-forget a scoped child task seems useful enough that we should let that work. _ = await task would be the correct way to explicitly await the task at the end of its scope, though.
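A minimal sketch of that workaround (the function names here are mine): bind the child task to a name, then explicitly await it at the end of the scope:

```swift
func work() async {
    // some fire-and-forget side effect
}

func caller() async {
    // `async let _ = work()` is currently rejected, so bind a name...
    async let task: Void = work()
    // ...and explicitly await it; `_ = await task` (not `_ = task`)
    // is the form the compiler accepts.
    _ = await task
}
```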
So the async sequence of notifications registers an observer immediately (before it returns), so it is active before it is iterated. There is a sneaky implementation detail in there: we do have a small buffer that holds pending notifications for consumption (which might be what is ensuring this works; any reasonable posting rate should work just fine for this code).
The problem is if you want to do any kind of integration testing you are going to need to do a potentially massive (and ad hoc) refactor here when the presence of async-await is the only reason for it.
Sorry, my question is about await suspension points in general, nothing to do with Foundation or NotificationCenter. Consider this code:
class VM: ObservableObject {
    ...
    func foo() async {
        /*A: */await progress?.send(.didSubscribe)
        for /*B: */await _ in asyncSequence {
            // do something that should be testable
        }
    }
}

class Test: XCTestCase {
    func testFoo() async {
        ...
        async let task = vm.foo()
        _ = /*C: */await progress.first { _ in true }
        /*D: */syncEndpointThatFeedsTheAsyncSequence()
    }
}
What provides the guarantee that B will be called before D? Couldn’t A yield back to C and call D before B?
For a bit more real-world context: we’re trying to make the Composable Architecture play nicely with async-await and are finding it quickly breaks when it comes to testability. The ad hoc solutions in this thread aren’t even possible to introduce at the library level. TCA comes with a lot of testing tools that make it easy to test entire application flows in nuanced ways, so TCA apps tend to be tested more. But this means our users are more likely to encounter these issues if/when they adopt async-await code, and they are likely going to blame the library when in reality there just is no general method for testing async-await code.
AsyncChannel is unbuffered. send won't resume until someone reads the value.
Do you have some more examples of TCA code that you'd like to integrate with async, and how you want to test it?
Thanks, knowing that's helpful, and it appears to be the case for AsyncStream as well! This means AsyncStream seems like a useful substitute for the original sequence, and something we could inject into the VM, which would be less invasive than instrumenting a second "progress" stream.
This provides a path forward for certain situations, though there are a few rough edges that I guess are on the user to be aware of, and a few helpers are needed to avoid the boilerplate of initializing a stream from a sequence, and initializing a stream and continuation that can be used for testing.
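As a sketch of that direction (the types and helper here are mine, and rely on AsyncStream's default unbounded buffering so early yields aren't dropped): the view model takes the stream as a dependency, and a test holds on to the continuation to drive it deterministically:

```swift
// A view model that depends only on an injected AsyncStream, rather than
// constructing its NotificationCenter sequence internally.
final class CounterModel {
    private(set) var count = 0
    private let events: AsyncStream<Void>

    init(events: AsyncStream<Void>) {
        self.events = events
    }

    func run() async {
        for await _ in events {
            count += 1
        }
    }
}

// Helper that removes the boilerplate of capturing the continuation.
// (Swift 5.9's AsyncStream.makeStream(of:) provides the same thing.)
func makeStream() -> (AsyncStream<Void>, AsyncStream<Void>.Continuation) {
    var continuation: AsyncStream<Void>.Continuation!
    let stream = AsyncStream<Void> { continuation = $0 }
    return (stream, continuation)
}
```

In production, the injected stream would wrap the NotificationCenter sequence; in a test, calling the continuation's yield stands in for posting a notification.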
I'll find time to share some examples soon!
Dunno if it helps, but I'm having luck testing TCA-like code using a concurrency-capable expectation class: CheckedExpectation. The idea is to explicitly block the current Task whenever expectation.wait() is called. It's very similar to XCTestExpectation in that regard.
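I haven't looked at CheckedExpectation's actual implementation, but the idea can be sketched with an actor and checked continuations (this is my own illustration, not the linked code):

```swift
// A minimal concurrency-capable expectation: wait() suspends the calling
// task until fulfill() is called, without blocking a thread.
actor AsyncExpectation {
    private var isFulfilled = false
    private var waiters: [CheckedContinuation<Void, Never>] = []

    func fulfill() {
        isFulfilled = true
        for waiter in waiters {
            waiter.resume()
        }
        waiters.removeAll()
    }

    func wait() async {
        guard !isFulfilled else { return }
        await withCheckedContinuation { continuation in
            waiters.append(continuation)
        }
    }
}
```

This mirrors XCTestExpectation's fulfill/wait pairing, but the waiting side is a suspension point rather than a blocked thread (timeout handling is omitted for brevity).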
Thanks! When comparing the testability of Combine and arbitrary async code, I think one big difference is that Combine factors everything into a handful of composable abstractions, which happen to also have nice testability properties. Arbitrary async functions, on the other hand, can express almost anything. For a framework like TCA, maybe you can design protocols or types that encourage async code to be written in a form that allows for easy injection of sequence points by tests.
This is going to be important as we increasingly adapt our code patterns to use new asynchronous facilities in Swift. This was an interesting discussion to read. So after mulling over it for a few days, I took a deep dive.
Background observations
Viewed from one angle, the code example in question is relatively simple, but it balances on the intersection of three somewhat unrelated components:
- The feature being tested is the use of an AsyncSequence to respond to notifications over time. Testing an AsyncSequence is considerably more complex than testing a normal async function, because an AsyncSequence does not return in the way a usual async function will when awaited.
- There is a desire for the test to run in a serial, predictable manner, but the test is examining a response to a NotificationCenter post, the scheduling of which is not fully under the control of the application.
- The current structure of the code means that even if a test can demonstrate that the task() function on the ViewModel is called when the correct NotificationCenter message is posted, it is not currently possible to demonstrate that the production app is actually initializing this task when the ContentView is displayed.
AsyncSequence, not async-await
I want to emphasise the first point. In the thread I noticed there are multiple references to testing "async/await code". This is quite a misnomer here. The complexity arises primarily from using an AsyncSequence to respond to the notifications over time. With XCTest's support for async test methods, it is trivially simple to create linear, deterministic tests that await the results of an async method and then continue. The reason this code is difficult to test pivots on the fact that you cannot await an AsyncSequence in a test in the usual way, as the test will never proceed forward to the subsequent waitForExpectations. In my view, the whole discussion in the thread above needs to be understood in that frame: the discussion was mostly relevant to testing code that uses AsyncSequence, not to testing async-await code in general. :)
Combine would be better
The OP and others noted that it would be much easier to implement this code with Combine. In my view, that would definitely be a better way to implement this code, both for testability and for the other features Combine provides. My take is that this is not a good situation to reach for AsyncSequence, not just due to testability but due to suitability. Nonetheless, I've been mulling over the thread for several days and decided to sit down and lay out a way this could be made reliably testable while still using AsyncSequence. Coming back today to try out some code, I learned something in working through it, so that was good.
Define separate testable components
To test more effectively and reliably, we need to create separate testable components, so that when all tests pass, we can be confident both that the production code has the intended effects and (ideally) that the SwiftUI views are verifiably configured to correctly call those methods as expected at runtime.
In particular, it would be ideal to be able to test three things:
1. Verify that when the default screenshot action is called, it increments viewModel.count.
2. Verify that the supplied screenshot action is called when the screenshot notification is posted.
3. Verify that ContentView is configured to call the onDisplay action when it is displayed.
With SwiftUI the last of these seems a lot more difficult—managing view lifetimes is largely out of our hands now—but maybe we can take a shot.
Provide Actions via dependency injection
To achieve the above, the first step is to dissociate the various parts of the process so that they are individually testable. To do this, we can use dependency injection to provide Actions to the ContentView, rather than having the code for those actions reside within the ContentView (or ViewModel) itself. We can provide default implementations of these Actions, alongside the definition of the actions, that will be used in production. This will enable us to replace individual actions with test expectations as needed to test each component of the system. This is a pattern from John Sundell that I've adapted and am using in my own app.
To do this, we will define a single Actions struct within the ContentView that has a single property (an "action"), onDisplay:
struct Actions {
    // Actions the View takes in response to state changes or input.
    var onDisplay: () async -> Void
}
An instance of Actions then becomes a required property on ContentView, with a modified initialiser that will automatically use the default Actions(for: viewModel) in production when an alternative implementation is not injected:
init(viewModel: ViewModel = ViewModel(), actions: ContentView.Actions? = nil) {
    self.viewModel = viewModel
    self.actions = actions ?? Actions(for: viewModel)
}
Instead of calling task() on the ViewModel, the ContentView now calls its onDisplay action when it loads:
var body: some View {
    Text("\(viewModel.count) screenshots have been taken")
        .task { await actions.onDisplay() }
}
The full architecture, with the default actions in an extension, is thus:
import SwiftUI

class ViewModel: ObservableObject {
    @Published @MainActor var count = 0
}

struct ContentView: View {
    @ObservedObject var viewModel: ViewModel
    let actions: Actions

    var body: some View {
        Text("\(viewModel.count) screenshots have been taken")
            .task { await actions.onDisplay() }
    }

    init(viewModel: ViewModel = ViewModel(), actions: ContentView.Actions? = nil) {
        self.viewModel = viewModel
        self.actions = actions ?? Actions(for: viewModel)
    }

    struct Actions {
        // Actions the View takes in response to state changes or input. Pattern adapted from:
        // https://swiftbysundell.com/articles/dependency-injection-and-unit-testing-using-async-await/
        var onDisplay: () async -> Void
    }
}

extension ContentView.Actions {
    typealias AsyncFunction = () async -> Void

    // Default actions for production.
    // In this case, we define our default actions as static functions on the type because the
    // defaultOnDisplay action wishes to be able to call the defaultRespondToScreenshot action,
    // while defaultRespondToScreenshot requires a reference to the ViewModel. In cases
    // where the functions are independent, simple closures could be passed to an init.
    static func defaultOnDisplay(screenshotAction: @escaping AsyncFunction) -> AsyncFunction {
        return {
            let screenshots = NotificationCenter.default.notifications(
                named: UIApplication.userDidTakeScreenshotNotification
            )
            for await _ in screenshots {
                await screenshotAction()
            }
        }
    }

    static func defaultRespondToScreenshot(for viewModel: ViewModel) -> AsyncFunction {
        return {
            await MainActor.run {
                viewModel.count += 1
            }
        }
    }

    init(for viewModel: ViewModel) {
        let respondToScreenshot = ContentView.Actions.defaultRespondToScreenshot(for: viewModel)
        let onDisplay = ContentView.Actions.defaultOnDisplay(screenshotAction: respondToScreenshot)
        self.onDisplay = onDisplay
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        let viewModel = ViewModel()
        let defaultActions = ContentView.Actions(for: viewModel)
        ContentView(viewModel: viewModel, actions: defaultActions)
    }
}
(The typealias is of course not required but makes reading the function parameters a lot more comprehensible.)
Previously, the default code for when a screenshot was taken was to call task() on the ViewModel. Note this new implementation moves that code so that it is instead passed to the onDisplay action as a closure parameter. This will enable us to replace this specific screenshot action task—separate from the functioning of the AsyncSequence code—in some tests. (We also do not put this action as an injectable property on the ViewModel, because we would then need the ViewModel to instantiate this action while also needing this action to instantiate the ViewModel.)
Testing the components
We are now able to write performant and reliable tests that validate the code as outlined earlier.
1. Verify that when the default screenshot action is called, it increments the viewModel count.
Now that the default screenshot action is a standalone function, we can test its effects directly, without either the complication of the NotificationCenter post or the AsyncSequence:
func testA_defaultRespondToScreenshotAction_incrementsViewModelCount() async throws {
    // Setup
    let viewModel = ViewModel()
    let defaultScreenshotAction = ContentView.Actions.defaultRespondToScreenshot(for: viewModel)

    // Verify baseline value
    await MainActor.run { XCTAssertEqual(viewModel.count, 0) }

    // Code under test
    await defaultScreenshotAction()

    // Test expectations
    await MainActor.run { XCTAssertEqual(viewModel.count, 1) }
}
This test is performant. On an i9 MacBook Pro (no Apple Silicon), it averaged about 0.003 seconds (3/1,000 of a second) to run:
Executed 100000 tests, with 0 failures (0 unexpected) in 293.792 (512.494) seconds
Because ViewModel.count is a published property driving UI changes, we have asserted that it should only be modified on the main thread, hence the calls to MainActor.run. Otherwise, uncomplicated and entirely linear.
2. Verify that the supplied screenshot action is called when the screenshot notification is posted.
When we create an instance of Actions using the default production implementation, we want to verify that it responds to the expected NotificationCenter post when one is issued, and that it will indeed call the screenshot action we supply to it for each of those posts. With this design we can now test both of these things independently of both the ViewModel and the ContentView, neither of which is needed for this test:
func testB_defaultOnDisplayAction_callsSuppliedScreenshotAction_whenScreenshotNotificationPosted() async throws {
    // Actions and expectation
    let expectation = self.expectation(description: "screenshotAction called.")
    let fulfillExpectation = { expectation.fulfill() }
    let actions = ContentView.Actions(
        onDisplay: ContentView.Actions.defaultOnDisplay(screenshotAction: fulfillExpectation)
    )

    // Initiate the process created by the onDisplay action
    Task { await actions.onDisplay() }

    // Post the notification created when a screenshot occurs.
    _ = await MainActor.run {
        Task {
            NotificationCenter.default.post(name: UIApplication.userDidTakeScreenshotNotification, object: nil)
        }
    }

    // Test expectations
    await self.waitForExpectations(timeout: 10)
}
This test is fast (runs in about 0.005 seconds) and reliable—when run one at a time. However, when run repeatedly, this test rapidly triggers issues between NotificationCenter and expectation fulfillment. After trying various things, my current recommendation is simply to not run this test repeatedly. In my view, these are issues with the functioning of NotificationCenter in a multithreaded environment and with XCTestExpectation, the tool we have been given to verify async code (including for completion handlers, pre async-await). It seems pointless to bang our heads here on repeated test runs. This reliably confirms for us that our action works and is wired up correctly to the expected notification. Take the win.
3. Verify that ContentView is configured to call the onDisplay action when it is displayed.
This is really the only necessary integration test here, in my view. However, I thought this would probably not be possible with a SwiftUI app and View, at least not without resorting to third-party dependency extensions and complex code. But it sure would be good to lock in verification that our onDisplay action really was being fired off when that ContentView loaded—otherwise all our other tests are in vain, and we might have to resort to UITests or similar to continue to be sure that the view does indeed function as expected.
However, it turns out there is a way. And with just a short helper method, we can even have a clean, linear test too:
func testC_onDisplayActionIsCalled_whenContentViewIsDisplayed() async {
    // Actions and expectation
    let expectation = self.expectation(description: "onDisplay Action called.")
    let actions = ContentView.Actions(
        onDisplay: { expectation.fulfill() }
    )

    // Create content view
    let viewModel = ViewModel()
    let contentView = await ContentView(viewModel: viewModel, actions: actions)

    // Process being tested
    await display(contentView)

    // Test expectations
    await waitForExpectations(timeout: 1)
}

private func display(_ contentView: ContentView) async {
    await MainActor.run {
        let window = UIWindow(frame: UIScreen.main.bounds)
        window.rootViewController = UIViewController()
        let hostingController = UIHostingController(rootView: contentView)
        window.rootViewController?.view.addSubview(hostingController.view)
        window.makeKeyAndVisible()
        window.layoutIfNeeded()
    }
}
With the ability to inject a test expectation instead of the default onDisplay action, we can focus on how to instantiate a ContentView so that we can verify that it does call the onDisplay method. And after some trial and error, it's not that difficult! Put the ContentView in a UIHostingController, add it as a subview to a new UIWindow's rootViewController, and lay it out.
The test method itself here looks much more like testing typical async-await code. We call the code being tested—await display(contentView)—and then we simply await to see if our test expectations are fulfilled.
The test is again fast. Because people were revving engines in this thread, I again ran this method 100,000 times, finding an average execution time of 0.007 seconds. But I'm sure it'll be noticeably faster on an M1:
Executed 100000 tests, with 0 failures (0 unexpected) in 705.619 (839.864) seconds
Summary
- It's super important to distinguish between testing AsyncSequence code, as described here, and standard async-await code. The latter will be much more common in our code bases.
- Using dependency injection to supply Actions to our Views and/or ViewModels is an outstanding way of separating concerns and making code eminently more testable, particularly in an async world.
- Well-written tests can run fast. I really need to review some of my old tests, as these ones smoke them.
- Maybe I need to set up a dev blog.

@babbage Thanks for sharing your explorations! While this does refactor the code into something that deterministically passes 100% of the time, it also unfortunately weakens what is actually being tested. By moving all of the view model's logic into a separate thing that is injected, we are no longer able to test the logic.
For example, suppose the requirements of the feature were changed so that the first screenshot was not counted. Then we would update defaultOnDisplay to be:
-for await _ in screenshots {
+for await _ in screenshots.dropFirst() {
And next we would update the test so that the first screenshot is not counted, but the next few are:
func testA_defaultRespondToScreenshotAction_incrementsViewModelCount() async throws {
    let viewModel = ViewModel()
    let defaultScreenshotAction = ContentView.Actions.defaultRespondToScreenshot(for: viewModel)

    await MainActor.run { XCTAssertEqual(viewModel.count, 0) }
    await defaultScreenshotAction()
    await MainActor.run { XCTAssertEqual(viewModel.count, 0) } // First screenshot doesn't count
    await defaultScreenshotAction()
    await MainActor.run { XCTAssertEqual(viewModel.count, 1) }
}
However, this test fails. The only way to make it pass is to also update defaultRespondToScreenshot and reimplement the logic from defaultOnDisplay, but then you are maintaining the logic in two different places just so that you can test it.
By injecting all of the behavior of the view model into the view, we effectively lose the ability to test any of that behavior. It is better to inject only the bare essentials of the actual dependency, such as an async sequence or async stream, and then use that dependency in the view model so that you can test that logic.
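To sketch what that alternative looks like (the types and names here are mine, with an AsyncStream standing in for the NotificationCenter sequence): the loop, and therefore a requirement change like dropFirst(), stays inside the view model where a test can exercise it directly:

```swift
// Only the sequence of screenshot events is injected; the counting
// logic stays in the view model.
final class ScreenshotViewModel {
    private(set) var count = 0
    private let screenshots: AsyncStream<Void>

    init(screenshots: AsyncStream<Void>) {
        self.screenshots = screenshots
    }

    func task() async {
        // The changed requirement (the first screenshot is not counted)
        // lives here, so a test exercises the real logic.
        for await _ in screenshots.dropFirst() {
            count += 1
        }
    }
}
```

A test can now construct the stream, yield three events, and assert that the count is 2, without reimplementing the loop anywhere else.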