In the unit tests for a library I created, I need to read input from the user, which I do using readLine. It works fine when I run the tests from Xcode using Product > Test. However, when I run the unit tests from the command line using swift test, the program crashes. I discovered this issue because I am trying to test my library on Linux, which doesn't have Xcode. The same issue occurs when I run the tests from the command line on macOS, though, which leads me to believe that it is not specific to Linux.
Does anyone know how I can read input from the user when running unit tests from the command line?
This doesn't technically answer your question, but I think it may help you nevertheless.
Generally, you'd want to avoid reading from stdin in unit tests, because you typically want to run them unattended. You also typically don't want to pipe data into them from the outside: if you have multiple tests reading from stdin, you'd need to know the order they run in to send the expected input data...
However, if you really don't want to "dependency-inject away" with some fake input in unit tests, there is one option which allows you to continue to use readLine() and depend on stdin. The solution I'm describing doesn't even require you to pipe anything in, you'd just provide the input in the tests as a Swift String. I would still absolutely not recommend doing this in unit tests and if you were to run multiple tests in parallel, you'd be in trouble.
The basic idea is to temporarily replace the actual stdin with a different file descriptor (a pipe). Instead of reading from the actual stdin (often a terminal), we'd read from the pipe. To feed data into the pipe, we'd do that in the unit test.
Here's a sketch of the code:
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif
import Foundation
// This is not production quality; it will only work if at least the following things are true:
// - No tests run in parallel.
// - No one else is "messing" with stdin.
// - You don't supply a `string` that's too long. Pipes are bounded, and the demo code below
//   relies on the whole string fitting into the pipe. Usually, if you write less than 4 kB of
//   data, you should be good (not guaranteed) :).
func withStdinReadingString(_ string: String, _ body: () throws -> Void) rethrows {
    // Save the old stdin (probably the terminal).
    let oldStdin = dup(STDIN_FILENO)
    // Make a pipe whose write end we can write to and whose read end we replace stdin with.
    let pipe = Pipe()
    // Make the read end of the pipe the new stdin.
    dup2(pipe.fileHandleForReading.fileDescriptor, STDIN_FILENO)
    // Write the `string` data to the write end of the pipe.
    pipe.fileHandleForWriting.write(Data(string.utf8))
    // Close the write end.
    try! pipe.fileHandleForWriting.close()
    // Clean up.
    defer {
        // Restore the original stdin.
        dup2(oldStdin, STDIN_FILENO)
        // Close our copy of the original stdin.
        close(oldStdin)
        // This `try!` is fine; this must work.
        try! pipe.fileHandleForReading.close()
    }
    // Run the code.
    try body()
}
withStdinReadingString("foobar\n") {
    let l = readLine() // reads "foobar" (the trailing newline is stripped)
    print(l ?? "n/a")
}
withStdinReadingString("hello world\n") {
    let l = readLine() // reads "hello world"
    print(l ?? "n/a")
}
withStdinReadingString("hello\nworld\n") {
    let l1 = readLine() // reads "hello"
    let l2 = readLine() // reads "world"
    print(l1 ?? "n/a")
    print(l2 ?? "n/a")
}
I'm not near my computer right now, so I haven't had a chance to try this out, but it doesn't look like this allows me to dynamically read from standard input. It looks like it requires me to hard-code the input before the tests run, which isn't useful for me.
It does indeed require you (I'd say allow you) to specify the exact input that readLine() will read within your test cases. Isn't that typically what you want in a unit test? You run your code with a certain input and verify that the output is what you expect?
Do you really want to read from the terminal during a unit test run? What do you want to happen if no user's present whilst the tests are run?
If you really do want to read from an actual terminal, you can open /dev/tty and dup2 it onto STDIN_FILENO. Pseudo-code:
[...] // most of the code like before
let oldStdin = dup(STDIN_FILENO)
let ttyFD = open("/dev/tty", O_RDONLY)
if ttyFD == -1 {
    throw SomeErrorHereWhichIndicatesThatYouCouldNotOpenTheTTY()
}
dup2(ttyFD, STDIN_FILENO)
defer {
    // Restore the original stdin.
    dup2(oldStdin, STDIN_FILENO)
    // Close our copy of the original stdin.
    close(oldStdin)
    // Close our duplicate tty descriptor (stdin still refers to the tty until restored above).
    close(ttyFD)
}
// Run the code.
try body()
[...]
I'm a little confused about what the body of the function is supposed to consist of because you are referencing code from your previous example. Could you help me out by posting the fully self-contained function that I can use?
Right now, I am able to type text into my terminal (previously, readLine would immediately return nil), but when I press enter, the function does not return.
Here's what I have:
func withReadLine(_ body: () -> Void) {
    // Most of the code is the same as before.
    let oldStdin = dup(STDIN_FILENO)
    let ttyFD = open("/dev/tty", O_RDONLY)
    if ttyFD == -1 {
        fatalError("withReadLine: couldn't open /dev/tty")
    }
    dup2(ttyFD, STDIN_FILENO)
    defer {
        // Restore the original stdin.
        dup2(oldStdin, STDIN_FILENO)
        // Close our copy of the original stdin.
        close(oldStdin)
    }
    // Run the code.
    body()
}
// example:
print("What is your name?", terminator: " ")
withReadLine {
    let name = readLine()
    print("your name is \(name ?? "nil")")
}
If a test relies on input from stdin, it's not a unit test. Unit tests shouldn't have any dependency on external factors. They shouldn't consume stdin, make network requests, interact with a DB, communicate with hardware, etc., and they should have absolutely minimal requirements of their environment. For example, testing a German localization shouldn't require your computer's language to be set to German. Every such integration or dependency slows tests down and makes them brittle (e.g. now you can't test offline, without a DB, on a French-language machine, etc.).
What's the unit you're trying to test, and what is it about it that makes you want to manually provide input?
Tests are meant to be automatic. Sitting around and keying in the right input at the right time during a test run is ... not automated testing.
@AlexanderM You're right, the tests that I'm running are not unit tests, because they are not self-contained and they do rely on external input. They would be better described as integration tests. They are for my library, SpotifyAPI (GitHub - Peter-Schorn/SpotifyAPI), a Swift wrapper for the Spotify web API that supports all endpoints. In order to test my library, I have to go through the authorization process, which requires opening a URL in the browser, where I log in to my Spotify account. I then click agree and am redirected to a redirect URI that I specify, which contains authorization information in its query string. I then need to paste this URL back into the program in order to complete the authorization process.
I do have a handful of truly self-contained unit tests, which do not rely on external input. However, the whole point of my library is to interact with the Spotify web API, so I need tests that do that.
Nonetheless, Apple's XCTest framework is still incredibly convenient and I see no inherent reason for not using it for these tests. I need to run them somehow anyway, and creating a separate program to do that seems unnecessary.
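To make the "paste the redirect URL back into the program" step concrete, here is a minimal sketch of pulling the authorization code out of a pasted URL with Foundation's URLComponents. The `code` query item is part of Spotify's documented authorization code flow, but the helper name and the example URL below are made up:

```swift
import Foundation

// Sketch: extract the authorization code from the redirect URL that the
// user pastes back in. `authorizationCode(fromRedirectURL:)` is a made-up
// helper name; the `code` query item comes from Spotify's flow.
func authorizationCode(fromRedirectURL url: URL) -> String? {
    URLComponents(url: url, resolvingAgainstBaseURL: false)?
        .queryItems?
        .first(where: { $0.name == "code" })?
        .value
}

let pasted = URL(string: "https://example.com/callback?code=abc123&state=xyz")!
print(authorizationCode(fromRedirectURL: pasted) ?? "no code") // prints "abc123"
```

Parsing this way (rather than with string splitting) handles percent-encoding and query-item ordering for free.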
As @AlexanderM mentioned, a better way is to refactor your interface such that reading from stdin is no longer necessary.
That said, all hope is not lost. If you have to test against stdin, FileCheck is a great way to do it, as it's the standard practice in LLVM/Swift compiler development.
Is this something you're doing to test specifically your authentication functionality, or is it a precondition to all other tests?
I'm doing it for both of those reasons. Going through the authorization process is a precondition for 95% of the tests. It's required in order to generate an access token, which is then used in most of the other tests. You can read about the authorization process for the Spotify web API here. I also need to test the authentication functionality. In particular I need to ensure that providing invalid values causes the authentication attempt to be rejected. The results of these tests have important implications for the security of my library.
Are there any kinds of test accounts you can make, whose authorizations could be stored in a secrets file?
I could go through the authorization process a single time before I run the tests and then inject the access token into the program at compile time, but I actually need to go through the authorization process multiple times. For this reason, it would be extremely inconvenient to have to inject the access token into the program repeatedly; I would basically have to re-run the tests each time, which means they would require even more manual input, which is what I'm trying to minimize in the first place.
That's a huge bummer. How complex is the Spotify API? Do you think it's feasible to mock it entirely, so that most (non-authentication-flow related) unit tests just target a test data set served by a local web server (heck, even skip the web server, just mock the http client, too)
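To illustrate the "mock the http client" option: here is a minimal, dependency-free sketch in which tests swap in canned responses. All of the type and member names here are hypothetical, not from the actual library:

```swift
import Foundation

// Hypothetical abstraction over the HTTP layer. Production code would use a
// URLSession-backed implementation; tests substitute `MockHTTPClient`, so no
// network access (or Spotify account) is needed.
protocol HTTPClient {
    func get(_ url: URL) throws -> Data
}

struct MockHTTPClient: HTTPClient {
    /// Canned responses keyed by URL path, e.g. "/v1/tracks/abc".
    var responses: [String: Data]

    struct UnstubbedEndpoint: Error { let path: String }

    func get(_ url: URL) throws -> Data {
        guard let data = responses[url.path] else {
            throw UnstubbedEndpoint(path: url.path)
        }
        return data
    }
}

// Example: exercising the JSON-decoding layer against a canned payload.
struct Track: Decodable {
    let id: String
    let name: String
}

let mock = MockHTTPClient(responses: [
    "/v1/tracks/abc": Data(#"{"id": "abc", "name": "Time"}"#.utf8)
])
let data = try mock.get(URL(string: "https://api.spotify.com/v1/tracks/abc")!)
let track = try JSONDecoder().decode(Track.self, from: data)
print(track.name) // prints "Time"
```

Because the protocol is injected, the code under test never knows whether it's talking to the real network or to canned data.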
Indeed, that's a good idea.
You can inject one access token once, and use that one token to solve your problem for all non-authentication related unit tests. E.g. a class that looks up the songs in a playlist can use that one, known-working token. Ideally, if you could mock the API, you won't even need this class to know/care about authentication, at all.
Though how to effectively test the authentication flows is still an issue, that would need multiple manual authentications.
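One low-friction way to do that one-time injection is to pass the pre-obtained token in through an environment variable rather than at compile time. This is only a sketch; the variable name `SPOTIFY_ACCESS_TOKEN` and the helper name are made up:

```swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Reads a pre-obtained access token from the environment, so the
// authorization flow only needs to be completed once, before the test run.
// `SPOTIFY_ACCESS_TOKEN` is a made-up variable name.
func storedAccessToken() -> String? {
    guard let value = getenv("SPOTIFY_ACCESS_TOKEN") else { return nil }
    return String(cString: value)
}

// Usage: SPOTIFY_ACCESS_TOKEN=<token> swift test
```

Tests that only need a working token can then skip the interactive flow entirely whenever the variable is set.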
If it's the former (testing the authentication functionality itself), I would try to write some automation that drives a web browser through the authentication process.
Actually, if I were able to set up a local server that could listen for Spotify's redirect to my redirect URI (which can be any valid URL, including one with a custom scheme) and then deliver the URL back to my program, then I would no longer need to read from stdin. I wouldn't want to depend on any large libraries such as Vapor, though, because this is only necessary for the tests.
That would be a test-only dependency, so I don't think that's a big deal. The repo bloat is worth it, if it saves you from clicking through login pages all day long.
I guess I could create a local server using Vapor.
Even setting aside the complexity and the amount of time it would take, mocking the Spotify web API has its own issues. For example, what if it changes? If my mocked version was based on an old version of the Spotify web API, then I would never know about the change. Furthermore, some tests involve the player endpoints (e.g., playing tracks, toggling shuffle, skipping to the next track, retrieving the currently playing track). It would be unfeasible for me to mock that functionality.
I just can't believe that it's this hard to read from the standard input during unit tests. It shouldn't be that hard. Surely I can't be the first person who needs to do this.
As others have said, needing to read real input (not just mocked input) during tests is extremely unusual and will make your testing much harder. If you wish to test your API interactions, including multiple calls, mocking is the way to go. I suggest real network mocks using something like OHHTTPStubs, which lets you mock particular endpoints using data or local JSON files. With a little work you can get a pretty dynamic system going. Best of all, it doesn't require any client code changes. Since Spotify's API is versioned and very popular, it doesn't change often, and when it does, it's not a breaking change (old clients will still work for some time), so I don't think you need to worry about that.
The primary purpose of my tests is to ensure that I am interacting with the Spotify web API correctly. Creating a mocked version would defeat that purpose. (The second most important purpose of my tests is to ensure that the JSON is correctly decoded, but I do indeed have truly self-contained unit tests for that purpose that use local JSON files.) How could I ever be sure that the mocked version has the same behavior? Although Spotify has an impressive amount of documentation, I've discovered dozens of examples of unexpected, undocumented behavior. Who knows what else I'm missing. Using a mocked version would make my tests a lot more unreliable; this tradeoff doesn't make much sense to me.
With a little work you can get a pretty dynamic system going.
I don't think you're taking into account the complexity of the Spotify web API. I'm not only making GET requests for data that is more or less the same each time (which could be easily mocked using local JSON files). The player and playlists endpoints require keeping track of, and mutating, an immense amount of state. Future requests then rely on that mutated state.
Although I've already said this, I need to stress it again: The reason why I need to read user input is so that I can go through the authorization process, which cannot be mocked. In other words, the only tests that require user input are the ones that cannot be mocked, and so mocking doesn't even solve the problem that I set out to solve.