I am trying to wrap SFSpeechRecognizer in an async method. Here is a simplified version of my code:
import Speech

public final actor SpeechRecognizer {
    private var speechRecognizer: SFSpeechRecognizer!
    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?

    public func startOneTimeRecognition() async throws -> String {
        self.speechRecognizer = SFSpeechRecognizer(locale: .init(identifier: "en-US"))
        let recognitionRequest = try await createRecognitionRequestAndStartAudioEngine()
        return try await withCheckedThrowingContinuation { continuation in
            recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { result, error in
                Task {
                    if let result {
                        let recognizedPhrase = result.bestTranscription.formattedString
                        continuation.resume(returning: recognizedPhrase)
                    } else if let error {
                        continuation.resume(throwing: error)
                    } else {
                        fatalError("This situation shouldn't happen")
                    }
                    self.stopRecognition()
                }
            })
        }
    }

    func createRecognitionRequestAndStartAudioEngine() async throws -> SFSpeechAudioBufferRecognitionRequest {
        // some code that prepares the voice recognition and returns a recognition request
    }

    public func stopRecognition() {
        // code stopping the recognition
    }
}
This code compiles with no warnings under the Swift 6.0 compiler, but when I run it I get a runtime warning:

warning: data race detected: actor-isolated function at Tools/SpeechRecognizer.swift:41 was not called on the same actor

I think what is happening is that the callback passed into speechRecognizer.recognitionTask is being dispatched by the framework on another thread, while the compiler assumes the callback is still running on the actor. How do I get back onto the actor here?
While that looks like a diagnostics issue in the compiler (Task has had a few of those recently), the root cause is a missing annotation on Objective-C code – see below. In the meantime, I think you can simplify your code in a way that also avoids this bug:
public final actor SpeechRecognizer {
    private var speechRecognizer: SFSpeechRecognizer!
    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?

    public func startOneTimeRecognition() async throws -> String {
        // make sure we do stop recording in both paths - success & failure
        defer { stopRecognition() }
        self.speechRecognizer = SFSpeechRecognizer(locale: .init(identifier: "en-US"))
        let recognitionRequest = try await createRecognitionRequestAndStartAudioEngine()
        return try await withCheckedThrowingContinuation { continuation in
            // note: this assumes the handler fires exactly once (e.g. partial results
            // are disabled when the request is created) - resuming a continuation
            // more than once is a runtime error
            recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { result, error in
                if let result {
                    let recognizedPhrase = result.bestTranscription.formattedString
                    continuation.resume(returning: recognizedPhrase)
                } else if let error {
                    continuation.resume(throwing: error)
                } else {
                    fatalError("This situation shouldn't happen")
                }
            })
        }
    }
}
You don't need Task in that version, and it uses structured concurrency: stopRecognition will be called at the end, after the recognition task has completed.
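To see why the defer-based version works, here is a minimal, self-contained sketch of the same pattern with a fake callback-based API standing in for the Speech framework (fakeRecognize and Recognizer are illustrative names, not real API). The defer body runs on the actor, after the continuation has resumed and the function is returning:

```swift
import Foundation

// Minimal stand-in for a callback-based API that invokes its completion
// handler on an arbitrary background queue.
func fakeRecognize(_ completion: @escaping @Sendable (Result<String, Error>) -> Void) {
    DispatchQueue.global().async { completion(.success("hello world")) }
}

actor Recognizer {
    private(set) var stopped = false

    func recognizeOnce() async throws -> String {
        // runs on the actor when recognizeOnce returns, in both paths
        defer { stop() }
        return try await withCheckedThrowingContinuation { continuation in
            fakeRecognize { result in
                // continuations are Sendable, so resuming from the
                // background callback is fine
                continuation.resume(with: result)
            }
        }
    }

    private func stop() { stopped = true }
}

// Usage (e.g. from top-level async code):
let recognizer = Recognizer()
let phrase = try await recognizer.recognizeOnce()
print(phrase)
print(await recognizer.stopped)
```

Note that only the continuation crosses the isolation boundary; all actor state is touched from actor-isolated code, so there is nothing for the data race detector to flag.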
So my code should probably be a compile-time error in Swift 6, right?
Well, not exactly. In general there is nothing wrong with your code; it just benefits from structured concurrency. But calling stopRecognition from that callback should require an await – I really wonder why it doesn't – since the await is what ensures the call happens on the actor. My guess is that something about the SFSpeechRecognizer API makes the compiler not require await there; most likely it's ObjC-bridged code that lacks proper annotations. I suggest filing a radar.
That's right. The specific problem is that the resultHandler parameter needs to be marked @Sendable. Otherwise, the compiler infers that the Task is isolated to the actor that the non-Sendable closure is formed on.
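The inference rule can be demonstrated without the Speech framework at all. In this sketch, unannotated and annotated are illustrative stand-ins (not real API) for the same callback-taking function with and without the missing @Sendable:

```swift
// Like the current ObjC import: the closure parameter is not @Sendable.
func unannotated(_ body: @escaping () -> Void) { body() }

// With the missing annotation added.
func annotated(_ body: @escaping @Sendable () -> Void) { body() }

actor Demo {
    var state = 0

    func run() {
        // Non-@Sendable parameter: the closure is inferred to be isolated to
        // `self`, so it may touch actor state synchronously -- even though a
        // framework like SFSpeechRecognizer will actually invoke its handler
        // on an arbitrary queue. That mismatch is exactly the runtime data race.
        unannotated { self.state += 1 }

        // @Sendable parameter: the closure is nonisolated, and synchronous
        // access to actor state is rejected at compile time:
        // annotated { self.state += 1 }  // error: actor-isolated property 'state'
    }
}

let demo = Demo()
await demo.run()
print(await demo.state)
```

With @Sendable in place, the compiler forces you to hop back onto the actor explicitly (for example with an await), instead of silently assuming the handler already runs there.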
There's more information about this general class of problem – missing @Sendable annotations in APIs – in the Swift 6 migration guide, along with the general rules around isolation and non-Sendable closures.
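Until the framework header gains the annotation, one workaround (a sketch, assuming the standard recognitionTask(with:resultHandler:) signature, with continuation in scope as in the code above) is to declare the handler as an explicitly @Sendable closure yourself, so the compiler never infers actor isolation for it:

```swift
// Declaring the handler @Sendable up front keeps it nonisolated, so the
// compiler no longer assumes it runs on the actor.
let handler: @Sendable (SFSpeechRecognitionResult?, (any Error)?) -> Void = { result, error in
    if let result {
        continuation.resume(returning: result.bestTranscription.formattedString)
    } else if let error {
        continuation.resume(throwing: error)
    }
}
recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: handler)
```

Any access to actor state from inside such a handler then correctly requires an explicit hop back onto the actor.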