Can I use a DispatchQueue with Process.terminationHandler?

Hello, I am preparing some code for Swift 6.

I use Process and Pipe.
How can I use process.terminationHandler with a DispatchQueue?

        process.terminationHandler = { @Sendable _  in
            outputQueue.async {
                outputHandler(try? pipe.fileHandleForReading.readToEnd())
            }
        }

causes:

error: 'async' call in a function that does not support concurrency
 72 |         process.terminationHandler = { @Sendable _  in
 73 |             outputQueue.async {
 74 |                 outputHandler(try? pipe.fileHandleForReading.readToEnd())
    |                 `- error: 'async' call in a function that does not support concurrency
 75 |             }
 76 |         }

Hey Sébastien,
an .async on a queue is not an asynchronous context (the closure it takes is not async), so you won't be able to call async functions from it.

Would it be okay in this case to

        process.terminationHandler = { @Sendable _  in
            Task {
                outputHandler(try? pipe.fileHandleForReading.readToEnd())
            }
        }

Or, if you need it to run on a specific queue but still be a Task, you might use a task executor:

// Runs every enqueued job on the given DispatchQueue, so tasks that
// prefer this executor execute their jobs on that queue.
final class NaiveQueueExecutor: TaskExecutor, SerialExecutor {
  let queue: DispatchQueue

  init(_ queue: DispatchQueue) {
    self.queue = queue
  }

  public func enqueue(_ _job: consuming ExecutorJob) {
    let job = UnownedJob(_job)
    queue.async {
      job.runSynchronously(
        isolatedTo: self.asUnownedSerialExecutor(),
        taskExecutor: self.asUnownedTaskExecutor())
    }
  }

  @inlinable
  public func asUnownedSerialExecutor() -> UnownedSerialExecutor {
    UnownedSerialExecutor(complexEquality: self)
  }

  @inlinable
  public func asUnownedTaskExecutor() -> UnownedTaskExecutor {
    UnownedTaskExecutor(ordinary: self)
  }
}

and

let myExecutor = NaiveQueueExecutor(...)

process.terminationHandler = { @Sendable _  in
  Task(executorPreference: myExecutor)  {
    outputHandler(try? pipe.fileHandleForReading.readToEnd())
  }
}

Though maybe someone has a better idea.


There is a new Subprocess API in the works over here: Introduce Swift Subprocess by iCharlesHu · Pull Request #439 · apple/swift-foundation · GitHub, but I don't see an async termination handler in it -- perhaps I missed it, though.


Thank you Konrad, this helped.
Unblocked now :-)


With the new Subprocess API we don't need an onTermination handler, since we guarantee that the subprocess has terminated by the time the run method returns. So you can just write straight-line async code and know when the process is finished.

let result = try await Subprocess.run(...)
print(result.terminationStatus)

Just FWIW, this code will deadlock (unless you're reading the pipe elsewhere too).

Why?

  • Pipes are of a finite size
  • The subprocess writes into that pipe
  • If the pipe is full, the subprocess will block (or if it's writing using something async it'll suspend)
  • The process won't exit if it's blocked or suspended

So we have:

  • parent process waiting for child process to exit (before reading)
  • child process waiting for parent process to read (before exiting)

Which gives you a deadlock.
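
For illustration, a minimal sketch of this deadlocking pattern (the child command and size are just assumptions, chosen so the output exceeds a typical ~64 KiB pipe buffer):

import Foundation

// Hypothetical example: the child emits ~1 MB, which is more than the
// pipe buffer can hold.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/bin/head")
process.arguments = ["-c", "1000000", "/dev/zero"]

let pipe = Pipe()
process.standardOutput = pipe

process.terminationHandler = { _ in
    // The read only happens *after* termination...
    let data = try? pipe.fileHandleForReading.readToEnd()
    print(data?.count ?? 0)
}

try process.run()
// ...but the child cannot terminate: once the pipe buffer is full, its
// write(2) blocks, so waitUntilExit() never returns. Deadlock.
process.waitUntilExit()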


As others have suggested, ideally use the new Subprocess API, but before that's available you could also get 'inspired' by the (internal) AsyncProcess module in swift-sdk-generator (examples in the test cases), which also brings Process to async land.


Thank you Johannes for your analysis.
There is indeed another read elsewhere; I only shared part of the code for readability.

The other place is here:

        pipe.fileHandleForReading.readabilityHandler = { fileHandle in 
            outputQueue.async {
                outputHandler(fileHandle.availableData)
            }
        }

Okay, that makes sense. You'll still need to be super careful with:

  • Reordering issues (is outputHandler really called in the correct order)
  • Backpressure (what happens if the output is produced faster than you can consume it; this is often easy to test by spawning cat /dev/zero and checking that the parent process doesn't consume unbounded amounts of memory -- see the sketch below)
  • General concurrency issues

But yes, if you make sure to ~always read, then at least you won't stop the child process from exiting, which is good.
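
For illustration, a rough sketch of that kind of backpressure test (the deliberately slow outputHandler and the use of /bin/cat are assumptions; strict-concurrency checking is ignored for brevity):

import Foundation

// Spawn `cat /dev/zero` and push its output through the same
// readabilityHandler + outputQueue pattern as above. Nothing here slows
// the child down, so with a consumer that is slower than the producer the
// enqueued closures (and the Data they capture) pile up without bound;
// watch the parent's memory (Instruments, ps) while this runs.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/bin/cat")
process.arguments = ["/dev/zero"]

let pipe = Pipe()
process.standardOutput = pipe

let outputQueue = DispatchQueue(label: "output")
let outputHandler: (Data) -> Void = { data in
    Thread.sleep(forTimeInterval: 0.1)   // deliberately slow consumer
    print("consumed \(data.count) bytes")
}

pipe.fileHandleForReading.readabilityHandler = { fileHandle in
    let data = fileHandle.availableData
    outputQueue.async {
        outputHandler(data)
    }
}

try process.run()
process.waitUntilExit()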
