Cancelable DispatchQueue.concurrentPerform

Would it be possible to make a cancelable version of DispatchQueue.concurrentPerform, allowing you to abort any yet-to-be-scheduled blocks?

The use case is for when you’re running jobs concurrently and one of them sets a flag or fails in a way that already determines the result. Further execution wouldn’t affect the outcome, so it would be nice if an operation could signal back to concurrentPerform that it doesn’t need to schedule any more jobs.

For a real-world example, see this concurrent RandomAccessCollection wrapper I was building: https://gist.github.com/karwa/43ae838809cc68d317003f2885c71572

It would be beneficial if we could stop early in the case of errors (see: _forEach), or if some global flag is set (see: contains).
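As a rough sketch (this is not a real Dispatch API, and the names are mine), the interface I have in mind might look something like the following, layered over the existing call with a shared flag. Note that a layering like this can only skip the body of each block; it cannot stop Dispatch from scheduling the blocks, which is the part I'd like the library to handle:

```swift
import Dispatch
import Foundation

// Hypothetical sketch only -- not an actual Dispatch API. Calling `cancel`
// flips a lock-protected flag, and iterations that start afterwards return
// immediately. Because this sits on top of concurrentPerform, every block is
// still scheduled; canceling just turns the remaining blocks into cheap no-ops.
func cancelableConcurrentPerform(iterations: Int,
                                 execute body: (_ index: Int, _ cancel: () -> Void) -> Void) {
    let lock = NSLock()
    var isCanceled = false
    DispatchQueue.concurrentPerform(iterations: iterations) { i in
        lock.lock()
        let skip = isCanceled
        lock.unlock()
        guard !skip else { return }  // block was scheduled anyway; just do nothing
        body(i) {
            lock.lock()
            isCanceled = true
            lock.unlock()
        }
    }
}
```

A real implementation inside Dispatch could do better: it could check the flag before dispatching each block, instead of inside it.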

- Karl

Your example is:

    var _error: Error?
    DispatchQueue.concurrentPerform(iterations: numericCast(count)) {
      do { try body($0) }
      catch { _error = error } // TODO: lock. Would be cool if we could cancel future iterations, too...
    }
    if let error = _error {
      try rescue(error)
    }

So how would cancelability be superior to saying this?

    var _error: Error?
    DispatchQueue.concurrentPerform(iterations: numericCast(count)) {
      guard _error == nil else { return }
      do { try body($0) }
      catch { _error = error } // TODO: lock. Would be cool if we could cancel future iterations, too...
    }
    if let error = _error {
      try rescue(error)
    }

···

On Feb 23, 2017, at 8:35 AM, Karl Wagner via swift-corelibs-dev <swift-corelibs-dev@swift.org> wrote:

--
Brent Royal-Gordon
Architechies

···

On 24 Feb 2017, at 09:48, Brent Royal-Gordon <brent@architechies.com> wrote:

That's a fair point; I could do that in order to skip the body. In general, though, the size of the collection may be very large: we could be queuing up thousands of no-op blocks after the first element threw an error (or signalled a value, from a function like contains). It would be nice if we could propagate the fact that the outcome has already been determined up to Dispatch, which could then simply not bother to queue any more blocks.

I've kind of worked around it by using a batched concurrentPerform for the methods where we might expect early termination (such as contains).
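The batching workaround can be sketched like this (an assumed shape for illustration, not the gist's exact code): run the work in fixed-size batches and check a shared flag between batches, so that at most one batch of blocks is scheduled after the outcome is known:

```swift
import Dispatch
import Foundation

// Sketch of a batched "contains": each batch runs concurrently, and the
// lock-protected `found` flag is checked between batches, bounding the
// wasted scheduling to a single batch after a match is found.
func batchedContains(count: Int, batchSize: Int, where predicate: (Int) -> Bool) -> Bool {
    let lock = NSLock()
    var found = false
    var start = 0
    while start < count {
        lock.lock()
        let done = found
        lock.unlock()
        if done { break }  // stop before scheduling the next batch
        let base = start
        let batch = min(batchSize, count - start)
        DispatchQueue.concurrentPerform(iterations: batch) { offset in
            if predicate(base + offset) {
                lock.lock()
                found = true
                lock.unlock()
            }
        }
        start += batch
    }
    return found
}
```

The batch size trades scheduling overhead against wasted work: larger batches amortize the per-batch synchronization, smaller batches terminate sooner after the answer is known.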

- Karl