Feature Request: An AsyncAlgorithms operator that behaves similarly to a Dispatch Source

The new Async Algorithms package includes Debounce and Throttle operators. I'd like to propose a third operator that is similar, but subtly different. I loosely call it Coalesce, and it works much like a DispatchSourceUserDataAdd does.

In a DispatchSource, the event handler is invoked immediately if it's not currently running. If it is running, then subsequent calls to .add() will be coalesced and the event handler will run a single time at the next available opportunity.
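For readers less familiar with GCD, this coalescing behaviour can be sketched with a user-data source. The timing constants and semaphore choreography below are only illustrative:

```swift
import Dispatch
import Foundation

// Sketch of GCD's coalescing: while the handler is running, calls to
// add(data:) merge into a single pending event, and the handler then
// fires exactly once more when it next becomes available.
let queue = DispatchQueue(label: "coalesce.demo")
let source = DispatchSource.makeUserDataAddSource(queue: queue)
var invocations = 0
let started = DispatchSemaphore(value: 0)
let finished = DispatchSemaphore(value: 0)

source.setEventHandler {
    invocations += 1
    started.signal()
    Thread.sleep(forTimeInterval: 0.05)  // simulate slow work
    finished.signal()
}
source.activate()

source.add(data: 1)   // triggers the first invocation immediately
started.wait()        // the handler is now running
source.add(data: 1)   // these two adds coalesce into one pending event
source.add(data: 1)
finished.wait()       // first run done
finished.wait()       // exactly one more run for the coalesced adds
print(invocations)    // prints 2, not 3
```

Note that the very first event is handled with no delay at all; only the events that arrive while the handler is busy get merged.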

Both Debounce and Throttle will always delay invoking the event handler until some amount of time has elapsed. However, there are many situations where you want the event handler to fire immediately even if there's a high likelihood that subsequent events might be triggered that would nullify the initial one.

For example, when filtering a list or performing type-ahead selection, it's common for the user to type only a single character. With a DispatchSource, there is no delay in running the event handler, so the results are presented as quickly as possible. With Debounce or Throttle, however, the initial delay must elapse before the event handler is called, and that delay can be quite noticeable in many use cases.

For now, I continue to use a DispatchSource with a cancellable event handler, but it would be great if this Async Algorithms package could offer something similar.


That is an interesting concept. I have a few questions:

How would it work with regards to time? Would it have some sort of passed in Duration?

Is this best served as perhaps an option to the existing ones? Or is this behavior topologically distinct?

Is there a parent that it belongs to? e.g. in the family of reductions or in the family of filter?

What would a diagram of the events look like? For example this is what debounce looks like: swift-async-algorithms/TestDebounce.swift at main · apple/swift-async-algorithms · GitHub


In a DispatchSource, the event handler is invoked immediately if it's not currently running. If it is running, then subsequent calls to .add() will be coalesced and the event handler will run a single time at the next available opportunity.

If I understand correctly, you can already achieve this goal using .buffer(.bufferingNewest(1)).
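For context, here is a sketch of what that suggestion might look like, assuming swift-async-algorithms' `buffer(policy:)` spelling and a hypothetical slow `filterResults` function:

```swift
import AsyncAlgorithms

// Hypothetical slow consumer: filtering takes longer than events arrive.
func filterResults(matching query: String) async -> [String] {
    try? await Task.sleep(nanoseconds: 100_000_000)  // simulate slow work
    return ["results for \(query)"]
}

func observe(_ queries: AsyncStream<String>) async {
    // bufferingNewest(1) drops all but the most recent element while the
    // loop body is busy, so the first query is handled right away and
    // intermediate queries are coalesced down to the latest one.
    for await query in queries.buffer(policy: .bufferingNewest(1)) {
        print(await filterResults(matching: query))
    }
}
```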

In a DispatchSource, the duration is effectively the time it takes the event handler to run. If the event handler takes 100ms to run, then incoming events are coalesced during that period. Once the event handler exits, GCD can invoke it again.

GCD offers minimal support for passing a UInt value to the event handler. In a newer framework, it would be great if the value passed in could be a richer data type, and perhaps customizable, so that the caller could specify whether they want the first, last, or some other value that was received while the event handler was busy.

Unlike Debounce and Throttle, there is no explicit duration or elapsed time between invocations of the event handler. It should be called as frequently as possible.

It's basically a serial queue implemented such that when a new block is pushed onto the queue, any pending blocks are cancelled. Thus there's never more than one block executing and one block potentially pending.
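That idea can be roughly sketched with DispatchWorkItem. The names here are illustrative, and this assumes submissions always come from a single thread:

```swift
import Dispatch

// Sketch: a serial queue where enqueueing a new block cancels the block
// that is still waiting, so at most one block runs and one is pending.
final class CoalescingQueue {
    private let queue = DispatchQueue(label: "coalescing")
    private var pending: DispatchWorkItem?

    // Assumes submit is always called from the same thread.
    func submit(_ block: @escaping () -> Void) {
        pending?.cancel()  // a newer block supersedes the waiting one
        let item = DispatchWorkItem(block: block)
        pending = item
        // A work item cancelled before it starts is never executed,
        // so superseded blocks are silently dropped.
        queue.async(execute: item)
    }
}
```

Cancelling a work item that is already running has no effect on it, which matches the DispatchSource behaviour: the in-flight block always finishes, and only the waiting one is replaced.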

After playing around a decent amount with Combine publishers, I was never able to truly match the behaviour of a DispatchSource. That makes me think that perhaps the implementation is different enough that it warrants its own operator.

There was some discussion on Stack Overflow here, though even this amazing answer isn't quite the same as a DispatchSource:

How do you apply a Combine operator only after the first message has been received?

I would consider it more of a reduction, because I want events that occur while the event handler is running to be reduced into a single event.

For example, if I'm busy filtering a list on a background thread and the user continues to modify the query parameter, then I only need to know about the latest value of the filter query. This is, clearly, what Debounce does, but Debounce and Throttle both include some concept of elapsed time before they pass on their values to a listener. It's the absence of that delay in a DispatchSource that is appealing to me.

"abcd----e---f-g----|"

"a---d---e---f-g----|"

Not sure if that's the correct way to draw such a diagram, but consider the following pseudo-code:

for await value in input.coalesce() {
  // While this block of code is running, coalesce any input 
  // events into a single element. 

  // When this block exits, `input` should have either one 
  // pending element that should be processed immediately
  // or it should be empty and we wait for the next value to 
  // be generated.
}
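For what it's worth, the standard library can already approximate the coalescing half of this (though not the loop-body-driven timing of the proposed operator) via AsyncStream's own bufferingNewest policy. This sketch assumes Swift 5.9's `AsyncStream.makeStream`:

```swift
// AsyncStream's .bufferingNewest(1) keeps only the latest unconsumed
// element: values yielded while the consumer is busy (or, as here, not
// yet iterating) are coalesced down to the most recent one.
let (stream, continuation) = AsyncStream.makeStream(
    of: String.self,
    bufferingPolicy: .bufferingNewest(1)
)

continuation.yield("a")   // dropped once "b" arrives
continuation.yield("b")   // dropped once "c" arrives
continuation.yield("c")   // the one element that survives
continuation.finish()

var received: [String] = []
for await value in stream {
    received.append(value)
}
print(received)   // prints ["c"]
```

If a consumer is already awaiting when the first value is yielded, it is delivered immediately, which matches the "no initial delay" requirement above.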

Two concrete examples of where this pattern is useful might be the Spotlight window in the Finder and Xcode's Quick Open Panel. In both cases, you want to show results to the user as quickly as possible so you should respond to any initial input right away. (It might be the only input.)

As the user types more, the "filtering thread" tries to keep up in real time as much as possible, but it's OK if it falls behind the user's typing speed. Key events that occur while a filter is in progress should just be coalesced into a single event that can be processed on the next invocation of the filtering event handler. At most, there's only ever one pending event.

I'll have to go back through notes from a while ago to remember. But does .buffer support buffering an unlimited number of elements while the downstream Publisher is busy, while also buffering no elements when it is not busy? I feel like I ran into issues there, but perhaps I'm mistaken now.

So the buffer algorithm does allow custom actors to serve as the buffer; perhaps that might provide the affordance you are looking for? As for the validation diagrams, there is a writeup here: https://github.com/apple/swift-async-algorithms/blob/main/Guides/Validation.md. Let me know if there are areas that could be improved to make that clearer and easier to understand.
