I'm unable to prove that async/await is faster than DispatchQueue

Hi, I wanted to write some code to test when async/await is faster than DispatchQueue.

I asked myself two questions:

  1. Is adding a new Job to an Executor faster than adding a new Work Item to a DispatchQueue (including execution time)?
  2. Is continuation switching in async/await faster than thread switching with DispatchQueue?

For the first question I created simple functions (arithmetic calculations or just an empty function) and executed them concurrently in a TaskGroup/DispatchQueue.
For the second question I used the same functions from the first test but added some Task.sleep/usleep to force continuation/thread switching.

The first test showed that DispatchQueue is faster; the second showed no visible difference: sometimes one is faster, sometimes the other.

As far as I know from watching WWDC sessions and reading Swift Evolution proposals, at least the second test should show that continuation switching in async/await is faster than thread switching in DispatchQueue.

Am I missing something, or are my test methods incorrect?

I'm attaching my code below, thanks! :)

class ViewController: UIViewController {
    var stoper = Stoper()
    var queue = DispatchQueue(label: "queue", attributes: .concurrent)
    let iterations: Int = 500

    override func viewDidLoad() {
        super.viewDidLoad()

//        queueOperation(title: "LONG COMPUTATIONS", longComputations)
//        asyncOperation(title: "LONG COMPUTATIONS", longComputationsAsync)
        
//        queueOperation(title: "SHORT COMPUTATIONS", shortComputations)
//        asyncOperation(title: "SHORT COMPUTATIONS", shortComputationsAsync)
        
//        queueOperation(title: "EMPTY", empty)
//        asyncOperation(title: "EMPTY", emptyAsync)
        
//        queueOperation(title: "EMPTY WITH SLEEP", emptyWithSleep)
//        asyncOperation(title: "EMPTY WITH SLEEP", emptyWithSleepAsync)
        
        queueOperation(title: "LONG COMPUTATIONS WITH SLEEP", longComputationsWithSleep)
        asyncOperation(title: "LONG COMPUTATIONS WITH SLEEP", longComputationsWithSleepAsync)
    }
    
    /// Execute 'operation' concurrently on DispatchQueue
    func queueOperation(title: String, _ operation: @escaping (@escaping () -> Void) -> Void) {
        print("Start dispatch \(title)")
        
        // Start timer
        stoper.start()
        
        let group = DispatchGroup()
        for _ in 0..<iterations {
            group.enter()
            queue.async {
                operation {
                    group.leave()
                }
            }
        }

        group.wait()
        // End timer
        print("Time \(title): \(stoper.stop())\n")
    }

    /// Execute 'operation' concurrently on TaskGroup
    func asyncOperation(title: String, _ operation: @escaping () async -> Void) {
        print("Start async \(title)")
        
        // Start timer
        stoper.start()
        
        Task.detached {
            await withTaskGroup(of: Void.self) { task in
                for _ in 0..<self.iterations {
                    task.addTask {
                        await operation()
                    }
                }

                await task.waitForAll()
                
                // End timer
                print("Time \(title): \(await self.stoper.stop())")
            }
        }
    }

    
    // >>>>>>>>>> LONG COMPUTATIONS <<<<<<<<<< //
    func longComputationsAsync() async {
        var value: Double = 0.1
        for _ in 0..<10000 { value /= 0.1 }
        for _ in 0..<10000 { value *= 0.1 }
    }

    func longComputations(callback: @escaping () -> Void) {
        var value: Double = 0.1
        for _ in 0..<10000 { value /= 0.1 }
        for _ in 0..<10000 { value *= 0.1 }
        callback()
    }
    
    // >>>>>>>>>> LONG COMPUTATIONS WITH SLEEP <<<<<<<<<< //
    func longComputationsWithSleepAsync() async {
        var value: Double = 0.1
        for _ in 0..<10 {
            for _ in 0..<1000 { value /= 0.1 }
            for _ in 0..<1000 { value *= 0.1 }
            try! await Task.sleep(nanoseconds: 1_000_000)
        }
    }

    func longComputationsWithSleep(callback: @escaping () -> Void) {
        var value: Double = 0.1
        for _ in 0..<10 {
            for _ in 0..<1000 { value /= 0.1 }
            for _ in 0..<1000 { value *= 0.1 }
            usleep(1000)
        }
        callback()
    }
    
    // >>>>>>>>>> SHORT COMPUTATIONS <<<<<<<<<< //
    func shortComputationsAsync() async {
        var value: Double = 0.1
        for _ in 0..<10 { value /= 0.1 }
        for _ in 0..<10 { value *= 0.1 }
    }

    func shortComputations(callback: @escaping () -> Void) {
        var value: Double = 0.1
        for _ in 0..<10 { value /= 0.1 }
        for _ in 0..<10 { value *= 0.1 }
        callback()
    }
    
    // >>>>>>>>>> EMPTY <<<<<<<<<< //
    func emptyAsync() async {
        
    }

    func empty(callback: @escaping () -> Void) {
        callback()
    }
    
    // >>>>>>>>>> EMPTY WITH SLEEP <<<<<<<<<< //
    func emptyWithSleepAsync() async {
        try! await Task.sleep(nanoseconds: 1_000_000)
    }

    func emptyWithSleep(callback: @escaping () -> Void) {
        usleep(1000)
        callback()
    }
}

struct Stoper {
    var startTime: DispatchTime = .init(uptimeNanoseconds: 0)
    
    mutating func start() {
        startTime = DispatchTime.now()
    }
    
    func stop() -> Double {
        let stopTime = DispatchTime.now()
        let nanoTime = stopTime.uptimeNanoseconds - startTime.uptimeNanoseconds
        let timeInterval = Double(nanoTime) / 1_000_000_000
        
        return timeInterval
    }
}
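A side note on methodology: in `asyncOperation` above, the timer starts before `Task.detached`, so the measurement also includes the latency of launching the detached task itself. A minimal sketch of starting the clock inside the async context instead, using a hypothetical `measureAsync` helper in place of `Stoper` (top-level `await` requires Swift 5.7+):

```swift
import Dispatch

// Hypothetical helper: the clock starts *inside* the async context, so the
// latency of launching a detached task is not counted in the measurement.
func measureAsync(iterations: Int,
                  _ operation: @escaping @Sendable () async -> Void) async -> Double {
    let start = DispatchTime.now()
    await withTaskGroup(of: Void.self) { group in
        for _ in 0..<iterations {
            group.addTask { await operation() }
        }
        await group.waitForAll()
    }
    let elapsedNanos = DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds
    return Double(elapsedNanos) / 1_000_000_000
}

let elapsed = await measureAsync(iterations: 100) {
    try? await Task.sleep(nanoseconds: 1_000_000)  // ~1 ms per child task
}
print(elapsed > 0)
```

This keeps both sides of the comparison measuring the same thing: the work plus the switching, not the cost of getting into the async world in the first place.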

I hope nobody ever claimed that forking and spinning up new tasks with async/await is going to be a faster work queue than DispatchQueue. DispatchQueue is a high-quality work queue implementation. While creating a Task is fairly cheap, it’s almost certainly more expensive than whatever small overheads above optimal that DispatchQueue imposes.

There are two patterns on which Swift’s async/await should substantially beat traditional queue-and-callback programming in performance, neither of which will be apparent in every workload. The first is that tasks which make a sequence of calls should have better allocation patterns than you would get with callbacks, because the task re-uses context memory with a stack instead of freshly allocating closure objects. The second is that Swift can schedule switches on and off actors much more efficiently than dispatching onto a queue; in the fast path, the cost is more like acquiring or releasing a lock. Most other performance characteristics should be similar, or are more of a trade-off than a simple win.
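The second point, the actor hop, can be seen in miniature with a plain actor. This is a sketch of the mechanism, not a benchmark, and the `Counter` type is just an illustration:

```swift
// Illustrative only: each `await counter.increment()` below is an actor hop.
// In the fast path the runtime can treat the hop more like acquiring and
// releasing a lock than like enqueueing a block on a serial queue.
actor Counter {
    private var value = 0
    func increment() { value += 1 }
    func current() -> Int { value }
}

let counter = Counter()
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<1_000 {
        group.addTask { await counter.increment() }
    }
}
print(await counter.current())  // 1000: no lost updates and no explicit queue
```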

Most of the benefits of async/await are not performance-oriented.


What I was trying to achieve with those simple functions with sleep is to test the statement from "Swift concurrency: Behind the scenes" about scheduling overhead, and to see the benefit of async/await during excessive context switching.

Swift concurrency: Behind the scenes (WWDC21), at 10:20

Thanks

It looks like your long computations might not actually be doing any work; if I compile those with optimizations, the entire loop is removed. Benchmarking is difficult. :slight_smile:

@rokhinip may have more insights into how you can produce a thread explosion using DispatchQueues. It'll matter exactly which OS you're running on, though.
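One common way to keep the optimizer from deleting such loops is to route the result through an opaque sink. `blackHole` below is a hypothetical helper in the style of the Swift benchmark suite:

```swift
// Hypothetical "black hole": @inline(never) keeps the call (and therefore
// its argument) alive, so the optimizer cannot prove the loop result is
// dead and delete the whole loop under -O.
@inline(never)
func blackHole<T>(_ value: T) {}

func longComputations() -> Double {
    var value: Double = 0.1
    for _ in 0..<10_000 { value /= 0.1 }
    for _ in 0..<10_000 { value *= 0.1 }
    blackHole(value)  // forces the computation to actually happen
    return value
}

print(longComputations() > 0)
```

Returning or printing the value would also keep it alive here; the sink is useful precisely when, as in the original benchmark, the result would otherwise go unused.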


Ahh, sneaky Swift optimizations, classic.
Thank you :)

For what it's worth, what I saw when I was working on AsyncBytes performance is that actor hopping was faster than dispatch_async hopping back and forth between two queues, but slower than dispatch_async from a single queue to itself. This is an expected and good result in my opinion, since the "async to current queue" path in libdispatch is extraordinarily fast.

That's just one benchmark though, so please do file bugs if and when you run into places where it's slower than you'd like.
