Running two threads in a strict interleaved order

I want to run two threads in a strictly interleaved order: 1st, 2nd, 1st, 2nd, ...

Below works:

import Foundation

let mutex = NSLock()
var isFirst: Bool = true

Thread {
    while true {
        while !(mutex.withLock { isFirst }) {
            // busy waiting
        }
        print("first")
        mutex.withLock {
            isFirst = false
        }
    }
}.start()

Thread {
    while true {
        while (mutex.withLock { isFirst }) {
            // busy waiting
        }
        print("second")
        mutex.withLock {
            isFirst = true
        }
    }
}.start()

RunLoop.current.run(until: .distantFuture)

but it is obviously bad because of the busy waiting.
I tried to use a condition variable and a semaphore, but the result was neither correct nor concise nor Swifty.
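(For illustration, the textbook two-semaphore ping-pong for this kind of handshake looks roughly like this:)

let firstTurn = DispatchSemaphore(value: 1)   // "first" may take the first turn
let secondTurn = DispatchSemaphore(value: 0)  // "second" waits for "first"

Thread {
    while true {
        firstTurn.wait()
        print("first")
        secondTurn.signal()
    }
}.start()

Thread {
    while true {
        secondTurn.wait()
        print("second")
        firstTurn.signal()
    }
}.start()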

How to do this better?

It pushes to the main queue, but queues are not threads, so what is the intention of pushing to the main queue?

Edit: oh, it looks like the main queue always runs on the main thread.
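A quick way to convince yourself (assuming the main run loop is running):

// Sanity check: a block submitted to the main queue executes on the main thread.
DispatchQueue.global().async {
    DispatchQueue.main.async {
        print("isMainThread:", Thread.isMainThread)  // prints true
    }
}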

Anyways, a good helper func I’ve seen in the past for something like this looked something like:

func block(complete: () -> Void) {
    // only the main thread can run the completion handler
    if Thread.current.isMainThread {
        complete()
    } else {
        block(complete: complete)
    }
}

Maybe combined with something like:

func start() {
    let sema = DispatchSemaphore(value: 0)

    DispatchQueue.global().async {
        // only the main thread can signal
        block { _ = sema.signal() }
    }

    while true {
        if sema.wait(timeout: .now() + 0.0001) == .timedOut { continue }
        break // only a non-main thread can do this
    }
    // figure out which thread broke out and repeat
}

What are you actually trying to do? Do they have to be separate threads?


Yes, either two separate threads or two different dispatch queues, executing in an interlocked manner.

Please ignore the DispatchQueue.main.sync above (it is there for something else). I've commented out those DispatchQueue.main.sync calls in the code above to make it less confusing.

I ask because the simplest answer I can think of is to have one thread that pulls work items from two (threadsafe, non-Dispatch) queues alternately, rather than doing a cross-thread handshake. How applicable this is depends on what you’re actually trying to do.
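Roughly this shape (a sketch only; BlockingQueue here is an illustrative helper built on NSCondition, not a Foundation type):

// A minimal thread-safe blocking queue, for illustration only.
final class BlockingQueue<T> {
    private let condition = NSCondition()
    private var items: [T] = []

    func push(_ item: T) {
        condition.lock()
        items.append(item)
        condition.signal()
        condition.unlock()
    }

    func pop() -> T {
        condition.lock()
        while items.isEmpty { condition.wait() }
        let item = items.removeFirst()
        condition.unlock()
        return item
    }
}

let firstQueue = BlockingQueue<() -> Void>()
let secondQueue = BlockingQueue<() -> Void>()

// A single worker thread alternates between the two sources, so the items
// are strictly interleaved without any cross-thread handshake.
Thread {
    while true {
        firstQueue.pop()()
        secondQueue.pop()()
    }
}.start()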


No, as I said, it must be either two separate threads or two different dispatch queues. They will somehow "ping-pong the beachball of control" from one to the other (mutex? semaphore?).


Having said that... if there's a magic solution that could make it work in single-threaded mode, I'd like to know about it! Pseudocode:

func first() {
    // does something long
    // calls to "yieldToSecond()" every now and then
}

func second() {
    // does something long
    // calls to "yieldToFirst()" every now and then
}

Your setup describes two threads performing cooperative multitasking. When multiple threads have to communicate with each other, a condition lock might be the best synchronization idiom:

let lock = NSConditionLock(condition: 1)

Thread {
    while true {
        lock.lock(whenCondition: 1)
        print("first")
        lock.unlock(withCondition: 2)
    }
}.start()

Thread {
    while true {
        lock.lock(whenCondition: 2)
        print("second")
        lock.unlock(withCondition: 1)
    }
}.start()

This works well, thank you!

Posting an amendment to this:

func block(complete: () -> Void) {
    // only the main thread can run the completion handler
    if Thread.current.isMainThread {
        complete()
    } else {
        block(complete: complete)
    }
}

func block<T>(_ block: @escaping () -> T) -> T {
    if Thread.isMainThread {
        return block()
    } else {
        return DispatchQueue.main.sync { block() }
    }
}

So it goes like:
If on the main thread, run the completion handler.
Otherwise, run the completion handler synchronously on the main thread and return its result.

Now, assuming we start off the main thread, we could note the name of the thread we are on and then loop and check whether we are on it.
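For example, using the amended helper (a small usage sketch):

// Called from a background thread; the closure itself runs on the main thread.
DispatchQueue.global().async {
    let onMain: Bool = block { Thread.isMainThread }
    print("computed on main thread:", onMain)  // true
}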

Yes, thank you!

Code for Review
//
//  InterleavedThreads.swift
//  CmdLine
//
//  Created by ibex on 28/3/2024.
//

// [https://forums.swift.org/t/running-two-threads-in-a-strict-interleaved-order/70931/7]

import Foundation

@main
class InterleavedThreads {
    let id: String
    let c1: Int
    let c2: Int
    let limit: Int
    var counter = 0

    required init (id:String, c1:Int, c2:Int, limit:Int) {
        self.id = id
        self.c1 = c1
        self.c2 = c2
        self.limit = limit
    }
    
    static func main () {
        let _1 = Self (id: "1", c1: 16, c2: 17, limit:32)
        let _2 = Self (id: "2", c1: 32, c2: 33, limit:64)

        _1.run ()
        _2.run ()
        RunLoop.current.run (until: .distantFuture)
    }
    
    private func run () {
        let lock = NSConditionLock (condition: c1)

        Thread {
            while true {
                lock.lock(whenCondition: self.c1)
                defer {
                    lock.unlock(withCondition: self.c2)
                }
                
                self.first ()
                if self.counter >= self.limit {
                    return
                }

            }
        }.start()

        Thread {
            while true {
                lock.lock(whenCondition: self.c2)
                defer {
                    lock.unlock(withCondition: self.c1)
                }
                
                self.second ()
                if self.counter >= self.limit {
                    return
                }

            }
        }.start()
    }
    
    private func first () {
        print (#function, id, c1, c2, counter)
        counter += 1
    }
    
    private func second () {
        print (#function, id, c1, c2, counter)
        counter += 1
    }
}

I implemented the code in terms of "yield", although the API of this yield is a bit weird:

  • yield takes two parameters: the current thread's identification and the other thread's identification
  • the other thread's identification parameter is optional; when calling it for the first time from a thread, the thread has to pass nil there.

I don't see right now how to make this API less awkward.

Full test code:

let lock = NSConditionLock(condition: 0)
var progress = 0

func yield(from: Int, to: Int?) {
    if let to {
        lock.unlock(withCondition: to)
    }
    lock.lock(whenCondition: from)
}

Thread {
    var other: Int?
    while true {
        yield(from: 0, to: other)
        other = 1
        printProgress()
    }
}.start()

Thread {
    var other: Int?
    while true {
        yield(from: 1, to: other)
        other = 0
        printProgress()
    }
}.start()

func printProgress() {
    progress += 1
    if (progress % 1000000) == 0 {
        print(progress)
    }
}

RunLoop.current.run(until: .distantFuture)

Note that printProgress uses a global progress variable without the need for synchronisation (via a lock, an atomic, or otherwise) – this is safe because the two threads run in a serialised manner.

This is not generally sufficient! Different threads may run on different cores, which have different caches that can get out of sync. You will generally only see this on arm architectures, which provide weaker memory guarantees than x86. In this case, however, you have effectively protected the progress variable with the same lock you’re using as a condition, so it works out.
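(If the counter ever needed to be correct independently of that lock, an explicit atomic would be the usual fix. A sketch using the swift-atomics package, which is an extra dependency rather than part of Foundation:)

import Atomics  // https://github.com/apple/swift-atomics

let progress = ManagedAtomic<Int>(0)

func printProgress() {
    // One atomic read-modify-write; safe even without the condition lock.
    let value = progress.wrappingIncrementThenLoad(ordering: .relaxed)
    if value % 1_000_000 == 0 {
        print(value)
    }
}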


Good catch. I wonder what changes I could make to this implementation to see the effect you are talking about.

I’m not sure you can. Pretty much all synchronization primitives will issue a full memory barrier.

Edit: actually, I take that back. Apparently it’s idiomatic to issue a barrier only in the inner synchronization domain (”same CPU cluster”, roughly speaking), not the full system.
