Today I found a custom serial DispatchQueue in our app using Thread 1 to execute its work. The function had an assertion to make sure it wasn't executing on the main thread, and the assertion failed.
My questions:
Is this behavior expected?
Are there better ways to check if we are on the main thread?
Can we always assume Thread 1 is the main thread?
I have read a few blog posts that say that relying on Thread.isMainThread is not the best of practices.
Except for the dispatch queue representing your app's main thread, the system makes no guarantees about which thread it uses to execute a task.
Which implies: the only guarantee we have is that DispatchQueue.main.async { } will always run on the main thread. Any other queue can make its own decisions about which thread to use.
I was somewhat aware that the concurrent global queues may sometimes use the main thread to do work, but seeing a serial queue use the main thread was unexpected.
For some background context, I had left the app running in the simulator overnight, and when I opened the simulator in the morning, I found it halted on the assertion. In practice, I have never seen a custom serial queue use Thread 1 before.
There are two scenarios in which a dispatch queue might run on the main thread:
If you dispatch synchronously from the main queue to a background queue, the OS may, as an optimization, run that work on the main thread. The idea is that a synchronous call is going to block the calling thread anyway, so GCD is smart enough to eliminate the unnecessary context switch. This can happen when dispatching synchronously to a global concurrent queue, a custom concurrent queue, or a custom serial queue. (The one exception to this optimization is dispatching synchronously from a background queue to the main queue; in that case the work will always run on the main thread.)
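A minimal sketch of that optimization (the queue label is made up): dispatching synchronously from the main thread to a custom serial queue typically runs the block right on the calling (main) thread.

```swift
import Dispatch
import Foundation

let queue = DispatchQueue(label: "com.example.serial")  // hypothetical label

// Dispatching sync from the main thread: GCD can skip the context switch
// and run the block here, on the calling thread.
var ranOnMainThread = false
queue.sync {
    ranOnMainThread = Thread.isMainThread
}
print("block ran on main thread: \(ranOnMainThread)")  // typically true
```

This is exactly the situation where an assertion on Thread.isMainThread inside the block would fire even though the block is "on" a background queue.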
The other (edge case) scenario is that technically one can set a target queue, X, for a particular dispatch queue, Y, meaning that any work dispatched to Y will run on X.
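The target-queue case can be sketched like this (labels are made up): work submitted to Y actually executes in the context of its target X, which dispatchPrecondition can confirm.

```swift
import Dispatch

let x = DispatchQueue(label: "com.example.target")            // hypothetical X
let y = DispatchQueue(label: "com.example.child", target: x)  // Y targets X

y.sync {
    // Work dispatched to Y runs in the context of its target queue X,
    // so this precondition passes (the check walks the target chain).
    dispatchPrecondition(condition: .onQueue(x))
}
```

In particular, if Y's target were .main, everything dispatched to Y would end up on the main thread.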
FWIW, work submitted to a global concurrent queue will never run on the main thread (except via the optimization outlined in point one, above).
Your check is fine; I could also suggest: dispatchPrecondition(condition: .notOnQueue(.main))
I can see the sync call being used up the stack (the bottom of your screenshot) – that's the problem and the main thread is indeed blocked at this point.
OTOH, the need to be on a secondary thread when performing "seemingly long" operations is not always justified. For example, if you make an asynchronous network call from the main thread ("asynchronous" not in the Swift async/await sense, but in the URLSession.dataTask-plus-callback-or-delegate sense), the main thread is not going to be blocked for a significant amount of time: the actual networking (and any blocking) is done on some other thread, and once done, the call completes on a known queue (e.g. the one you supplied along with the delegate).
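That callback pattern can be sketched like this (the function name and completion queue are illustrative, not any real SDK API): the slow work happens off the calling thread, and the result is delivered on a queue the caller chose.

```swift
import Dispatch

// Hypothetical helper: runs the slow part off the calling thread and
// delivers the result on a known completion queue.
func fetchValue(completionQueue: DispatchQueue,
                completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async {
        let result = 42  // stand-in for the actual (possibly blocking) networking
        completionQueue.async {
            completion(result)  // the caller decides where the callback lands
        }
    }
}
```

A UI caller would pass DispatchQueue.main here; the call itself returns immediately, so the main thread is never blocked.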
Thank you for the detailed explanation! Based on the responses above, I understand that when using queue.sync from the main thread, GCD optimizes by avoiding a context switch and executes the synchronous work directly on the main thread to prevent unnecessary overhead.
Context and Challenge
I am working with an SDK that enforces thread-safety by acquiring a mutex lock via Objective-C's @synchronized on every API call. Many of these APIs are being invoked on the main thread.
The straightforward solution would be to offload SDK interactions to a custom serial queue using queue.async to prevent blocking the main thread. However, the application's architecture expects these API calls to return their results synchronously, making it non-trivial to refactor for asynchronous handling.
Question
Are there alternative design patterns or strategies that would allow me to:
Move the SDK interactions (and the associated locking) to a background thread, and
Still provide synchronous return values to maintain compatibility with the existing codebase?
Unfortunately there's no good way to synchronously wait for asynchronous work. Even if there were, it would eliminate the benefit: the main thread would still be blocked, just blocked waiting for the return value instead of blocked running the work.
You can use a semaphore to wait for asynchronous work to complete – I did this in order to use an async-unaware REPL library in a command line tool – but it’s full of foot guns, particularly if you call (or transitively use) APIs that dispatch work to the main thread. Example here.
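For completeness, here is a hedged sketch of that semaphore trick (the function name is made up). The foot gun mentioned above applies: if the async work needs the thread you are blocking (e.g. the main thread), this deadlocks.

```swift
import Dispatch

// Bridges a callback-based async API into a synchronous call by blocking
// the calling thread on a semaphore. Dangerous on the main thread.
func runSynchronously<T>(_ start: (@escaping (T) -> Void) -> Void) -> T {
    let semaphore = DispatchSemaphore(value: 0)
    var result: T!
    start { value in
        result = value       // capture the async result
        semaphore.signal()   // wake the blocked caller
    }
    semaphore.wait()         // blocks until the callback fires
    return result
}

// Usage: the work must complete on a thread other than the blocked one.
let answer: Int = runSynchronously { done in
    DispatchQueue.global().async { done(42) }
}
```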
Locks/mutexes are the classic answer. As a guideline, keep locked sections small and quick. If it's, say, a dictionary or a pair of dictionaries being read and written from different threads, a lock is a good and the quickest solution. The alternatives – hopping onto a dedicated serial queue to change the value asynchronously (plus using a callback version for "get"), or an actor plus await – are much slower and can be more cumbersome to use (e.g. changing previously "sync" code to a "callback" or "async" version).
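A minimal sketch of that guideline, assuming a plain NSLock (any mutex works): the locked sections are just the dictionary read and write, nothing more.

```swift
import Foundation

// A dictionary guarded by a lock; safe to read/write from any thread.
// The critical sections are deliberately tiny and quick.
final class SynchronizedDictionary<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private let lock = NSLock()

    subscript(key: Key) -> Value? {
        get {
            lock.lock()
            defer { lock.unlock() }
            return storage[key]
        }
        set {
            lock.lock()
            defer { lock.unlock() }
            storage[key] = newValue
        }
    }
}
```

Callers keep their existing synchronous get/set code; no callback or async rewrite is needed.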
Perhaps it would help if you could provide a couple of examples here of what you are doing currently.
Thank you for the follow up!
I was hesitant to ask in this forum as it felt more like a stack overflow type of question, but here goes!
Main question: is the structure below the right approach to work around an SDK that locks on every operation?
I am working with an SDK that is a SQLite wrapper and does not support concurrent reads/writes. I know SQLite itself does, but this particular third-party SDK does not.
The SDK is set up so that every operation and access locks while it does any work. We were seeing:
app crashes due to the app hanging on the main thread (SDK access from the main thread while the lock was already held by in-progress parsing work, e.g. 10K items)
lock contention from multiple threads (we were not serializing access to the SDK, and many different classes had their own queues trying to access the SDK from different threads)
So, to avoid the problems, we decided to start with 2 goals:
Avoid accessing the SDK from the main thread
Avoid multiple threads contending for SDK work, by serializing access to the SDK itself and hence to its lock.
We decided to use a single shared serial DispatchQueue through which all classes and code logic access the SDK and perform work.
SDK sample that locks on *every* operation
@interface SDK : NSObject
- (void)performWork:(void (^)(void))work;
@end

@implementation SDK {
    // This object is the lock token for @synchronized.
    id _mutex;
}

- (instancetype)init {
    if ((self = [super init])) {
        _mutex = [NSObject new];
    }
    return self;
}

- (void)performWork:(void (^)(void))work {
    @synchronized (_mutex) {
        work();
    }
}
@end
Application code
class ViewModel {
    let sdk = SDK()
    // Single shared serial queue through which all SDK access is funneled.
    public let sharedSDKAccessQueue = DispatchQueue(label: "com.application.accessQueue")

    func queryDataA() -> A {
        sharedSDKAccessQueue.sync {
            var result: A!
            sdk.performWork {
                // Process 10,000 items, potentially taking 5-6 seconds,
                // producing `result`.
            }
            return result
        }
    }

    func queryDataB() -> B {
        sharedSDKAccessQueue.sync {
            var result: B!
            sdk.performWork {
                // Process 5,000 items, potentially taking 2-4 seconds,
                // producing `result`.
            }
            return result
        }
    }

    func writeDataB() {
        sharedSDKAccessQueue.sync {
            sdk.performWork {
                // Write 500-600 items into the SDK.
            }
        }
    }
}

class MyViewController: UIViewController {
    let viewModel = ViewModel()

    override func viewDidLoad() {
        super.viewDidLoad()
        getData()
    }

    func getData() {
        viewModel.sharedSDKAccessQueue.async { [self] in
            let data1 = viewModel.queryDataA()
            let data2 = viewModel.queryDataB()
        }
    }
}
The approach shown is highly prone to deadlocks, and I wouldn’t recommend using it. Instead, I’d suggest removing the mutex entirely along with the sync queue operations. You can then switch to using queue.async or async/await in Swift for a more efficient and deadlock-free solution.
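One possible shape of that suggestion, sketched with a stub in place of the real SDK (names mirror the code above, but SDKStub and the [Int] result type are stand-ins): keep the single serial queue, drop every .sync, and hand results back through a completion handler so no thread ever waits on the queue.

```swift
import Dispatch
import Foundation

// Stand-in for the real locking SDK.
final class SDKStub {
    private let lock = NSLock()
    func performWork(_ work: () -> Void) {
        lock.lock()
        defer { lock.unlock() }
        work()
    }
}

final class ViewModel {
    private let sdk = SDKStub()
    private let accessQueue = DispatchQueue(label: "com.application.accessQueue")

    // Asynchronous version of queryDataA: never blocks the caller.
    func queryDataA(completion: @escaping ([Int]) -> Void) {
        accessQueue.async {
            var result: [Int] = []
            self.sdk.performWork {
                result = Array(0..<10)  // stand-in for the heavy query
            }
            // Deliver the result; a UI caller would hop to DispatchQueue.main here.
            completion(result)
        }
    }
}
```

Because all work arrives on one serial queue via async, there is no lock contention and no chance of the queue deadlocking on itself; the only remaining cost is the refactor from synchronous return values to callbacks (or async/await).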