What is the default target queue for a serial queue?

I just recently noticed that dispatch queues always target another queue by default, even if one is not explicitly specified. A concurrent queue will target the global queue with QoS .default. However, the precondition in the following code will fail if I try to use it on a serial queue.

  • Does a serial queue have a default target queue at all?

  • What behavior should I expect?

let serial = DispatchQueue(label: "serial")
let concurrent = DispatchQueue(label: "concurrent", attributes: .concurrent)

let serial_1 = DispatchQueue(label: "serial_1", target: serial)
let serial_2 = DispatchQueue(label: "serial_1", target: concurrent)

serial.async {
  dispatchPrecondition(condition: .onQueue(serial))
  // What is the default target queue here?
  print("on serial")
}

concurrent.async {
  // Both preconditions are met
  dispatchPrecondition(condition: .onQueue(concurrent))
  dispatchPrecondition(condition: .onQueue(DispatchQueue.global(qos: .default)))
  print("on concurrent")
}

serial_1.async {
  dispatchPrecondition(condition: .onQueue(serial))
  // What is the default target queue here?
  print("again on serial")
}

serial_2.async {
  // Both preconditions are met
  dispatchPrecondition(condition: .onQueue(concurrent))
  dispatchPrecondition(condition: .onQueue(DispatchQueue.global(qos: .default)))
  print("again on concurrent")
}

I didn’t know about dispatchPrecondition, thanks!


If you don’t explicitly set the target queue, the serial queue will implicitly use a global concurrent queue. This is specifically documented in the dispatch_set_target_queue man page:

The default target queue of all dispatch objects created by the
application is the default priority global concurrent queue.

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple


Well, as I showed in the sample above, I'm already aware of that fact. It holds for concurrent queues, which default to the global concurrent queue with QoS .default. However, that does not seem to be the case for serial queues, which is the main question in this thread. ;)

The GCD documentation has a major omission: it doesn't discuss the existence of both overcommit and non-overcommit queues.

Here's something the implementation says about overcommit queues:

/*!
 * @enum dispatch_queue_flags_t
 *
 * @constant DISPATCH_QUEUE_OVERCOMMIT
 * The queue will create a new thread for invoking blocks, regardless of how
 * busy the computer is.
 */
enum {
	DISPATCH_QUEUE_OVERCOMMIT = 0x2ull,
};

A non-overcommit queue does limit the number of threads it creates.

There are two global queues for each QoS. One is overcommit and the other is non-overcommit. The Swift function DispatchQueue.global always returns the non-overcommit queue. It prints like this:

(lldb) po DispatchQueue.global(qos: .default).description
"<OS_dispatch_queue_root: com.apple.root.default-qos>"

You can get a reference to the overcommit queue by dropping down to the C function dispatch_get_global_queue (available in Swift with a __ prefix) and passing the private value of DISPATCH_QUEUE_OVERCOMMIT:

(lldb) po __dispatch_get_global_queue(0, 2).description
"<OS_dispatch_queue_root: com.apple.root.default-qos.overcommit>"

Of course you should not do this in production code, because DISPATCH_QUEUE_OVERCOMMIT is not a public API. I don't know of a way to get a reference to the overcommit queue using only public APIs.

So, back to Quinn's man page quotation:

The default target queue of all dispatch objects created by the
application is the default priority global concurrent queue.

It says “the default priority global concurrent queue” (emphasis added on “the”). But there are two default-priority global concurrent queues. And as it turns out, the default target queue is the overcommit queue. Here's my modified version of your test, as a macOS command-line program:

import Dispatch

let serial = DispatchQueue(label: "serial")
let concurrent = DispatchQueue(label: "concurrent", attributes: .concurrent)

// 2 is the private DISPATCH_QUEUE_OVERCOMMIT flag value (not public API).
let DISPATCH_QUEUE_OVERCOMMIT = 2
let defaultOvercommit = __dispatch_get_global_queue(Int(QOS_CLASS_DEFAULT.rawValue), UInt(DISPATCH_QUEUE_OVERCOMMIT))

let serial_1 = DispatchQueue(label: "serial_1", target: serial)
let serial_2 = DispatchQueue(label: "serial_1", target: concurrent)

let group = DispatchGroup()

group.enter()
serial.async {
  dispatchPrecondition(condition: .onQueue(serial))
  dispatchPrecondition(condition: .onQueue(defaultOvercommit))
  print("on serial")
  group.leave()
}

group.enter()
concurrent.async {
  // Both preconditions are met
  dispatchPrecondition(condition: .onQueue(concurrent))
  dispatchPrecondition(condition: .onQueue(DispatchQueue.global(qos: .default)))
  print("on concurrent")
  group.leave()
}

group.enter()
serial_1.async {
  dispatchPrecondition(condition: .onQueue(serial))
  dispatchPrecondition(condition: .onQueue(defaultOvercommit))
  print("again on serial")
  group.leave()
}

group.enter()
serial_2.async {
  // Both preconditions are met
  dispatchPrecondition(condition: .onQueue(concurrent))
  dispatchPrecondition(condition: .onQueue(DispatchQueue.global(qos: .default)))
  print("again on concurrent")
  group.leave()
}

group.wait()

Note that I had to add the DispatchGroup; otherwise the program would exit before any of the async blocks runs.

Anyway, here's the output:

on serial
again on concurrent
on concurrent
again on serial

Great, that solves the mystery for me. Now I understand the precondition in its full form. Basically, the 'onQueue' condition is met from the starting queue up to the last target queue, which is either a non-overcommit or an overcommit queue.


One more question: is it guaranteed that the same queue will always run on the same thread?

There is only one fixed association between threads and queues: if you add a block to some queue Q, and Q is either the main queue (DispatchQueue.main) or has the main queue in its target chain, then the main thread will run the block.

If Q is not the main queue and does not have the main queue in its target chain, then GCD generally makes no guarantee about which thread will run the block. (But note that the main thread could still run the block, and in particular will run the block if you queue it synchronously from the main thread/queue.)
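Here's a minimal sketch of that one guarantee (the queue label is made up): a queue whose target chain ends at the main queue has its blocks run by the main thread.

import Dispatch
import Foundation  // for Thread

// Hypothetical queue that targets the main queue.
let uiQueue = DispatchQueue(label: "targets-main", target: DispatchQueue.main)

uiQueue.async {
    // Because the main queue is in the target chain, the main thread runs this block.
    dispatchPrecondition(condition: .onQueue(DispatchQueue.main))
    assert(Thread.isMainThread)
    print("running on the main thread")
}
// Note: in a command-line tool the main queue only drains if you call
// dispatchMain() or run the main run loop; in an app this happens for you.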


So if I make two async calls on a non-main queue that does not target the main queue, both calls can potentially be executed on different threads, even if it is a serial queue?

This also raises another question: is there no real thread safety with such custom queues?

Correct, there is no guarantee that two blocks added to some random queue will run on the same thread.

There is no “thread safety”, but there is “queue safety” if you use a serial queue. Two blocks added to a single serial queue cannot run at the same time, even if the blocks end up running on different threads, and memory operations are fenced so that all effects of the first block are visible to the second block.
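As a rough sketch of that queue safety (the Counter type and its queue label are made up for illustration): state confined to a private serial queue can be touched from any thread, because the queue runs the accesses one at a time and fences them between blocks.

import Dispatch

final class Counter {
    private var value = 0
    // Serial by default, so blocks run one at a time.
    private let queue = DispatchQueue(label: "counter.isolation")

    func increment() {
        queue.async { self.value += 1 }   // mutation always happens on the queue
    }

    func read() -> Int {
        queue.sync { value }              // waits for earlier blocks, then reads
    }
}

let counter = Counter()
// The 100 increments may run on different threads, but never at the same time.
DispatchQueue.concurrentPerform(iterations: 100) { _ in counter.increment() }
print(counter.read())   // 100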


Thank you.

Any chance there are still techniques in conjunction with GCD to isolate code so it does not leave the thread it was initially invoked on? Or is this idea too crazy or unnecessary when working with GCD?

I'm just trying to wrap my head around the theoretical aspects of this topic. ;)

I was going to start my previous response with a “What are you really trying to do?”, and then found that handy quote in the docs and decided to use that instead. I think it’s now clear I made the wrong call there (-:

Any chance there are still techniques in conjunction with GCD to
isolate code so it does not leave the thread it was initially invoked
on?

I need to clarify what your goals are here. I suspect that you want to configure a specific serial queue so that all blocks added to that queue run on a specific thread. Is that right? If so, that’s simply not possible (other than for the main queue, of course).

And why would you want to do that? I’ve had folks ask me about this before and the usual reasons are:

  • Thread-local storage

  • Run loop integration

For the former, you can use queue-specific storage (dispatch_queue_{g,s}et_specific). For the latter, yeah, just don’t go there (-:
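In Swift those C calls surface as DispatchSpecificKey plus setSpecific(key:value:) and getSpecific(key:). A minimal sketch (the key and the label are made up):

import Dispatch

// Hypothetical key; the associated value can be any type you like.
let nameKey = DispatchSpecificKey<String>()

let worker = DispatchQueue(label: "subsystem.worker")
worker.setSpecific(key: nameKey, value: "worker")

worker.async {
    // Reads the value set on the current queue (or on a queue in its target chain).
    let name = DispatchQueue.getSpecific(key: nameKey) ?? "unknown"
    print("running on:", name)   // "running on: worker"
}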

Is there something else?

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

I'm trying to correct some synchronization issues in our application, but to do that I need to fully understand how GCD behaves, to make everything more predictable. Please don't get me wrong, I do appreciate your help and didn't mean to sound rude or anything in my previous post.

That said, when working with queues one should not care too much about threads. Of course one should not overdo the use of queues, to prevent things like thread explosion. If done correctly, no matter which thread an object is accessed from, a queue can protect it and serialize the access, so it is kind of thread safe because the access is properly synchronized.

Is that mindset correct?

didn't mean to sound rude or anything in my previous post.

Please rest assured that I’m thoroughly enjoying every aspect of this thread!

Of course one should not overdo the use of queues, to prevent things
like thread explosion.

I think that’s the key point. There are two different types of parallelism problems that you can solve with Dispatch, symmetric and asymmetric:

  • In symmetric parallelism you have a whole bunch of blocks that all need to do the same work. In that case I strongly favour dispatch_apply (aka DispatchQueue.concurrentPerform(iterations:execute:)) because it lets Dispatch decide on how best to map your work to available CPUs.

  • In asymmetric parallelism you have different subsystems within your program and you want them to be able to run on different CPUs. In that case the design I favour is a ‘forest’ of serial queue trees, connected via the target queue, where the root of each tree is a serial queue that provides the actual concurrency for the entire subsystem. This idea is explored in depth in WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage, which is a great resource.
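Here's a minimal sketch of that second shape (the subsystem and labels are made up): each subsystem gets one serial root queue, and the more specialised serial queues in that subsystem target it, so the root provides the mutual exclusion for the whole tree.

import Dispatch

// Hypothetical networking subsystem: one serial root queue...
let networkRoot = DispatchQueue(label: "network.root")

// ...and specialised serial queues that funnel into it via the target queue.
let connectionQueue = DispatchQueue(label: "network.connection", target: networkRoot)
let parsingQueue = DispatchQueue(label: "network.parsing", target: networkRoot)

connectionQueue.async {
    // Blocks from either child ultimately run one at a time on the root, and
    // because the root is in the target chain this precondition holds.
    dispatchPrecondition(condition: .onQueue(networkRoot))
    print("handling the connection")
}

parsingQueue.async {
    dispatchPrecondition(condition: .onQueue(networkRoot))
    print("parsing a response")
}
// A second subsystem would get its own root queue and can run concurrently
// with this one, while each subsystem stays serialised internally.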

Finally, I want to stress that Dispatch isn’t magic. To work efficiently you have to size the work done by your dispatch blocks appropriately. This matters most in the symmetric case, and that’s something specifically called out in the dispatch_apply man page (look for the text starting with “Sometimes, when the block passed to dispatch_apply is simple, the use of striding can tune performance.").
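For instance, a hedged sketch of that striding idea using DispatchQueue.concurrentPerform (the array size and chunk size are made-up numbers chosen to show the shape, not tuned values):

import Dispatch

var results = [Double](repeating: 0, count: 1_000_000)
let chunkSize = 10_000                        // made-up stride; tune for the real workload
let chunkCount = results.count / chunkSize    // assumes the count divides evenly here

results.withUnsafeMutableBufferPointer { buffer in
    // One iteration per chunk instead of one per element, so each block
    // does enough work to outweigh the dispatch overhead.
    DispatchQueue.concurrentPerform(iterations: chunkCount) { chunk in
        for i in (chunk * chunkSize)..<((chunk + 1) * chunkSize) {
            buffer[i] = Double(i).squareRoot()
        }
    }
}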

However, it can also be relevant in the asymmetric case. For low-level primitives within your program it’s often best to not implement any concurrency control, but instead require that the caller serialise calls to that primitive. That makes the primitive’s code easier and also encourages the caller to do more work in each dispatch block, which is generally more efficient.

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple


Sorry for joining the conversation so late, but I'm reading this post and it's very interesting.
I was focusing on the last response by @eskimo and specifically I was thinking about the "forest of serial queue trees" he mentioned.

Anyway, I think I'm getting why you should use different serial queues (so various subsystems can work on separate threads concurrently), but I'm failing to understand why a single subsystem would even need a tree rather than just a single base queue. I even watched the WWDC talk about it, but it didn't resolve my doubts.

What's the advantage of using two different branches of serial queues if they target the same base serial queue? Wouldn't it be the same if we used the same queue in both branches, since dispatching to either of them effectively dispatches to the base serial queue?

The only thing I can imagine being useful is if the base queue has a lower QoS and the other queues have higher (and mutually different) QoS levels. That way we can schedule work at a QoS different from the one specified by the base queue, and maybe when the tasks of one of the queues get scheduled they make the whole queue rise in QoS, so the system prioritizes it the most (until the higher-priority work is finished).

Am I missing something, or is it really just that?

And, also, even if this is actually the only advantage, wouldn't it be the same to force the priority on each individual work item dispatched to the base queue?
Is it then just a matter of convenience? (Avoiding repeating the QoS on each dispatched work item when every item dispatched to that queue is going to be of a specific QoS.)

Thanks again for all the conversation you've had up until now, and for any future responses.

Your QoS example is a good reason but there are some other queue-specific features that work better with this design:

  • Labels

  • dispatchPrecondition
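A small made-up example of both: two queues can share one serial root yet keep their own labels (handy when debugging), and dispatchPrecondition can still tell them apart.

import Dispatch

let root = DispatchQueue(label: "subsystem.root")
let databaseQueue = DispatchQueue(label: "subsystem.database", target: root)
let cacheQueue = DispatchQueue(label: "subsystem.cache", target: root)

func writeRecord() {
    // Passes only when called from databaseQueue (or something targeting it),
    // even though both child queues drain through the same root.
    dispatchPrecondition(condition: .onQueue(databaseQueue))
    // ... actual database work ...
}

databaseQueue.async { writeRecord() }   // fine
// cacheQueue.async { writeRecord() }   // would fail the precondition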

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

One question on this: you said "the default target queue is the overcommit queue." But how come the default target queue for the concurrent queue (concurrent in your example) is not defaultOvercommit?

Only serial queues use an overcommit queue as their target by default.


Thank you! I wonder: if only serial queues use the overcommit queue, is it possible to cause a thread explosion with serial queues?

How exactly does this fencing work? Say, in this class:

class C {
    private var x: Int = 0

    func foo() {
        bar(x)
        x += 1
    }

    // Placeholder so the example compiles; bar(_:) just stands in for "some use of x".
    func bar(_ value: Int) {}
}

is it the case that all accesses to x are fenced with some read/write memory barriers?

Edit: hmm, don't see anything like this in the resulting asm... What is the trick?

My understanding is that the fencing is between block invocations in the queue internals.