What is the default target queue for a serial queue?

I just recently noticed that dispatch queues always target another queue by default, even if no target is explicitly specified. For a concurrent queue the target is the global queue with QoS .default. However, the precondition in the following code fails if I try the same check on a serial queue.

  • Does a serial queue have a default target queue at all?

  • What behavior should I expect?

let serial = DispatchQueue(label: "serial")
let concurrent = DispatchQueue(label: "concurrent", attributes: .concurrent)

let serial_1 = DispatchQueue(label: "serial_1", target: serial)
let serial_2 = DispatchQueue(label: "serial_2", target: concurrent)

serial.async {
  dispatchPrecondition(condition: .onQueue(serial))
  // What is the default target queue here?
  print("on serial")
}

concurrent.async {
  // Both preconditions are met
  dispatchPrecondition(condition: .onQueue(concurrent))
  dispatchPrecondition(condition: .onQueue(DispatchQueue.global(qos: .default)))
  print("on concurrent")
}

serial_1.async {
  dispatchPrecondition(condition: .onQueue(serial))
  // What is the default target queue here?
  print("again on serial")
}

serial_2.async {
  // Both preconditions are met
  dispatchPrecondition(condition: .onQueue(concurrent))
  dispatchPrecondition(condition: .onQueue(DispatchQueue.global(qos: .default)))
  print("again on concurrent")
}

I didn’t know dispatchPrecondition, thanks!


If you don’t explicitly set the target queue, the serial queue will implicitly use a global concurrent queue. This is specifically documented in the dispatch_set_target_queue man page:

The default target queue of all dispatch objects created by the
application is the default priority global concurrent queue.

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

Well, as I showed in the sample above, I'm already aware of that fact. It works for concurrent queues, which default to the global concurrent queue with QoS .default. However, that doesn't seem to be the case for serial queues, which is the main question in this thread. ;)

The GCD documentation has a major omission: it doesn't discuss the existence of both overcommit and non-overcommit queues.

Here's something the implementation says about overcommit queues:

/*!
 * @enum dispatch_queue_flags_t
 *
 * @constant DISPATCH_QUEUE_OVERCOMMIT
 * The queue will create a new thread for invoking blocks, regardless of how
 * busy the computer is.
 */
enum {
	DISPATCH_QUEUE_OVERCOMMIT = 0x2ull,
};

A non-overcommit queue does limit the number of threads it creates.

There are two global queues for each QoS. One is overcommit and the other is non-overcommit. The Swift function DispatchQueue.global always returns the non-overcommit queue. It prints like this:

(lldb) po DispatchQueue.global(qos: .default).description
"<OS_dispatch_queue_root: com.apple.root.default-qos>"

You can get a reference to the overcommit queue by dropping down to the C function dispatch_get_global_queue (available in Swift with a __ prefix) and passing the private value of DISPATCH_QUEUE_OVERCOMMIT:

(lldb) po __dispatch_get_global_queue(0, 2).description
"<OS_dispatch_queue_root: com.apple.root.default-qos.overcommit>"

Of course you should not do this in production code, because DISPATCH_QUEUE_OVERCOMMIT is not a public API. I don't know of a way to get a reference to the overcommit queue using only public APIs.

So, back to Quinn's man page quotation:

The default target queue of all dispatch objects created by the
application is the default priority global concurrent queue.

It says “the default priority global concurrent queue” (emphasis added). But there are two default-priority global concurrent queues. And as it turns out, the default target queue is the overcommit queue. Here's my modified version of your test, as a macOS command line program:

import Dispatch

let serial = DispatchQueue(label: "serial")
let concurrent = DispatchQueue(label: "concurrent", attributes: .concurrent)

// 2 is the value of the private DISPATCH_QUEUE_OVERCOMMIT flag
let defaultOvercommit = __dispatch_get_global_queue(Int(QOS_CLASS_DEFAULT.rawValue), 2)

let serial_1 = DispatchQueue(label: "serial_1", target: serial)
let serial_2 = DispatchQueue(label: "serial_2", target: concurrent)

let group = DispatchGroup()

group.enter()
serial.async {
  dispatchPrecondition(condition: .onQueue(serial))
  dispatchPrecondition(condition: .onQueue(defaultOvercommit))
  print("on serial")
  group.leave()
}

group.enter()
concurrent.async {
  // Both preconditions are met
  dispatchPrecondition(condition: .onQueue(concurrent))
  dispatchPrecondition(condition: .onQueue(DispatchQueue.global(qos: .default)))
  print("on concurrent")
  group.leave()
}

group.enter()
serial_1.async {
  dispatchPrecondition(condition: .onQueue(serial))
  dispatchPrecondition(condition: .onQueue(defaultOvercommit))
  print("again on serial")
  group.leave()
}

group.enter()
serial_2.async {
  // Both preconditions are met
  dispatchPrecondition(condition: .onQueue(concurrent))
  dispatchPrecondition(condition: .onQueue(DispatchQueue.global(qos: .default)))
  print("again on concurrent")
  group.leave()
}

group.wait()

Note that I had to add the DispatchGroup, else the program exits before any of the async blocks runs.

Anyway, here's the output:

on serial
again on concurrent
on concurrent
again on serial

Great, that solves the mystery for me. I now understand the precondition in its full form: basically, the .onQueue condition is met for every queue from the starting queue up through the target chain to the final target queue, which is either a non-overcommit or an overcommit global queue.


One more question: is it guaranteed that the same queue will always run on the same thread?

There is only one fixed association between threads and queues: if you add a block to some queue Q, and Q is either the main queue (DispatchQueue.main) or has the main queue in its target chain, then the main thread will run the block.

If Q is not the main queue and does not have the main queue in its target chain, then GCD generally makes no guarantee about which thread will run the block. (But note that the main thread could still run the block, and in particular will run the block if you queue it synchronously from the main thread/queue.)


So if I run two async calls on a non-main queue which does not target the main queue, both calls can potentially be executed on different threads, even if it is a serial queue?

This also raises another question: is there no real thread safety with such custom queues?

Correct, there is no guarantee that two blocks added to some random queue will run on the same thread.

There is no “thread safety”, but there is “queue safety” if you use a serial queue. Two blocks added to a single serial queue cannot run at the same time, even if the blocks end up running on different threads, and memory operations are fenced so that all effects of the first block are visible to the second block.
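That “queue safety” guarantee is easy to demonstrate. In this sketch (the queue label and counts are mine, not from the thread), a plain serial queue protects a counter without any lock, even though the individual blocks may land on different worker threads:

```swift
import Dispatch

// Queue safety without thread affinity: the serial queue orders the
// increments and fences memory, so no lock is needed on `counter`.
let queue = DispatchQueue(label: "counter")  // serial by default
var counter = 0

let group = DispatchGroup()
for _ in 0..<1_000 {
    queue.async(group: group) {
        counter += 1  // safe: only ever touched from `queue`
    }
}
group.wait()
print(counter)  // 1000, with no data race
```

The same loop with `counter += 1` dispatched to a concurrent queue would be a data race.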


Thank you.

Any chance there are still techniques in conjunction with GCD to isolate code so it does not exit the thread it was initially invoked on? Or is this idea too crazy, or unnecessary, when working with GCD?

I'm just trying to wrap my head around the theoretical aspects of this topic. ;)

I was going to start my previous response with a “What are you really trying to do?”, and then found that handy quote in the docs and decided to use that instead. I think it’s now clear I made the wrong call there (-:

Any chance there are still techniques in conjunction with GCD to
isolate code so it does not exit the thread it was initially invoked
on?

I need to clarify what your goals are here. I suspect that you want to configure a specific serial queue so that all blocks added to that queue run on a specific thread. Is that right? If so, that’s simply not possible (other than for the main queue, of course).

And why would you want to do that? I’ve had folks ask me about this before and the usual reasons are:

  • Thread-local storage

  • Run loop integration

For the former, you can use queue-specific storage (dispatch_queue_{g,s}et_specific). For the latter, yeah, just don’t go there (-:
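For reference, the queue-specific storage Quinn mentions is exposed in Swift as DispatchSpecificKey (the Swift face of dispatch_queue_{g,s}et_specific). A minimal sketch, with arbitrary label and value:

```swift
import Dispatch

// Queue-specific storage as a replacement for thread-local storage.
let key = DispatchSpecificKey<String>()
let worker = DispatchQueue(label: "worker")
worker.setSpecific(key: key, value: "worker-context")

worker.sync {
    // Reads the value attached to the queue we are currently running on.
    print(DispatchQueue.getSpecific(key: key) ?? "none")  // worker-context
}
// Off the worker queue, the key has no value.
print(DispatchQueue.getSpecific(key: key) ?? "none")      // none
```

Because the lookup walks the target chain, a value set on a target queue is also visible to blocks running on queues that target it.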

Is there something else?

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

I'm trying to correct some synchronization issues in our application, and for that I need to fully understand how GCD behaves, to make everything more predictable. Please don't get me wrong, I do appreciate your help and didn't mean to sound rude or something in my previous post.

That said, when working with queues one should not care too much about threads. Of course one should not overdo the usage of queues, to prevent things like thread explosion. If done correctly, no matter from which thread an object is accessed, a queue can protect it and serialize the access, so it is effectively thread safe because access is properly synchronized.

Is that mindset correct?

didn't mean to sound rude or something in my previous post.

Please rest assured that I’m thoroughly enjoying every aspect of this thread!

Of course one should not overdo the usage of queues, to prevent things
like thread explosion.

I think that’s the key point. There are two different types of parallelism problems that you can solve with Dispatch, symmetric and asymmetric:

  • In symmetric parallelism you have a whole bunch of blocks that all need to do the same work. In that case I strongly favour dispatch_apply (aka DispatchQueue.concurrentPerform(iterations:execute:)) because it lets Dispatch decide on how best to map your work to available CPUs.

  • In asymmetric parallelism you have different subsystems within your program and you want them to be able to run on different CPUs. In that case the design I favour is a ‘forest’ of serial queue trees, connected via the target queue, where the root of each tree is a serial queue that provides the actual concurrency for the entire subsystem. This idea is explored in depth in WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage, which is a great resource.
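As a concrete sketch of the symmetric case (the workload here is invented for illustration), concurrentPerform blocks until all iterations finish, and each iteration writes to a distinct slot, so no extra synchronization is needed:

```swift
import Dispatch

// Symmetric parallelism with concurrentPerform (dispatch_apply):
// compute squares of 0..<8 across the available CPUs.
let n = 8
var results = [Int](repeating: 0, count: n)
results.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: n) { i in
        buffer[i] = i * i   // each iteration owns a distinct slot
    }
}
print(results.reduce(0, +))  // 0+1+4+9+16+25+36+49 = 140
```

With work this trivial you would stride (batch several indices per iteration), as the dispatch_apply man page suggests; the structure stays the same.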

Finally, I want to stress that Dispatch isn’t magic. To work efficiently you have to size the work done by your dispatch blocks appropriately. This matters most in the symmetric case, and that’s something specifically called out in the dispatch_apply man page (look for the text starting with “Sometimes, when the block passed to dispatch_apply is simple, the use of striding can tune performance.").

However, it can also be relevant in the asymmetric case. For low-level primitives within your program it’s often best to not implement any concurrency control, but instead require that the caller serialise calls to that primitive. That makes the primitive’s code easier and also encourages the caller to do more work in each dispatch block, which is generally more efficient.
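The “forest of serial queues” design from the asymmetric bullet can be sketched like this (subsystem names are mine): two queues share one root serial queue via the target parameter, so their blocks are mutually exclusive while each keeps its own label for debugging.

```swift
import Dispatch

// Two subsystem queues target one serial root, which provides the
// actual concurrency control for the whole subsystem.
let root = DispatchQueue(label: "app.root")
let network = DispatchQueue(label: "app.network", target: root)
let database = DispatchQueue(label: "app.database", target: root)

var log: [String] = []   // only ever touched under the root's serial context
let group = DispatchGroup()
for i in 0..<100 {
    network.async(group: group) {
        dispatchPrecondition(condition: .onQueue(root))  // root is in the target chain
        log.append("net \(i)")
    }
    database.async(group: group) {
        log.append("db \(i)")
    }
}
group.wait()
print(log.count)  // 200: no lost updates, despite two submitting queues
```

Without the shared root, the two queues could append concurrently and corrupt the array.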

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple
