Task safe way to write a file asynchronously

Hi! I'm curious what the current suggested way is to write data to a file from within an async Task that also keeps with the runtime contract of "threads are always able to make forward progress".

The Foundation functions I see all block the thread while writing.

  • Am I correct in thinking that blocking I/O functions shouldn't be called from within a Task?
  • Is there another API for this functionality I may be missing?
  • Is wrapping one of the above functions in a DispatchQueue the preferred solution?
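One common shape for the third option: run the blocking call on a plain DispatchQueue and bridge the result back with a continuation. This is a hedged sketch, not an official API; `writeAsync` and the queue label are made up.

```swift
import Foundation

// Hedged sketch (illustrative names): hop off the cooperative pool onto a
// plain DispatchQueue for the blocking write, and bridge the result back
// to async with a checked continuation.
let fileIOQueue = DispatchQueue(label: "file-io", qos: .utility)

func writeAsync(_ data: Data, to url: URL) async throws {
    try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Void, Error>) in
        fileIOQueue.async {
            do {
                try data.write(to: url, options: .atomic)
                continuation.resume()
            } catch {
                continuation.resume(throwing: error)
            }
        }
    }
}
```

This keeps the blocking `write(to:)` off the Swift concurrency worker threads, at the cost of a thread hop per write.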

3 Likes

AFAIR SwiftNIO has some async helpers, but apparently they still rely on blocking I/O calls under the hood. Overall I second this, and I would love to see some "recommended way to do it" sample code that specifically integrates the filesystem NIO APIs with Task and actors, but Foundation and Dispatch examples would be great too for comparison.

My brief research shows that async I/O at the kernel level is poll-based. That is, the application has to poll for the result. Abstracting this at a higher level requires a dedicated thread, or the use of a run loop, to ensure the polling doesn't devolve into busy-waiting. You could also chain-queue a block on a DispatchQueue, but that seems very inefficient.

1 Like

Depending on how deep you want to go, you may find this article interesting. Basically, to @Avi's point:

Asynchronous I/O goes hand-in-hand with event notification.

The problem to be solved is how to receive notification of asynchronous events in a synchronous manner, so that a program can usefully deal with those events. With the exception of signals, asynchronous events do not cause any immediate execution of code within the application; so, the application must check for these events and deal with them in some way.

What happens when an IO event occurs and your Task's continuation needs to be executed?

It can't just run - it needs to run on some thread, but which thread? And is that thread even capable of interrupting whatever it is doing to start processing the event? And which functions are safe to use while processing that event? Remember that we just interrupted the thread, so we may be invoking functions reentrantly.

Generally there are 2 ways to handle that:

  • Polling (or blocking while waiting for events) on some dedicated thread, which can then process them in a straightforward manner, or
  • Signals, which will interrupt the thread in order to invoke a custom signal-handler.

For Swift's concurrency model, neither of these are ideal. A suspending call will have a continuation, and we'd really want to enqueue that continuation on the appropriate executor when the event is triggered.

In theory, we could do that by installing a signal handler which maintains a list of (executor, continuation) pairs (possibly including other flags which affect how the executor schedules the job on the thread(s) it manages, such as priority). However, I don't think we have public APIs in the standard library to do that yet, and I don't believe it is specified whether the existing implementations are signal-safe.

To add to your collection, there is an unorthodox method of writing to files: memory-mapped I/O. It is not recommended because of the following limitations:

  • file size can't change.
  • works on certain volumes only.
  • no way to check for errors or determine operation completion.
  • the app might crash on some errors.
Yet you use it all the time (without realising it).

It's fascinating how it actually works. First consider "read": when your app accesses memory:

memory = mmap(...)
let x: UInt8 = memory[12345] // can block
print(x)

that memory access can block until I/O is done.

As for the write:

memory = mmap(...)
memory[12345] = 123 // won't usually block, but can
munmap(memory, length)

the memory[12345] = 123 line won't block in most cases (I believe), and the actual write can happen at some later point. From man munmap:

"If the mapping maps data from a file (MAP_SHARED), then the memory will eventually be written back to disk if it's dirty. This will happen automatically at some point in the future (implementation dependent). Note: to force the memory to be written back to the disk, use msync(2)."
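Putting the pieces above together, a minimal shared-mapping write might look like this. This is a hedged sketch: the path and size are illustrative, and error handling is reduced to preconditions for brevity.

```swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Hedged sketch: write one byte through a MAP_SHARED mapping, then force the
// dirty page to disk with msync(2) before unmapping.
let path = "/tmp/mmap-write-demo.bin"
let length = 16_384

let fd = open(path, O_RDWR | O_CREAT, 0o644)
precondition(fd >= 0, "open failed")
precondition(ftruncate(fd, off_t(length)) == 0)  // mapped file size can't change later

guard let base = mmap(nil, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0),
      base != UnsafeMutableRawPointer(bitPattern: -1) else {
    fatalError("mmap failed")
}
let bytes = base.assumingMemoryBound(to: UInt8.self)

bytes[12_345] = 123          // usually doesn't block; the write-back happens later
let readBack = bytes[12_345]
precondition(msync(base, length, MS_SYNC) == 0)  // force the write-back now
munmap(base, length)
close(fd)
```

Without the `msync(2)` call, the kernel writes the dirty page back "at some point in the future", exactly as the man page above describes.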

Note: you are using this mechanism without realising it: your app, library, or system framework code is memory mapped, and whenever the CPU hits some uncharted territory the corresponding blocking I/O happens to read the relevant page into memory, possibly writing some unrelated dirty page out to the backing store to free some space.

Playing devil's advocate: if blocking file I/O (both read and write) is good enough for the VM system (you don't even notice it in most cases), why avoid synchronous file I/O in your app? Network volumes are a concern, but those aside?

There’s nothing wrong with synchronous I/O, but one has to realize that the kernel is effectively a thread of its own. It only blocks processes waiting for the I/O. The app-level equivalent is having a dedicated thread.

2 Likes

Am I correct in thinking that blocking I/O functions shouldn't be
called from within a Task?

That depends on the constraints of your I/O. If you’re writing small amounts of data to a file on a critical volume — on the Mac this would be the root file system or the file system containing the user’s home directory — then synchronous file I/O should be fine. It’ll consume a Swift concurrency worker thread for the duration, but there’s a reasonable bound on that duration [1].

Beyond that, things get complex. On Apple platforms the file system code within the kernel is fundamentally synchronous. The thread that calls into the kernel does all the work, blocking if it’s necessary to wait for I/O. This affects all layers of the kernel, from the system call layer, to VFS, to APFS, to UBC. Asynchronous operations only come back into play once you get down to I/O Kit.

So, there’s no general-purpose async file I/O API you can use to underpin a Swift async file I/O library. What you do about that depends on your specific requirements. In some cases you might find a good option. The case that immediately springs to mind is where you’re transferring large amount of data. Dispatch I/O is really good at this. In other cases, you’re kinda on your own. For example, if you’re doing something metadata heavy, like traversing a large directory hierarchy, it’s probably best to spin up your own thread for that work.

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

[1] There’s no actual bound — if you hit a disk error it could take many seconds to succeed or fail, and if the user has their home directory on a network file system it could take many minutes to complete, or indeed never — but, if a critical volume is misbehaving in this way, the user has bigger problems.

8 Likes

Correct. There are no APIs in POSIX that work on file I/O and can be configured to never block. On Linux this is now possible with io_uring, but before that it was not. On macOS it is just not possible. On Windows this has ~always been possible, and I think IOCP would be the best API today, but I'm very far from being an expert in Windows.

Both SwiftNIO's NonBlockingFileIO and Foundation's AsyncBytes will use blocking I/O under the hood. NIO's NonBlockingFileIO will block a thread in a NIOThreadPool (note this is a separate thread pool meant for blocking operations, not the EventLoops), and Foundation's AsyncBytes will just block a Swift concurrency executor thread. AsyncBytes, however, will make sure to block only one executor thread (using the IO actor, which runs all the potentially blocking I/O ops).

Both SwiftNIO's way and AsyncBytes's way come with advantages and disadvantages: in SwiftNIO you always have to switch threads from the EventLoop onto the NIOThreadPool to perform file I/O, whereas AsyncBytes does not need to switch threads. On the flip side, in SwiftNIO you have control over how many blocking operations you allow at the same time, and all your EventLoops can still handle network I/O whilst the system is waiting for file I/O to complete. I.e. you can reach 100% CPU on all your cores whilst waiting for the disk/NFS/... to complete the I/O. With AsyncBytes you can have at most one blocking file I/O operation, and that will block one of your executor threads, which means you may not be able to fully load your CPU whilst waiting for the I/O to complete.

Of course, both are tradeoffs and overall I'd say that both systems chose the right tradeoff. SwiftNIO primarily targets servers so limiting file I/O to just one at a time and also blocking an EventLoop wouldn't be acceptable. AsyncBytes however is primarily used on iOS and macOS so the amount of concurrent file I/O load should be much more limited (and typically also fulfilled by the built-in SSD which is fast).
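The bounded-pool tradeoff can be sketched roughly like this (a hedged illustration in the spirit of NIOThreadPool; `BlockingIOPool` is a made-up name, not NIO's actual API):

```swift
import Foundation

// Hedged sketch: a bounded pool so at most `width` blocking file operations
// run at once, keeping them off the Swift concurrency worker threads.
final class BlockingIOPool {
    private let queue = OperationQueue()

    init(width: Int) {
        queue.maxConcurrentOperationCount = width  // the bound on concurrent blocking ops
    }

    func run<T>(_ body: @escaping () throws -> T) async throws -> T {
        try await withCheckedThrowingContinuation { continuation in
            queue.addOperation {
                continuation.resume(with: Result { try body() })
            }
        }
    }
}
```

Callers await `pool.run { try Data(contentsOf: url) }` and the like; excess operations simply wait in the queue, which is the control over concurrent blocking I/O described above.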

The real solution will need help from the operating system. On Linux, NIO's NonBlockingFileIO could (and should ;) ) be switched to using io_uring and the same can be said for AsyncBytes. On macOS and probably other BSDs there are no file I/O APIs that just never block.


For the more curious:

  • epoll will just refuse to work on regular files
  • select/kqueue/poll will always return both readable and writable for all file I/O requests, even if the data is not actually available in the buffer cache yet
  • the various aio implementations aren't actually non-blocking; they just reduce blocking and have a lot of wrinkles (also, they aren't compatible between OSes at all from what I know)
  • Reading the memory from an mmap'd file will also block, it'll just not be a system call but a random memory access
  • read/write and other system calls will just block on files.
8 Likes

Whilst io_uring does not block (and is just a totally badass API all around - the design document is very well-written for those who want to learn more), IIUC it is not fully asynchronous because you still need to poll events from the completion-event ring, or use a blocking API such as eventfd to be notified of those events. So you're still going to be burning a thread on that, although polling does not require a syscall.

In any case, the interface of io_uring suggests to me that it is more about asynchronously submitting your IO than about asynchronously handling completion events for IO. Which kinda makes sense: you probably need all the IO for a certain task executed anyway, so it's not that important to get the completion event as fast as you can; the win is submitting IO operations asynchronously so that they are executed at once, and then using the io_uring_wait_cqe() call to more or less "join" the different IO operations back together before using the results elsewhere in your application.

axboe/liburing issue 385

AFAIK, the only fully-async event notifications in current operating systems are signals. But they also have problems (e.g. signal numbers are part of a global namespace, which is problematic for libraries).

Or is this not the case?

Correct. But io_uring can also handle networking, user-injected events and everything else that kqueue/epoll etc can do. AFAIK, there's nothing that kqueue/epoll can do that io_uring couldn't, in fact SwiftNIO has a first-pass implementation which implements the networking fd eventing on top of io_uring (thanks @Joakim_Hassila1). This first-pass implementation uses io_uring as an eventing mechanism only (and not actually the I/O) which proves the point but isn't as good as it could be.

So whilst you're technically correct that in order to use io_uring you will have at least one blocked (or spinning...) thread when there are no events, you can still build asynchronous systems that do networking and file I/O (and much more) on top of it. And that's the key property we're interested in here. And yes, Axel is of course correct: UNIX kernels don't traditionally spawn threads in user land, so if you want something truly asynchronous where the kernel never sees a blocked thread, you'd need to use signals. That's pretty impractical, and I don't think anybody would want to do that today.

For all intents and purposes I think we can summarise this as: With kqueue/epoll we need at most 1 thread to handle arbitrary amounts of network/pipe/... I/O without arbitrarily long pauses when waiting for data. But once we want to add file I/O into the mix we need more potentially blocked threads (because epoll/kqueue can't notify you about a 'background load' of bytes from disk). What io_uring changes is that suddenly we can handle network/pipe/... and file I/O with just one thread (and do the whole thing much faster and go far beyond just eventing).

6 Likes

There's just nothing else that can possibly be done on any system when reading a memory location that is paged out...

Fascinating level of detail in your posts, thanks. Speaking of other possible designs: traditional "Mac OS" didn't block, I believe (both the File Manager and device drivers). The flip side was an impractical programming model, with the I/O completion proc being called at interrupt time, where you weren't able to do much (firing another async I/O request from the interrupt or a deferred task was OK, though).

if you want to have something truly asynchronous where the kernel
never sees a blocked thread, then you'd need to use signals.

That’s true for traditional Unix kernels but not true for Darwin. In Darwin an event on a dispatch source can trigger the kernel to assign a workloop thread to service the associated queue. You don’t have to have a thread blocked in the kernel as you would with select and its descendents.

Of course that doesn’t help for the file system because all the file system code is inherently synchronous )-:

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

2 Likes

So what happens if I call dispatch_read? Does it block a kernel thread waiting for the VFS layer to return some data?

I’m not an expert on the internals of Dispatch I/O but I took a quick look at the Darwin source and it seems to use a different approach for file and non-file requests:

  • It groups file requests based on their underlying dev node, using a queue per disk (com.apple.libdispatch-io.deviceq.xxx).

  • It drives non-file requests (aka stream requests) from a dispatch source. These seem to be scheduled on a queue per direction (read vs write), all with the same name (com.apple.libdispatch-io.streamq).

Given that, I believe that the answer to your “Does it block a thread?” question is “Yes” for disk I/O and “No” for stream I/O. That certainly makes sense given the capabilities of the underlying kernel code.

However, I’d need to do some testing before I’d claim that as a definitive answer.

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

Thank you, you're right of course; that was implemented around 2018 IIRC. But, as you say later, it doesn't help for disks. Still an important detail!

As @eskimo points out, that's correct (for disk I/O). The libdispatch code is actually open source and this is the read that'll block. The whole dispatch_io code is actually pretty readable (no guarantee that the Darwin code exactly matches the OSS dispatch code but I don't think there are major differences in this area).

If you want to follow along for a read, the code flow goes approximately like this: dispatch_io_read → _dispatch_operation_enqueue → _dispatch_disk_enqueue_operation → _dispatch_disk_handler → _dispatch_disk_perform → _dispatch_operation_perform → read.

A little word of warning regarding dispatch_io: it does not natively support backpressure. That means you need to either suspend, cancel, or completely handle the data it gives you synchronously inside the handler block. If you don't handle it synchronously, dispatch_io will just keep feeding you data as fast as it can, potentially without bound. This can lead to out-of-memory situations with large files, and to denial-of-service attacks if you have a socket to an untrusted peer. This of course doesn't matter if you know the maximum amount you will read and can either handle it immediately on the I/O handler queue within the handler, or fit everything into memory. As I said above, you can emulate backpressure with dispatch_suspend (or by cancelling the dispatch_io_t), but it's brittle and hard to get right. I implemented it once, using dispatch_suspend, before the SwiftNIO project started, and it was neither pretty nor particularly fast.
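To make the warning concrete, here is a hedged sketch of a DispatchIO read that handles every chunk synchronously inside the handler (the path and sizes are illustrative):

```swift
import Foundation
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Hedged sketch: read a file with DispatchIO, consuming each chunk
// synchronously in the handler. setLimit(highWater:) only caps the chunk
// size; it is not backpressure, so chunks keep arriving as fast as the
// disk delivers them.
let path = "/tmp/dispatch-io-demo.bin"
FileManager.default.createFile(atPath: path, contents: Data(repeating: 7, count: 200_000))

let fd = open(path, O_RDONLY)
precondition(fd >= 0)
let queue = DispatchQueue(label: "io-handler")
let finished = DispatchSemaphore(value: 0)
var totalBytes = 0

let channel = DispatchIO(type: .stream, fileDescriptor: fd, queue: queue) { _ in close(fd) }
channel.setLimit(highWater: 64 * 1024)  // at most 64 KiB per delivered chunk

channel.read(offset: 0, length: Int.max, queue: queue) { done, data, error in
    if let data = data { totalBytes += data.count }  // handle synchronously, or suspend the channel
    if done {
        channel.close()
        finished.signal()
    }
}
finished.wait()
```

Here the "handling" is just counting bytes; anything slower than the disk would need the suspend/cancel emulation described above.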

[NIO's NonBlockingFileIO, on the other hand, does support backpressure: it reads the next chunk once you've fulfilled the future you return from the chunk handler.]

1 Like