Actors, Threading and SQLite

One of the things I'm most excited about with actors is the ability to isolate the SQLite code in an iOS app into a single actor. This would replace the more complex and less performant GCD-based solutions that are usually used in the pre-Concurrency world.

But, after watching Swift Concurrency: Behind the Scenes, I'm not 100% sure that will be possible. After learning more about how async code can hop threads, I'm wondering if the "DBActor" idea is compatible with SQLite's threading rules.

SQLite on iOS appears to be compiled with the "Multi-Thread" flag on, so the rule is:

Multi-thread: In this mode, SQLite can be safely used by multiple threads
provided that no single database connection is used simultaneously 
in two or more threads.

I think actors and Swift Concurrency would be compatible with this rule. Can anyone confirm if that's true?


that no single database connection is used simultaneously in two or more threads.

This will work fine with the existing actor model, as long as a connection is not shared by any two (or more) actors. As long as at most one actor has the connection, it will only ever be accessed by one thread at a time: whichever thread the actor is currently running on. The connection type should therefore not be Sendable.
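A minimal sketch of that shape (all names here are hypothetical; `Connection` stands in for a wrapped `sqlite3*` handle):

```swift
// Hypothetical stand-in for a wrapped sqlite3* handle. Deliberately a
// plain class, i.e. NOT Sendable: it must never leave its owning actor.
final class Connection {
    private(set) var executed: [String] = []
    func execute(_ sql: String) {
        executed.append(sql) // a real driver would call sqlite3_exec here
    }
}

actor DatabaseActor {
    // The connection is created and confined here. Only this actor's
    // serialized executor ever touches it, which satisfies SQLite's rule
    // that one connection is never used by two threads simultaneously.
    private let connection = Connection()

    func write(_ sql: String) {
        connection.execute(sql)
    }

    var executedCount: Int {
        connection.executed.count
    }
}
```

Because `Connection` is not Sendable, strict concurrency checking will reject attempts to pass it out of the actor, turning SQLite's threading rule into a type-level guarantee.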

One could also make it work in the single-threaded, no-mutexes-at-all mode, I believe. If it strictly needed "only one specific thread", and not just mutual exclusion, we could make this work once we get custom executors, which have an early pitch here: Support custom executors in Swift concurrency.

But indeed, for now the multi-threaded mode should be compatible with actors, IMHO.


It may be important to protect against interleaved actor calls, though...?


Worth noting that a few popular SQLite libraries like GRDB have been doing connection pooling with the default mode (“multithreaded” on iOS) for quite a long time, where a pool of connections is borrowed/reused by any requesting threads. As long as mutual exclusion is enforced, which actors do guarantee, SQLite is alright with it.

But caution should be paid to transactions. A transaction that spans a suspension point is dangerous, especially with actor reentrancy being allowed by default.


Is this dangerous in the same way that long-running transactions are dangerous in a fully distributed system with a shared database: DB resource starvation, potential for DB deadlocks, etc.? In other words, can we reason about actors as if they are distributed nodes connected by a magic network of Sendable values?

Or is it dangerous in a more low-level way, having to do with locks, memory safety, etc.?

It is dangerous in the sense that reentrancy can violate transaction ACID without one noticing.

Consider this simple pseudo code using a hypothetical transaction API that supports an async body:

actor DbActor {
    let sqlite: Sqlite
    let network: Network

    func latestLog() async -> Log? {
        sqlite.readLatestLog()
    }

    func syncBalance() async {
        await withTransaction {
            sqlite.write(Log("will sync balance"))

            let params = sqlite.readSomething()
            let balances = await network.fetchBalances(params)
            sqlite.write(balances)
        }
    }
}

If withTransaction is naively implemented as:

func withTransaction(_ body: () async throws -> Void) async rethrows {
    sqlite.exec("SAVEPOINT t0")
    try await body()
    sqlite.exec("RELEASE t0")
}

Most use SQLite in WAL mode, which has multi-reader, single-writer MVCC.

  • While your transaction task is suspended for the network request, latestLog() can observe your uncommitted writes, which are supposed to be invisible to WAL readers until the transaction commits.

  • During the suspension, the actor can be reentered with other methods that can write things that e.g. invalidate constraints/assumptions you made after the suspension point about your earlier reads.

  • There is a high-level API design question of how one differentiates the intent to create a nested transaction within the current running transaction, and the intent to start a separate transaction that should be deferred until the current one completes.

Edit: The best course IMO is not to open this rabbit hole. Performing your SQLite transactions synchronously, while still being protected by the actor's mutual exclusion, is perfectly fine. Exactly the same as how we are using SQLite today around lengthy network/IO requests. :stuck_out_tongue:
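A sketch of that advice, with a hypothetical `Sqlite` wrapper: because the transaction body is synchronous, no `await` can appear between `SAVEPOINT` and `RELEASE`, and the actor's mutual exclusion keeps every other caller out for the duration.

```swift
// Hypothetical synchronous wrapper; exec would call sqlite3_exec in reality.
final class Sqlite {
    private(set) var log: [String] = []
    func exec(_ sql: String) { log.append(sql) }
}

actor DbActor {
    private let sqlite = Sqlite()

    // The body is a synchronous closure, so it cannot suspend: the whole
    // transaction runs inside one uninterrupted actor "turn".
    func withTransaction<T>(_ body: (Sqlite) throws -> T) rethrows -> T {
        sqlite.exec("SAVEPOINT t0")
        do {
            let result = try body(sqlite)
            sqlite.exec("RELEASE t0")
            return result
        } catch {
            // Roll back to the savepoint before propagating the error.
            sqlite.exec("ROLLBACK TO t0")
            sqlite.exec("RELEASE t0")
            throw error
        }
    }

    var log: [String] { sqlite.log }
}
```

Any network or other async work then happens before or after the transaction, never inside it.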


I don't know how actors are implemented on the thread-pool side, but I'm guessing they are dispatched to a thread pool with no guarantee of thread affinity/locality. Maybe a concept of "shared" and "exclusive" actors, where an exclusive actor is guaranteed thread affinity, would make it possible to have a safe implementation for disk-IO-bound kinds of tasks.

A shared actor would work just as things work now, and an exclusive one would bind to a single thread, also allowing access to TLS.

I guess such an implementation detail would make it possible to use actors for disk IO, which in my experience needs an exclusive "backend" thread, allowing the multi-threading magic to happen in the frontend consumers.

Allowing exclusive access would make the thread-pool implementation more complex, and depending on the limit of threads it might have problems, but leaving those limits to be configured by the developer might adapt it to different use cases.

This is sub-optimal but the present-day reality -- you would not want to run long-running and IO operations on actors on the default pool.

This is why we're working on Support custom executors in Swift concurrency which will enable the "make this actor run on this dedicated executor (e.g. single thread)". This is a well known and established pattern from other actor runtimes to handle such threading requirements, and does not affect the semantic model of an actor runtime -- it just affects the small detail™ of where the actor is allowed to run.


Looking forward to custom executors, because until then I cannot use the new Swift concurrency idioms, at least for this specific case, given the need to integrate with a runtime that itself has to integrate with an already-present C++ runtime with its own logic, where being able to choose which thread execution happens on is important.

The consumer/producer pattern presented here, integrating with other systems that have specific behavior, might benefit from it as well.

Is it a strict NO to run I/O operations on the cooperative pool? Or is it more general advice about long-running ones?

Embedded databases like SQLite have come a long way toward non-blocking reads and efficiency in small queries. It does feel like overkill to advise that all I/O of any size/duration must hop out of the cooperative pool to be executed.


I cannot speak for them, but let's not forget that the thread pool backing actors is shared, and developers will often use it for other tasks, like UI compositing for instance.

If you have a scenario where you know it will be used only for file IO (say, a backend server specialized in running SQLite queries), then there will be no issues.

The problem is that if this is a SQLite driver offering, it will run in the same process where the user is using actors for other tasks, often ones that need low latency, like UI compositing and audio. In that particular case, the time each IO task takes to complete will become visible, since it will affect the performance of those low-latency tasks.

Even if a SQLite handle has the proper locks and can be used by multiple threads, you would still want a custom thread pool with "frontend" threads and "backend" threads specialized in long-running tasks. With that configuration you can dispatch the IO to the backend threads without bothering the people using your driver with unwanted lags in their final applications.

If you are not sure the handle can work in multi-threaded mode, or you just want a safer environment, then the best you can do is use single-thread mode, where you know only one particular thread will access your handle. From the perspective of that handle it is working synchronously, while you can still offer a multi-threaded environment from the point of view of the user, with multiple tasks consuming from the IO producer, which runs in an isolated and safer environment.

With that configuration you can also safely use something like LMDB, which, as far as I know, unlike SQLite is not safe to be accessed by multiple threads directly.

Please correct me if I am wrong.

With SQLite, you have a pointer to the database file. Let’s wrap that in an actor for safety and let’s not use await within that actor’s functions for the reasons given by Anders_Ha.

SQLite also compiles SQL statements via sqlite3_prepare_v2. That function requires the database connection and can thus run on our database actor. It vends a pointer to a prepared statement that can be reused many times (if properly reset), though multiple uses must run consecutively. So let's stick this prepared-statement pointer in an actor of its own.

Now we have an actor instance for the database connection that can open/close the database file and prepare statements. And for each prepared statement we have an actor instance, so we can bind values, reset it, and keep track of our many different reusable queries.

But we can’t just prepare and run those queries concurrently, especially not writes. In other words both actor types (DB + all of the statements) should run on the same execution context (i.e. thread), right?

So for this usage we would have to wait for the custom executor or global actor proposal, right?

Do statement preparation and execution (and all direct SQLite interaction) on one actor (e.g. DBActor) and it should work, based on the response above from @ktoso. There's no reason to prepare your statements in a second actor, and that will quickly lead to the same database being accessed by two threads at once.
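A sketch of that single-actor arrangement (names hypothetical): the actor owns the connection and caches its prepared statements, so every `sqlite3_*` call happens on the actor's executor and no statement handle ever escapes.

```swift
// Stand-in for a sqlite3_stmt* produced by sqlite3_prepare_v2.
final class Statement {
    let sql: String
    init(sql: String) { self.sql = sql }
    func run() { /* bind, step, reset in a real driver */ }
}

actor DBActor {
    private var statements: [String: Statement] = [:]
    private(set) var prepareCount = 0

    // Prepare on first use, reuse the cached handle afterwards. Kept
    // private so the non-Sendable Statement never leaves the actor.
    private func statement(for sql: String) -> Statement {
        if let cached = statements[sql] { return cached }
        prepareCount += 1
        let prepared = Statement(sql: sql)
        statements[sql] = prepared
        return prepared
    }

    func run(_ sql: String) {
        statement(for: sql).run()
    }
}
```

Callers only ever see `run(_:)`; the connection and every statement stay confined to one actor, and therefore to one thread at a time.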

Which would be a global actor, right? With the database pointer and statement pointers wrapped in their own class attributed/marked with that global DBActor.

Seems to work (with the experimental concurrency compiler flag enabled).

Global actors don't change anything here -- in terms of threading it is just "specifically one instance of an actor" nothing special about them.

The moment you have two actors, they may execute on 2 threads in parallel, thus breaking the "only access from single thread" requirement. So if none of those database things can be done at the same time, none such things should be "in" many actors.

It'd change if we had "pinned" (to specific thread) executors, that is true. Then many actors can be on the specific serial executor, and thus achieve mutual exclusion thanks to that, even though they are many actors.
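For what it's worth, this pinning did later ship as custom executors (SE-0392). A sketch of the "many actors on one serial executor" idea, assuming a platform where `DispatchSerialQueue` conforms to `SerialExecutor` (iOS 17 / macOS 14 and later):

```swift
import Dispatch

// One serial queue shared by every database-touching actor, so they are
// mutually exclusive with each other, not just each within itself.
let dbQueue = DispatchSerialQueue(label: "db")

actor ConnectionActor {
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        dbQueue.asUnownedSerialExecutor()
    }

    func check() -> Bool {
        // Confirms this actor really runs its jobs on dbQueue.
        dispatchPrecondition(condition: .onQueue(dbQueue))
        return true
    }
}

actor StatementActor {
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        dbQueue.asUnownedSerialExecutor()
    }
}
```

With both actors sharing one serial executor, their methods can never run in parallel, which restores the "only one thread touches the connection at a time" guarantee across actor boundaries.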

I mayhaps misinterpreted the effect of a global actor. I annotated two classes with the same global actor, thinking both classes would just be local to that global actor. Calling one class from the other would then be the same as calling one function from another within an actor, i.e. not leaving the actor's island.

Not sure I am expressing myself clearly and anyways my understanding / experience with async/await has improved since last week. Not exactly sure anymore what I was thinking at the time.

That is correct.

Though global actors are not any different than “put a bunch of stuff into the same actor”. They don’t really offer any new semantics over “in the same actor”. So the same is achievable with just “one database actor” :slight_smile:


So the same is achievable with just “one database actor” :slight_smile:

Yes, that's what I have now. And also a global actor was just a bad idea anyway if I want to open multiple files. It's an excellent exercise btw to get a grip on async/await. :smile:


Correct yeah, that’s a reason one would not want to solve this using a global actor :slight_smile: no worries, it’ll take some time getting used to those new concepts. I’m glad it seems to be going well tho!


Having partially non-reentrant actors would be nice in this situation, I think.
The SQLite database pointer and basic functions (*_bind_*, *_column_*) can be stuffed inside a reentrant actor.

Though, in my view, that database actor shouldn't be aware of the underlying model. The model could be spread out over multiple databases, local storage for large files, or even remote storage. Each of these could be their own actor, with the model an actor in itself.

So if I wanted to save some large file, I could for example end up with this function on the model actor.

func add(file: …) async {
    await mainDB.write(…)
    await fileDB.write(…)
    await remote.send(…)
}

If one of the writes fails, the others should be rolled back. Such a function could presumably be reentrant (not there yet in my version).

But what shouldn’t be re-entrant is this function in the model actor.

func load() async {
    let version = await"select version from…")
    if version.isOld {
        await mainDB.write("create newTable…")
        await mainDB.write("update version…")
    }
}

Every await is a suspension point during which, for example, the add function could run. But if the database schema hasn't been updated yet, that could fail. So the load function should be non-reentrant.
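One way to get that non-reentrancy today, without language support, is to funnel every caller through a single stored `Task`, so the migration body runs at most once and later calls just await its completion (a sketch with hypothetical names):

```swift
actor Model {
    private var migration: Task<Void, Never>?
    private(set) var migrationRuns = 0

    func load() async {
        if migration == nil {
            // Task {} created here inherits this actor's isolation, so the
            // body still runs on the actor; storing the Task deduplicates
            // reentrant calls instead of starting a second migration.
            migration = Task {
                migrationRuns += 1
                // The real awaits ("select version from…",
                // mainDB.write("create newTable…"), etc.) would go here.
                await Task.yield() // stand-in for that async work
            }
        }
        await migration!.value
    }
}
```

Callers that arrive while the migration is suspended simply await the same Task, rather than interleaving a second copy of the schema update.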

At least this is the current stage of my understanding and experience thus far. Just wanted to share.
