This got me wondering... How is this going to be implemented? For example, will we provide some sort of callbacks representative of each thread to the concurrency API?
We are currently working through "How is this going to be implemented", so the answer for now is to stay posted.
One active area of work is stabilizing the interface between libSwiftConcurrency and the "Executor" which actually runs the Swift Concurrency Tasks. On desktop platforms, this is handled by libDispatch, but on embedded systems it will need a different library specific to the platform.
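That executor-facing surface is roughly what the Swift 5.9 custom actor executors API (SE-0392) already exposes. As a hedged sketch of the interface being discussed, here is what a minimal conformance looks like; the protocol requirements shown are real stdlib API, but `RTOSExecutor` and its `platformQueuePush` plumbing are hypothetical placeholders for whatever the platform provides:

```swift
// Sketch of the executor-facing surface (Swift 5.9+ custom actor
// executors, SE-0392). RTOSExecutor and platformQueuePush are
// hypothetical; SerialExecutor, ExecutorJob, and UnownedJob are real API.
final class RTOSExecutor: SerialExecutor {
    // The runtime hands the executor opaque jobs; the executor's only
    // obligation is to run each job exactly once, wherever it chooses.
    func enqueue(_ job: consuming ExecutorJob) {
        let unowned = UnownedJob(job)
        // Hypothetical: push onto a platform queue serviced by an RTOS task.
        platformQueuePush {
            unowned.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }

    // Stand-in for a real RTOS message queue; here it just runs inline.
    private func platformQueuePush(_ work: () -> Void) { work() }
}
```

The open question in this thread is less the shape of `enqueue` and more how the default, process-wide executor gets swapped out on platforms without libDispatch.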
It's likely we will need to create adapters for popular runtimes, e.g. Zephyr, FreeRTOS, etc.
Not yet, no. We'll have to propose some APIs through Swift Evolution, and once we have a direction in mind it will be shared through the usual evolution pitches and process. There's nothing early to check out yet; you can probably keep an eye on this thread for any movement. It's just a bit too early to share the ideation phase.
If you have an actually concurrent embedded environment, it would probably be easier to port a new thread pool to your environment's thread system than to port all of libdispatch. A lot of embedded environments are not concurrent in the threads sense, though: they might take asynchronous signals/interrupts, which is a form of concurrency, but there's no actual parallel execution.
Right, let me clarify: I was more curious whether this work dovetails with the possible replacement of libdispatch on Linux with a custom thread pool. We see a number of performance issues on Linux that we'd like to help sort out, and they are related to libdispatch; since libdispatch isn't the long-term solution there, we're curious whether the embedded work will define the APIs we'd need to more easily roll our own runtime, if it comes to that.
Ah. I don't think the possible Dispatch replacement has much to do with it, but yes, if you want to completely swap out the thread pool, that is something embedded developers also want to do and should become easier as part of this work.
I'm completely resigned to rolling my own support over something like Zephyr, and am actually quite interested in seeing what that looks like. Just not sure which of the concurrency abstractions I should be paying attention to beyond Executor (which seems a bit light as an abstraction for this).
What's the state of these things nowadays? I'm working on Matter devices using the ESP32-C6, which can be programmed in Swift (it's the example platform), but the Matter APIs have a concurrency model, and ESP uses FreeRTOS under the hood. I'd like to take a stab at a nice Swift abstraction for Matter, but I think the best result would come from having executors that can work with FreeRTOS. From what I gather, Custom Executors are accepted and in Swift 5.9, but I don't know if that's enough to write an integration?
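For the per-actor case, SE-0392 in Swift 5.9 should be enough: an actor can override `unownedExecutor` to pin all of its isolated code to a custom executor. Below is a hedged sketch under that assumption; `MatterExecutor` and `MatterDevice` are hypothetical names, and the comment about FreeRTOS queues describes plumbing you would have to write yourself (e.g. a small C shim over `xQueueSend`), not anything the toolchain provides:

```swift
// Hedged sketch: pinning an actor to a custom executor via SE-0392.
// Only the protocol requirements and unownedExecutor are real API;
// the FreeRTOS integration is left as a comment.
final class MatterExecutor: SerialExecutor {
    func enqueue(_ job: consuming ExecutorJob) {
        let unowned = UnownedJob(job)
        // Hypothetical: post to a FreeRTOS queue drained by a dedicated
        // task. For the sketch we simply run the job inline.
        unowned.runSynchronously(on: asUnownedSerialExecutor())
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}

actor MatterDevice {
    private static let executor = MatterExecutor()

    // All of this actor's isolated code now runs via MatterExecutor.
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        Self.executor.asUnownedSerialExecutor()
    }
}
```

What SE-0392 does not cover is replacing the default executor that unstructured tasks land on; that is the part still being worked out for embedded targets.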
Concurrency is now supported in Embedded Swift for Wasm (with some caveats when all optimizations are enabled in release mode), which brings great binary-size improvements. With Embedded Swift, the final executable binary from the WebGPUDemo package demoed in the WWDC 2025 "What's new in Swift" session is down to 390 kB, compared to dozens of megabytes without Embedded Swift. The WebGPU setup API is async, so this wouldn't be possible without Embedded Swift concurrency in the first place.
We did need a libc and libc++ for it to work, so if you have a compatible libc for your embedded platform, it's a matter of building the Embedded Swift stdlib with the concurrency modules enabled, using that libc.
For Wasm we have a cooperative single-threaded executor enabled by default, and in the browser we use the new custom executors API to instantiate a custom JavaScript event loop executor.
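A cooperative single-threaded executor along these lines can be sketched with the same custom-executor API. This is illustrative only, not the actual Wasm executor from the Swift runtime: jobs are appended to a queue and drained from the platform's one main loop (in the browser case, a JavaScript event-loop tick):

```swift
// Hedged sketch of a cooperative single-threaded executor. Jobs are
// queued by enqueue and run to completion by drain; there is no
// parallelism, only interleaving on the single thread.
final class CooperativeExecutor: SerialExecutor {
    private var pending: [UnownedJob] = []

    func enqueue(_ job: consuming ExecutorJob) {
        pending.append(UnownedJob(job))
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }

    // Called from the platform's main loop. Jobs enqueued while
    // draining (e.g. by a continuation resuming) are also run.
    func drain() {
        while !pending.isEmpty {
            pending.removeFirst()
                .runSynchronously(on: asUnownedSerialExecutor())
        }
    }
}
```

The design point worth noting is that `drain` keeps going until the queue is empty, so a job that resumes another task before returning still gets that follow-on work executed in the same tick.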