Async/Await effectiveness on a single core computer (linux-rpi)

I am in the process of rewriting my codebase to make use of async/await. One of the programs runs on old single-core computers like the Raspberry Pi W v1.6, and I'm wondering whether it's worth implementing async/await or not. The program is event-driven and most of the time it's idle or waiting for incoming data (UART, etc.) or doing some IPC and network communication.
I know true concurrency is not possible on a single core, but I still feel that some parts of async/await, like thread suspension, could increase overall performance. Is this true?
I'm a little hesitant to just try it out, since compiling the code for the RPi will be quite a pain: I haven't found a Swift 5.6 toolchain compiled for the 32-bit ARMv6/7 architecture and would have to do that first.
Is the effort worth the performance increase?

Concurrency is possible on a single core, since concurrency and multi-threading are orthogonal. You can have truly concurrent code running on a single core with event loops (cooperative concurrency) or threads (preemptive concurrency), which is exactly how it worked before multiple cores became widely available on consumer hardware.
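As a minimal sketch of what cooperative concurrency looks like on one core (the task names here are made up): each `await` is a suspension point where the runtime can switch tasks, so the two loops interleave without any parallelism.

```swift
// Hypothetical sketch of cooperative concurrency on a single core.
// `Task.yield()` voluntarily suspends, letting the other task make progress.
func work(_ name: String, steps: Int) async -> [String] {
    var log: [String] = []
    for i in 1...steps {
        log.append("\(name) \(i)")
        await Task.yield()  // suspension point: the scheduler may switch tasks here
    }
    return log
}

let logs = await withTaskGroup(of: [String].self) { group in
    group.addTask { await work("uart", steps: 2) }
    group.addTask { await work("net", steps: 2) }
    var collected: [String] = []
    for await log in group { collected += log }
    return collected
}
print(logs.count)  // 4 entries, produced by two interleaved tasks
```

Both tasks complete even though only one of them can ever be running at a given instant, which is the sense in which single-core concurrency is "true" concurrency.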

I don't think that performance itself is related to async/await per se in the single-threaded environment. The main question is whether the code you write is blocking or non-blocking.

If your code is non-blocking, you probably already use either callbacks or async/await, and I think there's a general consensus that async/await is easier to maintain than callbacks, especially in terms of error handling.
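A minimal sketch of that callback-to-async migration, using a checked continuation; `readUARTLine(completion:)` here is a made-up stand-in for existing callback-style code, not a real API.

```swift
// Hypothetical callback-based API, standing in for existing code.
enum UARTError: Error { case timeout }

func readUARTLine(completion: @escaping (Result<String, UARTError>) -> Void) {
    completion(.success("OK"))  // pretend hardware I/O for the sake of the sketch
}

// The async wrapper: callers now use plain try/await and do/catch
// instead of inspecting a Result inside a nested completion handler.
func readUARTLineAsync() async throws -> String {
    try await withCheckedThrowingContinuation { continuation in
        readUARTLine { result in
            continuation.resume(with: result)
        }
    }
}

let line = try await readUARTLineAsync()
print(line)  // OK
```

The error-handling benefit is visible at the call site: failures propagate as thrown errors rather than being threaded through every completion handler by hand.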

If your code is blocking and I/O-bound, then switching to async/await is probably easier than switching to callbacks. If your code is CPU-bound, then on a single core suspension points will likely add some overhead, and there's unlikely to be a performance benefit.

Of course, I don't know the details of what you're writing, so YMMV. Parts of your application can be I/O-bound and other parts CPU-bound; in that case, using async/await for the former makes sense, but not for the latter.

I'd recommend cross-compiling if possible; we have a proposal in review that could help with that.


Thanks a lot for the quick reply.

Sorry, mixed terms up. I was thinking about multi-threading on different cores.

Currently the I/O part, which is about 75% of the code, is blocking (I had little time back then).
For the networking part I used callbacks, but async/await would definitely boost readability and maintainability. Of course I could stick with Swift 5.1 and use GCD with callbacks, but I feel like this would make less sense than implementing async/await.

Thanks, sounds great! Happy to see progress on this topic.


Note that if it's file system I/O, you'll have to create a separate thread pool for that and carefully manage access to it with either callbacks or actors, since underlying Foundation or even POSIX file system I/O primitives are blocking. This is not the case for networking of course. Although there are other I/O things you could be interested in other than these two, even on a Raspberry Pi.
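A minimal sketch of one way to do that, assuming a dedicated serial `DispatchQueue` as the "thread pool" and bridging back to async/await with a continuation (the queue label and function name are made up):

```swift
import Dispatch
import Foundation

// Hypothetical sketch: keeping blocking file-system I/O off the cooperative
// thread pool. On a single core there is effectively one concurrency worker
// thread, so a blocking read there would stall every other task.
let fileIOQueue = DispatchQueue(label: "file-io")  // assumed dedicated serial queue

func readFile(at path: String) async throws -> String {
    try await withCheckedThrowingContinuation { continuation in
        fileIOQueue.async {
            do {
                // String(contentsOfFile:) is a blocking Foundation call;
                // it only blocks this dedicated queue's thread, not the pool.
                let contents = try String(contentsOfFile: path, encoding: .utf8)
                continuation.resume(returning: contents)
            } catch {
                continuation.resume(throwing: error)
            }
        }
    }
}
```

The design point is simply that the blocking call runs on a thread the cooperative pool doesn't own, while callers still get an ordinary `try await`.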

Thanks for the hint.
Is it necessary to use a thread, or am I fine with a DispatchQueue or lock as well?
Back then I chose a DispatchQueue to serialise file I/O, but I'm considering moving towards actors for the new version.

Curious as to which Swift you are running on that hardware.

DispatchQueue is fine if you already have code working with it, but long-term it's probably better to rely purely on Swift Concurrency for better portability on all platforms.
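A minimal sketch of what that migration can look like, with an actor (a hypothetical `EventLog`, not from the original code) standing in for state that a serial DispatchQueue used to guard:

```swift
// Hypothetical sketch: an actor as the Swift Concurrency replacement for a
// serial DispatchQueue guarding shared state. The actor serialises access
// the same way the queue did, but callers write plain `await` calls.
actor EventLog {
    private var lines: [String] = []

    func append(_ line: String) {
        lines.append(line)  // at most one caller executes actor code at a time
    }

    func snapshot() -> [String] {
        lines
    }
}

let log = EventLog()
await log.append("boot")
await log.append("uart ready")
print(await log.snapshot().count)  // 2
```

Compared with a queue, the compiler now enforces that the protected state is only reachable through the actor, instead of that discipline being a convention.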


@rvsrvs I'm currently using Swift 5.1/5.1.5 on the Raspberry Pi Zero (first gen); it's the latest build I could find for this old 32-bit ARMv6/7 architecture.

OK, that's what I thought. ARMv7 is pretty up to date thanks to the work of @Finagolfin and @ColemanCDA, but I don't think anyone was ever able to get ARMv6 compiling after 5.1.1. I tried several times and failed. @uraimo at one point had a process that he could run for about 4 days on an actual Pi Zero to build the base distribution, then I could build that into an x-compiler, but that process stopped working at 5.2 and never seemed to get fixed.

Anyway, async/await concurrency is out of reach for that device, as it requires at least 5.5. On the upside, I did get the 5.7.3 x-compiler working just this past weekend, so if you can use a Pi Zero 2, you can run all the latest stuff.

I've moved to working with Pi 3As rather than Zeros, largely for this reason. I have several things that would be more effectively done on a Zero (just some UART control), but I just don't feel like setting up a C++ or Python development environment and re-educating myself on those languages.


good to know - thanks.

For now I was lucky and got everything done with Swift 5.1, and never had to deal with building Swift myself thanks to pre-built toolchains (@uraimo).
Why does building Swift for this architecture keep failing? I was able to compile several complex C++ and F90 libraries for this architecture, and could even use OpenMPI for computational clustering.

That's a pity, as it would boost my projects a lot. Unfortunately we missed buying RPi W 2s before they became scarce, but luckily my university had around 120 RPi W 1s left, so I need to stick to them.


ARMv6 is not really a supported triple. @uraimo had a set of patches that he maintained from Swift 3 through to 5.1, but at 5.2 a lot of breaking changes for 32-bit got committed and the Pi community couldn't keep up. @Kaiede at one point put through an extraordinary number of PRs, as I recall, to get ARMv7 working again, but there just wasn't enough interest in the Zero at the time to recover.


At least from my standpoint, it's also a case of time vs. cost. 64-bit became officially supported by the Pi folks a while back, making it worlds easier to use an aarch64 toolchain like the one @futurejones makes available. Not to mention much less lag time between a release and availability on Debian, along with far fewer patches needed. Yes, older hardware got left out, but I only had to swap out something like 2 boards that weren't already 64-bit compatible, so it was a no-brainer the moment 64-bit was better supported.

I totally sympathize with the lack of new hardware to upgrade to today, though. If you are stuck on 5.1, I would keep using libdispatch for things. It's generally been pretty good for me on Linux, other than a couple of cases of resource leaks (i.e. avoid recreating dispatch timers endlessly and reuse them instead; if you never reach some steady state there, your process will eventually decide it has run out of resources over long running periods, like mine does). I do prefer that my new code uses Swift Concurrency for maintainability reasons, but we do what we must with what we have at times.

As for this question, it's more an issue of resourcing to validate things didn't run into issues. Apple in particular was focused on 32-bit/64-bit Darwin, and 64-bit Linux. 32-bit Linux does some things differently than 32-bit Darwin, and it caused issues. I was even using a 32-bit x86 VM as a way to help chase down some issues faster, since there was some overlap between x86 and aarch32 in terms of things not working. With enough people and time, it would be possible to keep up development, but to do that there needs to be enough interest as well.
