Advice on non-blocking communication via UART/SPI with swift-nio/vapor

Hey, I'm looking for (plausibility) advice for the following idea:

A little "why": most commercial and open-source data loggers are neither non-blocking nor event-driven, and they do not offer sophisticated real-time data processing/management. Since these capabilities are key to our research, I started developing such a data-logger framework in Swift.

Here comes the hard part. I have to deal with different types of sensors (analogue, digital) that often have custom I/O requirements most computers can't meet (32 VDC, 24-bit ADCs, etc.), and some of them produce a large data throughput of several dozen Gb per day. Combined with real-time data processing, this is far too much for most microcontrollers. So I'm caught between two worlds and have to bridge real-time applications on microcontrollers (AVR, FPGA) with single-board computers (for data management, remote interaction and real-time data processing).

I decided to split the framework into an I/O part that runs on a microcontroller and another part that runs on a single-board computer. This allows for different I/O connections (physical plugins, "I/O terminals"), e.g. for optical and high-voltage sensors. However, I now need a hardware abstraction layer and a really reliable, non-blocking communication interface to manage all the I/O. UART is well supported but slow; SPI would be good, but if I remember correctly there is no automatic data buffering or event handler for it on most single-board computers such as the Raspberry Pi. Regardless of the physical protocol, I have to transfer metadata (plugin ID, supported sensor types, etc.), compiled code for the microcontroller, and the sensor data asynchronously in a non-blocking manner. That leads to a client-server kind of design with its own transmission control and so on, and I thought it might make sense to use swift-nio for that. I already use Vapor in the project and was wondering if I could add another source (UART) to the event loop so that I would not need to run a second one.
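To make the "transfer metadata, compiled code and sensor data over one link" idea concrete, here is a minimal framing sketch in Swift. The frame layout, type codes and function names are my own assumptions for illustration, not an existing protocol: each frame carries a type byte and a little-endian length prefix, and the decoder tolerates partial reads, which are normal on UART.

```swift
// Hypothetical wire format: [type][len lo][len hi][payload…]
enum FrameType: UInt8 {
    case metadata = 0x01, firmware = 0x02, sensorData = 0x03
}

/// Encodes one frame: a type byte, a 16-bit little-endian payload length,
/// then the payload itself.
func encodeFrame(type: FrameType, payload: [UInt8]) -> [UInt8] {
    precondition(payload.count <= Int(UInt16.max))
    let len = UInt16(payload.count)
    return [type.rawValue, UInt8(len & 0xFF), UInt8(len >> 8)] + payload
}

/// Consumes complete frames from a growing receive buffer and leaves any
/// incomplete tail in place (partial reads are the norm on a serial link).
/// Frames with an unknown type byte are skipped.
func drainFrames(from buffer: inout [UInt8]) -> [(FrameType, [UInt8])] {
    var frames: [(FrameType, [UInt8])] = []
    while buffer.count >= 3 {
        let length = Int(buffer[1]) | (Int(buffer[2]) << 8)
        guard buffer.count >= 3 + length else { break }  // wait for more bytes
        if let type = FrameType(rawValue: buffer[0]) {
            frames.append((type, Array(buffer[3..<3 + length])))
        }
        buffer.removeFirst(3 + length)
    }
    return frames
}
```

The same decoder works regardless of whether the bytes arrive via UART, SPI or a socket, which keeps the physical transport swappable.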

Having to split the framework across two physical systems is really painful and requires a lot of extra abstraction and security layers, but I guess that's the price to pay. I already have a less complex "get-it-to-work" prototype running, but before I get down to business, I'd like to know (a) whether it makes sense to use Swift for all of this and (b) whether swift-nio/vapor would be suitable.
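On the "really reliable" requirement: serial links between a microcontroller and an SBC usually add a per-frame integrity check on top of whatever framing is used. A minimal sketch of CRC-16/CCITT-FALSE (polynomial 0x1021, initial value 0xFFFF); the parameter choice is an assumption on my part, any CRC the AVR side can also compute would do:

```swift
/// CRC-16/CCITT-FALSE: a common per-frame integrity check on serial links.
/// The standard check value for the ASCII string "123456789" is 0x29B1.
func crc16CCITT(_ data: [UInt8]) -> UInt16 {
    var crc: UInt16 = 0xFFFF
    for byte in data {
        crc ^= UInt16(byte) << 8
        for _ in 0..<8 {
            // Shift out the top bit; XOR in the polynomial if it was set.
            crc = (crc & 0x8000) != 0 ? (crc << 1) ^ 0x1021 : crc << 1
        }
    }
    return crc
}
```

The sender would append the CRC of each frame's payload and the receiver would drop frames whose CRC does not match, optionally requesting retransmission.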

Many thanks in advance.


Pass your analogue data through an appropriately spec'ed sound card and read it with a CoreAudio client app?

Hi, thanks for the hint, but that won't work. Analogue sensors are a bit more complicated in our case (they often require additional electronic components and reference voltages), and I would still have to solve the problem for the other sensor types (most of them come as loose wire ends and are small production runs for special research purposes with custom data protocols). It's basically my job to build something like a sound card for these sensors.

I mean, after this step is done and all your various sensors are behind standard or custom-made "sound cards", you can read the data from those cards with a CoreAudio client and do whatever with it? How else could you work with real-time data? Everything else is not real-time AFAIK (e.g. reading data from a socket, etc.). It depends, of course, on what real-time requirements you have in mind (e.g. maximum tolerable latency); with audio, one to a few ms is achievable.


Hm, OK, I hadn't thought about framing the project as a "sound card", and I'm not that familiar with kext/driver development, so please forgive my naivety. If such a "sound card" were developed, how would I get the communication with the computer to work? Are there ready-to-use non-blocking frameworks, and how would I integrate them into the swift-nio/vapor event loop?

Perhaps I need to specify my requirements a little more precisely.
Let's assume you have two (analogue) optical sensors that are read out at 1 GHz and two digital sensors that need to be asked to provide data and have a response delay of 1 second. On the "sound card", a microcontroller would, on request, read the data from the optical sensors, buffer and average them, and send packets (= a data stream) every x microseconds, only stopping when a "stop" command is sent. This would be our "base load". Now, commands to read the digital sensors are sent every x seconds. To do this, the address, sensor-specific read command, ID, etc. (= metadata) must be sent to the "sound card" while the other data is being streamed. In this context, real-time means that all commands are executed directly one after the other and there is no waiting until the previous request has completed; everything runs simultaneously and non-blocking. "Real-time" is probably not the right word, but most logger systems read the sensors iteratively and thus break projects where multiple data streams have to be transferred and processed "simultaneously".
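The scenario above (a streaming "base load" interleaved with slow request/response commands) is essentially frame multiplexing with request correlation. A minimal sketch, with all names and the "sequence ID 0 means unsolicited stream data" convention being my own assumptions:

```swift
// Hypothetical multiplexing scheme: stream packets carry sequence ID 0;
// each command gets a nonzero sequence ID so a slow digital-sensor reply
// can arrive later, out of order, without blocking the stream.
struct Frame {
    let sequenceID: UInt16   // 0 = unsolicited stream data
    let payload: [UInt8]
}

final class CommandMultiplexer {
    private var pending: [UInt16: ([UInt8]) -> Void] = [:]
    private var nextID: UInt16 = 1
    /// Called for every unsolicited stream frame (the "base load").
    var onStreamData: (([UInt8]) -> Void)?

    /// Registers a command and returns the frame to transmit; the completion
    /// handler fires whenever the matching reply eventually arrives.
    func send(_ command: [UInt8],
              completion: @escaping ([UInt8]) -> Void) -> Frame {
        let id = nextID
        nextID = nextID == .max ? 1 : nextID + 1
        pending[id] = completion
        return Frame(sequenceID: id, payload: command)
    }

    /// Dispatches an incoming frame: stream data or a command reply.
    func receive(_ frame: Frame) {
        if frame.sequenceID == 0 {
            onStreamData?(frame.payload)
        } else if let completion = pending.removeValue(forKey: frame.sequenceID) {
            completion(frame.payload)
        }
    }
}
```

Because replies are matched by ID rather than by arrival order, a 1-second digital-sensor response never delays the microsecond-scale stream packets, which is exactly the non-blocking behaviour described.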

In my understanding, that's pretty much what swift-nio/vapor is capable of and what it was designed for, with the exception of the point-to-point UART/SPI communication part. I'm currently just not sure how to integrate that into the swift-nio/vapor event loop.
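On the integration question: on Linux a UART shows up as a tty file descriptor (e.g. /dev/ttyAMA0 on a Raspberry Pi), so one way to get event-driven, non-blocking reads without polling is to wrap that descriptor in a `DispatchSourceRead` and hand the bytes over to the existing event loop. A minimal sketch; the `SerialReader` name and the 4096-byte buffer are my own choices:

```swift
import Dispatch
#if canImport(Glibc)
import Glibc
#else
import Darwin
#endif

/// Event-driven reader for a serial file descriptor (hypothetical helper).
/// Open the tty with O_RDWR | O_NOCTTY | O_NONBLOCK and pass the
/// descriptor here; the handler fires only when data is readable.
final class SerialReader {
    private let source: DispatchSourceRead

    init(fd: Int32,
         queue: DispatchQueue = DispatchQueue(label: "serial-reader"),
         onData: @escaping ([UInt8]) -> Void) {
        source = DispatchSource.makeReadSource(fileDescriptor: fd, queue: queue)
        source.setEventHandler {
            var buf = [UInt8](repeating: 0, count: 4096)
            let n = read(fd, &buf, buf.count)  // fd is readable, so no blocking
            if n > 0 { onData(Array(buf[..<n])) }
        }
        source.resume()
    }

    deinit { source.cancel() }
}
```

From `onData` you could hop onto your Vapor application's loop with `eventLoop.execute { … }`. SwiftNIO's `NIOPipeBootstrap` (in NIOPosix) can also wrap a pair of file descriptors directly in a channel pipeline, which might be the cleaner route if you want the serial bytes to flow through the same `ChannelHandler` chain as your TCP traffic; worth investigating before committing to either approach.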

I see. Ignore what I said above – I had a different use case in mind (e.g. sampling data at a much slower 10-100 kHz rate, a continuous stream of data coming from the sensor without request / response, latencies on the one/few millisecond scale).

OK, there are sensors which are fine with way slower sampling rates and latencies on the one/few-millisecond scale. However, the bi-directional request/response communication remains necessary.

FWIW, with Core Audio, requests/responses could be implemented via setPropertyData/getPropertyData.

Could you make your sensors standard TCP/IP/UDP/HTTP endpoints (client or server as appropriate for your use case)? Then you'd be able to talk to them via TCP/IP/UDP/HTTP (Network framework / sockets / URLSession / swift-nio).