Hi everyone, I’m curious whether anyone has used Swift for handling live sensor streams or camera feeds (e.g., from a small microcontroller or IoT device) as part of a larger data‑processing or ML pipeline. I saw a cool project that uses an ESP32‑CAM + OpenCV for real‑time face detection (https://www.theengineeringprojects.com/2025/03/esp32-cam-based-real-time-face-detection-and-counting-system.html), and thought maybe Swift could be used to build the client‑side or data‑processing part of such a setup.
I’ve also noticed a few Arduino and Raspberry Pi community threads where people stream sensor data via MQTT or HTTP, then post‑process with Python or Node.js. Has anyone here tried doing similar IoT → computer‑side pipelines using Swift (for macOS, iOS, or server)? I’d love to hear about pros/cons: dependencies, performance, ease of integration, or limitations you ran into.
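For the computer‑side half of that pipeline, here's a minimal sketch of polling a device's HTTP endpoint with Foundation's `URLSession` and decoding a JSON payload. The device address and the JSON field names are assumptions for illustration, not from any real device:

```swift
import Foundation

// Hypothetical sensor reading; the field names are assumptions about
// what the device's JSON endpoint might return.
struct SensorReading: Decodable {
    let temperature: Double
    let humidity: Double
}

// Assumed device address for illustration only.
let deviceURL = URL(string: "http://192.168.1.50/sensor")!

let task = URLSession.shared.dataTask(with: deviceURL) { data, response, error in
    guard let data = data, error == nil else {
        print("request failed: \(error?.localizedDescription ?? "unknown")")
        return
    }
    if let reading = try? JSONDecoder().decode(SensorReading.self, from: data) {
        print("temp: \(reading.temperature) °C, humidity: \(reading.humidity) %")
    }
}
task.resume()

// Keep the command-line process alive long enough for the async callback.
RunLoop.main.run()
```

The same `URLSession` code runs on macOS, iOS, and (via swift-corelibs-foundation) Linux, which is part of the appeal for a cross‑platform pipeline.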
That's a personal end goal for me, too, so keep us posted on whatever progress you make. I'd also love to hear what anyone else has tried.
I have HTTP requests going, but not a stream/socket, and not HTTPS yet either. (That's looking like a January task)
Depending on how strict you're going to be about "pure Swift," that could be an easier or harder task. You could write the thinnest of skins over the SDKs and be done pretty quickly. I'm trying not to use any espressif-specific calls in the HTTP stack, so it's going to take me a minute. If I were willing to use the existing wrappers in the supplied SDK, I'd be done.
There is an MQTT server example if you want to receive it in Swift...
The server side would actually be a lot easier to knock out.
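As a rough sketch of how little the receiving side can be, here's a raw TCP listener using Apple's Network.framework (macOS/iOS only). This is just the socket layer, not MQTT; the port and the lack of message framing are assumptions, and a real setup would layer MQTT (e.g., via a community package like MQTTNIO) or length-prefixed frames on top:

```swift
import Network

// Minimal TCP listener sketch; port 9000 is an arbitrary assumption.
let listener = try NWListener(using: .tcp, on: 9000)

listener.newConnectionHandler = { connection in
    connection.start(queue: .main)
    // Read chunks in a loop until the peer closes or an error occurs.
    func receiveNext() {
        connection.receive(minimumIncompleteLength: 1, maximumLength: 4096) { data, _, isComplete, error in
            if let data = data, !data.isEmpty {
                // Hand the raw sensor bytes to the rest of the pipeline here.
                print("got \(data.count) bytes")
            }
            if isComplete || error != nil {
                connection.cancel()
            } else {
                receiveNext()
            }
        }
    }
    receiveNext()
}

listener.start(queue: .main)
RunLoop.main.run()  // keep the process alive to service connections
```

On Linux (e.g., a Pi or a server) you'd reach for SwiftNIO instead, since Network.framework is Apple-platform only.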
If you want easy... this is not the path yet. There simply aren't a lot of existing projects to crib from yet, but I at least am having fun!
ETA: (Folks who are working in an embedded Linux environment like the Pi, rather than on the RP2040, will have an easier time, FWIW)