Embedded Swift

C and C++ don't let you do that either. That's what assembly is for.

People try to use C and C++ for it pretty often, but they're wrong.


Beautiful!! What a truly amazing product!


Yeah, I think if your requirements are precise down to the level that you essentially need an exact, fixed sequence of instructions, you aren't really programming in a high-level language anymore.


Is there a document describing the approach of how Embedded Swift will interact with hardware? Specifically, which of the two main approaches "arduino core" or "micropython supervisor"? (as I call them)

The "arduino core" approach has a unified API across chip types. Each chip's core maps that API to the manufacturer's SDK and compiler. And it still allows dropping down to chip-specific stuff for custom handling interrupts, DMA, etc.

The "micropython supervisor" approach is more like an OS w/ drivers: still a unified API across chip types but they talk to Micropython-internal functions with no way for the user to drop lower and access chip-specific features if there are holes in the API.

As a near-real-time example, let's take playing audio via an I2S DAC. This is usually done by setting up the chip's I2S peripheral, setting up DMA to that peripheral, and then periodically receiving DMA interrupts to fill the buffer. In Arduino, one can either use a built-in "I2S" class (if it exists for your core) or just set it up yourself using C. In Micropython, you use the machine.I2S class, but if your chip's port of Micropython doesn't have it, you are out of luck.
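The double-buffered fill pattern described above can be sketched in plain Swift. This is a hypothetical, chip-agnostic illustration: no real I2S or DMA API is used, and `onDMAHalfComplete` is a stand-in for whatever interrupt callback a given core or port would actually provide.

```swift
import Foundation

// Hypothetical sketch of "fill the next half of a double buffer when the
// DMA half/complete interrupt fires". Names here are illustrative only.
let samplesPerHalf = 256
var audioBuffer = [Int16](repeating: 0, count: samplesPerHalf * 2)
var phase = 0.0
let phaseStep = 2.0 * Double.pi * 440.0 / 44_100.0  // 440 Hz tone at 44.1 kHz

// Fill one half of the double buffer with sine samples, as an ISR would,
// while the DMA engine streams the other half to the I2S peripheral.
func onDMAHalfComplete(fillUpperHalf: Bool) {
    let start = fillUpperHalf ? samplesPerHalf : 0
    for i in 0..<samplesPerHalf {
        audioBuffer[start + i] = Int16(sin(phase) * 32_000.0)
        phase += phaseStep
    }
}

// Simulate two interrupts: lower half first, then upper half.
onDMAHalfComplete(fillUpperHalf: false)
onDMAHalfComplete(fillUpperHalf: true)
print(audioBuffer.prefix(4))
```

On real hardware the callback would be registered with the chip's DMA controller rather than called directly, but the buffer-juggling logic looks the same either way.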

Which one is more like what Embedded Swift will be targeting? I see benefits to both approaches.

I don't know enough about the internals of the Swift build process or how "compiled" compiled Swift actually is. Since we can use C/C++ in Swift, does that mean we'll be able to access chip registers and put Swift in interrupt routines?


Is there a document describing the approach of how Embedded Swift will interact with hardware? Specifically, which of the two main approaches "arduino core" or "micropython supervisor"? (as I call them)

Embedded Swift as a compilation mode is not going to have an opinion on this -- it should enable anyone who wants to build actual embedded software to do that with either approach (or even others), using Swift.


Good to hear that the "arduino core" approach is even possible, thanks!

This is actually something that I've been working on with Swift 4 Arduino. (The name is a bit of a misnomer, as we don't really use any Arduino.)

The Arduino Core actually uses a bunch of lookups for much of what you're talking about, which has some runtime performance implications. We're working on ways to mitigate this. The project is semi-open-source, as it's been a messy work in progress whenever I get a little time. I was hoping to "launch" a 0.1 HAL in the next month or so. I would love to get feedback if you're willing. Feel free to DM.


The Swift compiler can't guarantee the timing of a piece of code in the general case, aside from some simple specific cases -- that would be equivalent to solving the unsolvable halting problem. You could make a language that postulates a maximum cycle count for a given piece of code via some markers and then enforces that limit at runtime by throwing an error once the limit is reached. The calling context would have to do something with the error: e.g. if this is an audio procedure that must provide audio data, some placeholder data would have to be provided, or a "can't provide data at this time" flag returned back to the OS caller. Somewhat relevant post.
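The runtime-enforcement idea above can be sketched in plain Swift: a render callback with a time budget that hands back placeholder data (silence) and a "missed" indication instead of overrunning. The budget check, names, and `Date`-based timing here are all illustrative, not a real API; a real system would use a cycle counter or hardware timer.

```swift
import Foundation

// Result of one render call: either real data, or placeholder data plus
// an implicit "can't provide data at this time" signal to the caller.
enum RenderResult {
    case rendered([Float])
    case deadlineMissed([Float])  // remainder of the buffer is silence
}

// Hypothetical deadline-guarded audio render loop.
func renderAudio(frames: Int, budget: TimeInterval,
                 work: (Int) -> Float) -> RenderResult {
    let deadline = Date().addingTimeInterval(budget)
    var out = [Float](repeating: 0, count: frames)
    for i in 0..<frames {
        if Date() > deadline {
            // Out of budget: return what we have, rest stays silent.
            return .deadlineMissed(out)
        }
        out[i] = work(i)
    }
    return .rendered(out)
}

// A cheap workload comfortably within a generous budget:
let result = renderAudio(frames: 64, budget: 1.0) { i in Float(i) * 0.01 }
if case .rendered(let buf) = result { print(buf.count) }
```

The caller then decides what "deadline missed" means in context, e.g. output the silence and raise a flag for the OS audio stack.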


It's great news that Apple's core engineers seem to be aware of this internal project and are bought into it. It should make things possible in the embedded Swift space that weren't possible before, which is very exciting for us as a team at Swift for Arduino.

Some background on us...

As some of you who follow us will know, "Swift for Arduino" has been working in this space for years, building working versions of Swift on AVR atmega platforms (and recently on attiny), mostly the atmega328p, which has 2 KB of RAM and 32 KB of flash program memory, and is able to operate in the microwatt range, making it commercially useful as a microcontroller.

I think sometimes people read our posts or see our videos and think "it must be a trick" or "they aren't really doing this and these are hobby projects or fake demos". That was arguably somewhat true 5-6 years ago, when the project started and we were just getting going. We have come a long way since then.

We have commercial hardware products made by third parties, built on our platform PURELY with atmega328p microcontrollers: no ARM chips, no STM chips, no "secret processor in the corner doing the real work" or other smoke and mirrors. Real commercial products built with just commercial microcontrollers using just Swift programs, built on our tools and technologies.

End products that are being sold to the general public. Currently only a small number, because we have focused on development of the tools.

But the traditional Swift, this is not...

It has been a major challenge making Swift work, and you can see that even the Apple engineers themselves have struggled to retain all the wonderful features of the full language. We have gone through many of the same challenges and made the same decisions, although many of those decisions were years ago for us.

In the linked evolution proposal the diagram Kuba added is perfect. It shows how many features that people expect of Swift just cannot fit in realistic embedded environments. I would note that for commercial microcontroller products at scale, the attiny type specs are more common than STM type specs.

Some features, such as resilience and reflection, can be fairly straightforwardly suppressed, and LTO builds are easily enforced. On our platform we had to do all of these things. Some things are harder to control. The implementation details of Swift generics don't feel very explicitly documented, but from what I can see, even something as basic as a generic function will get a version emitted in the object file that takes type metadata for each generic type parameter.

Type metadata is very bulky and this sort of programming is inappropriate for most embedded programs, so we warn people using our platforms to be very careful about exposing generic functions across module boundaries. Most of our customers compile their code into a single-module program, and generics are specialised by the optimiser, eliminating these sorts of "dynamic variants" of generic functions. But as the product scales, it is going to be hard to prevent this sort of thing. We really want to be able to turn off the ability to compile non-specialised generics in this way, at least on some of our smaller platforms, because it produces programs that simply don't fit. Apple working on the technology must surely help.


Very exciting! I'm using Swift for the control plane / UI of an embedded audio product; Swift's heavy resource requirements meant it was unsuitable for the smaller MCUs I'm using. Looks like I can revisit this.

(OT: has anyone looked at porting to XMOS? I believe they have an LLVM backend.)


I've been watching from afar, and it's been very cool to see what y'all have done with Swift for Arduino. Part of the goal of this vision is to help formalize the language subset of "Embedded Swift" so programmers know what to expect and can reason about it, write code against it in a way that works for various embedded platforms, and to make sure the Swift compiler and toolchain better supports embedded environments so (e.g.) it's easier to bring up a new one.

This is one of the important pieces of the prototype described here. The compiler is put into a mode where all generics are specialized and type metadata is never emitted. If there are multiple modules, the implementations of all of the generics are made available (as-if one has used @inlinable on all of them) to enable this specialization, and if something prevents specialization (say, uses of infinitely-recursive types), the compiler will produce an error. I hope that this will provide a model that's easier to reason about.
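The specialization model described above can be illustrated with ordinary Swift (this compiles in regular Swift too; under the Embedded Swift mode described, anything that *couldn't* be specialized would instead be a compile-time error). With full specialization, each concrete use of the generic becomes its own monomorphic function and no type metadata is needed at runtime. The function name here is made up for illustration.

```swift
// @inlinable exposes the body across module boundaries, which is what
// lets the compiler specialize generic calls from other modules.
@inlinable
func largest<T: Comparable>(_ values: [T]) -> T? {
    var best: T? = nil
    for v in values where best == nil || v > best! { best = v }
    return best
}

// Concrete uses the compiler can specialize, eliminating the generic
// "takes type metadata" variant entirely:
let maxInt = largest([3, 9, 4])        // specialized as largest<Int>
let maxStr = largest(["a", "c", "b"])  // specialized as largest<String>
print(maxInt ?? 0, maxStr ?? "")
```

In the described mode, the `largest<Int>` and `largest<String>` bodies are what end up in the binary; the metadata-driven shared implementation never exists.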



The RP2040 and ARM Cortex-M0 class parts are BIG chips in the embedded space!
I have a commercially used application running on an ATmega328P, reading 6 current sensors and handling 6 valves, the vacuum, and various delays for a woodworking shop. All the code fits in 19.4 kB of flash, using 840 bytes of RAM, which I could probably reduce if necessary. This was built with the Swift 4 Arduino product.

Sure, it's easier to use an ARM chip and python, but that's just a whole different category of CPUs.


Thanks! I am aware :slightly_smiling_face: I just didn't want to be too greedy out of the gate. I can fall back to C/C++ as needed for now. (And yes, the occasional chunk of assembly as necessary.) I don't personally need other options for my purposes on smaller chips. I'm going to enjoy getting a feel for Embedded Swift's strengths and weaknesses as it comes along! (Although support for smaller chips would certainly be nice, perhaps as an even deeper cut of compiler flags?)


I've been looking for something like this for a while. Great work! Would it work for 8- and 16-bit micros like PICs?


From up-thread by the authors, this is only laying the compiler groundwork. This proposal makes no claims about the API used to talk to microcontrollers, nor about the wrappers around chip- and board-specific capabilities (what Arduino calls "cores" and what Micropython calls "ports").

There are existing projects you can try right now for using Swift on microcontrollers: the (now) confusingly named swift-embedded and the ill-named Swift for Arduino. Keep in mind that both of these use Swift from 3-4 years ago, in case you want newer language features (at least as far as I can tell for S4A, since it's not open source).

To a larger point: creating a good chip- and board-agnostic API is hard. Both projects seem to create chip-specific APIs ("import AVR", "import STM32F4") that expose a set of useful classes or functions that look a bit like Arduino's API or a bit like Micropython's. These are good attempts, but I think they miss what makes Arduino/Micropython/CircuitPython's APIs so incredible: you can use the same code across multiple chips with no changes. As written, this proposal makes no claims on these yet-to-be-created APIs, leaving them to us. I'm excited to see what we come up with!


what makes Arduino/Micropython/CircuitPython's APIs so incredible: you can use the same code across multiple chips with no changes

Even as Arduino became famous in its first few years, it only targeted the AVR architecture. After they gained enough resources and a large enough community, people began trying to port it to other architectures. So what you find incredible is actually a result of their becoming popular, not the original reason why they became popular. MicroPython followed a similar path, initially targeting STM32.

Abstracting all hardware details into unified APIs is a complex and extensive project, especially across different architectures, which is precisely what Zephyr aims to accomplish. That's why we created SwiftIO based on Zephyr: once Zephyr supports a hardware architecture, SwiftIO will also be able to provide support for it.


Yes! SwiftIO's approach looks really good! And it follows Arduino's approach of being mostly platform agnostic.
(And a small correction, just because I lived through it: Arduino evolved from the Wiring project, which itself was an attempt to bring the concepts of Processing to microcontrollers. Both Wiring and Arduino tried to be as platform-independent as Processing was, a hard task in 2005 for microcontrollers, when BASIC Stamps ruled the education space. If it hadn't been for the recent release of avr-gcc, Wiring, and thus Arduino, would probably have been released for PICs, and the API would've been the same.)


So it's taken me a few days to think about how to respond to this. I do in fact use assembly as needed. I wouldn't expect Swift to replace assembly. My question is: can it replace a C++ layer? A C layer? That is what I thought I was asking about.

There is real-time computing™ and then there is working with things that are inherently time-based. There is, frankly, physics involved. As everyone here knows, if you want those timing-based hardware things to look event-based to the rest of your code, you have to write the handlers yourself to fake it. Which I can do. Concurrency certainly makes that a lot easier. Thank you for that.

But, using this compiler flag, what percent of the package will even be in Swift at all? How close to the hardware itself should I expect pure Swift code to be operating?

  • Can one inline assembly (or will one be able to in the future) like one can with C? That was at one point referred to as niche… well, this is that niche.

  • Okay, maybe not RealTime™, but how tight a timing? Can we depend on millis? micros? nanos? to be at least somewhat in the ballpark? Not MORE than C or C++, but on par.

  • Is it a good guideline that if you can get it done in C++, go ahead and give it a shot in Swift, or is that more no than yes?

  • Should I engage with Swift like it's a bona fide potential C/C++ replacement for working in the embedded space, or will that just break my heart :wink:?

I don’t want to know what others have done to get things to work; I’d like to know the actual language team’s opinion of how low Swift should be expected to go.


I guess I’m just wondering what you think is different about Swift here. What is it about C and C++ that makes them suitable for meeting these physics-driven timing requirements that would not be true of Swift? My understanding has been that people doing real-time programming in C and C++ write fairly normal code, expect it to be translated in a predictable way, and then measure and test that it meets their requirements; that should all work in Swift as well, especially in this embedded dialect, and if there’s a specific obstacle to that then it needs to be discussed.


The response to my question wasn't "If you can do it in C or C++, have at it in Swift"; it was "use assembly." So perhaps there is just a miscommunication? Will we be able to inline assembly?