[Accepted] A vision for Embedded Swift

Hello Swift community,

I'm pleased to announce that the Language Steering Group has accepted a vision document for Embedded Swift:

This document presents a vision for "Embedded Swift", a new compilation model for Swift that can produce extremely small binaries without external dependencies, suitable for restricted environments, including embedded (microcontroller) and bare-metal setups (no operating system at all) and low-level environments (firmware, kernels, device drivers, low-level components of userspace OS runtimes).

Sentiment in the discussion of the prospective vision document was strongly positive, and the Language Steering Group agrees that this is an important direction for the Swift language.

As with all vision documents, the Language Steering Group's acceptance of this document is a strong endorsement of the goals laid out in the vision, a general endorsement of the basic approach, but only a weak endorsement of any concrete proposals. All proposals in the vision will have to undergo ordinary evolution review, which may result in rejection or major revision.

Please feel free to discuss this vision in this thread.

Doug Gregor
Language Steering Group


A good day for Swift and a great day for embedded systems!

I’m looking forward to coding embedded systems using Swift with my daughter in a couple of years’ time, when she is perhaps 7 years old.



I hope this becomes another firm step for Swift toward its goal of global domination!


Fascinating and exciting!

A question that probably goes nowhere, and is also out of scope because it really belongs in the “options to be further explored” part of the diagram in the document above, but… throwing it out there, mostly out of curiosity:

Alex Bradbury remarked on Mastodon that this proposal could make Swift far more viable for targeting Wasm, because of the greatly reduced download size. However, losing existentials (and some of those other type-metadata-related costs) seems a bit steep for that situation, where memory is plentiful, the processor is reasonably fast, high-level abstraction is desirable, JIT is available, and it is only the size of the compiled code at rest that’s a problem.

Is it possible that there’s some intermediate option that preserves existentials but sacrifices a bit of performance to reduce compiled code size? For example, could good old message-based dynamic dispatch (a la JS, ObjC) save on code size? It seems possible, since (I think??) witness tables involve a sort of combinatorial cost of method-protocol pairs, whereas message passing involves only a per-method cost. Again, clearly off the edge of this document’s scope, but I’m curious!


I reckon you were assuming Wasm is always used in the browser? That's not the case, though. In general, in a given Wasm runtime, memory may not be plentiful (e.g. in application plugins implemented with Wasm), JIT may not always be available, pure interpretation may even be preferred for better security, and the size of the compiled code is not the only problem when every executed instruction counts.

In the browser, and in server-side environments that provide access to JavaScript, interop with JS changes the whole calculus. If you're working with Web APIs exposed from JS, you can get existential-like dynamic dispatch just by allocating a JS object through the bridge (with a JavaScriptKit-like API). I could foresee JavaScriptKit itself being reimplemented with a non-allocating subset of Embedded Swift for a smaller footprint and better performance.

OTOH, if/when browsers start implementing Component Model support for Web APIs, it may look wildly different from what we're used to seeing in JS, due to the fact that WebAssembly Interface Types don't yet support existentials.

In summary: all of these aspects of Embedded Swift are valuable for Wasm support. It's not out of scope, since Wasm is, for most intents and purposes, an embedded platform.


As far as this goes, the code size difference is a wash. A witness table contains the methods in the protocol for each conforming type; those methods would otherwise be in the name-based dispatch table for the type. The call site can be smaller for witness tables too, if the protocol doesn’t need to be ABI-stable. So it’s linear space either way, with reasonably similar constant factors, unless for some reason you have one method that satisfies requirements from many different protocols.
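To make the linear-space point concrete, here's a minimal sketch (the `Drawable` protocol and its conforming types are hypothetical examples, not from the vision document). Each conformance gets one witness table with exactly one entry per protocol requirement, so the total space is linear in conforming types × requirements, much like a per-type, name-based dispatch table would be:

```swift
// `Drawable` has two requirements, so the compiler emits a witness
// table with two entries for each conforming type below.
protocol Drawable {
    func area() -> Double
    func name() -> String
}

struct Circle: Drawable {
    var radius: Double
    func area() -> Double { 3.14159 * radius * radius }
    func name() -> String { "circle" }
}

struct Square: Drawable {
    var side: Double
    func area() -> Double { side * side }
    func name() -> String { "square" }
}

// A call through the existential dispatches via the witness table:
// one indirect load plus an indirect call per requirement used.
func describe(_ shape: any Drawable) -> String {
    "\(shape.name()): \(shape.area())"
}

print(describe(Circle(radius: 1)))
print(describe(Square(side: 2)))
```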


With that said, we can certainly explore designs that are more pay-as-you-go for some of these features. After all, protocol dispatch is basically just calling through a v-table, and it's not like embedded code in C never uses v-tables. We can probably find ways to support that while subsetting out enough of the dynamism about type identity and value representations (e.g. only providing class / metatype existentials) to still let us drop most (maybe all) of the metadata runtime.
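As a rough illustration of that point, embedded C code routinely builds v-tables by hand as structs of function pointers, and a witness table is essentially the compiler-generated equivalent. Here's a hedged sketch in Swift (the `DriverVTable` and `FakeUART` names are made up for illustration; real Embedded Swift code would avoid the closure allocations this sketch incurs):

```swift
// A hand-rolled v-table: a struct of function values, the moral
// equivalent of a C struct of function pointers.
struct DriverVTable {
    let read: () -> UInt8
    let write: (UInt8) -> Void
}

// A hypothetical "driver" whose state is captured by the closures.
final class FakeUART {
    private var last: UInt8 = 0
    var vtable: DriverVTable {
        DriverVTable(
            read: { self.last },
            write: { self.last = $0 }
        )
    }
}

let uart = FakeUART()
let driver = uart.vtable  // every call is an indirect call through the table
driver.write(42)
print(driver.read())      // 42
```

Protocol dispatch in Swift costs roughly the same at the call site; the extra expense the vision is trying to drop is the type metadata and value-representation machinery around it, not the indirect call itself.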


In addition to what everyone else has said, I think there is still room for a "missing middle" between Swift as it exists on desktop and the vision laid out for Embedded Swift: platforms with a full-featured deployment environment where the cost of the Swift standard library and runtime in its entirety is nonetheless undesirable. Some of the work we'll do for Embedded Swift, such as breaking out optional functionality in the standard library, could help those platforms, as would continuing to improve our ability to eliminate unnecessary dead code and metadata in "hermetically sealed" environments, where there is no need to dynamically link in new Swift code outside of the initial deployment.