PoC: Improving macro build times with WebAssembly

The Lede

I decided to take a crack at the whole "Building Macros with WebAssembly" idea and managed to put together something that improves build times by up to 10x even without any compiler integration!

There are currently some caveats on the usability front, since this is a standalone package rather than being integrated into swiftc/SwiftPM, but I think integrating something like this into the compiler would make macros much more usable.


The repo offers two WebAssembly runners, one with JIT and one without. The table below shows build-time performance with each of these, as well as a baseline that uses SwiftSyntax directly. All times are in seconds.

| Kind | WASM | WASM+JIT | SwiftSyntax |
| --- | --- | --- | --- |
| Clean (debug) | 33.8 | 19.2 | 29.0 |
| Clean (release) | 32.0 | 18.4 | 183.2 |
| Incremental (debug) | 9.8 | 1.3 | 0.6 |
| Incremental (release) | 1.1 | 1.5 | 0.8 |

Details on methodology (plus a lot more) in the README. Interested to hear what everyone thinks!


This is absolutely fantastic!

Here's hoping it's a salve to: Compilation extremely slow since macros adoption - #68 by vatsal



Slow SwiftSyntax and macro build times are the limiting factor for many teams. Thanks for taking the effort to improve this.


Fantastic work!

Side note: If we can integrate WasmKit into SwiftPM, we can skip building WasmKit itself for the "Clean (debug|release)" rows, and it would be as fast as the WebKit-based "WASM+JIT" column. In other words, the major difference between the "Clean (debug|release)" rows of "WASM" and "WASM+JIT" is not JIT versus no JIT, but whether the engine is pre-compiled.


ah, good point! In fact I bet the overhead would be even less than the current WebKit measurement, all things considered, since we would be able to entirely remove the "host" module and move execution into SwiftPM.

I'm not 100% sure whether this would be doable at the SwiftPM level though, given that the plugin evaluation infra lives within swiftc. One way around this might be to use the load-plugin-library infra and allow SwiftPM itself to serve as a "plugin" that evaluates the wasm binaries in its own address space.


Yes, this would also allow us to virtualize package manifests and plugins with Wasm, in addition to enabling swift run for WASI products.


Looks very impressive. I wanted to clarify: does this project improve the performance of macros when they're invoked by the compiler to generate code? Or only when they, and supporting libraries like SwiftSyntax, are built?

In other words, does this help address the concerns raised in this thread?


I can't say for sure without benchmarking; I'm pretty surprised that the overhead of merely invoking the binary is that high. Though if the bottleneck in the aforementioned thread is that macro binaries are built in debug by default, the two-stage architecture proposed here (where the wasm binary is pre-compiled and vended) could definitely help.

One piece of evidence for this hypothesis is the Incremental (debug/release) entries in the "WASM" column of the performance table. I've elaborated on this in the README, but note that release builds compile faster than debug builds: this is because the release config builds WasmKit itself in release mode (as an aside, per Yuta's comment above, this can be mitigated by baking WasmKit into SwiftPM). Crucially, if WASM macros are pre-built with optimization and run on an optimized build of WasmKit, performance could improve significantly. The same could be achieved by building traditional macros in release mode, but that would 1) require additional work on the SwiftPM side (which, to be fair, @Max_Desiatov points out is now feasible thanks to changes to the build graph as of Swift 6.0) and 2) require building SwiftSyntax in release mode for those who can't use it in binary form.


Though if the bottleneck in the aforementioned thread is that macro binaries are built in debug by default

That's a bottleneck, but a surmountable one; and even in release builds the performance isn't good enough for our purposes.

Just to be clear, the issue raised in the thread I linked has nothing to do with compiling SwiftSyntax itself, or the macros themselves; the issue is that even after that's solved, macros still create overhead when the compiler invokes them, which grows with usage.

And while there's always gonna be some overhead, the current amount of overhead may make it challenging to use them in large codebases.

Fwiw, the "release builds actually compile faster" behavior holds for "vanilla" macros, if you have a prebuilt SwiftSyntax binary, and even for the Swift Compiler itself.

If you haven't already, I would encourage you to see what the impact is on compilation performance on a codebase that has a lot of macro invocations, even if it's as simple as 2000 expression macros being invoked in one function.

All that said, it's great to see progress on this, and based on the other thread linked here it seems like it's solving a real problem.


So I added some microbenchmarks to Wacro in order to understand this better. In release mode, the marginal overhead of macro expansion on my machine (M3 Max) is around 25ms with WasmKit and 1ms with WebKit (specifically, WebKit appears to start closer to 1.4ms and improve toward 1.0ms over time as it moves to a higher-tier JIT). Cold-start performance is relatively comparable, at ~300ms in both cases.

Testing real-world swiftc runs, a file with 1000 print(#stringify(1+1).1) lines adds 30s to the build time with WasmKit (release). Meanwhile the same file adds 3.3s of build time with WebKit.
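For readers unfamiliar with it, #stringify is the stock example macro from the swift-syntax macro template: it expands an expression into a tuple of the expression's value and its source text, so .1 selects the string. Roughly, after expansion each of those lines behaves like this (a hand-written equivalent, not actual macro output):

```swift
// #stringify(1+1) expands to a tuple of the evaluated expression and its
// source text, i.e. (1 + 1, "1+1"), so .1 is the string "1+1".
let expanded = (1 + 1, "1+1")
print(expanded.1) // prints "1+1"
```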

I also did some benchmarking of the MRE in the post you linked and it looks like the major overhead is that each frontend invocation is spawning a new instance of the plugin executable. This just seems like an unrealized optimization opportunity to me: one can envision a world in which swift-frontend accepts pipes instead of a plugin path, allowing SwiftPM to spawn the plugin once and multiplex messages to and from the compiler (cc @Max_Desiatov what do you think of this idea?) This is mostly orthogonal to what WebAssembly Macros aim to achieve, though 1) it would probably make wasm macro integration easier, and 2) the fact that WebAssembly is deterministic could mitigate any risks with reusing the same instance of a plugin executable.
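For what it's worth, the transport for such a pipe-based plugin could keep the general shape of the existing plugin protocol, which exchanges length-prefixed JSON messages. Here's a minimal Swift sketch, assuming an 8-byte little-endian length prefix for illustration; the helper names are made up, not real API:

```swift
import Foundation

// Sketch of length-prefixed message framing for a long-lived plugin pipe.
// Assumes an 8-byte little-endian length header followed by a JSON payload;
// a robust implementation would also loop to handle short reads.
func writeMessage(_ payload: Data, to handle: FileHandle) {
    let length = UInt64(payload.count).littleEndian
    handle.write(withUnsafeBytes(of: length) { Data($0) })
    handle.write(payload)
}

func readMessage(from handle: FileHandle) -> Data? {
    guard let header = try? handle.read(upToCount: 8), header.count == 8 else {
        return nil
    }
    var length: UInt64 = 0
    for (i, byte) in header.enumerated() {
        length |= UInt64(byte) << (8 * UInt64(i))
    }
    return try? handle.read(upToCount: Int(length))
}

// Round-trip demo over an in-process pipe.
let pipe = Pipe()
writeMessage(Data(#"{"method":"expandMacro"}"#.utf8), to: pipe.fileHandleForWriting)
if let reply = readMessage(from: pipe.fileHandleForReading) {
    print(String(decoding: reply, as: UTF8.self))
}
```

With something like this, SwiftPM could spawn the plugin once, keep both pipe ends open, and multiplex expansion requests from multiple frontend jobs onto the same connection, tagging each message so replies can be routed back to the right job.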


one can envision a world in which swift-frontend accepts pipes instead of a plugin path, allowing SwiftPM to spawn the plugin once and multiplex messages to and from the compiler

Yeah, it’s not clear why this wasn’t done from the start. Perhaps we can get someone from the core team to chime in on whether they’d accept this as a contribution.

I think there are two potential issues with the idea, though:

  1. A newly long-lived process can store information about prior invocations in static vars, which could tempt macro authors to keep state and attempt more global analysis than is currently possible.
  2. It’s unclear what the exact perf implications would be, but it could just be replacing one problem (the overhead of starting a process) with another (lots of macro invocations contending for access to the process). Idk enough about IPC to know if this is a real problem or not.

IMO both of these issues are lesser evils than spawning the macro over and over again. In fact one approach to fix both issues could be to spawn as many processes as min(# of jobs using macro, # of cores). This ensures that people don't (ab)use macros to store global state and also reduces contention. Though given that macros take ~1ms to evaluate with JIT I feel like contention won't be a big deal anyway, and I think there's already enough nondeterminism in the macro lifecycle to ensure people don't assume nonexistent API contracts.
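The pool-sizing heuristic above is straightforward to sketch. Purely illustrative; jobsUsingMacro is a hypothetical count that would come from the build planner:

```swift
import Foundation

// Cap the number of long-lived plugin processes at min(jobs using the
// macro, available cores), with a floor of one process. This bounds both
// statefulness (no single shared process) and contention.
func pluginPoolSize(jobsUsingMacro: Int) -> Int {
    max(1, min(jobsUsingMacro, ProcessInfo.processInfo.activeProcessorCount))
}
```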

If anything, I think the greater benefit of allowing the frontend to accept pipe-based plugins would be that it makes the architecture a lot more extensible, by enabling the caller (instead of a separate POSIX process) to handle macro expansion requests. As an example, I've created a Node.js-based shim for swiftc that emulates pipe-based plugins and uses this emulation to load wasm plugins with -load-plugin-executable Foo.wasm#Foo, ditching WacroPluginHost entirely. The emulation is quite hacky (see prepareForwarder()) but would be a lot more robust if pipe-based plugins were supported by the compiler.


Yeah, to be clear, I don't necessarily find these arguments convincing personally. But if you're the kind of person who's very concerned with having reproducible builds, (1) might hold a lot more weight.