PoC: Improving macro build times with WebAssembly

The Lede

I decided to take a crack at the whole "Building Macros with WebAssembly" idea and managed to put together something that improves build times by up to 10x even without any compiler integration!

There are currently some caveats on the usability front since this is a standalone package rather than being integrated into swiftc/SwiftPM, but I think integrating something like this into the compiler would make macros much more usable.

Performance

The repo offers two WebAssembly runners, one with JIT and one without. The table below shows build-time performance with each of these, as well as with SwiftSyntax used directly. All times are in seconds.

Kind                   WASM   WASM+JIT   SwiftSyntax
Clean (debug)          33.8   19.2       29.0
Clean (release)        32.0   18.4       183.2
Incremental (debug)    9.8    1.3        0.6
Incremental (release)  1.1    1.5        0.8

Details on methodology (plus a lot more) in the README. Interested to hear what everyone thinks!

35 Likes

This is absolutely fantastic!

Here's hoping it's a salve to: Compilation extremely slow since macros adoption - #68 by vatsal

1 Like

Awesome!

Slow SwiftSyntax and macro build times are a limiting factor for many teams. Thanks for taking the effort to improve this.

3 Likes

Fantastic work!

Side note: if we can integrate WasmKit into SwiftPM, we can skip building WasmKit itself for "Clean (debug|release)", and it would be as fast as the WebKit-based "WASM+JIT" runner. In other words, the major difference between the "Clean (debug|release)" rows of "WASM" and "WASM+JIT" is not JIT vs. no-JIT, but whether the engine is pre-compiled.

4 Likes

Ah, good point! In fact, I bet the overhead would be even less than the current WebKit measurement, all things considered, since we would be able to entirely remove the "host" module and move execution into SwiftPM.

I'm not 100% sure whether this would be doable at the SwiftPM level though, given that the plugin evaluation infra lives within swiftc. One way around this might be to use the load-plugin-library infra and allow SwiftPM itself to serve as a "plugin" that evaluates the wasm binaries in its own address space.

1 Like

Yes, this would also allow us to virtualize package manifests and plugins with Wasm, in addition to enabling swift run for WASI products.

9 Likes

Looks very impressive. I wanted to clarify: does this project improve the performance of macros when they're invoked by the compiler to generate code? Or only when they, and supporting libraries like SwiftSyntax, are built?

In other words, does this help address the concerns raised in this thread?

1 Like

I can't say for sure without benchmarking; I'm pretty surprised that the overhead of merely invoking the binary is that expensive. Though if the bottleneck in the aforementioned thread is that macro binaries are built in debug by default, the two-stage architecture proposed here (where the wasm binary is pre-compiled and vended) could definitely help.

One bit of evidence motivating this hypothesis is the Incremental (debug/release) entries in the "WASM" column of the performance table. I've elaborated on this in the README, but note how release builds compile faster than debug builds: this is because the release config builds WasmKit itself in release mode (aside: per Yuta's comment above, this can be mitigated by baking WasmKit into SwiftPM). Importantly, if WASM macros are pre-built with optimization and run on an optimized build of WasmKit, that could definitely improve performance. The same could be done by building traditional macros in release mode, but that would 1) require additional work on the SwiftPM side (which, to be fair, @Max_Desiatov points out is now feasible thanks to changes to the build graph as of Swift 6.0) and 2) require building SwiftSyntax in release mode for those who can't use it in binary form.

2 Likes

Though if the bottleneck in the aforementioned thread is that macro binaries are built in debug by default

That's a bottleneck, but it's surmountable; and the performance isn't good enough for our purposes even in release builds.

Just to be clear, the issue raised in the thread I linked has nothing to do with compiling SwiftSyntax itself, or the macros themselves; the issue is that even after that's solved, macros still create overhead when the compiler invokes them, which grows with usage.

And while there's always gonna be some overhead, the current amount of overhead may make it challenging to use them in large codebases.

Fwiw, the "release builds actually compile faster" behavior holds for "vanilla" macros, if you have a prebuilt SwiftSyntax binary, and even for the Swift Compiler itself.

If you haven't already, I would encourage you to see what the impact is on compilation performance on a codebase that has a lot of macro invocations, even if it's as simple as 2000 expression macros being invoked in one function.

All that said, it's great to see progress on this, and based on the other thread linked here it seems like it's solving a real problem.

4 Likes

So I added some microbenchmarks to Wacro in order to understand this better. In release mode, the marginal overhead of macro expansion on my machine (M3 Max) is around 25ms with WasmKit, 1ms with WebKit (specifically it appears WebKit starts closer to 1.4ms and improves over time to 1.0ms as it uses better quality JIT.) Cold start performance is relatively comparable, ~300ms in both cases.

Testing real-world swiftc runs, a file with 1000 print(#stringify(1+1).1) lines adds 30s to the build time with WasmKit (release). Meanwhile the same file adds 3.3s of build time with WebKit.
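
(In case anyone wants to reproduce this, here's roughly how such a stress-test file could be generated; the file name and wrapper function are placeholders, and it assumes the standard #stringify example macro is available in the module being built.)

```swift
import Foundation

// Sketch of a stress-test source generator: 1000 identical #stringify
// invocations wrapped in a function so the file compiles on its own.
let line = "    print(#stringify(1 + 1).1)\n"
let source = "func stress() {\n" + String(repeating: line, count: 1000) + "}\n"
try source.write(toFile: "Stress.swift", atomically: true, encoding: .utf8)
```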

I also did some benchmarking of the MRE in the post you linked and it looks like the major overhead is that each frontend invocation is spawning a new instance of the plugin executable. This just seems like an unrealized optimization opportunity to me: one can envision a world in which swift-frontend accepts pipes instead of a plugin path, allowing SwiftPM to spawn the plugin once and multiplex messages to and from the compiler (cc @Max_Desiatov what do you think of this idea?) This is mostly orthogonal to what WebAssembly Macros aim to achieve, though 1) it would probably make wasm macro integration easier, and 2) the fact that WebAssembly is deterministic could mitigate any risks with reusing the same instance of a plugin executable.
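
To make the pipe idea a bit more concrete, here's an illustrative sketch of length-prefixed message framing over a long-lived channel; this is not the actual compiler plugin protocol, just the general shape such a channel could take:

```swift
import Foundation

// Illustrative framing for a long-lived host <-> plugin channel: each message
// is an 8-byte little-endian length header followed by a JSON payload.
// This is a sketch of the general idea, not the real plugin protocol.
func writeMessage(_ payload: Data, to handle: FileHandle) throws {
    var length = UInt64(payload.count).littleEndian
    try handle.write(contentsOf: Data(bytes: &length, count: 8))
    try handle.write(contentsOf: payload)
}

func readMessage(from handle: FileHandle) throws -> Data? {
    guard let header = try handle.read(upToCount: 8), header.count == 8 else { return nil }
    let length = UInt64(littleEndian: header.withUnsafeBytes { $0.loadUnaligned(as: UInt64.self) })
    return try handle.read(upToCount: Int(length))
}
```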

3 Likes

one can envision a world in which swift-frontend accepts pipes instead of a plugin path, allowing SwiftPM to spawn the plugin once and multiplex messages to and from the compiler

Yeah, it's not clear why this wasn't done from the start. Perhaps we can get someone from the core team to chime in on whether they'd accept this as a contribution.

I think there are two potential issues with the idea, though:

  1. You can now store information about prior invocations in static vars inside the newly long-lived process, which could tempt macro authors to keep state in their macros and attempt more global analysis than is currently possible (see the sketch after this list).
  2. It's unclear what the exact perf implications would be, but it could just be replacing one problem (the overhead of starting a process) with another (lots of macro invocations contending for access to the process). Idk enough about IPC to know if this is a real problem or not.
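
To illustrate point 1, here's a minimal made-up sketch of the kind of state a long-lived plugin process would make possible; CountingMacro is purely hypothetical:

```swift
import SwiftSyntax
import SwiftSyntaxBuilder
import SwiftSyntaxMacros

// Made-up example: if the plugin process is reused across expansions, this
// counter survives between invocations, so the macro's output starts to
// depend on how many times it has already been expanded.
public struct CountingMacro: ExpressionMacro {
    static var expansionCount = 0

    public static func expansion(
        of node: some FreestandingMacroExpansionSyntax,
        in context: some MacroExpansionContext
    ) throws -> ExprSyntax {
        expansionCount += 1
        return "\(literal: expansionCount)"
    }
}
```
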
2 Likes

IMO both of these issues are lesser evils than spawning the macro over and over again. In fact one approach to fix both issues could be to spawn as many processes as min(# of jobs using macro, # of cores). This ensures that people don't (ab)use macros to store global state and also reduces contention. Though given that macros take ~1ms to evaluate with JIT I feel like contention won't be a big deal anyway, and I think there's already enough nondeterminism in the macro lifecycle to ensure people don't assume nonexistent API contracts.
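
As a trivial sketch of that sizing rule (the function name is a placeholder):

```swift
import Foundation

// Hypothetical sizing rule from the paragraph above: cap the number of
// long-lived plugin processes at min(jobs that use the macro, CPU cores).
func pluginPoolSize(jobsUsingMacro: Int) -> Int {
    max(1, min(jobsUsingMacro, ProcessInfo.processInfo.activeProcessorCount))
}
```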

If anything, I think the greater benefit of allowing the frontend to accept pipe-based plugins would be that it makes the architecture a lot more extensible by enabling the caller (instead of a separate POSIX process) to handle macro expansion requests. As an example, I've created a Node.js-based shim for swiftc that emulates pipe-based plugins and uses this emulation to load wasm plugins with -load-plugin-executable Foo.wasm#Foo, ditching WacroPluginHost entirely. The emulation is quite hacky (see prepareForwarder()) but if pipe-based plugins were supported by the compiler, it would be a lot more robust.

2 Likes

Yeah, to be clear, I don't necessarily find these arguments convincing personally. But if you're the kind of person who's very concerned with having reproducible builds, (1) might hold a lot more weight.

1 Like

Wanted to provide a quick update: I'm now working to upstream this.

2 Likes

This is very interesting and it's cool to see use of Wasm.

However, I'm not happy with the idea that macro authors will need to ship pre-compiled binaries (in Wasm or any other format) for faster build times. It would be a pretty disappointing concession to make.

Effectively what this is doing is excluding swift-syntax and the macro implementation from a "clean build", and that's where the savings come from. It seems to me that we could achieve the same result (without requiring anything extra from macro authors) by having SwiftPM build macros once and cache them. We would introduce a new form of "clean build" which keeps these cached host tools, which may not be fully "clean", but IMO it's no less clean than downloading a precompiled version.

I understand that SwiftPM's build system has limitations when it comes to host tools, but I don't think that's a good justification to commit to something like this across the package ecosystem.

It is very cool, though. And the numbers show that there is much to be gained if SwiftPM were improved.

2 Likes

To achieve that same result one would also need to build infrastructure for a global distributed cache that every user has access to and is somehow populated with artifacts for each platform that a macro is running on. Otherwise macro users would still have the overhead of building at least once to populate their own cache, and then also rebuilding every time when their toolchain or macro dependencies change.

Precompiled macros are not the only justification. FWIW, virtualizing macros to enforce determinism and provide security guarantees is even more important.

While macros on Darwin are sandboxed, on other platforms there's literally arbitrary code execution happening in every build, where macros have the same access to the filesystem and networking as other processes.

Not only is this bad from a security perspective, it also allows macro authors to break incremental builds by making macro output non-deterministic across repeated runs on the same inputs. With Wasm we can easily forbid access to the outside world and also restrict non-deterministic code (like Date, or users of arc4random such as Int.random or Set) to always return the same output for the same input.
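
As a concrete, made-up example of the non-determinism problem: a macro like the one below produces different output on every run for identical input, which defeats incremental builds; a Wasm sandbox with a pinned clock would force its output to be stable.

```swift
import Foundation
import SwiftSyntax
import SwiftSyntaxBuilder
import SwiftSyntaxMacros

// Made-up example of a non-deterministic macro: Date() changes between runs,
// so the expansion differs even when the source input is identical.
public struct BuildTimestampMacro: ExpressionMacro {
    public static func expansion(
        of node: some FreestandingMacroExpansionSyntax,
        in context: some MacroExpansionContext
    ) throws -> ExprSyntax {
        "\(literal: "built at \(Date())")"
    }
}
```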

2 Likes

I personally would be okay with this tradeoff, even in the medium-to-long term. But we only have a handful of macros that we use in a lot of places; if you have a larger toolbox of macros, I imagine the tradeoff might be worse.

2 Likes

Building on what Max said, I want to note that adding wasm plugin capabilities to Swift doesn't say anything about how the build system compiles/fetches wasm content to execute. We'll probably need a Swift Evolution proposal for adding this to SwiftPM, at which point we can decide how we want to handle that. Questions to be answered include 1) do we want to allow binary distribution of wasm plugins? (imo yes, but of course my opinion isn't the only one that matters) and 2) could we compile macros to Wasm by default to enable the determinism and security guarantees?

As far as the question of binary distribution goes, keep in mind that Wasm plugins are effectively pure functions: they can only take in text and return text. Indeed, the text they're generating is code that's compiled for the target system, but you can expand your macros to see exactly what they generate, so there's nothing a Wasm macro does that you can't see. Also, I personally feel like the best way to distribute these would be through a registry that compiles them transparently on CI (for even more observability), but that's a question for another day.

2 Likes

Yeah this is what I have in mind, and I think that's fine. It's how all other source packages are built.

I don't think the complaints about macro build times are related to what happens when your toolchain or dependencies change -- they're mostly about how long build times affect developers' day-to-day workflow.

I'm not sure that these are unique to Wasm.

The alternative - if the focus is on Wasm for reasonable build performance with macros, or if it even becomes required for security as you suggest - is that if I as a library author want to publish a macro, I'll need to download a toolchain which supports Wasm code generation and a Wasm SDK. I don't think that's reasonable. More broadly, I don't think it's wise for Swift to make WebAssembly a required and load-bearing component of the ecosystem right now.

There are smaller steps we could take to restrict particular system calls which have much less impact on macro authors.

4 Likes

Don't forget about CI builds.

3 Likes