Swift Macros: Build Time Overhead Concerns

Hello Swift Community,
I'm reaching out to share our team's recent experiences with the newly introduced Swift Macros feature, a promising addition we've been eagerly exploring. As part of our ongoing efforts to enhance code quality and reduce boilerplate, we conducted a series of experiments with Swift Macros to understand their impact on our development workflow, and we found a significant build time overhead that became a blocker for adoption.

For example, we benchmarked compiling an example documenting how to adopt macros in SwiftUI. The result was a 2x increase in build times.

We are aware of an existing forum thread discussing macro build performance. However, a big part of it seems to focus on the overhead of building SwiftSyntax, which is not a primary concern for us since we use pre-compiled SwiftSyntax. Hence a new thread.

Project Setup

SPM complete repros download link
We set up several medium-sized projects incorporating different workflows to gauge the performance of Swift Macros. Our experiments were structured around the following macro types:

StringifyMacro Project:

  1. Summary: The analysis contrasts a project utilizing StringifyMacro with one employing a non-macro equivalent.
  2. Refer to “WithStringifyMacro” and “WithoutStringifyMacro” folders in the above zip.
  3. “WithoutStringifyMacro” has files that use expressions like “print((1 + 2, "1 + 2"))”, while “WithStringifyMacro” has files that use expressions like “print(#stringify(1 + 2))”; see the sketch after this list. More details about this pattern can be found in the uploaded zip, or here (with macro) and here (without macro) for a quick reference.
  4. Source of StringifyMacro implementation: swift-syntax/Examples/Sources/MacroExamples/Implementation/Expression/StringifyMacro.swift at main · apple/swift-syntax · GitHub
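For a quick illustration, the two variants look roughly like this (a minimal sketch; the macro declaration follows the swift-syntax example, and the module/type names are placeholders that may differ from the zip):

```swift
// Macro declaration, as seen by client code. Module and type names below are
// placeholders for however the plugin target is named in the repro project.
@freestanding(expression)
public macro stringify<T>(_ value: T) -> (T, String) =
  #externalMacro(module: "MacroExamplesImplementation", type: "StringifyMacro")

// "WithoutStringifyMacro": the expanded form, written out by hand.
print((1 + 2, "1 + 2"))

// "WithStringifyMacro": the macro expands to the same tuple at compile time.
print(#stringify(1 + 2))
```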

MemberwiseInitMacro:

  1. Summary: The analysis contrasts a project utilizing MemberwiseInitMacro with one employing a non-macro equivalent where explicit initializers are added.
  2. Refer to “WithMemberwiseInitMacro” and “WithoutMemberwiseInitMacro” folders in the above zip.
  3. “WithMemberwiseInitMacro” has classes making use of “@MemberwiseInit(.public)”, whereas “WithoutMemberwiseInitMacro” has the exact same setup but with explicit initializers in the same file scope; see the sketch after this list. For a quick reference of the exact usage, refer to these links: with macro usage, without macro usage.
  4. Source of MemberwiseInitMacro implementation: GitHub - gohanlon/swift-memberwise-init-macro: Swift Macro for enhanced automatic inits.
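A rough sketch of the two setups (type, property, and module names here are made up for illustration; the exact files are in the zip):

```swift
import MemberwiseInit  // assumed module name from the swift-memberwise-init-macro package

// "WithMemberwiseInitMacro": the public initializer is generated by the macro.
@MemberwiseInit(.public)
public final class UserProfile {
  public let name: String
  public let age: Int
}

// "WithoutMemberwiseInitMacro": the same class with the initializer written out.
public final class UserProfileExplicit {
  public let name: String
  public let age: Int

  public init(name: String, age: Int) {
    self.name = name
    self.age = age
  }
}
```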

ObservationMacro:

  1. Summary: Using the example provided during WWDC, we compared its performance against the conventional ObservedObject pattern, as described in the migration guide here - Migrating from the Observable Object protocol to the Observable macro | Apple Developer Documentation
  2. This was a motivating example to try because the above guide calls out: “Tracking optionals and collections of objects, which isn’t possible when using ObservableObject.” We would like to get such benefits with macros, but adopting the Observable macro equivalent comes with ~2x build time overhead. (The project uses it in exactly the way suggested in the guide; see the sketch after this list.)
  3. Refer to “WithObservationMacro” and “WithoutObservationMacro” folders in the above zip.
  4. For quick reference of exact usage, refer to these links: with macro usage, without macro usage.
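The pattern follows the migration guide: the non-macro target keeps the ObservableObject/@Published/@StateObject combination, and the macro target swaps in @Observable/@State. A minimal sketch (type and property names are illustrative):

```swift
import SwiftUI
import Observation

// "WithoutObservationMacro": the pre-Observation ObservableObject pattern.
final class LegacyLibraryModel: ObservableObject {
  @Published var bookTitles: [String] = []
}

struct LegacyLibraryView: View {
  @StateObject private var model = LegacyLibraryModel()

  var body: some View {
    List(model.bookTitles, id: \.self) { Text($0) }
  }
}

// "WithObservationMacro": the equivalent model using the Observable macro.
@Observable
final class LibraryModel {
  var bookTitles: [String] = []
}

struct LibraryView: View {
  @State private var model = LibraryModel()

  var body: some View {
    List(model.bookTitles, id: \.self) { Text($0) }
  }
}
```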

Findings and Concerns

Our tests revealed a significant increase in build time overhead when macros were employed.

Clean Build Times Comparison (Avg of 3 runs)

  • Explicit Initializers vs MemberwiseInitMacro
    • Without Macro (Target ST0): 226.0 seconds
    • With Macro (Target ST0): 429.0 seconds
  • ObservedObject vs ObservationMacro
    • Without Macro (Target ST0): 84.8 seconds
    • With Macro (Target ST0): 154.9 seconds
  • Explicit Expressions vs StringifyMacro
    • Without Macro (Target ST0): 42.5 seconds
    • With Macro (Target ST0): 107.3 seconds

Notably, Activity Monitor indicated a substantial load attributed to swift-plugin-server, among other macro-related processes. We hypothesize that the build time overhead is largely due to the invocation of macro executables as an additional build step – a contrast to the integrated compilation process observed in languages like Rust and Kotlin. To test this, we also wrote the simplest possible macro: one that takes no arguments and returns an empty string literal. It had the same overhead.
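For reference, that trivial macro looks roughly like the following (a sketch; module and type names are placeholders, and the declaration and the implementation live in separate targets in the actual setup):

```swift
// Declaration, in the client-facing module.
@freestanding(expression)
public macro emptyString() -> String =
  #externalMacro(module: "EmptyStringMacros", type: "EmptyStringMacro")

// Implementation, in the compiler-plugin executable target.
import SwiftSyntax
import SwiftSyntaxMacros
import SwiftCompilerPlugin

public struct EmptyStringMacro: ExpressionMacro {
  public static func expansion(
    of node: some FreestandingMacroExpansionSyntax,
    in context: some MacroExpansionContext
  ) throws -> ExprSyntax {
    // Ignore the invocation entirely and expand to an empty string literal.
    return "\"\""
  }
}

@main
struct EmptyStringPlugin: CompilerPlugin {
  let providedMacros: [Macro.Type] = [EmptyStringMacro.self]
}
```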

Addressing Build time concerns

While we are excited about the potential of Swift Macros to modernize our development practices, the observed build time overhead currently poses a substantial blocker to their adoption in our projects.

We are reaching out to the community for insights:

  • Are there known workarounds or optimizations that we might not be aware of that could mitigate this overhead?
  • Is there ongoing or planned work within Swift to address these concerns?
  • Are there easy wins that we can help to implement?

We look forward to any guidance, suggestions, or discussions that could help us leverage Swift Macros more effectively while maintaining our project's performance standards.

24 Likes

Based on experience (no testing), it seems like the compiler can overload the system with macro processes, slowing down the overall build. I've encountered what seemed like build freezes but were actually macro processes consuming all available CPU, preventing the build from making progress. Whether the macro processes were stuck or just fighting with each other for CPU time, I don't know, but killing the processes and rebuilding would solve the issue, at least temporarily. Given the overhead of SwiftSyntax I haven't investigated the issue, as it's very intermittent while the SwiftSyntax overhead is consistent.

So I encourage more concrete testing here. How do these macro processes scale across files and the number of macro invocations? On very large projects can you reliably make them deadlock? Can you build the macro plugins in release mode and does that improve performance? Do profiles of the macro processes during builds reveal any egregious bottlenecks?

2 Likes

We are utilizing pre-compiled SwiftSyntax to eliminate the associated overhead. Additionally, we have stress-tested macro usage ranging from a few files to hundreds across modules and observed that build time increases primarily due to more processes being spawned as usage grows. We also conducted tests in the release configuration and confirmed that the build time overhead still exists. By the way, the Observation macro provided by SwiftUI is optimized, as it comes from Xcode's toolchain and operates in release mode by default.

Fortunately, during our most extensive stress-testing configuration, involving thousands of files across modules, we did not encounter any deadlocks. Moreover, the macro processes themselves didn't take much time, with the maximum duration being only 0.7 seconds for our use case. However, if you see this in your workflow and killing the processes helps, it would be great if you could file an FB so that folks can look into it.

2 Likes

There is a correlation between the number of macro invocations and the slowdown multiple. For example, if you write a macro that takes no parameters and immediately returns an extremely simple output (#emptyString -> ""), then invoke it 2000 times in one file, you can get up to a 13x slowdown on an M1 Max.

@chiragramani has already covered this, but yes, if you build in release, you can roughly halve this overhead to a 6X slowdown.

This is about the simplest test I can think of for the raw overhead of macros; they do just about the minimum amount of work that it's possible to do in the process itself. If you or anybody else has a more minimal test you'd like to see, feel free to describe it and we can give it a shot.
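For anyone who wants to reproduce it, the stress file is nothing fancy; a throwaway script along these lines generates it (the count, the file name, and the #emptyString declaration it assumes are all illustrative):

```swift
// Generates a single source file with N invocations of the trivial
// #emptyString() macro, or N plain "" literals for the baseline target.
import Foundation

let invocationCount = 2_000
let useMacro = true

var lines = ["// Auto-generated stress test."]
for index in 0..<invocationCount {
  let expression = useMacro ? "#emptyString()" : "\"\""
  lines.append("let value\(index) = \(expression)")
}

try (lines.joined(separator: "\n") + "\n").write(
  toFile: "EmptyStringStressTest.swift",
  atomically: true,
  encoding: .utf8
)
```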

Also worth noting that we tried what was, at the time, the latest Swift 5.10 compiler shipping with the last Xcode 15.3 before the one that dropped yesterday, and there was no change.

I acknowledge that the design of Swift Macros has some inherent slowdowns in the name of security and compiler stability, and that those are at least good and worthy goals, perhaps even strictly necessary ones. But I think it is fair to say that when the cost is measured in the dozens of seconds, there is a project size threshold past which it's simply not feasible to adopt them, and we are hoping to see that threshold rise into the millions of lines of code.

3 Likes

It might be worth benchmarking a non-swift-syntax macro. It's probably not going to be a meaningful difference from your #emptyString macro, but it's ever so slightly more minimal.

5 Likes

Oh, that's really cool. It's not impossible that SwiftSyntax adds pre-main time to new processes being spun up or something.

I tried it. The results are even worse: 0.8 seconds without, 40 seconds with.

It isn't impossible that this is because it outputs a big block comment with the JSON message. I did also observe what looked like deadlocking as the number of invocations went over 3,000.

It does suggest that we can do an even "purer" test with something even simpler though.

3 Likes

@chiragramani first of all, thank you for the detailed measurements and the reproducible test cases. We are aware of the performance issues regarding macros, and we, including myself, are actively working on them. There are a number of key points we are looking at, including but not limited to:

  • swift-syntax build time
  • swift-syntax compiler side performance
  • Plugin process startup overhead
  • Message coding and serialization between the compiler and plugins
  • Syntax tree visitation/mutation

19 Likes

Thank you for acknowledging the measurements; we appreciate the detailed update and the plans in flight to address these issues! This feature holds a special place for us, and if there's any way we can help, please let us know! :pray:

1 Like

Is there a way to fully disable the macro sandboxing, maybe along the lines of IDESkipPackagePluginFingerprintValidatation?

I feel that would be useful in tracking the full pipeline of a particular macro or set of macros.

@chiragramani, hi

To test this, we also wrote the simplest possible macro: one that takes no arguments and returns an empty string literal. It had the same overhead.

Could you please share numbers for this test case as well?

It would be useful to compare your results with the results I obtained from our project for this particular test case, since for us they didn't show any overhead.

1 Like

The other thread has been useful for tracking whatever progress is being made around the swift-syntax build time, but this is the only thread that discusses the other issues with macro usage and performance. Are there any updates on progress with those issues?

4 Likes

Pardon my ignorance, but I thought that swift-syntax was the sole cause of builds getting slower as a project's targets adopt more macros. Your message made me realize that our build pipeline probably isn't suffering from swift-syntax as much as from re-expanding unchanged macro sources on each incremental build. I suppose that won't be fixed by shipping swift-syntax as a prebuilt binary, correct?

Not completely, but it will be much faster with a prebuilt release version of swift-syntax, as it will be built with optimizations, which source-built versions do not get in debug mode. We won't see how impactful that might be until we get the pre-built version.

1 Like

Based on the benchmarks here it probably isn't. It's interesting that @restermans is saying that his project doesn't have this issue, because my employer also uses Bazel, and we are also (in theory) building SwiftSyntax in release and depending on the artifact of that build, so our issues should be pure overhead from using macros.

To @Jon_Shier's point, macros that are part of, say, Foundation, should be using a release build of SwiftSyntax, so you should be able to see this yourself just by making a normal Xcode project that uses @Observable a lot.

I haven't done a benchmark on this recently, but as far as we're concerned this is still a serious issue, and while we have a lot of cool macros implemented we're hesitant to use them because of the build time concerns.

We did another benchmark of just @Observable. Same result as before: the overhead of using the macro grows much faster than if you compile an expanded version of the code. We'll look at making an external version of this available.

With Swift Testing and now serialization APIs potentially using macros, I'd anticipate that developers in medium-sized projects will begin to feel the pain as macros become an essential part of using Swift.

1 Like

Can you publish your results?

To some extent this is expected and unavoidable; macros dynamically expand to add code at build time, and that necessarily has some overhead. An O(n) impact from macros seems unavoidable. That said, making the curve as shallow as possible should be a high-priority goal for the macro system. As far as I can tell, the various aspects of the macro system impact performance in different ways.

  1. swift-syntax has a well-known build time impact for non-toolchain (Apple) macros. This would be addressed through the distribution of a pre-built artifact instead of requiring source builds (the impact of which is doubled since SPM insists on building it universally, so we're actually paying the cost twice). This seems to be relatively close to a solution, but it's unknown when it will actually be publicly available. Still nothing as of Xcode 16.3b3.
  2. Related to 1, swift-syntax's build configuration has an impact on the performance of macro expansions, so a precompiled, release-mode artifact will further improve performance. Unfortunately, release mode swift-syntax builds are very slow, taking over a minute on an M1 Ultra. This should also be addressed once the precompiled artifact is available.
  3. Once we have the build impact of swift-syntax itself minimized (there may be additional optimizations around building the macro plugins themselves that will become visible once that's solved, but I don't know of any now), the next issue is that the compiler creates a separate process for each macro invocation. This is particularly slow on macOS, but has an impact on all platforms. I've seen some Swift PRs mentioning an in-process expansion model, but I don't know whether that has shipped or is enabled by default.
  4. Even once the expansions themselves are better optimized, the compiler (and/or Xcode) seems to have a significant problem with caching the results of expansion for use in other parts of the build pipeline, especially the emit-module step, which seems to require re-expanding every macro in a module. If the expansions were faster this would be less of a problem, but combined with problems 1 through 3 it can add up to significant time (see previous threads about the issue; my current project pays a 33.4s emit-module cost on every incremental build). Even if macros had a more significant clean-build cost, fast incremental builds would make their use more acceptable.
  5. Even after all of these issues are solved, we have the overall performance of swift-syntax itself to consider.

At this point Swift hasn't shipped a solution to any of these problems (as far as I can tell). Hopefully this means we can only get faster from here, but people will keep banning macros from their codebases until the situation noticeably improves.

12 Likes

I agree with the sentiment, but disagree with the framing here a little; there are two separate tracks of work, and there are two separate groups who can resolve them. The SPM team needs to fix the swift-syntax build time issue, and the folks who own Swift-Syntax/the compiler need to fix the others.

It's been said in this thread before, but worth reiterating: your issues 1 & 2 are solved problems for many of the folks in this thread; we use caching build systems like Bazel or precompile SwiftSyntax, built for release. There's probably a bifurcation between big companies with developer-experience teams who solve those issues but have enough code to be bitten by 3, 4, & 5, and small companies who aren't even noticing them because they're bitten by 1 & 2 so hard.

I've seen some Swift PRs mentioning an in-process expansion model, but I don't know whether that has shipped or is enabled by default.

I think this has been in the compiler since before macros landed, and it's for optimization passes written in Swift. I believe that we tried to turn this on for our macros some time ago, without success.

I do believe there should be a "trusted macro" flag that lets macros be run in process, but I expect that there could be some issues with dynamically linking SwiftSyntax in, since the compiler already has a copy in process. If that isn't the case, I think I would seriously consider forking the compiler internally to add it until this issue is resolved.

Can you publish your results?

Yeah like I said, we will soon. But for now, here is someone else's.

An O(n) impact from macros seems unavoidable

Agreed. For us a 1.2x slowdown seems reasonable, but the benchmark above has a 24x slowdown, and our benchmark will show superlinear growth in the Observation case.

2 Likes