Swift Macros: Build Time Overhead Concerns

Hello Swift Community,
I'm reaching out to share our team's recent experience with the newly introduced Swift Macros feature, a promising addition we've been eagerly exploring. As part of our ongoing efforts to improve code quality and reduce boilerplate, we ran a series of experiments with Swift Macros to understand their impact on our development workflow. We found a significant build time overhead, which became a blocker for adoption.

For example, we benchmarked compiling an example project that documents how to adopt macros in SwiftUI. The result was a 2x increase in build times.

We are aware of an existing forum thread discussing macro build performance. However, much of that discussion focuses on the overhead of building SwiftSyntax, which is not a primary concern for us since we use a pre-compiled SwiftSyntax. Hence this new thread.

Project Setup

SPM complete repros download link
We set up several medium-sized projects incorporating different workflows to gauge the performance of Swift Macros. Our experiments were structured around the following macro types:

StringifyMacro Project:

  1. Summary: The analysis contrasts a project utilizing StringifyMacro with one employing a non-macro equivalent.
  2. Refer to “WithStringifyMacro” and “WithoutStringifyMacro” folders in the above zip.
  3. “WithoutStringifyMacro” has files that use expressions like “print((1+2, "1 + 2"))”, while “WithStringifyMacro” has files that use expressions like “print(#stringify(1 + 2))”. More details about this pattern can be found in the uploaded zip, or here (with macro) and here (without macro) for quick reference.
  4. Source of StringifyMacro implementation: swift-syntax/Examples/Sources/MacroExamples/Implementation/Expression/StringifyMacro.swift at main · apple/swift-syntax · GitHub
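For readers who don't want to download the zip, here is a minimal sketch of the contrast (the function name below is ours, purely for illustration): the macro call site and the hand-written equivalent produce the same tuple.

```swift
// Hand-written equivalent of the #stringify expansion, for illustration only.
// With macro:    print(#stringify(1 + 2))
// Expanded form: print((1 + 2, "1 + 2"))
func stringifyExpansion() -> (Int, String) {
    // The macro pairs the evaluated expression with its source text.
    return (1 + 2, "1 + 2")
}

print(stringifyExpansion()) // prints (3, "1 + 2")
```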


MemberwiseInitMacro Project:

  1. Summary: The analysis contrasts a project utilizing MemberwiseInitMacro with one employing a non-macro equivalent where explicit initializers are added.
  2. Refer to “WithMemberwiseInitMacro” and “WithoutMemberwiseInitMacro” folders in the above zip.
  3. “WithMemberwiseInitMacro” has classes making use of “@MemberwiseInit(.public)” whereas “WithoutMemberwiseInitMacro” has the exact same setup but with explicit initializers in the same file scope. For quick reference of exact usage, refer to these links: with macro usage, without macro usage.
  4. Source of MemberwiseInitMacro implementation: GitHub - gohanlon/swift-memberwise-init-macro: Swift Macro for enhanced automatic inits.
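As a rough sketch (the Profile type is hypothetical, and this is not the library's exact expansion), the two project variants differ only in whether the initializer is generated or written by hand:

```swift
// "WithMemberwiseInitMacro" annotates types with @MemberwiseInit(.public);
// "WithoutMemberwiseInitMacro" writes the equivalent initializer explicitly,
// roughly like this (hypothetical type, illustrative expansion only):
public final class Profile {
    public let name: String
    public let age: Int

    // Hand-written equivalent of the generated public memberwise init.
    public init(name: String, age: Int) {
        self.name = name
        self.age = age
    }
}

let profile = Profile(name: "Ada", age: 36)
print(profile.name, profile.age) // prints Ada 36
```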


ObservationMacro Project:

  1. Summary: Using the example provided during WWDC, we compared its performance against the conventional ObservedObject pattern, as described in the migration guide here - Migrating from the Observable Object protocol to the Observable macro | Apple Developer Documentation
  2. This was a motivating example to try because the above guide calls out: “Tracking optionals and collections of objects, which isn’t possible when using ObservableObject.” We would like to get such benefits with macros, but adopting the Observable macro equivalent comes with roughly 2x build time overhead. (The project uses it exactly as suggested in the guide.)
  3. Refer to “WithObservationMacro” and “WithoutObservationMacro” folders in the above zip.
  4. For quick reference of exact usage, refer to these links: with macro usage, without macro usage.
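To make the trade-off concrete, here is a conceptual sketch (this is NOT the Observation framework's implementation; ManualModel is a hypothetical type) of the kind of bookkeeping the @Observable macro generates for you: each mutation of a stored property notifies registered observers.

```swift
// Conceptual illustration only: @Observable generates change tracking for
// each stored property; writing it by hand looks roughly like this.
final class ManualModel {
    private var observers: [() -> Void] = []

    var count: Int = 0 {
        didSet { observers.forEach { $0() } } // notify on every mutation
    }

    func onChange(_ handler: @escaping () -> Void) {
        observers.append(handler)
    }
}

var changeCount = 0
let model = ManualModel()
model.onChange { changeCount += 1 }
model.count = 1
model.count = 2
print(changeCount) // prints 2
```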

Findings and Concerns

Our tests revealed a significant increase in build time overhead when macros were employed.

Clean Build Times Comparison (Avg of 3 runs)

  • Explicit Initializers vs MemberwiseInitMacro
    • Without Macro (Target ST0): 226.0 seconds
    • With Macro (Target ST0): 429.0 seconds
  • ObservedObject vs ObservationMacro
    • Without Macro (Target ST0): 84.8 seconds
    • With Macro (Target ST0): 154.9 seconds
  • Explicit Expressions vs StringifyMacro
    • Without Macro (Target ST0): 42.5 seconds
    • With Macro (Target ST0): 107.3 seconds
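For reference, the slowdown factors implied by the timings above can be computed directly (a small script over the numbers reported, nothing more): roughly 1.9x, 1.8x, and 2.5x respectively.

```swift
import Foundation

// Ratio of with-macro to without-macro clean build time for each experiment.
let runs: [(name: String, without: Double, withMacro: Double)] = [
    ("MemberwiseInitMacro", 226.0, 429.0),
    ("ObservationMacro", 84.8, 154.9),
    ("StringifyMacro", 42.5, 107.3),
]

for run in runs {
    let factor = run.withMacro / run.without
    print("\(run.name): \(String(format: "%.1f", factor))x slower")
}
```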

Notably, Activity Monitor indicated a substantial load attributed to swift-plugin-server among other macro-related processes. We hypothesize that the build time overhead is primarily due to the invocation of macro executables as an additional build step, in contrast to the integrated compilation process in languages like Rust and Kotlin. To test this, we also wrote the simplest possible macro: one that takes no arguments and returns an empty string literal. It had the same overhead.

Addressing Build time concerns

While we are excited about the potential of Swift Macros to modernize our development practices, the observed build time overhead currently poses a substantial blocker to their adoption in our projects.

We are reaching out to the community for insights:

  • Are there known workarounds or optimizations that we might not be aware of that could mitigate this overhead?
  • Is there ongoing or planned work within Swift to address these concerns?
  • Are there easy wins that we can help to implement?

We look forward to any guidance, suggestions, or discussions that could help us leverage Swift Macros more effectively while maintaining our project's performance standards.


Based on experience (no testing), it seems the compiler can overload the system with macro processes, slowing down the overall build. I've encountered what seemed like build freezes but were actually macro processes consuming all CPU, preventing the build from making progress. I don't know what caused the overload, whether the macro processes were stuck or just competing with each other for CPU time, but killing the processes and rebuilding would resolve the issue, at least temporarily. Given the overhead of SwiftSyntax I haven't investigated the issue further, as it's very intermittent while the SwiftSyntax overhead is consistent.

So I encourage more concrete testing here. How do these macro processes scale across files and the number of macro invocations? On very large projects can you reliably make them deadlock? Can you build the macro plugins in release mode and does that improve performance? Do profiles of the macro processes during builds reveal any egregious bottlenecks?


We are utilizing pre-compiled SwiftSyntax to eliminate the associated overhead. Additionally, we have stress-tested macro usage ranging from a few files to hundreds across modules, and observed that build times increase primarily with the number of processes spawned and the number of macro invocations. We also conducted tests in the release configuration and confirmed that the build time overhead still exists. Incidentally, the Observation macro is already optimized: it ships with Xcode's toolchain and operates in release mode by default.

Fortunately, during our most extensive stress-testing configuration, involving thousands of files across modules, we did not encounter any deadlocks. Moreover, the macro processes themselves didn't take much time; the maximum duration was only 0.7 seconds for our use case. However, if you hit this in your workflow and killing processes helps, it would be great if you could file an FB so that folks can look into it.


There is a correlation between the number of macro invocations and the slowdown multiple. For example, if you write a macro that takes no parameters and immediately returns an extremely simple output (#emptyString -> ""), then invoke it 2000 times in one file, you can get up to a 13x slowdown on an M1 Max.

@chiragramani has already covered this, but yes, if you build in release, you can roughly halve this overhead to a 6X slowdown.

This is about the simplest test of the raw overhead of macros that I can think of; these macros do just about the minimum amount of work it's possible to do in the process itself. If you or anybody else has a more minimal test you'd like to see, feel free to describe it and we can give it a shot.

Also worth noting that we tried what was at the time the latest Swift 5.10 compiler shipping with the last Xcode 15.3 before the one that dropped yesterday, and there was no change.

I acknowledge that the design of Swift Macros has some inherent slowdowns in the name of security and compiler stability, and that those are at least good and worthy goals, perhaps even strictly necessary ones. But I think it is fair to say that when the cost is measured in the dozens of seconds, there is a project size threshold past which it's simply not feasible to adopt them, and we are hoping to see that threshold rise into the millions of lines of code.


It might be worth benchmarking a non-swift-syntax macro. It's probably not going to be a meaningful difference from your #emptyString macro, but it's ever so slightly more minimal.


Oh, that's really cool. It's not impossible that SwiftSyntax adds pre-main time to new processes being spun up or something.

I tried it. The results are even worse: 0.8 seconds without, 40 seconds with.

It isn't impossible that this is because it outputs a big block comment with the JSON message. I did also observe what looked like deadlocking as the number of invocations went over 3,000.

It does suggest that we can do an even "purer" test with something even simpler though.


@chiragramani first of all, thank you for the detailed measurements and the reproducible test code. We are aware of the performance issues regarding macros, and we, including myself, are actively working on them. There are a number of key points we are looking at, including but not limited to:

  • swift-syntax build time
  • swift-syntax compiler side performance
  • Plugin process startup overhead
  • Message coding and serialization between the compiler and plugins
  • Syntax tree visitation/mutation
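On the message coding point, here is a conceptual sketch (the ExpansionRequest type is hypothetical; this is not the actual compiler-plugin protocol) of the per-invocation cost: every expansion crosses a process boundary, so a request is encoded, sent to the plugin, and the reply decoded.

```swift
import Foundation

// Hypothetical stand-in for the real compiler <-> plugin messages: each macro
// invocation entails encoding a request, IPC, and decoding the expansion.
struct ExpansionRequest: Codable, Equatable {
    let macroName: String
    let sourceText: String
}

let request = ExpansionRequest(macroName: "stringify", sourceText: "1 + 2")
let encoded = try JSONEncoder().encode(request)                              // compiler side
let decoded = try JSONDecoder().decode(ExpansionRequest.self, from: encoded) // plugin side
print(decoded == request) // prints true
```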

Thank you for acknowledging the measurements; we appreciate the detailed update and the plans in flight to address these issues! This feature holds a special place for us, and if there's any way we can help, please let us know! :pray:


Is there a way to fully disable the macro sandboxing, maybe along the lines of IDESkipPackagePluginFingerprintValidatation?

I feel that would be useful in tracking the full pipeline of a particular macro or set of macros.