Target dependency graph computation performance too slow

I have a workspace with a good number of dependencies, both local and remote (the local dependencies also exist as remote dependencies, as a convenience so I can edit several packages I work with from within the same Xcode workspace).

Here are the logs from the first steps of an incremental debug build. The “Compute … for package preparation” step took 1.284 s, “Send project description to service” 0.774 s, and “Compute target dependency graph” 1.359 s. That adds up to about 3.4 s of my 5.5 s build, which is frustrating. Is there any way I can optimize this without moving to XCFrameworks? Is it recalculating all of this from scratch on each build because I have a few local dependencies? As you can see below, the local dependencies in turn include most of the remote dependencies.

Here are the logs (too long to paste here): Dependency graph logs · GitHub

My local dependencies are: ManabiReaderCore, ManabiCommon, LakeOfFire

I'm currently using Xcode 26 beta 5, but I had this issue on earlier versions too.

On Xcode 26 the first ~5 seconds of my builds (out of 10 seconds total) go to computing target dependencies. Is there any good strategy for reducing this?

What's worse is that, for all the slow work that phase does, it doesn't properly handle simple cases, like needing to rebuild a dependency as a dynamic framework when I switch from building just the watch app to the iOS app that embeds it, where both need the same underlying dependency. I'd be more okay with the slow analysis if it were actually accurate, but it's not, so I end up paying the cost over and over as I clean and rebuild.

If we migrate to Tuist, both our problems are solved…

Only 2 seconds? I raise you 125 seconds. It takes Xcode literally more than 2 minutes to compute the dependency graph on a project I work on. We’d love to know why. We’ve been simplifying the project considerably and it still takes 2 minutes.

There definitely seems to have been a major regression in planning performance in Xcode 26 / 26.1. In one of my small projects it barely showed up in Xcode 16.4; now it's plainly visible across the project and all of its dependencies. Even tiny packages have 1.6 s planning phases. Not to mention that Xcode 26.1 now has a visible "Build stat cache" step that takes 5 s on an M3 Ultra on every build.

Just confirming that this is still the case in Xcode 26.2 beta 1 in the project that @Oliver_Jones mentioned above (and other projects that we both work on).

We are seeing the “Compute target dependency graph for package preparation” (ComputePackagePrebuildTargetDependencyGraph) step take over an hour for our large Xcode projects (>700 MB) using Xcode 26.0.1 and 26.1.1.

From spindumping SWBBuildService and tracing execution of a dev version of SWBBuildService, I am seeing that simply reading a message over a socket is what is taking up so much time (the loop here). The specific message that takes so long is TRANSFER_SESSION_PIF_OBJECTS_LEGACY_REQUEST, which for one of our projects receives 736.24 MB in that one message.
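
To illustrate why that read loop can dominate, here is a standalone sketch (my own approximation of the mechanics, not SWBBuildService's actual code) that times consuming a large payload in small chunks versus large ones:

import Foundation

// Illustration only: approximates the cost profile of consuming one very large
// message through many small reads. The 64 MB payload here is a stand-in for
// the 736 MB PIF message observed above.
let size = 64 * 1024 * 1024
let url = FileManager.default.temporaryDirectory.appendingPathComponent("payload.bin")
try Data(count: size).write(to: url)

func timeRead(chunkSize: Int) throws -> TimeInterval {
    let handle = try FileHandle(forReadingFrom: url)
    defer { try? handle.close() }
    let start = Date()
    var total = 0
    // Same shape as a socket read loop: keep reading until the payload is consumed.
    while let chunk = try handle.read(upToCount: chunkSize), !chunk.isEmpty {
        total += chunk.count
    }
    precondition(total == size)
    return Date().timeIntervalSince(start)
}

print("4 KB chunks: \(try timeRead(chunkSize: 4 << 10)) s")
print("4 MB chunks: \(try timeRead(chunkSize: 4 << 20)) s")
try FileManager.default.removeItem(at: url)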

I can also reproduce the issue with a dummy Xcode project where I dramatically inflated the project size by editing a scheme to add OTHER_SWIFT_FLAGS = "-warn-concurrency -warn-concurrency -warn-concurrency [maybe 100,000 more times]". I will file a Radar with the dummy project to hopefully get more visibility on this issue.
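
For a sense of scale, this short Swift snippet (mine, just for sizing) shows how large that one build setting becomes before it is even encoded into the PIF:

// Rough sizing of the inflated OTHER_SWIFT_FLAGS value used in the dummy project.
let flags = Array(repeating: "-warn-concurrency", count: 100_000).joined(separator: " ")
print("OTHER_SWIFT_FLAGS is \(flags.utf8.count) bytes")  // roughly 1.8 MB of flags alone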

I am also running into this with a project that has a dependency graph of ~1,000 edges. It often takes upwards of 3-5 minutes to compute the dependency graph on an Apple M3 Ultra Mac Studio with 512 GB of RAM.

This is from a clean build on the machine. Definitely feels like something’s wrong here.

I’ve submitted FB22007592 for this.

In projects with many local Swift packages (~160 packages, ~2,100 targets), the "Compute target dependency graph" phase takes 60+ seconds on every single build, even consecutive no-op builds with zero file changes. This phase is not cached between builds, unlike the downstream BuildDescription which has signature-based caching. For large monorepos, this makes iterative development painfully slow — every Cmd+B incurs a fixed 60-second tax before any compilation begins.

Environment

  • Xcode 26.4 beta (17E5159k)
  • macOS 26.3 (25D125)
  • Apple M3 Max, 36 GB RAM
  • Swift 6.2

Project Configuration

  • 1 Xcode project (ElectricSidecar.xcodeproj) with ~87 local Swift package references
  • ~160 resolved packages (local packages + their transitive dependencies, all local paths)
  • ~2,122 targets as reported by the build system (from PIF cache analysis)
  • ~1,155 target definitions across all Package.swift files in the monorepo
  • All packages are local path dependencies (no remote packages)
  • buildImplicitDependencies is set to NO in all schemes

Steps to Reproduce

  1. Open a workspace with 100+ local Swift packages
  2. Build the project (Cmd+B) — observe "Compute target dependency graph for package preparation" takes 60+ seconds
  3. Wait for build to complete successfully (build succeeded, no errors)
  4. Without modifying any files, press Cmd+B again
  5. Observe "Compute target dependency graph for package preparation" takes 60+ seconds again

Expected Behavior

On a no-op rebuild with no file changes, the target dependency graph computation should be cached and effectively free. The build system already caches the BuildDescription using a signature-based mechanism (BuildDescriptionManager with in-memory and on-disk caching). The TargetBuildGraph — which is a prerequisite for computing the BuildDescription signature — should have an analogous caching mechanism.

Actual Behavior

The TargetBuildGraph is computed from scratch on every build. In the open-source Swift Build code (Sources/SWBBuildService/PlanningOperation.swift), the plan() method constructs a new TargetBuildGraph unconditionally:

let graph = await TargetBuildGraph(
    workspaceContext: workspaceContext,
    buildRequest: buildRequest, ...
)

There is no caching of this intermediate result. The TargetDependencyResolver is instantiated fresh for each planning operation, performing:

  • Target discovery for all ~2,100 targets (parallel, up to 100 concurrent; see the sketch after this list)
  • Build settings evaluation for every target (including macroConfigSignature computation with filesystem stat calls)
  • Implicit dependency resolution (even when disabled at the scheme level, the resolver is still instantiated)
  • Topological sort and deduplication of the full graph
  • Platform specialization and graph pruning
  • Provisioning input gathering for every code-signed target
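
The bounded-parallel shape of that first step looks roughly like the following (my sketch of the general technique, not Swift Build's actual code):

import Foundation

// Placeholder for per-target discovery work; in Swift Build this would be the
// real target lookup, here it is just a stub so the sketch is self-contained.
func discover(_ target: String) async -> Bool { true }

// Bounded-parallel discovery: at most 100 tasks in flight at once.
func discoverAll(_ targets: [String]) async -> [String: Bool] {
    await withTaskGroup(of: (String, Bool).self) { group in
        var results: [String: Bool] = [:]
        var remaining = targets.makeIterator()
        // Seed the group with up to 100 concurrent tasks.
        for _ in 0..<100 {
            guard let target = remaining.next() else { break }
            group.addTask { (target, await discover(target)) }
        }
        // Each completion admits one more task, keeping concurrency bounded.
        while let (target, found) = await group.next() {
            results[target] = found
            if let next = remaining.next() {
                group.addTask { (next, await discover(next)) }
            }
        }
        return results
    }
}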

Measurements

Measured on Apple M3 Max with 36 GB RAM, project fully resolved, DerivedData warm:

  • Targets in build graph: ~2,122
  • "Compute target dependency graph" (first build): 60-90 seconds
  • "Compute target dependency graph" (immediate no-op rebuild): 60-90 seconds
  • "Create build description" (first build): 30-90 seconds
  • "Create build description" (no-op rebuild, cached): ~5 seconds
  • PIF cache status on no-op rebuild: cache hit (verified; no new PIFCache entries)
  • SWBBuildService RAM usage: 12-15 GB
  • SWBBuildService CPU time per build: ~3 minutes
  • xcodebuild -list time: ~10 seconds

Note that the BuildDescription cache works correctly — "Create build description" drops from 30-90s to ~5s on cache hit. The TargetBuildGraph computation does not benefit from any caching and takes the same time regardless.

Analysis from Open-Source Swift Build Code

The BuildDescription has a three-tier caching strategy (BuildDescriptionManager.getNewOrCachedBuildDescription):

  1. Compute a BuildDescriptionSignature (hash of all inputs)
  2. Check in-memory cache (HeavyCache)
  3. Check on-disk cache (serialized .msgpack files)
  4. Only construct a new description on cache miss

The TargetBuildGraph has no caching at any tier. This creates a bottleneck: the graph must be computed before the BuildDescriptionSignature can be calculated (since the signature includes per-target metadata from the graph), but computing the graph IS the expensive operation.
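
In a minimal runnable form, the signature-keyed, multi-tier lookup described above looks something like this (my own sketch with made-up names, not the actual Swift Build code):

import CryptoKit
import Foundation

// Sketch of a signature-keyed, two-tier (memory + disk) cache, the same shape
// as the BuildDescription caching described above. All names here are mine.
final class TieredCache {
    private var memory: [String: Data] = [:]
    private let dir = FileManager.default.temporaryDirectory
        .appendingPathComponent("tiered-cache", isDirectory: true)

    init() {
        try? FileManager.default.createDirectory(at: dir, withIntermediateDirectories: true)
    }

    // Tier 0: hash every input into a stable signature.
    func signature(of inputs: [String]) -> String {
        let digest = SHA256.hash(data: Data(inputs.joined(separator: "\u{0}").utf8))
        return digest.map { String(format: "%02x", $0) }.joined()
    }

    func value(for signature: String, compute: () -> Data) -> Data {
        if let hit = memory[signature] { return hit }           // tier 1: in-memory
        let file = dir.appendingPathComponent(signature)
        if let hit = try? Data(contentsOf: file) {              // tier 2: on-disk
            memory[signature] = hit
            return hit
        }
        let fresh = compute()                                   // miss: construct anew
        memory[signature] = fresh
        try? fresh.write(to: file)
        return fresh
    }
}

The catch described above is that, for the BuildDescription, computing the signature itself requires the TargetBuildGraph, so the expensive step sits in front of the cache rather than behind it.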

Suggested Fix

Add a lightweight fingerprint mechanism for the TargetBuildGraph inputs, analogous to BuildDescriptionSignature but computable without the full graph:

  1. Compute a TargetBuildGraphFingerprint from: PIF workspace signature + build request parameters + xcconfig file modification times
  2. Cache the TargetBuildGraph (or BuildPlanRequest) keyed on this fingerprint
  3. On subsequent builds, compute only the fingerprint (cheap) and check the cache before invoking the full TargetDependencyResolver

This would make no-op rebuilds effectively free for the planning phase, matching the existing BuildDescription caching behavior.
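
A hypothetical sketch of step 1, the cheap fingerprint (every name below is invented for illustration; nothing like this exists in Swift Build today). The key property is that all three inputs can be gathered without constructing the graph:

import CryptoKit
import Foundation

// Hypothetical TargetBuildGraphFingerprint: hash of the PIF workspace signature,
// the serialized build request parameters, and per-xcconfig stat() results.
// No target discovery or settings evaluation is needed to compute it.
struct TargetBuildGraphFingerprint: Hashable {
    let value: String

    init(pifSignature: String, buildParameters: String, xcconfigPaths: [String]) {
        var hasher = SHA256()
        hasher.update(data: Data(pifSignature.utf8))
        hasher.update(data: Data(buildParameters.utf8))
        for path in xcconfigPaths.sorted() {
            // One cheap stat() per xcconfig; the file contents are never read.
            let attrs = try? FileManager.default.attributesOfItem(atPath: path)
            let mtime = (attrs?[.modificationDate] as? Date)?.timeIntervalSince1970 ?? 0
            hasher.update(data: Data("\(path):\(mtime)".utf8))
        }
        value = hasher.finalize().map { String(format: "%02x", $0) }.joined()
    }
}

On a no-op rebuild none of the three inputs change, so the fingerprint matches and the cached graph (step 2) can be returned without ever invoking the full TargetDependencyResolver.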

Additional Context

PIF Cache TTL

The in-memory PIF cache has a hardcoded 60-second TTL (Tuning.pifCacheTTL in SWBCore/Tuning.swift). For large projects where builds take longer than 60 seconds, this TTL can expire before the next build starts, causing unnecessary PIF re-loading from disk. Consider one of:

  • Making this TTL configurable via UserDefaults
  • Increasing the default for large workspaces
  • Using a "last build" heuristic instead of a fixed TTL
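
A sliding-expiry cache is one way to express that last heuristic (hypothetical sketch; Tuning.pifCacheTTL is real, but everything below is mine):

import Foundation

// Hypothetical "last use" expiry: entries are evicted 60 s after their most
// recent access rather than 60 s after insertion, so the cache survives as
// long as builds keep touching it, however long each individual build takes.
final class SlidingTTLCache<Key: Hashable, Value> {
    private var entries: [Key: (value: Value, lastUsed: Date)] = [:]
    private let ttl: TimeInterval

    init(ttl: TimeInterval = 60) { self.ttl = ttl }

    subscript(key: Key) -> Value? {
        get {
            guard var entry = entries[key],
                  Date().timeIntervalSince(entry.lastUsed) < ttl else {
                entries[key] = nil        // expired (or absent): evict
                return nil
            }
            entry.lastUsed = Date()       // refresh the expiry on every access
            entries[key] = entry
            return entry.value
        }
        set {
            if let newValue {
                entries[key] = (value: newValue, lastUsed: Date())
            } else {
                entries[key] = nil
            }
        }
    }
}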

PIF Instability from .playground Files

We discovered that .playground files inside Swift packages cause severe PIF cache instability. Xcode auto-discovers playgrounds and generates synthetic targets that reference ALL package targets in the workspace. These synthetic targets oscillate between having all build file references and having none between consecutive builds, causing the workspace PIF signature to change on every build and invalidating the entire PIF cache.

Specifically, a JWTDecode.playground inside a third-party package was generating a JWTDecode_Sources target with 943 build file references (one for every target in the workspace), and this target's content was non-deterministic between builds.
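
A quick way to audit for this class of problem (my own helper; point it at a workspace checkout or at DerivedData's SourcePackages/checkouts directory):

import Foundation

// Lists every .playground bundle under a root directory so that offending
// third-party packages can be found before they destabilize the PIF.
let root = CommandLine.arguments.dropFirst().first ?? "."
let enumerator = FileManager.default.enumerator(atPath: root)
while let path = enumerator?.nextObject() as? String {
    if path.hasSuffix(".playground") {
        print("\(root)/\(path)")
    }
}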

Non-deterministic PIF Generation for Custom Target Paths

Packages using custom path: parameters in target definitions (e.g., path: "Sources/Labels/ActionButtonLabel") can produce non-deterministic PIF group trees where the same directory appears with duplicate entries and different GUIDs across builds. This causes unnecessary PIF signature changes even when no source files have changed.

Impact

For our team, this 60+ second fixed overhead on every build means:

  • Iterative development cycles are 60 seconds longer than necessary
  • CI builds pay this cost on every job
  • The overhead is proportional to monorepo size, penalizing good modularity practices
  • Developers are incentivized to create fewer, larger modules (worse architecture) to reduce target count

This disproportionately affects teams that follow Apple's recommended practices of breaking code into many small, focused Swift packages.

Great info, but why are these so out of date?

Oops! Updated

This has been a long-standing issue; it's nice to know the root cause. Best practice nowadays is to avoid shipping .playground files, or really any unrelated files, in your package repo, because you never know what Xcode might do with them. And since it fully clones the repo, you can't hide anything. I'm hopeful that registries can help hide this, but so far the automatic registries simply generate their archives from the entire repo, which means you pull in a lot of unrelated content.

Note that it isn't necessarily the root cause, but it did contribute to slower builds through larger build graphs and the PIF cache getting invalidated more often than was reasonable. Even after removing the playground, I'm still seeing 60+ second iterative builds with zero changes (just hitting Cmd+B over and over; 60+ seconds each time for Xcode to do basically nothing).

Right, I've just seen other build issues, including irregular build failures, simply because a package included a playground. Nice to know that issue still exists and some of the details around its impact.

Have you ever tried Tuist, CMake, or Bazel to compare performance, if those can build your project at all?

Ah that makes sense!

I haven't tried Tuist and am not super familiar with it. From what I do know, it (as well as CMake and Bazel) seems to require generating your main Xcode project, which I'm not a huge fan of, given that this usually means branching off of the core Xcode toolchain.