I’ve submitted FB22007592 for this.
In projects with many local Swift packages (~160 packages, ~2,100 targets), the "Compute target dependency graph" phase takes 60+ seconds on every single build, even consecutive no-op builds with zero file changes. This phase is not cached between builds, unlike the downstream BuildDescription which has signature-based caching. For large monorepos, this makes iterative development painfully slow — every Cmd+B incurs a fixed 60-second tax before any compilation begins.
Environment
- Xcode 26.4 beta (17E5159k)
- macOS 26.3 (25D125)
- Apple M3 Max, 36 GB RAM
- Swift 6.2
Project Configuration
- 1 Xcode project (ElectricSidecar.xcodeproj) with ~87 local Swift package references
- ~160 resolved packages (local packages + their transitive dependencies, all local paths)
- ~2,122 targets as reported by the build system (from PIF cache analysis)
- ~1,155 target definitions across all Package.swift files in the monorepo
- All packages are local path dependencies (no remote packages)
- buildImplicitDependencies is set to NO in all schemes
Steps to Reproduce
- Open a workspace with 100+ local Swift packages
- Build the project (Cmd+B) — observe "Compute target dependency graph for package preparation" takes 60+ seconds
- Wait for build to complete successfully (build succeeded, no errors)
- Without modifying any files, press Cmd+B again
- Observe "Compute target dependency graph for package preparation" takes 60+ seconds again
Expected Behavior
On a no-op rebuild with no file changes, the target dependency graph computation should be cached and effectively free. The build system already caches the BuildDescription using a signature-based mechanism (BuildDescriptionManager with in-memory and on-disk caching). The TargetBuildGraph — which is a prerequisite for computing the BuildDescription signature — should have an analogous caching mechanism.
Actual Behavior
The TargetBuildGraph is computed from scratch on every build. In the open-source Swift Build code (Sources/SWBBuildService/PlanningOperation.swift), the plan() method constructs a new TargetBuildGraph unconditionally:
let graph = await TargetBuildGraph(
    workspaceContext: workspaceContext,
    buildRequest: buildRequest, ...
)
There is no caching of this intermediate result. The TargetDependencyResolver is instantiated fresh for each planning operation, performing:
- Target discovery for all ~2,100 targets (parallel, up to 100 concurrent)
- Build settings evaluation for every target (including macroConfigSignature computation with filesystem stat calls)
- Implicit dependency resolution (even when disabled at the scheme level, the resolver is still instantiated)
- Topological sort and deduplication of the full graph
- Platform specialization and graph pruning
- Provisioning input gathering for every code-signed target
Measurements
Measured on Apple M3 Max with 36 GB RAM, project fully resolved, DerivedData warm:
| Metric | Value |
| --- | --- |
| Targets in build graph | ~2,122 |
| "Compute target dependency graph" (first build) | 60-90 seconds |
| "Compute target dependency graph" (immediate no-op rebuild) | 60-90 seconds |
| "Create build description" (first build) | 30-90 seconds |
| "Create build description" (no-op rebuild, cached) | ~5 seconds |
| PIF cache status on no-op rebuild | Cache hit (verified; no new PIFCache entries) |
| SWBBuildService RAM usage | 12-15 GB |
| SWBBuildService CPU time per build | ~3 minutes |
| xcodebuild -list time | ~10 seconds |
Note that the BuildDescription cache works correctly — "Create build description" drops from 30-90s to ~5s on cache hit. The TargetBuildGraph computation does not benefit from any caching and takes the same time regardless.
Analysis from Open-Source Swift Build Code
The BuildDescription has a three-tier caching strategy (BuildDescriptionManager.getNewOrCachedBuildDescription):
- Compute a BuildDescriptionSignature (hash of all inputs)
- Check the in-memory cache (HeavyCache)
- Check the on-disk cache (serialized .msgpack files)
- Only construct a new description on cache miss
The TargetBuildGraph has no caching at any tier. This creates a bottleneck: the graph must be computed before the BuildDescriptionSignature can be calculated (since the signature includes per-target metadata from the graph), but computing the graph IS the expensive operation.
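The existing three-tier lookup can be sketched roughly as follows. This is an illustration of the pattern only, using simplified hypothetical types (TieredCache, in-memory and "disk" dictionaries); it is not the actual BuildDescriptionManager API:

```swift
// Sketch of the signature-keyed, tiered lookup pattern described above.
// Tier 1: in-memory; tier 2: a stand-in for the on-disk .msgpack store;
// tier 3: full (expensive) construction on cache miss.
final class TieredCache<Value> {
    private var memory: [String: Value] = [:]          // tier 1: in-memory
    private var disk: [String: Value] = [:]            // tier 2: on-disk stand-in
    private(set) var computeCount = 0                  // how often tier 3 ran

    func value(forSignature signature: String, compute: () -> Value) -> Value {
        if let hit = memory[signature] { return hit }  // in-memory hit
        if let hit = disk[signature] {                 // on-disk hit: promote to memory
            memory[signature] = hit
            return hit
        }
        computeCount += 1                              // miss: expensive construction
        let value = compute()
        memory[signature] = value
        disk[signature] = value
        return value
    }
}

let cache = TieredCache<Int>()
_ = cache.value(forSignature: "sig-A") { 42 }          // miss: computes
let second = cache.value(forSignature: "sig-A") { 99 } // hit: closure never runs
```

The point of the sketch is the asymmetry the report describes: the BuildDescription enjoys this lookup, but the signature used as the key already requires the fully computed graph.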
Suggested Fix
Add a lightweight fingerprint mechanism for the TargetBuildGraph inputs, analogous to BuildDescriptionSignature but computable without the full graph:
- Compute a TargetBuildGraphFingerprint from: PIF workspace signature + build request parameters + xcconfig file modification times
- Cache the TargetBuildGraph (or BuildPlanRequest) keyed on this fingerprint
- On subsequent builds, compute only the fingerprint (cheap) and check the cache before invoking the full TargetDependencyResolver
This would make no-op rebuilds effectively free for the planning phase, matching the existing BuildDescription caching behavior.
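The proposed fingerprint could look something like the sketch below. The type name, field names, and the FNV-1a hash are all illustrative assumptions; the only requirements are that the hash is stable across processes (unlike Swift's randomly seeded Hasher) and order-independent over its inputs:

```swift
// Hypothetical TargetBuildGraphFingerprint: a stable digest over the inputs
// that determine the TargetBuildGraph, computable without building the graph.
struct TargetBuildGraphFingerprint: Equatable {
    let digest: UInt64

    init(pifSignature: String,
         buildParameters: [String: String],
         xcconfigModTimes: [String: Double]) {   // path -> mtime (seconds)
        var hash: UInt64 = 0xcbf29ce484222325    // FNV-1a offset basis
        func mix(_ s: String) {
            for byte in s.utf8 {
                hash ^= UInt64(byte)
                hash = hash &* 0x100000001b3     // FNV-1a prime
            }
        }
        mix(pifSignature)
        // Sort keys so the digest does not depend on dictionary order.
        for (key, value) in buildParameters.sorted(by: { $0.key < $1.key }) {
            mix(key); mix(value)
        }
        for (path, mtime) in xcconfigModTimes.sorted(by: { $0.key < $1.key }) {
            mix(path); mix(String(mtime))
        }
        digest = hash
    }
}

let params = ["CONFIGURATION": "Debug"]
let a = TargetBuildGraphFingerprint(pifSignature: "PIF1", buildParameters: params,
                                    xcconfigModTimes: ["Base.xcconfig": 1_000])
let b = TargetBuildGraphFingerprint(pifSignature: "PIF1", buildParameters: params,
                                    xcconfigModTimes: ["Base.xcconfig": 1_000])
let c = TargetBuildGraphFingerprint(pifSignature: "PIF1", buildParameters: params,
                                    xcconfigModTimes: ["Base.xcconfig": 2_000])
```

On a no-op rebuild, a and b above would match and the cached graph could be reused; touching an xcconfig (c) changes the fingerprint and forces recomputation.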
Additional Context
PIF Cache TTL
The in-memory PIF cache has a hardcoded 60-second TTL (Tuning.pifCacheTTL in SWBCore/Tuning.swift). For large projects where builds take longer than 60 seconds, this TTL can expire before the next build starts, causing unnecessary PIF re-loading from disk. Consider either:
- Making this TTL configurable via UserDefaults
- Increasing the default for large workspaces
- Using a "last build" heuristic instead of a fixed TTL
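The "last build" heuristic could be as simple as expiring entries by build count rather than wall-clock time. The sketch below uses hypothetical names (PIFCacheEntry, keepForBuilds); it is not the actual Tuning API:

```swift
// Sketch: expire a PIF cache entry only after it has gone unused for a
// configurable number of builds, instead of a fixed 60-second wall-clock TTL.
final class PIFCacheEntry {
    private(set) var lastUsedBuildNumber: Int
    init(currentBuild: Int) { lastUsedBuildNumber = currentBuild }
    func touch(build: Int) { lastUsedBuildNumber = build }
    func isExpired(currentBuild: Int, keepForBuilds: Int = 2) -> Bool {
        currentBuild - lastUsedBuildNumber > keepForBuilds
    }
}

let entry = PIFCacheEntry(currentBuild: 1)
// A long build does not evict the entry: two builds later it is still live.
let liveAtBuild3 = entry.isExpired(currentBuild: 3)   // false
let deadAtBuild4 = entry.isExpired(currentBuild: 4)   // true
```

With a build-count policy, a 5-minute build and a 30-second build get the same retention behavior, which a fixed TTL cannot provide.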
PIF Instability from .playground Files
We discovered that .playground files inside Swift packages cause severe PIF cache instability. Xcode auto-discovers playgrounds and generates synthetic targets that reference ALL package targets in the workspace. These synthetic targets oscillate between having all build file references and having none between consecutive builds, causing the workspace PIF signature to change on every build and invalidating the entire PIF cache.
Specifically, a JWTDecode.playground inside a third-party package was generating a JWTDecode_Sources target with 943 build file references (one for every target in the workspace), and this target's content was non-deterministic between builds.
Non-deterministic PIF Generation for Custom Target Paths
Packages using custom path: parameters in target definitions (e.g., path: "Sources/Labels/ActionButtonLabel") can produce non-deterministic PIF group trees where the same directory appears with duplicate entries and different GUIDs across builds. This causes unnecessary PIF signature changes even when no source files have changed.
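One way to eliminate this class of instability would be to derive group GUIDs from the normalized path rather than allocating fresh identifiers, so equivalent spellings of the same directory always collapse to one entry. The sketch below is an assumption about a possible fix, not the actual PIF generator; the normalization and FNV-1a hash are illustrative:

```swift
// Sketch: a content-derived, deterministic GUID for a PIF group.
// Equivalent path spellings ("a/./b" vs "a/b") normalize to the same
// string and therefore always hash to the same GUID across builds.
func stableGUID(forPath path: String) -> String {
    // Drop "." components and empty segments before hashing.
    let normalized = path.split(separator: "/")
        .filter { $0 != "." }
        .joined(separator: "/")
    var hash: UInt64 = 0xcbf29ce484222325    // FNV-1a offset basis
    for byte in normalized.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3         // FNV-1a prime
    }
    return "GROUP-" + String(hash, radix: 16)
}

let guidA = stableGUID(forPath: "Sources/Labels/ActionButtonLabel")
let guidB = stableGUID(forPath: "Sources/./Labels/ActionButtonLabel")
let guidOther = stableGUID(forPath: "Sources/Labels/OtherLabel")
```

Because the GUID is a pure function of the path, the duplicate-entry/different-GUID oscillation described above cannot occur, and the workspace PIF signature stays stable when no sources change.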
Impact
For our team, this 60+ second fixed overhead on every build means:
- Iterative development cycles are 60 seconds longer than necessary
- CI builds pay this cost on every job
- The overhead is proportional to monorepo size, penalizing good modularity practices
- Developers are incentivized to create fewer, larger modules (worse architecture) to reduce target count
This disproportionately affects teams that follow Apple's recommended practices of breaking code into many small, focused Swift packages.