Swift Package Manager: 2x faster resolves, 3x smaller disk footprint
At Ordo One, we have a server-side Swift project with 48 dependencies (soto, swift-protobuf, swift-nio, gRPC, etc.), and as the dependency graph grew we noticed dependency resolution and download times becoming a significant part of our development and CI cycle.
SPM currently fetches the full git history for every dependency. For our project, resolution takes 60+ seconds and .build/ reaches 1.8 GB. There have been previous discussions on improving this - shallow cloning, depth-1 clones, reduced download sizes - each with its own challenges. We'd like to suggest a different approach that sidesteps git cloning entirely for the common case.
For context on the scale: soto is 381 MB of git history when the source archive is 18 MB. swift-protobuf transfers ~210 MB due to C++ submodules not needed for building the Swift library - source archives reduce this to ~20 MB.
We spent some time investigating approaches to improve this and have put up a PR with an implementation.
The public API is identical - no changes to Package.swift or any user-facing interfaces.
The improvement
GitHub (and GitLab, Bitbucket) already serve source archives for any tagged release. swift-nio 2.97.1 is a single 2 MB HTTP GET vs a 70 MB `git clone --mirror`.
The implementation downloads ZIP archives directly from GitHub, reusing SPM's existing registry download architecture:
- `git ls-remote --tags` - discover available versions (same as today)
- GET `Package.swift` from the CDN - check tools-version compatibility
- Download the ZIP archive from GitHub's CDN
Packages with submodules fall back to shallow clones. Any other failure falls back to `git clone --mirror`, per dependency.
Zero GitHub REST API calls. Private repos work with the same auth that `git clone` already uses. SSH-only repos gracefully fall back to git. The `Package.resolved` format is unchanged - existing lockfiles work without modification.
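To make the mechanism concrete, here is a minimal sketch of the URL shapes involved. The helper names are ours, not SwiftPM API; the only assumption is GitHub's standard URL scheme for tag archives and raw file access:

```swift
import Foundation

/// GitHub auto-generates a source archive for every tag:
///   https://github.com/<owner>/<repo>/archive/refs/tags/<tag>.zip
func archiveURL(owner: String, repo: String, tag: String) -> URL {
    URL(string: "https://github.com/\(owner)/\(repo)/archive/refs/tags/\(tag).zip")!
}

/// The manifest for a candidate version can be read from the raw-content
/// CDN without cloning anything:
func manifestURL(owner: String, repo: String, ref: String) -> URL {
    URL(string: "https://raw.githubusercontent.com/\(owner)/\(repo)/\(ref)/Package.swift")!
}

// archiveURL(owner: "apple", repo: "swift-nio", tag: "2.97.1")
// -> https://github.com/apple/swift-nio/archive/refs/tags/2.97.1.zip (~2 MB)
```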
Benchmarks
Benchmarked against 6 real-world projects across two machines (Mac and Linux datacenter server with 1 Gbps internet), 5 runs per machine (10 total).
Times shown as p50 (median), p75, and p99 percentiles.
Cold resolve (shared SPM cache + .build/ + Package.resolved wiped - the CI scenario)
| Project | Deps | zip p50 | p75 | p99 | git p50 | p75 | p99 | Faster |
|---|---|---|---|---|---|---|---|---|
| spi-server | 67 | 68s | 76s | 92s | 100s | 104s | 111s | 1.2-1.5x |
| swiftpm-large-project | 48 | 46s | 46s | 48s | 93s | 95s | 101s | 2.0-2.1x |
| penny-bot | 47 | 44s | 46s | 49s | 78s | 80s | 81s | 1.7-1.8x |
| container | 29 | 26s | 27s | 34s | 42s | 44s | 47s | 1.4-1.6x |
| swift-composable-architecture | 17 | 12s | 13s | 17s | 16s | 17s | 18s | 1.1-1.3x |
| SwiftLint | 9 | 11s | 12s | 13s | 12s | 13s | 18s | 1.1-1.4x |
Warm resolve (.build/ wiped, shared caches retained)
| Project | Deps | zip p50 | p75 | p99 | git p50 | p75 | p99 | Faster |
|---|---|---|---|---|---|---|---|---|
| container | 29 | 5s | 6s | 11s | 19s | 20s | 24s | 2.2-3.8x |
| swiftpm-large-project | 48 | 11s | 20s | 28s | 32s | 32s | 38s | 1.4-2.9x |
| swift-composable-architecture | 17 | 2s | 3s | 3s | 4s | 4s | 5s | 1.7-2.0x |
| penny-bot | 47 | 11s | 13s | 29s | 14s | 16s | 18s | 0.6-1.3x |
| spi-server | 67 | 20s | 21s | 28s | 22s | 25s | 26s | 0.9-1.1x |
| SwiftLint | 9 | 4s | 5s | 6s | 4s | 6s | 7s | 1.0-1.2x |
swift package update (on warm .build/)
Update times are network-dependent and show high variance between runs. Neither approach consistently wins - both perform git ls-remote for version discovery, and the resolution/download phase depends on network conditions at that moment. For smaller projects (< 20 deps) both complete in 1-3 seconds. For larger projects (40+ deps) both are in the 10-25 second range.
.build/ disk usage
| Project | Deps | Source archives | Git | Reduction |
|---|---|---|---|---|
| spi-server | 67 | 514 MB | 1,546 MB | 3.0x |
| swiftpm-large-project | 48 | 609 MB | 1,871 MB | 3.1x |
| penny-bot | 47 | 459 MB | 1,484 MB | 3.2x |
| container | 29 | 352 MB | 889 MB | 2.5x |
| swift-composable-architecture | 17 | 102 MB | 255 MB | 2.5x |
| SwiftLint | 9 | 240 MB | 342 MB | 1.4x |
Source Archives vs Package Registry
Another option for reducing download sizes is hosting a package registry (SE-0292). A registry serves pre-built ZIP archives via a standardized HTTP API, but requires deploying and maintaining a server, populating it with packages, and configuring each client to use it (including URL-to-identity mapping for every dependency). For comparison, we benchmarked source archives against a stateless registry proxy (redirecting ZIP downloads to GitHub, using `swift package resolve --replace-scm-with-registry`).
For swiftpm-large-project (48 deps): cold resolve takes 93–101s with git, 46–48s with source archives, and 47–66s with a registry. Since both source archives and the registry download the same ZIP files, disk usage is the same. `swift package update` is where the registry pulls ahead: ~20s for both git and source archives vs 2–3s for the registry, thanks to more efficient version listing. Source archives capture most of the improvement over git for initial resolves without requiring a hosted registry, URL-to-identity mappings, or client configuration.
Cold resolve
| Project | Deps | zip p50 | p75 | p99 | reg p50 | p75 | p99 | Faster |
|---|---|---|---|---|---|---|---|---|
| swiftpm-large-project | 48 | 47s | 52s | 58s | 47s | 57s | 66s | ~same |
| container | 29 | 29s | 30s | 35s | 24s | 28s | 35s | 1.0-1.2x |
Warm resolve (.build/ wiped, shared caches retained)
| Project | Deps | zip p50 | p75 | p99 | reg p50 | p75 | p99 | Faster |
|---|---|---|---|---|---|---|---|---|
| swiftpm-large-project | 48 | 20s | 21s | 21s | 3s | 5s | 6s | 3.5-7x |
| container | 29 | 5s | 6s | 12s | 3s | 4s | 4s | 1.5-3x |
swift package update (on warm .build/)
| Project | Deps | zip p50 | p75 | p99 | reg p50 | p75 | p99 | Faster |
|---|---|---|---|---|---|---|---|---|
| swiftpm-large-project | 48 | 20s | 20s | 20s | 2s | 2s | 3s | 7-10x |
| container | 29 | 8s | 9s | 12s | 1s | 2s | 2s | 6-8x |
Manifest reading: local git vs CDN
On macOS, concurrent HTTP fetches from CDN are actually faster than local git for reading manifests - 120ms vs 600ms for 20 swift-nio tags. On Linux, local git is faster (83ms) because process forking is much cheaper than on macOS. In both cases, results are cached permanently by commit SHA, so subsequent resolves pay zero network cost.
Why is HTTP faster than local git?
During resolution, SPM checks `Package.swift` for every candidate version to determine tools-version compatibility. With git mirrors, this is a local `git ls-tree` + `git cat-file` against the bare repo on disk. With source archives, it's an HTTP GET to raw.githubusercontent.com.
| Method | macOS | Linux |
|---|---|---|
| `git ls-tree` + `cat-file` (local) | ~600ms | ~83ms |
| HTTP GET (8 concurrent, warm CDN) | ~120ms | ~134ms |
- Git path: SPM spawns two separate `git` processes per manifest - `git ls-tree` to find the blob hash, then `git cat-file -p` to read the content. Each process fork + exec costs ~5-10ms on macOS, and they run sequentially because each process holds a lock on the repo. For 20 manifests that's 40 process spawns.
- HTTP path: in-process network calls over a single HTTP/2 connection with connection reuse inside `HTTPClient`. The first request pays the TLS handshake (~150ms), but subsequent requests reuse the connection (~30-40ms round trip to the CDN edge). With 8 concurrent requests the latency is amortized across the batch (sketched below).
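For a feel of the HTTP path, here is a minimal sketch of the concurrent-fetch pattern using Foundation's `URLSession` in place of the `HTTPClient` the implementation uses (assumes a recent toolchain with async URLSession support; unlike the real code, this doesn't bound concurrency to 8):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking // URLSession lives here on Linux
#endif

// Fetch Package.swift for each candidate SHA concurrently over one
// reusable connection: only the first request pays the TLS handshake.
func fetchManifests(repo: String, shas: [String]) async throws -> [String: String] {
    try await withThrowingTaskGroup(of: (String, String).self) { group in
        for sha in shas {
            group.addTask {
                let url = URL(string:
                    "https://raw.githubusercontent.com/\(repo)/\(sha)/Package.swift")!
                let (data, _) = try await URLSession.shared.data(from: url)
                return (sha, String(decoding: data, as: UTF8.self))
            }
        }
        var manifests: [String: String] = [:]
        for try await (sha, body) in group { manifests[sha] = body }
        return manifests
    }
}
```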
```bash
# Try it yourself - setup
git clone --mirror https://github.com/apple/swift-nio.git /tmp/nio
cd /tmp/nio
git tag -l '2.*' | sort -V | tail -20 > /tmp/nio-tags.txt
while read t; do git rev-parse "$t^{commit}"; done < /tmp/nio-tags.txt > /tmp/nio-shas.txt

# Git path (~600ms)
time (while read t; do git ls-tree "$t" -- Package.swift > /dev/null; git cat-file -p "$t":Package.swift > /dev/null; done < /tmp/nio-tags.txt)

# HTTP concurrent (~120ms)
time (while read sha; do curl -sL -o /dev/null "https://raw.githubusercontent.com/apple/swift-nio/$sha/Package.swift" & [ $(jobs -r | wc -l) -ge 8 ] && wait -n; done < /tmp/nio-shas.txt; wait)
```
What about the edge cases?
- Submodules: Detected early via a CDN check for `.gitmodules`. Packages with submodules get a shallow clone (`--depth 1 --recurse-submodules --shallow-submodules`) instead of a ZIP. Still much smaller than a full mirror.
- Branch/revision pins: Fall back to standard git. Source archives only work with version-pinned dependencies.
- SSH URLs: Fall back to standard git. Archive downloads require HTTPS.
- Private repos: Work if HTTPS auth is configured (netrc, GITHUB_TOKEN, keychain). SSH-only auth falls back to git.
- Git LFS: ZIP archives contain pointer files, not actual LFS content. Rare in Swift packages.
- Any failure: Falls back to `git clone --mirror` per dependency. One failing package doesn't affect others. (The selection logic is sketched after this list.)
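A hypothetical sketch of that per-dependency selection - the type and function names are ours for illustration, not the PR's actual code:

```swift
// Per-dependency fetch selection, as described in the list above.
enum FetchStrategy {
    case zipArchive   // HTTPS origin + version pin, no submodules
    case shallowClone // submodules detected via the .gitmodules probe
    case fullMirror   // branch/revision pins, SSH URLs, or any failure
}

func selectStrategy(url: String, isVersionPinned: Bool, hasSubmodules: Bool) -> FetchStrategy {
    // SSH URLs and branch/revision pins always take the git path.
    guard url.hasPrefix("https://"), isVersionPinned else { return .fullMirror }
    return hasSubmodules ? .shallowClone : .zipArchive
}
```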
Manifest variants
Version-specific manifests (`Package@swift-6.0.swift`) are a rarely used feature - scanning all 9,873 packages from the Swift Package Index, only 81 (0.8%) use them, mostly for the Swift 5/6 transition. Source archives handle them via lightweight HEAD requests to probe for variants, with negligible overhead for the 99.2% of packages that use only the base `Package.swift`.
*Chart: all 81 Swift Package Index packages using manifest variants, by Swift version.*
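The probe itself is just a HEAD request per candidate variant - roughly like this (the function name is illustrative, not the PR's code; assumes async URLSession is available):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking // URLSession lives here on Linux
#endif

// Probe the CDN for a version-specific manifest variant with a HEAD
// request; a 200 means Package@swift-<version>.swift exists at that commit.
func manifestVariantExists(repo: String, sha: String, toolsVersion: String) async throws -> Bool {
    var request = URLRequest(url: URL(string:
        "https://raw.githubusercontent.com/\(repo)/\(sha)/Package@swift-\(toolsVersion).swift")!)
    request.httpMethod = "HEAD"
    let (_, response) = try await URLSession.shared.data(for: request)
    return (response as? HTTPURLResponse)?.statusCode == 200
}
```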
Background: how we got here
As our project grew we explored several approaches to reduce dependency resolution time and disk usage. We have 50+ repositories on a development machine, and SPM clones the full git mirror per dependency for each project - swift-nio alone is 70 MB, duplicated across every project that uses it. With that many repositories, disk usage compounds quickly and swift package update becomes a significant part of the development cycle.
The journey from CI caches to package registries to source archives
We first tried caching .build/ between CI runs - save it to S3 and restore on the next build to get incremental builds. GitHub's cache action wasn't an option because our self-hosted runners are geographically far from GitHub infrastructure, so we had to run our own S3 cluster locally. Even with local S3, compressing and shipping the bytes took longer in most cases than just re-resolving from scratch. We also have a SwiftUI Xcode project that pulls in all our server code, where the build state is even larger (~4 GB). We tried various timestamp restoration tricks to preserve Xcode's incremental build state, but the time spent on zstd compression, S3 transfer, and restoration was often longer than the actual build.
We investigated the package registry next - AWS CodeArtifact, Tuist, and Artifactory - each viable, but with setup and operational overhead that didn't fit our use case. So we built our own registry. It went through several iterations:
- Full archive mirroring - download ZIPs from GitHub, store them in S3, serve them from the registry. Works, but the registry has to be populated with external tooling.
- Signed S3 URLs - pre-signed URLs so clients download directly from S3 (Cloudflare R2 for free egress). Still requires populating the registry.
- Stateless on-demand proxy - redirects ZIP downloads straight to GitHub, caches only metadata. No S3 storage, no population step, can run multiple instances.
Each iteration simplified the infrastructure, but even with a working registry there were usability gaps when mixing registry and git sources:
- The mirror-based URL mapping requires exact string matching (.git suffix, case sensitivity) between GitHub URLs and registry package IDs. For 100 dependencies, that means building and distributing a mapping file with every exact URL-to-ID pair.
- Even after converting all top-level dependencies to the registry, transitive dependencies still went via the git path. Another flag (`--replace-scm-with-registry`) is needed on every command.
- The Swift project identified a public registry as a 2023 focus area, and work is ongoing, but in the meantime there's an opportunity to improve things without registry infrastructure.
- Xcode shows duplicate packages when mixing registry and git sources - one from git, one from registry, with different icons.
Looking at how other ecosystems handle this was instructive. Go uses ZIP archives via an HTTP proxy - downloading the CockroachDB source tree as a ZIP took 10 seconds vs nearly 4 minutes for git clone. Rust has served tarballs from crates.io since 2014, never using git-based source distribution. This suggested a similar approach could work for Swift, using existing hosting infrastructure directly.