I recently explored whether Swift's type system and compiler could eliminate runtime overhead in encoding, and the results were surprising: a 33% performance improvement over Foundation's JSONEncoder using clean, maintainable code.
The Core Idea
Instead of runtime configuration with mutable properties, the encoder's type IS its configuration. The encoder itself is a generic type with no stored state and exposes a single pure function:
```swift
(Input) throws -> Output
```
How it works
The pattern leverages three key Swift features:
1. Protocol Composition via Associated Types
Strategies are composed through protocols with associated type requirements:
```swift
public protocol JSONEncodingStrategies: EncodingStrategies {
    associatedtype DateStrategy: DateEncodingStrategy
    associatedtype KeyTransform: KeyTransformStrategy
    associatedtype DataStrategy: DataEncodingStrategy
    associatedtype FloatingPointStrategy: FloatingPointEncodingStrategy
}
```
Each associated type has its own protocol defining the strategy interface:
```swift
public protocol DateEncodingStrategy: Sendable {
    static func encode(_ date: Date) throws -> String
}
```
2. Concrete Strategy Implementations
Strategies are zero-size types with static methods:
```swift
public struct ISO8601DateFormatterStrategy: DateEncodingStrategy {
    public static func encode(_ date: Date) throws -> String {
        date.formatted(.iso8601)
    }
}
```
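For illustration, a second strategy in the same style might look like this. The protocol name KeyTransformStrategy appears in the composition above, but this snake_case implementation is my own sketch, not the actual code:

```swift
// Hypothetical sketch: a key-transform strategy following the same pattern.
// The protocol shape and the snake_case conversion are assumptions.
public protocol KeyTransformStrategy: Sendable {
    static func transform(_ key: String) -> String
}

public struct SnakeCaseKeyTransform: KeyTransformStrategy {
    public static func transform(_ key: String) -> String {
        var result = ""
        for ch in key {
            if ch.isUppercase {
                // Insert an underscore before each uppercase letter, then lowercase it.
                result.append("_")
                result.append(contentsOf: ch.lowercased())
            } else {
                result.append(ch)
            }
        }
        return result
    }
}
```

Because the type is zero-size and the method is static, a call like `SnakeCaseKeyTransform.transform("userName")` can be resolved and inlined at compile time.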
3. Generic Encoder with Type-Level Configuration
The encoder is generic over both sink (output format) and strategies:
```swift
public struct StaticJSONEncoder<
    Sink: JSONSink,
    Strategies: JSONEncodingStrategies
>: Sendable {
    public init() {}

    @inlinable
    public func encode<T: Encodable>(_ value: T) throws -> Sink.Output {
        try Self.encode(value)
    }

    // ... implementation
}
```
Usage
Compose an encoder by specifying types:
```swift
let encoder = StaticJSONEncoder<JSONDataSink, StandardJSONEncodingStrategies>()
let data = try encoder.encode(myModel)
```
Different strategy combinations create different types, each fully specialized by the compiler.
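To make that concrete, here is a self-contained miniature of the pattern with simplified names (not the Gist's code): two date strategies plugged into a generic encoder, where each combination is a distinct, fully specialized type.

```swift
import Foundation

// Miniature of the type-level configuration pattern (names are illustrative).
protocol MiniDateStrategy {
    static func encode(_ date: Date) -> String
}

enum ISO8601Strategy: MiniDateStrategy {
    static func encode(_ date: Date) -> String {
        ISO8601DateFormatter().string(from: date)
    }
}

enum EpochSecondsStrategy: MiniDateStrategy {
    static func encode(_ date: Date) -> String {
        String(Int(date.timeIntervalSince1970))
    }
}

// The generic parameter IS the configuration; every strategy call is
// statically dispatched and can be inlined by the compiler.
struct MiniEncoder<Strategy: MiniDateStrategy> {
    func encode(_ date: Date) -> String { Strategy.encode(date) }
}

let epoch = Date(timeIntervalSince1970: 0)
print(MiniEncoder<ISO8601Strategy>().encode(epoch))       // prints "1970-01-01T00:00:00Z"
print(MiniEncoder<EpochSecondsStrategy>().encode(epoch))  // prints "0"
```

`MiniEncoder<ISO8601Strategy>` and `MiniEncoder<EpochSecondsStrategy>` are unrelated concrete types, so the compiler emits a specialized encode path for each.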
The pattern enables aggressive compiler optimization:
- All associated types resolved at compile time - the compiler knows the exact implementation for every strategy method
- Static dispatch - no protocol witnesses or vtable lookups, just direct function calls
- Full inlining - `@inlinable` allows cross-module optimization, so the entire encoding pipeline gets inlined
- Specialized overloads - type-specific fast paths for primitives (String, Int, Bool, etc.) eliminate dynamic casts
- Single-pass architecture - direct buffer writing vs Foundation's two-phase approach (tree construction → serialization)
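The last two bullets can be sketched like this. This is a hedged illustration under assumed names, not the actual implementation: overloads chosen by static overload resolution, each writing JSON bytes straight into a buffer with no intermediate tree.

```swift
// Illustrative sketch of type-specific fast paths (names assumed).
struct Writer {
    var buffer: [UInt8] = []

    // Each overload is picked at compile time; no Encodable machinery,
    // no dynamic casts, no intermediate value tree.
    mutating func write(_ value: Int)  { buffer.append(contentsOf: Array("\(value)".utf8)) }
    mutating func write(_ value: Bool) { buffer.append(contentsOf: Array((value ? "true" : "false").utf8)) }
    mutating func write(_ value: String) {
        buffer.append(0x22) // opening quote
        buffer.append(contentsOf: Array(value.utf8)) // NOTE: real code must escape special characters
        buffer.append(0x22) // closing quote
    }
}

var w = Writer()
w.write(42)
print(String(decoding: w.buffer, as: UTF8.self)) // prints "42"
```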
I wrote a simple benchmark: 1 million encodings of a small model
- Foundation: 3.36 seconds
- StaticJSONEncoder: 2.26 seconds
- 33% improvement
Sure, there are some trade-offs:
- Strategies are fixed at compile time and cannot be changed at runtime. This is rarely a bummer in practice, though.
- Generic specialization can cause code bloat, although the resulting code is quite minimal here.
- Possibly longer compile times.
Where is the code?!
The implementation, with detailed documentation and benchmarks, is available here (not yet a proper package): Static JSON Encoder · GitHub
CAUTION: the implementation in the Gist above is buggy regarding the encoding algorithm. I have already fixed it elsewhere, but had to increase the complexity of the implementation, so the performance got a little worse - still faster than Foundation, though. The added complexity has to do with the given API, which is not ideal: the underlying Encoder does not send events when a container is finished. In order to know when to output a closing bracket, for example, the implementation needs to track the container hierarchy, essentially via a stack. That is minuscule code, but it shows - because a few CPU cycles still add up in this implementation.
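The stack-based fix described above might look roughly like this (a hedged sketch with illustrative names, not the actual fixed implementation): the writer pushes a marker for every opened container and, since Encoder never signals "container finished", emits closing brackets by unwinding the stack when the hierarchy changes.

```swift
// Illustrative sketch: tracking the container hierarchy with a stack
// so closing brackets can be emitted without explicit "end" events.
struct JSONWriter {
    enum Container: Character {
        case object = "}"
        case array = "]"
    }

    private(set) var output = ""
    private var stack: [Container] = []

    mutating func beginObject() { output.append("{"); stack.append(.object) }
    mutating func beginArray()  { output.append("["); stack.append(.array) }

    // Pop containers until `depth` remain, emitting each one's closing bracket.
    mutating func unwind(to depth: Int) {
        while stack.count > depth {
            output.append(stack.removeLast().rawValue)
        }
    }
}

var w = JSONWriter()
w.beginObject()
w.beginArray()
w.unwind(to: 0)
print(w.output) // prints "{[]}"
```

The bookkeeping is tiny, but as noted above, those extra push/pop cycles on every container are exactly where the measured overhead comes from.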
Questions for the community:
- Have you encountered patterns where compile-time configuration significantly improved performance?
- Are there other Foundation APIs where this pattern could be beneficial?
- What are your thoughts on the compile-time vs runtime flexibility trade-off?