The Context Passing Problem

I want to touch on something tangentially related to the Generic Connection Pool post I made recently. It's been discussed a lot amongst the SSWG and Vapor community and I wanted to put my thoughts down and explain Vapor's approach for the record. I'm not sure if it has a better name, but I'm calling it "the context passing problem".

This problem exists in a lot of situations, but for the purpose of this post I want to highlight connection pooling. When grabbing a connection from a pool, you will often want the connection to log to a different logger than the one it was created with. For example, in Vapor, each request has its own logger with a unique request identifier. When a connection is pulled from the pool, it is vital that this logger be used so that the logs can be correctly identified later.

Explicit context passing

The most straightforward way to implement this is to have all methods on the connection accept an "override" logger. This is the method AsyncHTTPClient employs. Let's call this explicit context passing. For example:

// This connection will be using whatever logger it 
// was created with.
let connection = try pool.requestConnection().wait()
defer { pool.releaseConnection(connection) }

// However, we want it to log to this special logger.
// Using AHC-style, we would just pass this in.
try connection.doThing(logger: customLogger).wait()

This method works great, but the obvious downside is that the developer must remember to pass the logger in. This problem becomes worse if you need to pass additional context, like BaggageContext for tracing. Imagine this API in a high-level Vapor app:

app.get("users") { req in 
    User.query(
        on: req.db, 
        logger: req.logger, 
        baggageContext: req.baggageContext
    ).all()
}


Protocol-based context passing

To avoid this, Vapor implements protocols similar to the following. Let's call this protocol-based context passing.

protocol Database {
    var eventLoop: EventLoop { get }
    var logger: Logger { get }
    var baggageContext: BaggageContext { get }
}

extension Database {
    func delegating(to eventLoop: EventLoop) -> Database
    func logging(to logger: Logger) -> Database
    func tracing(to baggageContext: BaggageContext) -> Database
}

A real example of this can be seen in PostgresNIO's PostgresDatabase protocol.

The protocol extensions simply wrap the type in a struct that temporarily overrides that property and implements the provided guarantees. This is thread-safe but does add indirection.
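A minimal sketch of what such a wrapping struct might look like (the type and property names here are illustrative, not PostgresNIO's actual internals):

```swift
// Hypothetical wrapper returned by `logging(to:)`: it reports the
// override logger while forwarding everything else to the base.
private struct LoggingOverrideDatabase: Database {
    let base: Database
    let logger: Logger // the override

    var eventLoop: EventLoop { self.base.eventLoop }
    var baggageContext: BaggageContext { self.base.baggageContext }
}

extension Database {
    func logging(to logger: Logger) -> Database {
        LoggingOverrideDatabase(base: self, logger: logger)
    }
}
```

Any core query requirement on Database would be forwarded to `base` the same way, with the wrapper's `logger` substituted in.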

Vapor uses these protocols and their extensions to implement req.db in a way that all of the required context is always available, without the developer needing to pass things manually. So the high level API looks more like this:

app.get("users") { req in 
    User.query(on: req.db).all()
}


Impact on high-level frameworks

Being able to hide context passing like this is critical to create nice, high-level APIs. Having this built into libraries like PostgresNIO natively is very important for Vapor. When libraries do not offer these types of protocols, Vapor must wrap them. AsyncHTTPClient for example uses the explicit-passing method and thus Vapor wraps its interface behind a Client protocol.

Wrapping AsyncHTTPClient is by no means the end of the world; in fact, sometimes wrapping is the better option anyway to make APIs feel more comfortable in the framework. However, there are some situations where this required wrapping can create a lot of unnecessary code duplication.

The main example that has come up recently is RediStack. This package currently offers a protocol similar to the Database protocol declared above: RedisClient. However, instead of implementing a logging(to:) method, it uses setLogging(to:) which is not thread-safe. Because of this, one of the recommendations has been to move to AHC-style "explicit passing":

This change would by no means prevent Vapor from using RediStack, but it would mean that we would need to wrap it in a protocol similar to how we use AHC. This is rather unfortunate since, unlike AHC, RediStack offers a large number of wonderful convenience methods:

By wrapping RedisClient in a new protocol, Vapor hides access to its methods and must re-implement them in its public API. Not only does this create duplicated work and more sources of bugs, it also means that as new methods are added to RediStack, they will not be immediately available to users of Vapor.

By using the protocol-based context passing approach instead, RediStack would allow its RedisClient protocol to be used directly in high-level frameworks like Vapor.

I'm sure this issue will come up in many more places as new server-side Swift packages are created.

I'm very interested to hear everyone's thoughts on this. Thanks!


In AHC's case, the logger parameter is kind of temporary; it should be replaced with context, see:

Pinging @ktoso, as he is quite knowledgable in this topic.

Whether logger should be replaced by context or not is related to, but slightly different from, the point I'm trying to make here. As far as AHC is concerned, I think given its current design, it would make more sense for context: BaggageContext to be a separate parameter. In other words, like:

let client: HTTPClient = ...
client.get("", logger: myLogger, context: myBaggageContext())

However, if we're rethinking its design (major break), then I do think we should at least consider a protocol-based context passing design.

I think some code examples specific to AHC will help clarify what I mean by that:

// Instead of being a concrete type, HTTPClient is a protocol.
protocol HTTPClient {
    var eventLoopPreference: EventLoopPreference { get }
    var logger: Logger { get }
    var baggage: BaggageContext { get }

    // Just the core essential method for executing a request 
    // goes on the protocol.
    func execute(
        request: Request, 
        deadline: NIODeadline?
    ) -> EventLoopFuture<Response> 
}

// Extensions add in all the nice helpers.
// This means high-level frameworks get these for free!
extension HTTPClient {
    func get(url: String, deadline: NIODeadline? = nil) -> EventLoopFuture<Response> { ... }
    func post(url: String, deadline: NIODeadline? = nil) -> EventLoopFuture<Response> { ... }
}

// Extensions offer ways of getting new clients that use a different
// event loop, log somewhere else, or trace to different context.
// These methods create _copies_ of the client and do not attempt to
// mutate the client they are called on (thread-safe).
extension HTTPClient {
    func delegating(to eventLoop: EventLoopPreference) -> HTTPClient
    func logging(to logger: Logger) -> HTTPClient
    func tracing(to baggage: BaggageContext) -> HTTPClient
}

// One or more concrete implementations of HTTPClient are provided.
// The single connection / pooled variants could be different, or just one.
// Users can also create their own conformers to HTTPClient that do special logic.
class HTTPConnection: HTTPClient { ... }
class HTTPConnectionPool: HTTPClient { ... }

// Example usage:
let eventLoopGroup = MultiThreadedEventLoopGroup(...)
let client: HTTPClient = HTTPConnectionPool(eventLoopGroup, ...)

// On main thread, use any EL / logger / tracing, don't care. 
try client.get("").wait()

// On a specific EL, with logger and tracing I need to use.
// We'll use Vapor for example.
app.get("test") { req in 
    client
        .delegating(to: req.eventLoop)
        .logging(to: req.logger)
        .tracing(to: req.baggage)
        .get("")
}

// This allows the high-level framework to expose an HTTPClient that 
// already has everything the user needs.
extension Request {
    var client: HTTPClient {
        // `sharedClient` stands in for the app's underlying HTTPClient.
        sharedClient
            .delegating(to: self.eventLoop)
            .logging(to: self.logger)
            .tracing(to: self.baggage)
    }
}

// Example using this convenience API with all the hard parts handled.
app.get("test") { req in 
    req.client.get("")
}

For AHC in particular, I'm not as strongly opinionated here. I think explicit context passing is honestly fine given this is a fairly low-level library. Vapor would likely wrap AHC even if it did offer these APIs so that it works better with our high-level content negotiation APIs.

However, for libraries like RediStack, APNSwift, etc., needing to duplicate all the convenience methods is a real problem. I could also see other frameworks besides Vapor that don't have high-level content negotiation APIs wanting to use AHC's convenience methods directly without needing to reimplement them.

I pinged on chat but in case you missed it: sorry, but I ran out of time in the day/week to dive deep here. I want to give this discussion the proper deep dive it deserves before replying. I just took some days off, so I'm not sure I'll be able to get to this over the next few days, but I'll try!

Short version is though: completely hiding context parameters is not viable, making them as painless as possible is a goal we should pursue though. I merged some work toward this in baggage context last week...

I promise to get back here with a full analysis. Thank you very much for all the specific links; I want to give all those a proper look before concluding anything. Thanks for your patience and sorry for the timing of my time off. I'll get to this as soon as I can!


I've just been implementing Logger support in AWS SDK Swift and have been coming across similar issues. So far my implementation is very similar to AHC in that every AWS command takes a Logger struct. With this method every command inherits the logger metadata from the request that made the call. But as you mention @tanner0101 this only works if the user remembers to supply a Logger for each function call.

I do quite like the logging(to:) style as an alternative to passing the Logger in every function. I might consider changing to that. It means we can add tracing at a later date without creating breaking changes to all our APIs. I guess it would change

s3.listObjects(.init(bucketName: "myBucket"), logger: Logger)

to

s3.logging(to: Logger).listObjects(.init(bucketName: "myBucket"))

It's not a solution to ensuring the user adds a logger to a call, but it does mean additional context can be added without breaking changes.


I was waiting on tackling re-implementing logging in RediStack before replying so I had some design and evidence-based opinions to share.

The refactor can be seen here:

Foremost, I think this is an important topic; having some guidelines established will help the ecosystem recognize the design patterns and have clear ways of interoperating between them.

I agree with Tanner that the way AHC is passing around the logger makes sense for itself, as there are a lot of details going on with a given request - and the library is lower level than RediStack is.

However, adopting that style is a non-starter for RediStack: the sheer number of commands supported by Redis, plus the number of Swift APIs that I have written (and will continue to develop) on top of them, makes it a maintenance nightmare to add even a single new parameter.

Smoke & the AWS services' libraries will be in a similar situation.

As part of my refactor, I went through my entire codebase to see not only what levels I produce logs at, but also where and why.

I found that the vast majority of cases I log at trace or debug level, which is good! No one likes unsolicited noisy logs from a subsystem!

I also found that the majority of the cases where I log at more severe levels, like error or critical, are more of a "configuration", contextless situation, such as when triggering a precondition failure.

With this in mind, I took the approach that RediStack delineates between "user-space" and "system-space" logs, with both being configurable for additional context, albeit in different ways.

For "system-space" logs, every Connection[Pool] uses a "logger prototype" intended for establishing a standard labeled Logger instance, but even that can be overridden by passing a custom Logger instance in both the Connection & ConnectionPool initializers.

That logger will be used liberally for 90% of logs generated by the library.
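That "logger prototype" idea could be sketched roughly like this (illustrative names, not RediStack's actual code):

```swift
import Foundation
import Logging

// A hypothetical connection that stamps standard metadata onto a
// prototype logger at init; callers may supply their own instead.
final class ExampleRedisConnection {
    let systemLogger: Logger

    init(logger: Logger? = nil) {
        // Fall back to the library's standard label if no override is given.
        var log = logger ?? Logger(label: "RediStack.RedisConnection")
        log[metadataKey: "connection-id"] = "\(UUID())"
        self.systemLogger = log
    }
}
```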

For the other 10% that will be the hot path of the majority of user code, I take the protocol-based approach Tanner described of using an internal implementation that is masked behind an existential:

protocol RedisClient {
  func send(command: String, with arguments: [RESPValue]) -> EventLoopFuture<RESPValue>
  func logging(to: Logger) -> RedisClient // having the `some` keyword here would be nice!
}

internal protocol RedisClientWithCustomLogging: RedisClient {
  func send(/* same as above */, loggingTo: Logger?) -> EventLoopFuture<RESPValue>
}

internal struct CustomLoggingRedisClient<Client: RedisClientWithCustomLogging>: RedisClient {
  init(client: Client, logger: Logger)
  func send(/* RedisClient requirement */)
  // send invokes the send(command:with:loggingTo:) method
}

This made it much clearer how and when to attach RediStack library metadata to the loggers, while staying thread-safe and without much refactoring of the rest of the public API.

And if I were to add additional context parameters, the surface to do so is much smaller.

In both situations, it's left up to the developer at the end of the day to make sure they're passing the context they want - but at least this way it's not at every single method, where my use cases are likely to be heavily chained.


I created a small proof of concept repo to showcase how the protocol-oriented context passing pattern can work in practice. It covers EventLoop preference, logging, tracing, and connection pooling. It includes tests to demonstrate usage and prove it works.

There are quite a few different ways that the same effect could be achieved here. I just went for what was simplest to convey the concept.

The key requirements are:

  • 1: When you specify a custom logger (or EL preference, or tracing) it is passed through to all layers. This test proves that.

  • 2: The convenience APIs are not cluttered with context passing. As you can see here and here. This makes the Client protocol acceptable for usage in a high-level framework.

Things I think could be improved:

  • 1: CustomClient indirection. If you call Connection.logging(to:), you no longer have a Connection. You have a private type that needs to forward your calls. This prevents casting back to Connection and has overhead. Maybe this could be fixed by making the override methods (i.e., logging(to:)) protocol requirements as mentioned here?

  • 2: Connection and ConnectionPool are exposing baggage, logger, and eventLoopPreference properties that make varying levels of sense. Maybe Connection / ConnectionPool should not conform to Client directly, but be able to create a Client?


Thanks for the small prototype to work with that isolates many of the details we're discussing.

One of the problems that these types of reductions have is that they miss many small details of a larger-scale application - namely, edge cases for specific things.

  1. How do you handle deinit warnings to give more details to what's going on in the system?
    • it can be argued that this is a "superfluous" log and the assertion is good enough
  2. How do you provide a custom logger such as in the send method when you establish something outside of that event lifecycle - such as during init?
    • it can be argued that this situation could be handled differently, rather than with a static configuration callback

In your sample the following code will require every function that uses client to go through three levels of indirection, which isn't ideal.

        let client = conn
            .logging(to: clientLogs.logger)
            .eventLoop(prefer: .delegate(on: clientLoop))
            .tracing(with: clientBaggage)

If you add logging(to:), eventLoop(prefer:), and tracing(with:) to the Client protocol, you could override them in CustomClient to reduce this down to one level of indirection. The CustomClient version, instead of creating a new indirection, just creates a new copy of the CustomClient with all the previous changes.

struct CustomClient: Client {
    func logging(to logger: Logger) -> Client {
        var newClient = self
        newClient.loggerOverride = logger
        return newClient
    }
}

As far as it goes for AWSSDKSwift I'm thinking of going down a slightly different path. First I'll add a little context. Each AWS service struct is structured as follows

struct AWSService {
    /// client used to process AWS requests
    let client: AWSClient
    /// service configuration
    let serviceConfig: AWSServiceConfig
}

The service then has an individual method for each REST operation from that service. A service can have hundreds of operations. Each of these methods follows this format.

    public func operation(_ input: OperationRequest, logger: Logger) -> EventLoopFuture<OperationResponse> {
        return client.execute(
            operation: "MyOperation", 
            path: "/", 
            httpMethod: .POST,  
            input: input,
            serviceConfig: serviceConfig, 
            logger: logger)
    }

We are passing in the ServiceConfig to every execute call. Given this, my plan is to hang the Logger, BaggageContext, etc. off the ServiceConfig. This would mean I don't need the additional logger parameter in operation or client.execute.

First I would add the following to AWSServiceConfig. I can use CoW to avoid copying all the other ServiceConfig members in the logging method.

struct AWSServiceConfig {
    var logger: Logger

    func logging(to logger: Logger) -> AWSServiceConfig {
        var config = self
        config.logger = logger
        return config
    }
}
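The CoW mentioned above could work by keeping the bulky configuration members behind a shared reference, so that logging(to:) only copies a small struct shell. A sketch under that assumption (not the actual AWSSDKSwift implementation):

```swift
import Logging

// Shared, immutable bulk of the configuration.
final class ServiceConfigStorage {
    let region: String
    let endpoint: String
    init(region: String, endpoint: String) {
        self.region = region
        self.endpoint = endpoint
    }
}

struct AWSServiceConfig {
    private let storage: ServiceConfigStorage // shared, never deep-copied
    var logger: Logger                        // cheap per-context value

    init(storage: ServiceConfigStorage, logger: Logger) {
        self.storage = storage
        self.logger = logger
    }

    func logging(to logger: Logger) -> AWSServiceConfig {
        var config = self      // copies only the struct shell;
        config.logger = logger // `storage` is still shared
        return config
    }
}
```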

And then I'd add a similar function to the service struct AWSService.

extension AWSService {
    func logging(to logger: Logger) -> AWSService {
        return AWSService(client: self.client, serviceConfig: self.serviceConfig.logging(to: logger))
    }
}

Now the API works in a similar way to how @tanner0101 describes above:

let service = AWSService()
let serviceWithLogging = service.logging(to: logger)
let result = serviceWithLogging.operation(input) 

Thoughts? There are so many ways to do this. I could instead go down the indirection method Tanner mentions above, but I don't see any real advantage to that over holding a Logger and BaggageContext in the ServiceConfig all the time and just creating a new ServiceConfig for each context call.

I'll add an example of deinit error / warning :+1:. I think the best thing we can do here is use the logger we were initialized with or have the init accept two loggers like:

    init(
        // Details about initializing this connection logged here.
        connectLogger: Logger,
        // Things that happen to this connection unrelated to any
        // method calls that get a logger will log here.
        generalLogger: Logger
    )

Similar answer as the previous question I think. You can see how the logger in the connect method is used to send the "connection.init" log. This would be equivalent to the connectLogger from my previous snippet.

Connection pool passes the contextual logger there automatically:

Yup, this is true. I'll mess around with the custom override methods as protocol requirements and see what I can come up with.

That looks great. The one comment I would have is that Logger / EventLoopPreference / BaggageContext don't make as much sense to me on a Configuration struct as they do on a Client. Would it be possible to instead add that context to AWSClient? You'd get a very similar effect:

extension AWSService {
    func logging(to logger: Logger) -> AWSService {
+        return AWSService(client: self.client.logging(to: logger), serviceConfig: self.serviceConfig)
-        return AWSService(client: self.client, serviceConfig: self.serviceConfig.logging(to: logger))
    }
}

Either that or a separate property like context that contains Logger, Baggage, etc.

struct AWSService {
    /// client used to process AWS requests
    let client: AWSClient
    /// service configuration
    let serviceConfig: AWSServiceConfig
    /// logger, baggage, etc
    let context: AWSContext
}

That looks great. The one comment I would have is that Logger / EventLoopPreference / BaggageContext don't make as much sense to me on a Configuration struct as they do on a Client . Would it be possible to instead add that context to AWSClient ? You'd get a very similar effect:

Copying the AWSClient is not so easy. It manages the AWS credential acquisition, and this can be an active process; I'm not sure what would happen if I copied it while that was in progress. If I were to use the AWSClient, I would most likely go for your pattern above.

I know what you mean about adding it to the Config not making sense. In my actual implementation I have created a ServiceContext struct holding the Logger, BaggageContext and ServiceConfig.

Either that or a separate property like context that contains Logger, Baggage, etc.

I had thought of a separate context property as an option. Might give that a go, see how I feel about that.

Hi all,

One more use case for consideration:

I was working on an X-Ray tracer recently.
I am hoping to be able to propagate the trace context in the BaggageContext.
I think it will work well only if the context has "operation" scope, NOT "service" scope like in the example above.

Below is how it could work (code for illustration purposes):

// the (context) baggage including but not limited to trace context
var baggage: BaggageContext

let segment = tracer.beginSegment(name: "Segment 1", baggage: baggage)

// note that segment.baggage passes whatever was in the baggage
// AND updates the X-Ray trace context (keeps the trace id but updates parent (segment) id)
// so that segments (aka spans) created "inside" have segment "Segment 1" as their parent
// s3 client could create new subsegments and/or pass along context to instrumented HTTP client
s3.listObjects(.init(bucketName: "myBucket"), baggage: segment.baggage)

// other S3 operations could have different context
let segment2 = tracer.beginSegment(name: "Segment 2", baggage: baggage)
s3.listObjects(.init(bucketName: "myBucket2"), baggage: segment2.baggage)

Sounds good. I've also updated my example to use protocol requirements on Client instead of just extensions. This cuts down on the indirection and also seems cleaner / easier to understand (at least to me):

The way I implemented ConnectionContext (a struct wrapping a ref type) might be helpful for AWSClient:
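For reference, "a struct wrapping a ref type" might look like the following sketch (the type and property names are made up for illustration):

```swift
import Logging

// The class holds state that must not be duplicated, e.g. an
// in-flight credential refresh; the struct layers cheap,
// copyable context on top of a shared reference to it.
final class AWSClientCore {
    // credential acquisition, connection state, etc.
}

struct AWSClientContext {
    let core: AWSClientCore // shared reference, safe to "copy"
    var logger: Logger
    var baggage: BaggageContext

    func logging(to logger: Logger) -> AWSClientContext {
        var copy = self
        copy.logger = logger // only the context changes; `core` is shared
        return copy
    }
}
```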

You would be able to do "operation" specific baggage like so:

var baggage: BaggageContext

// segment 1
let segment = tracer.beginSegment(name: "Segment 1", baggage: baggage)
s3.tracing(with: segment.baggage)
    .listObjects(.init(bucketName: "myBucket"))

// segment 2
let segment2 = tracer.beginSegment(name: "Segment 2", baggage: baggage)
s3.tracing(with: segment2.baggage)
    .listObjects(.init(bucketName: "myBucket2"))
1 Like

If I am understanding this correctly, you are not only proposing that the context being passed be a Protocol/Interface but that all its requirements should also be Protocols/Interfaces.

protocol Database {
    var eventLoop: SomeEventLoopProtocol { get }
    var logger: SomeLoggerProtocol { get }
    var baggageContext: SomeKeyValueContextProtocol { get }
}

If that is the case, then it's too bad that Swift's current tooling is not able to help us make this contract/convention more explicit.

Nope, just that context be passed via a protocol instead of explicitly to each method call. The actual types on that protocol are the real types (Logger, BaggageContext, etc). You can see an example of a protocol like this in the demo I made here:

Ok, so that's pretty much what I've done in my refactor for RediStack.

The only difference, both from the comment on the MR and in here, is that you suggest propagating the runtime user-space logger all the way down into the pool when trying to provide a connection - which I currently treat as "system-space" logs.

I do think that requesting a connection from the pool should log to the most contextual logger. Assume you are trying to debug a slow request in your application that is being caused by an exhausted pool. Without the request identifier attached to the pool logs, you can't track which pool operations were related to the request in question. There are some things that really have no context, like Connection.deinit, or perhaps a pool that uses scheduled tasks to do cleanup. But IMO all methods called as a direct result of a method with special context should use that context.
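Concretely, that means the pool's own plumbing logs with whatever logger the triggering call supplied, so pool events carry the request ID. A hedged sketch (member names like availableConnections and leaseConnection are hypothetical):

```swift
extension ConnectionPool {
    // The caller's contextual logger rides along through the pool's
    // internals, so waits and exhaustion are attributable to a request.
    func requestConnection(logger: Logger) -> EventLoopFuture<Connection> {
        logger.debug("requesting connection from pool")
        if self.availableConnections.isEmpty {
            logger.debug("pool exhausted, caller will wait")
        }
        return self.leaseConnection(logger: logger)
    }
}
```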

This reminds me a little of SwiftUI's Environment and Dispatch's QoS, both of which propagate information down into callers, potentially through intermediate layers that don't understand the context.

Personally I'd be interested in a language-level solution so that everyone isn't making up their own incompatible tool. Just spitballing here, but vaguely like

@threadscoped var activeLogger = DefaultLogger()

public func connectToDatabase() {
    activeLogger.log("Connecting to database...")
}

import Framework

connectToDatabase() // will log with the default logger

with activeLogger = CustomLogger() {
    connectToDatabase() // will log with custom logger
}

The advantage of thread storage is you can trampoline the context between a high-level caller and any number of intermediate function calls that may not know/care about participating in a context-passing bucket-brigade.

The advantage of a language feature is that packages that desire to coordinate on a context can do so. For example, if activeLogger is defined somewhere convenient (stdlib or a popular package), everyone can log into that, and at top level a specific logger might be chosen. For a web framework this might be a new logger per-request, for an app it might separate by subsystem, etc.

Thread scoping doesn't really work here, because a single thread can host thousands of individual and interleaved connections.
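Worth noting: Swift's structured concurrency later shipped task-local values, which have roughly this shape but are scoped to a task rather than a thread, sidestepping the interleaving problem. A small sketch using swift-log:

```swift
import Logging

// Task-local values propagate down a structured-concurrency call
// tree, even when many tasks interleave on the same thread.
enum LoggingContext {
    @TaskLocal static var logger = Logger(label: "default")
}

func connectToDatabase() {
    LoggingContext.logger.info("Connecting to database...")
}

// Rebind the logger for everything called inside this scope.
LoggingContext.$logger.withValue(Logger(label: "per-request")) {
    connectToDatabase() // logs with the per-request logger
}
```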
