Thanks @artemredkin, totally missed that post! Looks like a great start. I have two pieces of feedback: a) general upload API feedback, and b) regarding back-pressure & composition with existing APIs.
Generally about the API
I think we should remove this enum and build all the special cases (`ByteBuffer`, `String`, `Data`) on top of the general version (`stream`). The API can keep its look and feel by making it `public static func data(_ data: Data) -> BodyProvider { ... }`, but this way we can later extend it without breaking the public API.
One other point: we should probably support chunked-encoding uploads (i.e. without passing in `size`).
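To make that concrete, here's a minimal sketch of what I mean (names like `BodyProvider` and `ChunkWriter` are placeholders, not the actual API, and the real `stream` would of course hand out a writer backed by the channel):

```swift
import Foundation

// Sketch only; `BodyProvider` and `ChunkWriter` are placeholder names.
typealias ChunkWriter = (Data) -> Void

struct BodyProvider {
    // `nil` length would mean chunked transfer-encoding (no Content-Length).
    let length: Int?
    let provide: (ChunkWriter) -> Void

    // The single general entry point everything else builds on.
    static func stream(length: Int? = nil,
                       _ provide: @escaping (ChunkWriter) -> Void) -> BodyProvider {
        return BodyProvider(length: length, provide: provide)
    }

    // Convenience factories keep the current look & feel…
    static func data(_ data: Data) -> BodyProvider {
        return .stream(length: data.count) { writer in writer(data) }
    }

    // …and new ones can be added later without breaking the public API,
    // which adding an enum case could not.
    static func string(_ string: String) -> BodyProvider {
        return .data(Data(string.utf8))
    }
}
```

Because `data` and `string` are just static factory functions rather than enum cases, extending the set of body kinds later is purely additive.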
It looks like the API supports back-pressure, but your example kind of invalidates that thought. Is the idea that a user is only allowed to call `writer` again once the future returned by the previous `writer` call has succeeded?
Because in your example you do

```swift
for _ in 0..<2 {
    var buffer = allocator.buffer(capacity: 4)
    buffer.writeString("1234")
    _ = writer(.byteBuffer(buffer))
}
```
which is obviously fine for 8 bytes, but should it maybe have been

```swift
let body: HTTPClient.Body = .stream(8) { writer in
    var buffer = allocator.buffer(capacity: 4)
    buffer.writeString("1234")
    return writer(.byteBuffer(buffer)).flatMap {
        var buffer = allocator.buffer(capacity: 4)
        buffer.writeString("1234")
        return writer(.byteBuffer(buffer))
    }
}
```

?
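If that's the contract, writing N chunks generalizes to chaining the futures. A self-contained sketch of that pattern (with a tiny hand-rolled future type standing in for NIO's `EventLoopFuture`, so none of this is the actual API):

```swift
// Minimal stand-in for EventLoopFuture, just enough for this sketch.
final class SimpleFuture {
    private var callbacks: [() -> Void] = []
    private var isDone = false
    func succeed() {
        isDone = true
        let pending = callbacks
        callbacks = []
        pending.forEach { $0() }
    }
    func whenSuccess(_ callback: @escaping () -> Void) {
        if isDone { callback() } else { callbacks.append(callback) }
    }
}

// Issue the next write only after the previous writer future has
// succeeded, i.e. the well-behaved, back-pressure-respecting pattern.
func writeSequentially(_ chunks: [String],
                       writer: @escaping (String) -> SimpleFuture,
                       done: @escaping () -> Void) {
    guard let chunk = chunks.first else { return done() }
    writer(chunk).whenSuccess {
        writeSequentially(Array(chunks.dropFirst()), writer: writer, done: done)
    }
}
```

With the real `EventLoopFuture` this would be the same shape, just `flatMap`-chained; the point is only that the `for` loop issues all writes up front, while this waits on each future.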
Back-pressure & composition
I think the upload streaming API has two important use-cases:
- uploading an also streamed request body (from swift-nio-http-client itself)
- uploading a file
Let's look into how those compose:
Uploading an also streamed request body
Would you provide an implementation of `HTTPClientResponseDelegate` that fits into the `writer` that `HTTPClient.Body` requires? If yes, how do we propagate back-pressure back into the `HTTPClientResponseDelegate`? Because I think right now we'd have a problem. Let's assume we have two simultaneous requests:
- one where we connect to a remote server and stream the response body, directly into
- another request to another server, where we stream the response body from (1) into (2)'s request body
So that'd be similar to a proxy use-case, but for server-to-server communication that's probably important. Now suppose we have this setup and the `HTTPClientResponseDelegate` can stream response-body bytes much faster than `HTTPClient.Body`'s `writer` can stream them into the request body. We would then "balloon" the bytes into memory and sooner or later crash out-of-memory, no?
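One way to close that gap (purely a thought experiment, not the current API) would be for the response delegate to signal when it has consumed a body part, so the client can stop reading from the socket until then. Sketched with completion callbacks to keep it self-contained:

```swift
// Hypothetical delegate shape; NOT the current HTTPClientResponseDelegate.
protocol BackPressuredResponseDelegate {
    // The client promises not to deliver the next part (and can suppress
    // channel reads) until `consumed` is called.
    func didReceiveBodyPart(_ part: [UInt8], consumed: @escaping () -> Void)
}

// Proxy-style delegate: forwards (1)'s response chunks into (2)'s request
// writer and only signals `consumed` once that write has completed, so the
// fast download is throttled to the speed of the slow upload.
final class ForwardingDelegate: BackPressuredResponseDelegate {
    private let write: ([UInt8], () -> Void) -> Void
    init(write: @escaping ([UInt8], () -> Void) -> Void) {
        self.write = write
    }
    func didReceiveBodyPart(_ part: [UInt8], consumed: @escaping () -> Void) {
        write(part) { consumed() }
    }
}
```

The same effect could be achieved by having the body-part callback return a future that the client waits on before reading more; the callback version above just avoids depending on NIO types in the sketch.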
(I realise this is more feedback about the download streaming API rather than the upload one because the upload one appears to support back-pressure but the download one doesn't AFAICS).
File IO (using NIO's `NonBlockingFileIO`)
```swift
let body: HTTPClient.Body = .stream(8) { writer in
    return myFileIO.readChunked(fileHandle: myFileHandle,
                                byteCount: size,
                                chunkSize: 128 * 1024,
                                allocator: myChannel.allocator,
                                eventLoop: myChannel.eventLoop) { byteBuffer in
        return writer(.byteBuffer(byteBuffer))
    }
}
```
where `myFileIO` is an instance of `NonBlockingFileIO`.
This seems to propagate the back-pressure nicely, because `NonBlockingFileIO`'s `readChunked` doesn't call its `chunkHandler` again until the future returned from the previous `chunkHandler` invocation has succeeded.
Is my assumption correct that `writer` would succeed the future it returns when it's ready for the next `writer` call? And what does the API do if `writer` is called before the previous call's future has succeeded? I imagine the API would buffer the bytes, but a well-behaved user just shouldn't do that. Accurate?