one possible attack vector against a NIO-based server is to POST/PUT large amounts of data in the hope of exhausting memory on the target machine, causing it to start swapping.
here’s a channelRead implementation that tries to limit the upload size to 32 MB:
private
var request:(head:HTTPRequestHead, stream:[UInt8])?

func channelRead(context:ChannelHandlerContext, data:NIOAny)
{
    switch self.unwrapInboundIn(data)
    {
    case .head(let head):
        self.receiving = head.isKeepAlive
        switch head.method
        {
        case .GET:
            self.request = nil
            // handle get request...

        case .POST, .PUT:
            self.request = (head, .init())

        case _:
            self.send(message: .init(status: .methodNotAllowed), context: context)
        }

    case .body(let buffer):
        guard case (let head, var body)? = self.request
        else
        {
            break
        }
        // prevent copy-on-write
        self.request = nil
        // 32 MB size limit
        if 1 << 25 < body.count + buffer.readableBytes
        {
            self.send(message: .init(status: .payloadTooLarge), context: context)
        }
        else
        {
            body.append(contentsOf: buffer.readableBytesView)
            self.request = (head, body)
        }

    case .end(_):
        guard case let (head, body)? = self.request
        else
        {
            // already responded
            break
        }
        self.request = nil
        // parse the request
        let operation:Server.Operation?
        switch head.method
        {
        case .POST:
            operation = .init(post: head.uri,
                address: self.address,
                headers: head.headers,
                body: body)

        case .PUT:
            operation = .init(put: head.uri,
                address: self.address,
                headers: head.headers,
                body: body)

        case _:
            fatalError("unreachable: collected buffers for method \(head.method)!")
        }
        if let operation:Server.Operation
        {
            // create an EventLoopPromise and submit the request for processing
            self.server.submit(operation, promise: self.accept(context: context))
        }
        else
        {
            self.send(message: .init(status: .badRequest), context: context)
        }
    }
}
what guarantees does swift-nio-http2 provide about the size of an HTTPPart? is it safe to accept HTTP body parts even with a size limit?
This attack vector is one that can only be introduced by user code. By default, NIO does not aggregate bodies but streams them.
None.
As a practical matter, if you have inserted no handlers but the decoder, the maximum size of the body part will be the maximum size of a single ByteBuffer in channelRead. For most sockets, this limit is fairly small, 8MB or less.
Yes. If you have received the body part, we've already allocated memory for it. Receiving it is safe.
This is an OS-level setting, and so it is configured in a number of places.
On Linux, you can see a system-wide setting in /proc/sys/net/ipv4/tcp_rmem. This prints three numbers: a minimum, default, and maximum buffer size in bytes. This field is writable, so you can change it. macOS uses a dynamic scaling model, so there is limited scope for adjustment in this way.
On both OSes you can pick a specific size at runtime by using the SO_RCVBUF channel option on a bootstrap, like so:
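(This is only a sketch; the 64 KB value is an arbitrary example, and the exact spelling of the option may differ between NIO versions.)

import NIOCore
import NIOPosix

// cap the kernel receive buffer for every accepted connection;
// 64 KB is an arbitrary example value, not a recommendation
let group = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)
let bootstrap = ServerBootstrap(group: group)
    .childChannelOption(ChannelOptions.socketOption(.so_rcvbuf), value: 65_536)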
Note that setting this too high or low will trigger runtime errors.
TLS maximum record sizes are not configurable.
Maximum HTTP frame sizes are limited by the HTTP/2 protocol setting SETTINGS_MAX_FRAME_SIZE, which is configured using .maxFrameSize in an HTTP/2 setting. You can pass these as the initial settings to the HTTP/2 handler to override its default choice.
HTTP/2 also has SETTINGS_MAX_CONCURRENT_STREAMS, which controls how many streams may be open concurrently on a single connection.
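A minimal sketch of overriding both of those settings when constructing the handler; the numbers are arbitrary examples, and the initializer's parameter labels may differ between swift-nio-http2 versions:

import NIOHTTP2

// override the server's initial SETTINGS frame; the values are examples only
let initialSettings: HTTP2Settings = [
    HTTP2Setting(parameter: .maxFrameSize, value: 1 << 14),     // 16 KiB frames
    HTTP2Setting(parameter: .maxConcurrentStreams, value: 16),  // per-connection stream cap
]
let http2Handler = NIOHTTP2Handler(mode: .server, initialSettings: initialSettings)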
...
NIO and your OS ship all of these with small, reasonable limits, except for the number of accepted connections: that isn't limited by default, but it can easily be done.
If you have a very resource-constrained system, you may need to limit them further.
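One way to add such a cap yourself is a handler in the server channel's pipeline that refuses freshly accepted child channels beyond a limit. This is only a hedged sketch: the handler name and the limit of 256 are illustrative choices, not an existing NIO option.

import NIOCore
import NIOConcurrencyHelpers

// sits in the *server* channel's pipeline, where every inbound message is a
// freshly accepted child Channel
final class AcceptLimitHandler: ChannelInboundHandler {
    typealias InboundIn = Channel
    typealias InboundOut = Channel

    private let openConnections = NIOLockedValueBox(0)
    private let limit: Int

    init(limit: Int = 256) { self.limit = limit }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        let child = self.unwrapInboundIn(data)
        let overLimit = self.openConnections.withLockedValue { (count: inout Int) -> Bool in
            if count >= self.limit { return true }
            count += 1
            return false
        }
        if overLimit {
            // refuse the connection instead of passing it down the pipeline
            child.close(promise: nil)
            return
        }
        // give the slot back once the child channel closes (runs on the child's loop)
        child.closeFuture.whenComplete { _ in
            self.openConnections.withLockedValue { $0 -= 1 }
        }
        context.fireChannelRead(data)
    }
}

It would be installed with something like serverChannelInitializer { $0.pipeline.addHandler(AcceptLimitHandler()) } on the bootstrap.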
@johannesweiss has provided a very good answer to which I want to add one more thing: the answer to your problem is streaming.
If you want to process file uploads without exposing yourself to memory DoS, you must avoid holding all that data in memory. Otherwise, on a resource-constrained system, you'll be at risk of OOM even from legitimate users. It's easy to imagine a small machine with only 1 GB of RAM that cannot safely hold a reasonably sized doc bundle.
The safe way to achieve this is to use the levers that @johannesweiss has noted to bound how many uploads you process at once, and then stream those uploads to storage. This lets you provide a strict upper bound on memory usage from uploads. You may be streaming them into another network connection, or to disk, it doesn't much matter: the goal is to ensure that you don't allow more reads until you've successfully shifted some data out of your memory.
For smaller requests this will largely be moot, but when you know you're handling big uploads, this is the trick. A common useful shorthand is to stream them to a temporary file initially, and then upload from that tempfile rather than attempt to glue two connections together.
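A minimal sketch of that tempfile idea using NonBlockingFileIO. The handler name, fileIO, and tempPath are illustrative; error handling is elided; the channel is assumed to have autoRead disabled so reads only happen when the handler asks for them; and, for brevity, the head, each body chunk, and the end of the request are assumed to arrive in separate reads (the general case needs a small state machine).

import NIOCore
import NIOHTTP1
import NIOPosix

final class StreamingUploadHandler: ChannelInboundHandler {
    typealias InboundIn = HTTPServerRequestPart

    private let fileIO: NonBlockingFileIO
    private let tempPath: String
    private var handle: NIOFileHandle?

    init(fileIO: NonBlockingFileIO, tempPath: String) {
        self.fileIO = fileIO
        self.tempPath = tempPath
    }

    func channelActive(context: ChannelHandlerContext) {
        // with autoRead disabled, we must ask for the first read ourselves
        context.read()
        context.fireChannelActive()
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        switch self.unwrapInboundIn(data) {
        case .head:
            // open the temporary file, then ask for the first body chunk
            self.fileIO.openFile(path: self.tempPath,
                                 mode: .write,
                                 flags: .allowFileCreation(),
                                 eventLoop: context.eventLoop)
                .whenSuccess { handle in
                    self.handle = handle
                    context.read()
                }

        case .body(let buffer):
            guard let handle = self.handle else { break }
            // write this chunk to disk, and only request more data once the write has
            // completed: this is what bounds memory usage to roughly one buffer at a time
            self.fileIO.write(fileHandle: handle, buffer: buffer, eventLoop: context.eventLoop)
                .whenComplete { _ in context.read() }

        case .end:
            try? self.handle?.close()
            self.handle = nil
            // hand the tempfile over to the application and send the response here...
        }
    }
}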
Thanks @lukasa, indeed. FWIW, the backpressure-file-io example does file uploads via HTTP with backpressure and streaming.
It's a pure SwiftNIO example from before Swift Concurrency. So today, you could also create a similar (and simpler) one using NIOAsyncChannel. But to learn SwiftNIO itself the existing example might still be useful.
the requirements are actually a bit lower than you’re assuming; i currently don’t have a need to accept file uploads from the general public, only from authenticated builder machines we control.
right now, the procedure is roughly:
1. the channel handler accepts a PUT request, from anywhere
2. the channel handler collects buffers from that PUT request
3. the channel handler yields the request and its body to the application loop
4. the application checks the credentials sent in the original header and discards or accepts the upload
as you’ve observed, this is stupidly insecure, so i’m currently manually enabling/disabling the PUT method on a global basis.
what i would want to do instead is to perform the authorization in Step 1, before the channel handler has read any buffers at all. that way it would never read buffers coming from someone who shouldn’t be uploading any in the first place.
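as a rough sketch, the check in Step 1 could look something like this (isAuthorized and expectedToken are made-up names, and a shared bearer token is just one possible credential):

import NIOHTTP1

/// hypothetical helper: decides from the request head alone whether buffers should
/// ever be collected for this request. `expectedToken` stands in for whatever
/// credential the builder machines actually present.
func isAuthorized(head:HTTPRequestHead, expectedToken:String) -> Bool
{
    // only uploads need to be gated; reads can proceed as before
    guard head.method == .PUT || head.method == .POST
    else
    {
        return true
    }
    return head.headers["authorization"].contains("Bearer \(expectedToken)")
}

calling something like this at the top of the .head case, before self.request is ever assigned, means the .body case never accumulates anything for an unauthorized peer, and the handler can respond with .unauthorized right away.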