This attack vector can only be introduced by user code. By default, NIO does not aggregate bodies: it streams them.
As a practical matter, if you have inserted no handlers other than the decoder, the maximum size of a body part will be the maximum size of a single ByteBuffer delivered to channelRead. For most sockets this limit is fairly small: 8MB or less.
Yes: if you have received the body part, we've already allocated memory for it, so receiving it is safe.
This is an OS-level setting, and so it is configured in a number of places.
On Linux, you can see a system-wide setting in /proc/sys/net/ipv4/tcp_rmem. This prints three numbers: a minimum, default, and maximum buffer size, in bytes. This field is writable, so you can change it. macOS uses a dynamic scaling model, so there is limited scope for adjustment in this way.
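For example, on Linux you can inspect (and, as root, adjust) these values from a shell; the 16 MiB maximum below is just an illustrative value:

```shell
# Inspect the min/default/max TCP receive buffer sizes (bytes).
cat /proc/sys/net/ipv4/tcp_rmem

# The same values via sysctl.
sysctl net.ipv4.tcp_rmem

# As root: raise the maximum to 16 MiB, leaving min/default unchanged.
# sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"
```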
On both OSes you can pick a specific size at runtime by using the SO_RCVBUF channel option on a bootstrap, like so:
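A minimal sketch of what that looks like on a server bootstrap; the 64 KiB value is just an example, and the kernel may clamp whatever you request:

```swift
import NIOCore
import NIOPosix

// Request a ~64 KiB kernel receive buffer for every accepted child channel.
let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let bootstrap = ServerBootstrap(group: group)
    .childChannelOption(ChannelOptions.socketOption(.so_rcvbuf), value: 64 * 1024)
```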
Note that setting this value too high or too low can trigger runtime errors.
TLS maximum record sizes are not configurable.
Maximum HTTP/2 frame sizes are limited by the protocol setting SETTINGS_MAX_FRAME_SIZE, which is configured via .maxFrameSize in an HTTP/2 settings list. You can pass these as the initial settings to the HTTP/2 handler to override its default choice.
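As a sketch, assuming NIOHTTP2's `NIOHTTP2Handler(mode:initialSettings:)` initializer, that looks roughly like:

```swift
import NIOCore
import NIOHTTP2

// Advertise the smallest frame size the protocol allows. Valid values for
// SETTINGS_MAX_FRAME_SIZE are 2^14 through 2^24 - 1.
let settings: HTTP2Settings = [
    HTTP2Setting(parameter: .maxFrameSize, value: 1 << 14),
]
let http2Handler = NIOHTTP2Handler(mode: .server, initialSettings: settings)
```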
@johannesweiss has provided a very good answer to which I want to add one more thing: the answer to your problem is streaming.
If you want to process file uploads without exposing yourself to memory DoS, you must avoid holding all that data in memory. Otherwise, on a resource-constrained system, you'll be at risk of OOM even from legitimate users. It's easy to imagine a small machine with only 1GB of RAM, which cannot safely hold a reasonably sized doc bundle.
The safe way to achieve this is to use the levers that @johannesweiss has noted to bound how many uploads you process at once, and then stream those uploads to storage. This lets you provide a strict upper bound on memory usage from uploads. You may be streaming them into another network connection, or to disk, it doesn't much matter: the goal is to ensure that you don't allow more reads until you've successfully shifted some data out of your memory.
For smaller requests this will largely be moot, but when you know you're handling big uploads, this is the trick. A common useful shorthand is to stream them to a temporary file initially, and then upload from that tempfile rather than attempt to glue two connections together.
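A hypothetical sketch of the stream-to-tempfile approach, with error handling omitted: the handler turns off autoRead so it drives reads itself, writes each body chunk to disk with NonBlockingFileIO, and only calls `context.read()` once the chunk has actually left memory. The `UploadToDiskHandler` name and the tempfile path are assumptions, not NIO API:

```swift
import NIOCore
import NIOHTTP1
import NIOPosix

final class UploadToDiskHandler: ChannelInboundHandler {
    typealias InboundIn = HTTPServerRequestPart

    private let fileIO: NonBlockingFileIO
    private let path: String
    private var handle: NIOFileHandle?

    init(fileIO: NonBlockingFileIO, path: String) {
        self.fileIO = fileIO
        self.path = path
    }

    func handlerAdded(context: ChannelHandlerContext) {
        // Drive reads manually so at most one chunk is in flight at a time.
        _ = context.channel.setOption(ChannelOptions.autoRead, value: false)
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        switch self.unwrapInboundIn(data) {
        case .head:
            self.fileIO.openFile(path: self.path,
                                 mode: .write,
                                 flags: .allowFileCreation(),
                                 eventLoop: context.eventLoop).whenSuccess { handle in
                self.handle = handle
                context.read()
            }
        case .body(let buffer):
            guard let handle = self.handle else { return }
            self.fileIO.write(fileHandle: handle, buffer: buffer, eventLoop: context.eventLoop)
                .whenSuccess {
                    // The chunk is on disk; only now do we permit the next read.
                    context.read()
                }
        case .end:
            try? self.handle?.close()
            // ... send a response, then upload from the tempfile.
        }
    }
}
```

The key property is that memory usage per connection is bounded by one in-flight buffer, regardless of how large the upload is.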
It's a pure SwiftNIO example from before Swift Concurrency, so today you could also create a similar (and simpler) one using NIOAsyncChannel. But to learn SwiftNIO itself, the existing example might still be useful.
the requirements are actually a bit lower than you’re assuming: i currently don’t have a need to accept file uploads from the general public, only from authenticated builder machines we control.
right now, the procedure is roughly:
1. the channel handler accepts a PUT request, from anywhere
2. the channel handler collects buffers from that PUT request
3. the channel handler yields the request and its body to the application loop
4. the application checks the credentials sent in the original header and discards or accepts the upload
as you’ve observed, this is stupidly insecure, so i’m currently manually enabling/disabling the PUT method on a global basis.
what i would want to do instead is to perform the authorization in Step 1, before the channel handler has read any buffers at all. that way it would never read buffers coming from someone who shouldn’t be uploading any in the first place.
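One way to sketch that: a handler that makes the authorization decision on the request head, before any body buffer is accepted, and closes the connection on failure. Everything here is hypothetical, in particular `isAuthorized(_:)` standing in for your real credential check:

```swift
import NIOCore
import NIOHTTP1

final class AuthorizingHandler: ChannelInboundHandler {
    typealias InboundIn = HTTPServerRequestPart
    typealias OutboundOut = HTTPServerResponsePart

    private var rejecting = false

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        switch self.unwrapInboundIn(data) {
        case .head(let head):
            guard head.method != .PUT || self.isAuthorized(head) else {
                // Reject on the head alone and drop the connection, so no
                // body buffers from this peer are ever collected.
                self.rejecting = true
                let response = HTTPResponseHead(version: head.version, status: .unauthorized)
                context.write(self.wrapOutboundOut(.head(response)), promise: nil)
                context.writeAndFlush(self.wrapOutboundOut(.end(nil)), promise: nil)
                context.close(promise: nil)
                return
            }
            context.fireChannelRead(data)
        case .body, .end:
            // Discard any body parts that were already in flight.
            if !self.rejecting { context.fireChannelRead(data) }
        }
    }

    private func isAuthorized(_ head: HTTPRequestHead) -> Bool {
        // Assumption: a static bearer-token check, purely for illustration.
        head.headers.first(name: "Authorization") == "Bearer example-token"
    }
}
```

Note that a chunk or two may already be in the kernel or pipeline when the rejection happens, which is why the handler also drops body parts after deciding to reject.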