can you explain what "delay outbound read" means? this seems to contradict the "thou shalt not block the event loop" principle.
Yes, don't block the event loop, ever. I meant delay as in "scheduling it for later".

What I meant by "delaying the outbound read" is that you'd have a `ChannelOutboundHandler` which does not immediately forward the `read` event. Forwarding `read` means calling `context.read()`. If you don't implement `read` at all, you get the default implementation, which does forward immediately. Instead of forwarding immediately, the handler would figure out whether the system is under too much load (say > 10k connections) and, if so, stop forwarding the `read` until we're in safe territory again.
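Here's a minimal sketch of what that could look like, assuming a handler sitting in the *server* channel's pipeline (the handler name, the 10k limit, and the bookkeeping are all made up for illustration):

```swift
import NIOCore

/// Sketch only: holds back the outbound `read` on the server channel while too
/// many connections are open, so no new connections get accepted until we drain.
final class DelayedAcceptHandler: ChannelDuplexHandler {
    typealias InboundIn = Channel   // a server channel "reads" accepted child Channels
    typealias InboundOut = Channel
    typealias OutboundIn = Never

    private let maxConnections: Int
    private var openConnections = 0
    private var readPending = false

    init(maxConnections: Int = 10_000) {
        self.maxConnections = maxConnections
    }

    func read(context: ChannelHandlerContext) {
        if self.openConnections < self.maxConnections {
            context.read()            // forward the read: keep accepting
        } else {
            self.readPending = true   // hold it back until we're below the limit again
        }
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        self.openConnections += 1
        let child = self.unwrapInboundIn(data)
        child.closeFuture.whenComplete { _ in
            // hop back onto the server channel's event loop before touching our state
            context.eventLoop.execute {
                self.openConnections -= 1
                if self.readPending && self.openConnections < self.maxConnections {
                    self.readPending = false
                    context.read()    // resume the read we held back earlier
                }
            }
        }
        context.fireChannelRead(data)
    }
}
```

A handler like this would go into the server channel's pipeline (for example via `ServerBootstrap`'s `serverChannelInitializer`), not into the child channels'.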
Here are two explanations of back pressure in NIO:
Both explainers show how to exert backpressure on regular TCP channels (where you read `ByteBuffer`s), but the concepts are exactly the same for server channels, which read accepted connections. Essentially: delay the outbound `read` call until you're ready.
by the way, what is `read`? channels have `write`, and the channel handlers have `channelRead`, but i have never heard of `read` by itself.
Ah, for NIO to write bytes to the network you trigger the outbound (meaning towards the network) `write`/`flush` operations that you already know. Similarly, to make NIO read some bytes you need to trigger the outbound `read` operation. That will make NIO read a fixed amount of bytes from the network when they arrive. And once those bytes have been read, NIO will then call `channelRead` to hand the read data to you (in a server channel the read "data" would be an accepted connection, aka a new `Channel`).
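To make the direction of each event concrete, here's a tiny illustrative handler (name made up) that just logs both halves of the cycle: the outbound `read` going towards the network, and the inbound `channelRead` delivering the bytes that the read asked for:

```swift
import NIOCore

// Illustration only: logs the outbound `read` request and the inbound
// `channelRead` delivery on an ordinary TCP (ByteBuffer) channel.
final class ReadCycleLoggingHandler: ChannelDuplexHandler {
    typealias InboundIn = ByteBuffer
    typealias InboundOut = ByteBuffer
    typealias OutboundIn = ByteBuffer
    typealias OutboundOut = ByteBuffer

    func read(context: ChannelHandlerContext) {
        // outbound: someone (you, or autoRead) asked NIO to read from the socket
        print("read() requested")
        context.read()
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        // inbound: NIO delivering the bytes that the earlier read() asked for
        print("channelRead: \(self.unwrapInboundIn(data).readableBytes) bytes")
        context.fireChannelRead(data)
    }
}
```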
Now, why have you not come across the outbound `read` before? By default, NIO enables the `autoRead` option, which means that NIO will trigger one `read` to start with and, whenever `channelReadComplete` has been triggered, it automatically fires another `read`. So if you don't have any outbound/duplex channel handlers that implement `read` and delay it, you'll essentially always be reading.
But this is also covered by the explainers linked above.
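And just to show where `autoRead` lives: it's an ordinary channel option, so if you ever want to drive the reads entirely by hand you can switch it off at bootstrap time. A minimal sketch (the bootstrap wiring here is illustrative, not a complete server):

```swift
import NIOCore
import NIOPosix

// Sketch: turn autoRead off on the accepted (child) channels, so nothing is read
// from the network until something explicitly calls read() on the pipeline.
let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let bootstrap = ServerBootstrap(group: group)
    .childChannelOption(ChannelOptions.autoRead, value: false)
    // the same works for the server channel itself via .serverChannelOption(...),
    // which would stop it from accepting until you call read() on it
    .childChannelInitializer { channel in
        channel.eventLoop.makeSucceededFuture(())
    }
```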
A while ago I also created this diagram which might be helpful:
this might be worth a separate thread, but how can a channel handler access some shared state like an IP rate limit table? the channel handlers live in different concurrency domains, and they cannot `await` on any shared state.
That is a good question, but the answer is very boring: if I were you, I'd stick with the default, which means you accept from one event loop. That means you have just one server channel which "reads" (accepts) all the incoming TCP connections. So you can put a single channel handler into the server channel's pipeline which regulates the acceptance of incoming TCP connections (those are the `Channel`s you already know). That way, you're in the one concurrency domain where you need your IP table, and in fact in one single `ChannelHandler` instance. Easy. But even if you wanted to switch to accepting connections from multiple `EventLoop`s at the same time (which NIO supports), this wouldn't be an issue either; you'd just need to arrange for synchronisation using locks/atomics/...
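For illustration, here's a rough sketch of what that single handler could look like (the handler name `PerIPConnectionLimiter`, the `maxPerIP` limit, and the bookkeeping are all made up; the real policy would be whatever your rate limiter needs):

```swift
import NIOCore

// Sketch only: one instance of this sits in the *server* channel's pipeline, so
// every accepted connection passes through it on the same event loop and the
// dictionary below needs no locks.
final class PerIPConnectionLimiter: ChannelInboundHandler {
    typealias InboundIn = Channel   // a server channel "reads" accepted child Channels
    typealias InboundOut = Channel

    private var connectionsPerIP: [String: Int] = [:]  // only touched on the server channel's event loop
    private let maxPerIP: Int

    init(maxPerIP: Int) {
        self.maxPerIP = maxPerIP
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        let child = self.unwrapInboundIn(data)
        let ip = child.remoteAddress?.ipAddress ?? "unknown"

        let current = self.connectionsPerIP[ip, default: 0]
        guard current < self.maxPerIP else {
            child.close(promise: nil)   // over the per-IP limit: drop the new connection
            return
        }
        self.connectionsPerIP[ip] = current + 1

        child.closeFuture.whenComplete { _ in
            // the closeFuture may fire on the *child* channel's event loop, so hop
            // back onto the server channel's event loop before mutating our table
            context.eventLoop.execute {
                if let count = self.connectionsPerIP[ip], count > 1 {
                    self.connectionsPerIP[ip] = count - 1
                } else {
                    self.connectionsPerIP.removeValue(forKey: ip)
                }
            }
        }
        context.fireChannelRead(data)
    }
}
```

You'd add it via `ServerBootstrap`'s `serverChannelInitializer`, e.g. `.serverChannelInitializer { channel in channel.pipeline.addHandler(PerIPConnectionLimiter(maxPerIP: 100)) }` (numbers and naming purely illustrative).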
Examples that might help:
- NIO's `ServerQuiescingHelper` demonstrates how to use a `serverChannelInitializer` as well as how to capture a bunch of state. It helps to quiesce a server, waiting for all in-flight requests to terminate, so you can do a restart without losing connections. Usage example
- The backpressure-file-io example project, which goes to great lengths to explain how you could (without any helpers) create a fully backpressured file upload, from NIO into the file system. It explains NIO backpressure as well as using a state machine for this kind of work.
- A GitHub search for `func read(context: ChannelHandlerContext)` brings up a bunch of implementations. Particularly NIO's SSH and HTTP/2 implementations are probably interesting because they also handle multiplexing.