Right, using something less CPU-heavy and micro-optimisation-dependent than fib(10k)
would probably make more sense when investigating web server performance.
Regarding the 'NIO complexity': I think something important got lost in translation here. Regardless of the number of connections/concurrency you choose, wrk will send requests as quickly as possible. So even with just 4 connections you'll max out a 4-core machine, in any web framework, in any language. The faster you produce responses, the faster wrk sends new requests.
So in benchmarks like these, which always fully load the server machine, the server needs to make a decision when a new connection comes in. It can:
- either accept the new connection immediately, slowing the existing connections down a little (because there are now more connections to service with the same resources as before),
- or prioritise the existing connections and delay accepting the new one (increasing the latency of the first request on the new connection, which now has to wait).
This is true for any framework in any language, and the choice can be explicit, implicit, or a mixture of both.
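Under full load, the trade-off can be sketched as simple fair-share arithmetic. This is a toy model with hypothetical numbers (the capacity figure and even split are assumptions, not measurements), just to show why accepting a connection slows the existing ones:

```python
# Toy model: a server that can produce CAPACITY responses per second,
# shared evenly across all open connections (hypothetical numbers).
CAPACITY = 1000.0  # total responses/second the machine can produce

def per_connection_rate(connections: int) -> float:
    """Fair share of throughput each connection gets under full load."""
    return CAPACITY / connections

# Option 1: accept a 5th connection immediately.
# Every connection, old and new, now gets a slightly smaller share.
before = per_connection_rate(4)  # 250.0 responses/s per connection
after = per_connection_rate(5)   # 200.0 responses/s per connection

# Option 2: keep serving the existing 4 connections at full speed and
# defer the accept; the new connection's first request simply waits.
print(f"accept now: {before:.0f} -> {after:.0f} responses/s per connection")
print(f"defer:      existing stay at {before:.0f}, new connection's first request waits")
```

Neither option is free: the first spreads a small slowdown across everyone, the second concentrates it entirely on the new connection's first request.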
The only reason we have discussed how SwiftNIO's default setting works is that this particular benchmark records a failure as soon as even a single request exceeds 2s of latency.
For example:
- 10,000 requests at 0.1s latency & one request at 2.1s latency -> 1 error [avg latency 0.1002s]
- 10,001 requests at 1.99s latency each -> 0 errors [avg latency 1.99s]
I'm not saying a cut-off at 2 seconds is bad or wrong, but it is a peculiarity: the first scenario is by far the better outcome for users, yet it is the one that records an error.
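To make the arithmetic concrete, here is a quick sanity check of both scenarios with a 2-second error threshold, mirroring the benchmark's cut-off:

```python
# Reproduce the two scenarios above with a 2-second error cut-off.
THRESHOLD = 2.0  # seconds; any request slower than this counts as an error

def summarise(latencies):
    """Return (error count, average latency) for a list of latencies in seconds."""
    errors = sum(1 for latency in latencies if latency > THRESHOLD)
    average = sum(latencies) / len(latencies)
    return errors, average

# Scenario 1: 10,000 fast requests plus one slow outlier.
errors1, avg1 = summarise([0.1] * 10_000 + [2.1])
print(errors1, round(avg1, 4))  # 1 error, average ~0.1002s

# Scenario 2: every request just under the cut-off.
errors2, avg2 = summarise([1.99] * 10_001)
print(errors2, round(avg2, 4))  # 0 errors, average ~1.99s
```

The scenario with the dramatically better average latency is the one that fails the benchmark's error check.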
The main reason I recommended that the Vapor devs raise the default setting of maxMessagesPerRead is that benchmarking tools like wrk like to open a lot of connections up front and immediately load every connection to the maximum. It's worth not looking bad under wrk, if only to avoid another 100+-message-long discussion about it.