'Standard' vapor website drops 1.5% of requests, even at concurrency of 100!

Thanks @lorentey for benchmarking vs. JS. So @axello, as others have pointed out, Swift is doing about 3x the work that the JavaScript/Java implementations are doing. That means the server gets busier, and it can't be competitive with languages whose BigInt implementation is 3x faster, because in this benchmark 92% of the time is spent in BigInt.+.

But you are right to point out the "dropped" requests. They're not technically dropped; they're just responded to with such high latency that the load-testing software times out and counts them as dropped.

I had a look into that; the first curious observation is that it's always the first request on each connection that is much slower than the others. That delay can exceed 2 seconds, at which point your client gives up and counts the request as dropped -- fair enough, I guess.

So why is the first request slow? SwiftNIO's default setting is to accept only 4 connections in a burst (even if there are 100 new connections pending, it'll accept just 4 per EventLoop tick). So in a way it prioritises existing connections over new connections under high load. That doesn't play well with benchmarks that just burn through CPU and open a load of connections at the same time. On my machine, each fib(10k) takes around 5ms. So we're accepting 4 connections, calculating their first fibs (20ms at least), then accepting another 4 connections, calculating 8 fibs, accepting another 4 connections, calculating 12 fibs (60ms), accepting another 4 connections, calculating 16 fibs, .... So the 100th connection will take ages (over 2 seconds) to be accepted. To make it worse, once we're over 2 seconds, the client times out and opens even more new connections :slight_smile:.
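The accept-batch arithmetic above can be sketched as a toy model (this is not SwiftNIO code; the numbers are illustrative: 4 accepts per tick, ~5ms per fib(10k), and one pending fib per already-accepted connection before each further accept):

```swift
// Toy model of the accept pattern described above (not SwiftNIO code).
// The server accepts `batchSize` connections per event-loop tick, and before
// each further accept it computes one ~`fibMillis` ms fib(10_000) for every
// connection accepted so far.
func millisUntilAccepted(connection n: Int,
                         batchSize: Int = 4,
                         fibMillis: Int = 5) -> Int {
    // Full accept batches that run before connection n's batch.
    let batchesBefore = (n - 1) / batchSize
    var elapsed = 0
    for batch in 0..<batchesBefore {
        // Before the next accept, one fib per already-accepted connection:
        // 4 fibs, then 8, then 12, ...
        elapsed += (batch + 1) * batchSize * fibMillis
    }
    return elapsed
}

print(millisUntilAccepted(connection: 1))    // 0: first batch is accepted immediately
print(millisUntilAccepted(connection: 100))  // 6000: well over 2 seconds
```

In this model the wait grows quadratically with the connection number, which is why only the late connections blow past the client's timeout.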

Now, is this a good default in SwiftNIO? Debatable. Clearly the other frameworks you have tested accept more connections in one go; maybe SwiftNIO should raise that number, or maybe Vapor should. The good news is that the fix is easy in Vapor. If you run

swift package edit vapor

and then apply this patch (which will accept up to 256 connections in one go)

diff --git a/Sources/Vapor/HTTP/Server/HTTPServer.swift b/Sources/Vapor/HTTP/Server/HTTPServer.swift
index 135fa752f..fac66c413 100644
--- a/Sources/Vapor/HTTP/Server/HTTPServer.swift
+++ b/Sources/Vapor/HTTP/Server/HTTPServer.swift
@@ -348,6 +348,7 @@ private final class HTTPServerConnection: Sendable {
         let quiesce = ServerQuiescingHelper(group: eventLoopGroup)
         let bootstrap = ServerBootstrap(group: eventLoopGroup)
             // Specify backlog and enable SO_REUSEADDR for the server itself
+            .serverChannelOption(ChannelOptions.maxMessagesPerRead, value: 256)
             .serverChannelOption(ChannelOptions.backlog, value: Int32(configuration.backlog))
             .serverChannelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: configuration.reuseAddress ? SocketOptionValue(1) : SocketOptionValue(0))

you shouldn't see the dropped requests nearly as early.


  • This will of course just delay rejecting requests; it simply no longer prioritises servicing existing connections over new ones.
  • This is mostly working around a benchmarking artefact where 100 connections are created at the same time.

@0xTim / @graskind it might be worth setting .serverChannelOption(ChannelOptions.maxMessagesPerRead, value: 256) (or even a higher value) as the default. Also, this should be user-configurable.
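If Vapor did make this user-configurable, it could take a shape like the following. To be clear, this is a hypothetical sketch: the property name connectionsPerAccept is invented here, and today's Vapor has no such option (hence the patch above).

```swift
import Vapor

// Hypothetical sketch only -- connectionsPerAccept does not exist in Vapor.
// It would map onto ChannelOptions.maxMessagesPerRead on the server's
// listening channel, i.e. how many connections get accepted per tick.
public func configure(_ app: Application) throws {
    app.http.server.configuration.connectionsPerAccept = 256
}
```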