Best practices for deploying and updating a SwiftNIO-based server app

Suppose I have a binary of a SwiftNIO-based HTTP server app deployed to a Linux machine. It binds to a local port or a UNIX socket; nginx then serves as a reverse proxy, taking care of TLS and HTTP/2 for me.

Two big questions for me are:

(1) What would a script that runs my binary look like if I want the app to be restarted automatically when it crashes? I'd like to have some kind of log, and ideally a notification mechanism for crashes too. The script should also be smart enough to detect very frequent crashes and bail out completely, again with logging and some notification. The script would be run from init.d.

I'm friends with bash and might end up writing it myself, but some battle-tested examples would be great to have.
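Roughly the shape I have in mind, as an untested sketch; the names and defaults (`APP`, `LOG`, `MAX_CRASHES`, `WINDOW`, `DELAY`) are placeholders I made up:

```shell
#!/bin/sh
# Untested sketch of a restart wrapper; all paths and limits below
# are placeholder values, not battle-tested defaults.
APP="${APP:-/usr/local/bin/myserver}"       # server binary to supervise
LOG="${LOG:-/var/log/myserver-wrapper.log}" # where crash events are logged
MAX_CRASHES="${MAX_CRASHES:-5}"             # bail after this many crashes...
WINDOW="${WINDOW:-60}"                      # ...within this many seconds
DELAY="${DELAY:-1}"                         # pause before each restart

supervise_loop() {
    crashes=0
    window_start=$(date +%s)
    while :; do
        "$APP"
        status=$?
        echo "$(date -u) $APP exited with status $status" >> "$LOG"
        now=$(date +%s)
        # Forget old crashes once the window has elapsed.
        if [ $((now - window_start)) -gt "$WINDOW" ]; then
            crashes=0
            window_start=$now
        fi
        crashes=$((crashes + 1))
        if [ "$crashes" -ge "$MAX_CRASHES" ]; then
            echo "$(date -u) $APP: $crashes crashes within ${WINDOW}s, giving up" >> "$LOG"
            # a notification hook (mail, webhook, ...) would go here
            return 1
        fi
        sleep "$DELAY"
    done
}

# An init.d script would end by calling: supervise_loop
```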

(2) What is the best way to gracefully update the binary in this nginx+app setup without failing any client requests? Worst case, I would like nginx to return some specific status (say, one of the 5xx codes) that tells clients to retry soon while my app binary is being updated and restarted.

I actually asked this question on StackOverflow once, but didn't get any acceptable solutions. Is it that hard to achieve? In fact, I realize this isn't specific to SwiftNIO, but I thought someone here could share advice, pointers, or examples from their own deployments.


For (1) you probably want to use supervise (from daemontools).


Thanks! It's an interesting option, though not installed by default on Ubuntu, for example. While researching supervise, I found that systemd can also restart your service automatically, and lets you control restart bursts, etc.

(Before this I only knew about init.d, which is way, way outdated. Well, I am that old :smiley: and need to refresh my knowledge of Unix!)
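For reference, the knobs I found map to a unit file roughly like this; the unit name, paths, and the `notify-admin.service` hook are placeholders:

```ini
# /etc/systemd/system/myserver.service (hypothetical name/path)
[Unit]
Description=My SwiftNIO HTTP server
After=network.target
# Give up if the service is restarted more than 5 times in 60 seconds.
StartLimitIntervalSec=60
StartLimitBurst=5
# Activate another unit when this one enters a failed state;
# notify-admin.service would run a mail/webhook command of your choice.
OnFailure=notify-admin.service

[Service]
ExecStart=/usr/local/bin/myserver
Restart=on-failure
RestartSec=1
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```

With this, crash output lands in the journal (`journalctl -u myserver`), and the burst limit plus `OnFailure=` cover the "bail completely and notify" part of (1).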

As for (2), there don't seem to be any good solutions without involving additional scripting. One such solution involves writing a Lua plugin for nginx. I think if you own the client, you can treat 502 Bad Gateway as a signal to retry after some reasonable delay, up to N times of course. That should solve the problem, maybe not ideally, but close.
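A sketch of what the nginx side could look like, assuming the 502-means-retry convention; the socket path and server name are made up, and returning 503 with a `Retry-After` header instead of a bare 502 makes the hint explicit:

```nginx
upstream app {
    server unix:/run/myapp.sock;   # placeholder socket path
}

server {
    listen 443 ssl http2;
    server_name example.com;       # placeholder
    # ssl_certificate / ssl_certificate_key elided

    location / {
        proxy_pass http://app;
        # While the binary is restarting, the upstream is unreachable and
        # nginx produces 502; route those errors to a retry hint instead.
        proxy_intercept_errors on;
        error_page 502 503 504 @retry_later;
    }

    location @retry_later {
        # Tell a cooperating client to retry after a short delay.
        add_header Retry-After 5 always;
        return 503;
    }
}
```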

Ah OK, I'd assumed you'd use a Docker container just for the service.

Some interesting docs for you:


One simple-ish way to get this done is to dockerize your application and run the container as a Docker Swarm service.

It is way simpler than k8s, and you can run a bare-bones, single-node Docker Swarm with a few commands that ship with Docker directly.

You could put both nginx and your app inside a Docker Compose file and use the start-first swarm option on your application service (i.e. start new instance(s), wait a bit, then stop the old one(s)).
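A minimal sketch of such a stack file for `docker stack deploy`; the image and service names are placeholders:

```yaml
version: "3.8"

services:
  app:
    image: registry.example.com/myapp:latest   # hypothetical image
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first   # start the new task before stopping the old one
  nginx:
    image: nginx:stable
    ports:
      - "443:443"
```

With `order: start-first`, an update brings up the new container, waits for it, and only then stops the old one, so requests keep being served during the swap.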

There is even a Vapor docs page about Docker.
