De minimis server Containerfile for a Musl-based statically linked web app

I have a ~~40MB~~ 197MB statically compiled binary based on a Hummingbird server nabbed from the swift-container-plugin examples. (Handy plugin, btw!)

I made a Containerfile for it based on Alpine (which is musl-based) that I'd like some feedback on. The resulting image was 200MB (vs. closer to 500MB for some others based on Ubuntu, which seems to be the norm).

Is it TOO small? Missing something you'd never leave out? Is there something even smaller to use? I'm looking to minimize size as much as possible but still be reasonably stable.

It doesn't have to be fully production viable, just website proof-of-life viable. I left out the commonly seen ENV line below because I assumed I couldn't use it, since there's no Swift runtime installed in the image. Is that correct?

ENV SWIFT_BACKTRACE=enable=yes,sanitize=yes,threads=all,images=all,interactive=no,swift-backtrace=./swift-backtrace-static

# Alpine rather than Ubuntu, for a much smaller base image.
FROM alpine

# Create a hummingbird user and group with /app as its home directory
RUN addgroup \
    -S \
    hbGroup \
&& adduser \
    -S \
    hbUser \
    -h /app/ \
    -k /dev/null \
    -G hbGroup

# Switch to the new home directory
WORKDIR /app

# give the binary to the hummingbird user in the copy
COPY --chown=hbUser:hbGroup ./binary /app/

# Ensure all further commands run as the hummingbird user
USER hbUser:hbGroup

# Document that the service listens on port 8080 (publish it with -p at run time)
EXPOSE 8080

# Start the Hummingbird service when the image is run, default to listening on 8080 in production environment
ENTRYPOINT ["./hello-world"]
CMD ["--hostname", "0.0.0.0", "--port", "8080"]
1 Like

Neat Carly!

I was recently starting to explore something pretty similar. For the backtrace, I'm not confident in my answer, but I think it should work, since it's now built into the Swift runtime. The backtrace controls I'm using are:

SWIFT_BACKTRACE=interactive=no,color=no,output-to=/logs,format=json,symbolicate=fast, and it should work regardless of whether any external Swift tooling is present.

I'm curious whether your use case needs a shell available alongside Hummingbird? If you've already got a bare OCI image with a single binary, that's pretty much the smallest attack surface possible; but if you need a shell to do things in the container, that's another matter.

If you want/need to do this kind of build with a Dockerfile and you have a fully static binary, you can start an image from:

FROM scratch

But I haven’t (yet) followed through on that path.

1 Like

I am very new to containers.

My example doesn't need a shell, so that's okay. It's just an isolated little app server.

My goal is to understand whether I can deploy a Swift app on something like Digital Ocean's App Service as opposed to a hand-rolled Droplet, which is more my comfort zone.

I'll look into FROM scratch! Thanks for the search path. I'm confused about how the service will even know what platform the binary is compiled for if I don't tell it... I'll do more reading; maybe the docs indicate a default OS.

About the SWIFT_BACKTRACE env:

If you aren’t using a statically compiled binary, it’ll just work.

However, for statically linked binaries, backtracing isn't built into the binary, so, as in the Vapor template's Dockerfile, you need to manually copy the backtrace binary over. That has been a long-standing issue for a few years now, and AFAIK the Swift team wants to resolve it eventually.

If you're not putting the backtrace binary in a default location that the Swift runtime will check, you'll also need to use the env var to tell the runtime where the backtrace binary is. In SWIFT_BACKTRACE=enable=yes,...,swift-backtrace=./swift-backtrace-static, the last key-value pair specifies where the binary can be found: swift-backtrace=./swift-backtrace-static (relative to the executable).

There are some Backtracing docs in the Swift repo if you’re curious about it.

I’d say having backtracing working is pretty important so a user has something to work with if a crash happens.

2 Likes

Since you care about binary size: it looks like the backtracing binary is ~9.5MB.

1 Like

Conceptually, I feel like any container should have only the essentials needed to run the application. If there are issues, or more tools need to be installed, one could always build a new image atop this one.

200MB is too large. Are you stripping the release binary? See the example here: MultiArchSwiftDockerfileExample/Package.swift at 650fe354e4b40e827e412bd48521833a643175ea · Cyberbeni/MultiArchSwiftDockerfileExample · GitHub

My two images are 82.1MB and 76.6MB. They are both Alpine based and both use Foundation.

Not using the whole Foundation (which pulls in FoundationInternationalization), only FoundationEssentials, can save something like 30-40MB IIRC. (I think all of the swift-nio-based packages still import the whole Foundation. There was some discussion about using package traits to control this behavior, but I don't think anyone is actually working on it.)
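For illustration, here's a minimal, hedged sketch of the conditional-import pattern this implies for your own code (the struct and values below are placeholders; the savings only materialize if the rest of the dependency graph also avoids full Foundation, as noted above):

// Prefer the slimmer FoundationEssentials module where the swift-foundation
// split is available; fall back to the full Foundation elsewhere.
#if canImport(FoundationEssentials)
import FoundationEssentials
#else
import Foundation
#endif

// Date, UUID, Data, and JSONEncoder all live in FoundationEssentials,
// so code like this doesn't force FoundationInternationalization in.
struct Status: Codable {
    let id: UUID
    let startedAt: Date
}

let status = Status(id: UUID(), startedAt: Date())
let body = (try? JSONEncoder().encode(status)) ?? Data()
print(body.count)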

1 Like

It does seem like backtracing would be worth the percentage-wise small increase!

So I got my binary size wrong in the OP, which means the base container isn't as much bloat as I thought!

Turns out the 40MB binary was the executable that came from doing the static cross-compile demo, which uses the executable package generated by swift package init. I named them both the same and got confused!

I saw the actual size when I went back to recompile: 197MB of that 206MB image is in fact Swift, so at this point going slimmer than Alpine probably isn't necessary, but I notice your build base is scratch.

swift build --configuration release --swift-sdk x86_64-swift-linux-musl

Brings that 197MB down to 183MB, so there's my headroom for backtracing ;)

The demo server I snagged uses Hummingbird, which does have a lot of dependencies, and Foundation.

Since this will ultimately be a demo project that might be used by teachers/students, I need to walk the line between "small enough to host for free" and "easy enough to build off of", so dropping HB and the SwiftNIO dependencies isn't an option.

I'm keeping my eye on package traits as well. Fingers crossed it or something comparable comes down the pipe!

Are you saying that maybe the backtracing could/should be in a different layer?

I don’t know much about the Swift backtracing facilities, but if there’s a crash in your application, it might be worthwhile to have useful backtracing. I don’t know if there’s a valid use-case for not enabling backtracing for an application unless you’re running it in very constrained environments.

If you look in the SDK bundle, in musl-<version>.sdk/<architecture>/usr/libexec/swift/linux-static, you will find a swift-backtrace-static executable. If you copy that into the same directory as your program and rename it to swift-backtrace, I think the runtime will find it and you should get a backtrace.

I did this, added a force unwrap, and it just gets stuck at *** Signal 4: Backtracing from 0x3170e6... with both FROM scratch and FROM docker.io/alpine:latest, in both release and debug configurations, with or without stripping. (I'm using Swift 6.1.0)

Brings that 197MB down to 183MB

You can cut another ~100MB by stripping the binary:

linkerSettings: [
	.unsafeFlags(["-Xlinker", "-s"], .when(configuration: .release)), // STRIP_STYLE = all
]
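For anyone following along, here is a hedged sketch of where that setting lives in a Package.swift manifest; the package, target, and dependency names below are placeholders, not the exact manifest from the linked example:

// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "hello-world",
    dependencies: [
        .package(url: "https://github.com/hummingbird-project/hummingbird.git", from: "2.0.0"),
    ],
    targets: [
        .executableTarget(
            name: "hello-world",
            dependencies: [
                .product(name: "Hummingbird", package: "hummingbird"),
            ],
            // Ask the linker to strip the symbol table in release builds;
            // this is where the extra ~100MB of savings comes from.
            linkerSettings: [
                .unsafeFlags(["-Xlinker", "-s"], .when(configuration: .release)),
            ]
        ),
    ]
)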
2 Likes

That cut MORE!!! It's down to 66.1 MB !!! Nice.

For posterity, what that flag is doing: -Xlinker -s passes -s through to the linker, which strips the symbol table from the final binary.

I've gone ahead and made the repo public with a branch for testing the backtraces

I used a combination of that helpful Vapor template and this other forum post

And I'm not sure if I'm running into the same problem that he and @Cyberbeni did.

The new Containerfile... much to my chagrin, the build stage has to pull down a matching Swift image, because I have a Homebrew-installed version of swiftly on my Mac and I just could not figure out where the SDK bundle got stashed so I could pull that file out directly, which I would have preferred at this stage of the demo. I have a sudo find running to look again.

FROM swift:6.1.2-slim AS build

WORKDIR /staging

# Copy static swift backtrace binary to the staging area.
RUN cp "/usr/libexec/swift/linux/swift-backtrace-static" ./

FROM alpine

# Create a hummingbird user and group with /app as its home directory
# RUN useradd --user-group --create-home --system --skel /dev/null --home-dir /app hummingbird
# Create a group and user
# https://stackoverflow.com/questions/49955097/how-do-i-add-a-user-when-im-using-alpine-as-a-base-image
RUN addgroup \
    -S \
    hbGroup \
&& adduser \
    -S \
    hbUser \
    -h /app/ \
    -k /dev/null \
    -G hbGroup

# Switch to the new home directory
WORKDIR /app

COPY --chown=hbUser:hbGroup ./binary/ /app/

COPY --from=build --chown=hbUser:hbGroup /staging /app/

# Provide configuration needed by the built-in crash reporter and some sensible default behaviors.
# ENV SWIFT_BACKTRACE=enable=yes,sanitize=yes,threads=all,images=all,interactive=no
ENV SWIFT_BACKTRACE=enable=yes,sanitize=yes,threads=all,images=all,interactive=no,swift-backtrace=./swift-backtrace-static

# Ensure all further commands run as the hummingbird user
USER hbUser:hbGroup

# Document that the service listens on port 8080 (publish it with -p at run time)
EXPOSE 8080

# Start the Hummingbird service when the image is run, default to listening on 8080 in production environment
ENTRYPOINT ["./hello-world"]
CMD ["--hostname", "0.0.0.0", "--port", "8080"]

This is what I get with a debug build

2025-08-31T00:04:53+0000 info HelloWorldHummingbird: [HummingbirdCore] Server started and listening on 0.0.0.0:8080
2025-08-31T00:05:06+0000 info HelloWorldHummingbird: hb.request.id=9590a31044520e36e2a8ba4dea357f82 hb.request.method=GET hb.request.path=/ [Hummingbird] Request
2025-08-31T00:05:12+0000 info HelloWorldHummingbird: hb.request.id=9590a31044520e36e2a8ba4dea357f83 hb.request.method=GET hb.request.path=/crashme [Hummingbird] Request
hello_world/Application+build.swift:22: Fatal error: Whoops
swift-runtime: failed to suspend thread 2 while processing a crash; backtraces will be missing information
swift-runtime: failed to suspend thread 2 while processing a crash; backtraces will be missing information

*** Signal 4: Backtracing from 0x255b6aa... failed ***

qemu: uncaught target signal 4 (Illegal instruction) - core dumped

Release build is similar.
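For context, here is a minimal sketch of the kind of route that produces the Fatal error: Whoops above, assuming Hummingbird 2's Router API (this is illustrative only, not the actual code from the repo):

import Hummingbird

// Tiny Hummingbird 2 server (main.swift) with a deliberately crashing
// route, used to exercise the backtrace handler.
let router = Router()

router.get("/") { _, _ in
    "Hello, world!"
}

router.get("/crashme") { _, _ -> String in
    // Deliberate crash so SWIFT_BACKTRACE has something to report.
    fatalError("Whoops")
}

let app = Application(
    router: router,
    configuration: .init(address: .hostname("0.0.0.0", port: 8080))
)
try await app.runService()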

1 Like
  • Found the SDK's swift-backtrace-static and copied it over:
    /Users/$USER/Library/org.swift.swiftpm/swift-sdks/swift-6.1.2-RELEASE_static-linux-0.0.1.artifactbundle/swift-6.1.2-RELEASE_static-linux-0.0.1/swift-linux-musl/musl-1.2.5.sdk/x86_64/usr/libexec/swift/linux-static/swift-backtrace-static
    → HelloWrapper/binary/swift-backtrace-static

EDIT - Maybe not the below! The behavior changed AGAIN on another run...

Ripping out the user didn't make a difference to the backtrace, but ripping out the user and then also forcing root

FROM alpine
USER root

stopped it from core dumping, and I could return to the index page. I don't know where the backtrace goes yet, but that is interesting. @al45tair's instincts about it being a permissions problem seem to have been correct.

Also don't know how I feel about putting a server up running as root.

2 Likes

I decided to step away from troubleshooting backtracing for the static SDK build, at least in the short term. It seems like for actual production the Vapor Dockerfile would be the place to start from. My understanding is that it's okay not to add a new user because a scratch image is kept on a fairly short leash by the host, and there isn't anything else in there with it? Feels weird, but okay.

For future reference for anyone who needs it:

the 66.2 MB image

https://hub.docker.com/repository/docker/carlynorama/testserver/general

the code for that server

Removed /crashme, added IP address print out to home page.

the (temporary) live site

Driven by a $5/mo Digital Ocean App Server

The container file

FROM scratch

# Make and change into app directory
WORKDIR /app

# The binary directory contains the compiled app 
COPY ./binary/ /app/

# Document that the service listens on port 8080 (publish it with -p at run time)
EXPOSE 8080

# Start the Hummingbird service when the image is run, default to listening on 8080 in production environment
ENTRYPOINT ["./hello-world"]
CMD ["--hostname", "0.0.0.0", "--port", "8080"]

The Build and Run Locally Script

#!/bin/sh

DEFAULT_CONFIG_VALUE="release"
CONFIGURATION="${1:-$DEFAULT_CONFIG_VALUE}"

mkdir -p HelloWrapper/binary/
# cp /Users/$USER/Library/org.swift.swiftpm/swift-sdks/swift-6.1.2-RELEASE_static-linux-0.0.1.artifactbundle/swift-6.1.2-RELEASE_static-linux-0.0.1/swift-linux-musl/musl-1.2.5.sdk/x86_64/usr/libexec/swift/linux-static/swift-backtrace-static HelloWrapper/binary/swift-backtrace

swift build -c $CONFIGURATION --swift-sdk x86_64-swift-linux-musl
cp .build/x86_64-swift-linux-musl/$CONFIGURATION/hello-world HelloWrapper/binary/hello-world

TAG=`date +"%a%H%M%S" | tr '[:upper:]' '[:lower:]'`

podman build -f HelloWrapper/Containerfile -t wrapped-hello:$TAG HelloWrapper/
# no -d because want to see errors inline
# podman run --rm --rmi -p 1234:8080 wrapped-hello

podman run -p 1234:8080 wrapped-hello:$TAG

Push To Registry

The DO-account-linked registry (not actually this one) will update the deployment.

podman tag localhost/wrapped-hello:$TAG docker.io/$DOCKER_USER/testserver:v1
podman push docker.io/$DOCKER_USER/testserver:v1
2 Likes