Idea: runtime/slim images

(Ian Partridge) #1

Currently, the Swift Docker images come "batteries included", with everything users need to build and run Swift programs, use the REPL, and so on. This means they are necessarily quite large.

Other languages also offer "slim" images, which are images that only contain what's required to run a program. For example, in the case of Java this means the slim image includes the JVM but not the Java compiler. For Node.js it includes the runtime but not npm.

I would like us to publish slim images for Swift as well and I have prototyped what such images would look like. There would be two main differences from the current images.

  1. The system package requirements would be much lighter; I think just: libatomic1, libbsd0, libcurl4, libxml2, and tzdata.
  2. We would not include the entire Swift toolchain, just the /usr/lib/swift/linux shared libraries from the release tar.gz.

Taking this approach, my slim image is ~250MB, in comparison to 1.37GB for the full image.
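
To make the idea concrete, here is a minimal sketch of what such a slim image definition might look like, assuming the package list above and an already-extracted release tarball; the base image and tarball path are illustrative, not the final image definition:

```dockerfile
FROM ubuntu:18.04

# Runtime-only system dependencies (the package list proposed above).
RUN apt-get update && apt-get install -y --no-install-recommends \
        libatomic1 libbsd0 libcurl4 libxml2 tzdata \
    && rm -rf /var/lib/apt/lists/*

# Copy only the Swift runtime shared libraries, not the whole toolchain.
# The source path is illustrative; the real image would download and
# verify the release tarball first.
COPY swift-5.0-RELEASE-ubuntu18.04/usr/lib/swift/linux /usr/lib/swift/linux
```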

One nice thing about this is it means users can use a multi-stage Dockerfile to build their app, something like this:

FROM swift:5.0 as builder
WORKDIR /root
COPY . .
RUN swift build

FROM swift:5.0-slim
WORKDIR /root
COPY --from=builder /root .
CMD [".build/x86_64-unknown-linux/debug/docker-test"]

What do people think about this?

(Johannes Weiss) #2

I think that's a great idea.

Will lldb work in the -slim ones?

(Ian Partridge) #3

In my current prototype, no. The problem is that lldb depends on a shared library which is 130MB on its own, plus libpython2.7... So the slim image rapidly becomes not so slim after all...

Perhaps the closest comparison to Swift is Rust (as a compiled language built on LLVM). rust:slim doesn't include a debugger.

Does anyone have experience of deploying containers in production? Is a debugger essential? For Golang I thought it was common to just deploy the static binary with nothing else in the container.

(Keith Smiley) #4

We run tests in slimmed down containers that we build ourselves. I've found sometimes having a debugger is useful but I'd rather be forced to manually install it in the container when I needed it than incur the download time overhead.

(Johannes Weiss) #5

Hmm, that means no symbolicated stack traces and no debugging at all... Is that useful?

Hmm, Rust just doesn't have the great lldb support that Swift has, so it's less useful anyway...

We run symbolicate-swift-fatal to symbolicate the stack traces after a crash which requires lldb.

(Nathan Harris) #6

What's the cost / overhead of adding a slim-lldb flavor?

  • swift:5.0 - runtime & compiler, recommended for use in multi-stage builds
  • swift:5.0-slim - just the runtime and a handful of system libraries
  • swift:5.0-slim-lldb - same as above, but with lldb

(Tanner) #7

Maybe we could include this file in the slim-lldb image, too? Is there any way with Docker that we could run it on crash output automatically?

(Johannes Weiss) #8

We should!

(Jari (LotU)) #9

All of this sounds really promising! I know that at my work, one big reason not to use Swift is the huge Docker images. This would be a great improvement :smiley:

(Joe Smith) #10

A supported slim version for release would be excellent; for what it's worth, I'd almost always want to run the version with lldb support in production to capture the "one weird crash" in the wild and get as much data from a problem as possible. Schlepping over an extra ~200MB is worth it in my experience.

(Slava Pestov) #11

Assuming you're going to be looking at a core dump offline, why is it necessary to include lldb as part of the docker image at all?

(Joe Smith) #12

Ah, thanks for the save & great question. I misunderstood what was being provided. I’d run most containers as slim, and give best-effort to keeping a core dump for offline review later (using whatever container orchestration system/tools to do so).

You’re correct that I’d like to have an lldb-friendly container occasionally, but since I’d have core dumps from a slim container, I would not care to bring along debugging tools for normal prod containers.

(Johannes Weiss) #13

It's online. In server environments you typically want your crash reports at least in Splunk (I say 'at least' because you might have a better story around crash reports, but the baseline would be Splunk/ELK/...), and that's only useful if they're symbolicated. We run ./binary | symbolicate-swift-fatal ./binary so the crash output is already symbolicated when it hits stdout/stderr, which gets sent to Splunk anyway.
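
If the symbolication script shipped in the image, that pipeline could be baked into the image's default command; a hedged sketch (the binary path is illustrative, and symbolicate-swift-fatal is assumed to be on the PATH):

```dockerfile
# Illustrative only: wrap the app so crash traces are symbolicated
# before they reach stdout/stderr (and from there Splunk/ELK/...).
CMD ["/bin/sh", "-c", "./binary | symbolicate-swift-fatal ./binary"]
```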

Of course, we could build infrastructure to give us access to the un-symbolicated stack trace, all the binaries, and a way to identify which crash came from which deployment. Then we could have some software that takes all this information and symbolicates it, but that's a lot of work for something that's much more easily done straight away.

Arguably, none of this should be necessary and all stack traces should come symbolicated straight away.


This sounds great, but doesn't this already exist with the IBM images?
(they have a builder and a runtime image)
Or what would be the difference?

(Ian Partridge) #15

Thanks everyone for your feedback. It's really great to hear people's interest in this.

Doing this requires Python to be installed as well, which will make the image even larger. Even the full swift:5.0 image doesn't include Python; if we're making Python a requirement, maybe swift:5.0 should start FROM python... (joke)

This. Does anyone know what prevents that at the moment? I'm afraid I have no idea.

I think we should go further and include it in the Linux release .tar.gz, as it's so useful. Then it will be in the main swift:5.0 Docker image. It's not there today. What do people think?

I don't know whether Docker would accept that - the other languages I've seen only have two flavours: normal and slim. But we could ask, if having a third image is the solution we prefer. Personally, I'd like to try hard to avoid it, as it complicates things for users and image maintainers.

You're right, it isn't needed for offline analysis, as you can docker commit the container containing the core file and then load the core into another image that contains LLDB for analysis.
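
That workflow could look something like the following sketch; the container name, image tag, and file paths are all illustrative, and it assumes a full (non-slim) Swift image that ships lldb:

```shell
# Snapshot the crashed container, including its core dump, as an image.
docker commit my-crashed-container my-app-postmortem

# Copy the core file and the binary out of the crashed container
# (paths are illustrative)...
docker cp my-crashed-container:/app/core ./core
docker cp my-crashed-container:/app/server ./server

# ...and load them into lldb from an image that includes the debugger.
docker run --rm -it -v "$PWD:/debug" swift:5.0 \
    lldb --core /debug/core /debug/server
```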

At IBM we have maintained our own build and run images for several years. The images have some historical quirks though, and we're keen to improve the upstream images so the whole community benefits.

(Chris Lunsford) #16

This is pretty standard fare in the Docker container world. You can typically publish as many images and tags as are needed, though the tag list should be kept as small as possible to reduce complexity. The tags and their proper usage should be clearly and concisely documented in the README.

I for one will benefit from a slim image. For me, the value of including the debugger in the slim image isn't worth the increased image size for an irregular activity, especially as there are other options, as noted above.


Who is the right person to answer whether this is possible? It would drastically improve debuggability, IMO.

(Ian Partridge) #18

I've opened an issue to track shipping symbolicate-linux-fatal as part of the releases.

(Ian Partridge) #19

This is true for normal accounts on Docker Hub, but the Swift images are part of the Docker Official Images program, which is curated by Docker to ensure best practices etc. So we'd need to work with Docker to make this happen, if we want it.

(Chris Lunsford) #20

Yes, and it doesn't appear to be an issue for official images either. Just look at the Python images; it's a sizable matrix of:

  • python:<version>
  • python:<version>-slim
  • python:<version>-alpine
  • python:<version>-windowsservercore