Improved build system integration

Okay. Pipelining! Here's a diagram I've used a few times over the years to talk about what I see as the opportunities:

One thing that isn't exactly on the diagram is batch mode, which is similar to "pre-checked decls" except that you run N processes that all do this, each checking only some of the decls in the "Sema" stage. It also doesn't parallelize the separate tracks after Sema, because you're already running N processes, and in the simple case N is already the number of CPUs.


The suggestion in the original post sounds like "Pre-checked Decls" at first, but it's probably closer to "Split SIL" (due to the mention of SIB). There are a few reasons why I wouldn't suggest "Pre-checked Decls" as a good model for solving the pipelining problem:

  • If the goal is to produce a swiftmodule so that you can start compiling lib2, you also need at least the SIL of the inlinable functions (see the sketch after this list).

  • There's currently no serialization implemented for statements and expressions in Swift, just SIL.
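
To make that first point concrete, here's a minimal sketch (the function names and bodies are invented for illustration) of why a swiftmodule can't consist of type-checked declarations alone once inlinable code is involved:

```swift
// Hypothetical lib1 source.
// The body of `fastPath` is part of the module's interface: clients built
// against lib1.swiftmodule may inline it, so the emitted swiftmodule has to
// carry its SIL, not just its declaration.
@inlinable
public func fastPath(_ x: Int) -> Int {
    return x &+ 1
}

// A non-inlinable function only needs its declaration in the swiftmodule;
// its body never has to be serialized for clients.
public func slowPath(_ x: Int) -> Int {
    return x + 1
}
```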

Having someone revive SIB, however, does seem reasonable. Note that a SIB file is not standalone; it's meant to be loaded along with all the other SIB (or source) files for a module to get access to AST declarations in other files.
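
As a rough sketch of why that is (file names and contents are invented), consider the kind of cross-file dependency that keeps a single SIB file from standing alone:

```swift
// FileA.swift: declares a type used elsewhere in the module.
public struct Point {
    public var x: Double
    public var y: Double
}
```

```swift
// FileB.swift: the SIL here references Point, but Point's declaration lives
// in FileA's AST. FileB's SIB only makes sense when loaded together with
// FileA's SIB (or FileA.swift itself).
public func length(of p: Point) -> Double {
    return (p.x * p.x + p.y * p.y).squareRoot()
}
```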


All that said, it's worth noting that a set of SIB files would contain all of the SIL for a module, which means type-checking all the function bodies. This isn't the slowest part of an optimized build, but it probably is the slowest part of a debug build. So you may still get significantly faster behavior with separate -emit-module and -c invocations if your module doesn't have inlinable code, even with the repeated work of type-checking declarations. (At least in theory. @harlanhaskins, did you manage to get in the change to not type-check non-inlinable function bodies for -emit-module?)