Refactoring plan of SILVisitor for LTO

First of all, I know the sausage factory floor is messy, but I sort of feel sorry for the cat = p.

Let me lay out what you are saying in a bit more detail to make sure I am understanding correctly:

The clang LTO model is to use the linker. I would assume, Arnold, that you would argue this was created so it can work with normal Makefiles, etc. (similar reasoning to why auto-linking was created). I assume you are questioning whether the linker is the appropriate place to do this, and whether we can instead do it earlier, taking advantage of the fact that the driver has significantly more control when compiling Swift code?

Yes. Also, I think the fact that we are emitting extra .sib files as intermediate products derives from that architecture?

@Arnold yes. The nice thing about us not going through that system is that we do not need to write linker plug-ins for all of the various linkers. So if we can do it, I think we should.

It seems reasonable to follow the thin LTO approach. I discussed this with @compnerd a few days ago.

And I agree that we don't need to embed the optimization within each linker. What do you think @compnerd?

@Arnold I was thinking about this a bit more and realized that there is an important design decision that has not been discussed: do we want to rely on CMO serializing everything?

If we do not want to rely on CMO doing that, then we need some intermediate representation for the SIL in between the initial compilations and the merging of summaries. Otherwise, we do not have anything appropriate to codegen /after/ we have finished the cross-module optimization phase.

My thoughts are that most likely we will not want to serialize everything at CMO time in the near future due to code-size concerns. But if other people have other thoughts, I am happy to discuss.

That being said, given that CMO is not going to serialize all the SIL, I imagine this is how the architecture would look:

noting that CMO serialization would happen early (we would produce a summary file for the Swift module at that time), and that the SIB file serialization would happen where we serialize today, in the middle of the optimizer pipeline. The SIB file would then be used for codegen (and would use CMO code from other modules) using the merged summaries.

One thing that is not completely clear to me is if SIB files themselves would want a summary of some sort to enable DCE. I am imagining information that we would discover later after optimization like we DCE-ed enough that this vtable is not used anywhere in the SIB file even though it /could/ have been. But I haven't thought in great depth. It would not be useful for inlining of course.

I think if DCE just removes the llvm.used attribute from the table declarations, it would be no problem even if they are used after CMO.

I basically agree with Arnold.
Also, we should answer the question of how much benefit we would get from building up such an infrastructure, compared to the cross-module optimization we have now.
"Bottom-up" optimizations, like cross-module inlining and specialization, are already supported by our current cross-module-optimization approach.

With Swift LTO, we could do "top-down" interprocedural optimizations, most importantly DCE.
DCE can also be done by the linker, though. Currently it does not work well with Swift's witness tables, but that could probably be supported in the linker in some way.
So the question is: what amount of improvement can we get from cross-module "top-down" optimizations (besides DCE)? Is it worth investing in building this framework for the benefits we are expecting?

Good point. Could we do DCE as a Swift-specific LLVM LTO optimization by providing summary info about which top-level entry points (witness-table entries, vtable entries, metadata) are may-used by an LLVM module?

There is the possibility of more devirtualization if our world is more closed. It's not clear to me how beneficial that would be in practice, where we use opaque (non-CMO'ed) libraries. If those libraries came with summaries of what they import/extend (that would be ABI, though), you might get close to a real closed world ...
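
(To make the closed-world point concrete, here is a hedged Swift sketch; the types are invented for illustration and are not from this thread.)

// With whole-program knowledge that Circle and Square are the only conformers,
// the call through the protocol below can be devirtualized or specialized.
// If an opaque, non-CMO'ed library might add further conformances, it cannot.
public protocol Shape { func area() -> Double }
public struct Circle: Shape { public func area() -> Double { return 3.14159 } }
public struct Square: Shape { public func area() -> Double { return 1.0 } }

public func totalArea(_ shapes: [Shape]) -> Double {
    return shapes.reduce(0) { $0 + $1.area() }
}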

One architectural aspect is the possibility of parallelization. How does the architecture support parallelization with sequential stages that are as short as possible?

An example thought:

  • The initial stage of generating individual modules (swiftinterface, LLVM bc, and individual summaries) is naturally parallel.
  • The potential next stage of collating summaries is a linearization point.
  • Optimization based on the collated summaries (be it a Swift CMO step or a Swift-LLVM-LTO) and code generation can be done (mostly) in parallel.
  • The ultimate link step to produce the final object is sequential.
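
(A toy sketch of that staging, just to make the parallel/sequential split concrete. The helper functions below are hypothetical placeholders, not real compiler or driver APIs.)

import Dispatch

// Hypothetical placeholders for the per-module and global stages described above.
func emitModuleArtifacts(_ module: String) { /* swiftinterface, LLVM bc, summary */ }
func mergeSummaries(_ modules: [String]) -> String { return "merged.summary" }
func optimizeAndCodegen(_ module: String, using mergedSummary: String) { }
func finalLink(_ modules: [String]) { }

let modules = ["A", "B", "C"]

// Stage 1: per-module emission is naturally parallel.
DispatchQueue.concurrentPerform(iterations: modules.count) { i in
    emitModuleArtifacts(modules[i])
}

// Stage 2: collating the summaries is the linearization point.
let mergedSummary = mergeSummaries(modules)

// Stage 3: summary-driven optimization and code generation, parallel again.
DispatchQueue.concurrentPerform(iterations: modules.count) { i in
    optimizeAndCodegen(modules[i], using: mergedSummary)
}

// Stage 4: the final link producing the object/executable is sequential.
finalLink(modules)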

Hi, everyone. Since last week, I have been prototyping @Arnold's and @Michael_Gottesman's architecture.

Here is my rough implementation of the architecture:

Overview

Almost the same as LLVM's Thin LTO

  1. Emit module summary
    • Add a new file type .swiftmodule.summary.
    • It serializes a module's call graph, witness table, and vtable information.
    • The structure is similar to the LLVM Thin LTO summary.
    • swift-frontend's option -emit-module-summary-path corresponds to this emission.
    • This can be done in parallel.
  2. Merge summaries
    • swift-frontend -cross-module-opt [module summary file...] links and merges multiple summaries.
    • It also prepares for optimization at this phase (e.g. marking dead functions).
    • This is a sequential stage.
  3. Perform optimizations for each module
    • Pass the merged module summary to swift-frontend via -module-summary-path.
    • My prototype implements only a simple Dead Function Elimination.
    • This can be done in parallel.

# 1. Emit module summary for 'module1' into './module1.swiftmodule.summary'
$ swift-frontend -emit-sib module1.swift \
                 -emit-module-summary-path module1.swiftmodule.summary \
                 -parse-as-library

# Also emit the swiftmodule for 'module1' so that 'main' can import it
$ swift-frontend -emit-module module1.swift -parse-as-library

# 2. Emit module summary for 'main' into './main.swiftmodule.summary'
$ swift-frontend -emit-sib main.swift \
                 -emit-module-summary-path main.swiftmodule.summary

# 3. Merge module summaries into one summary file, link them and mark dead functions
$ swift-frontend -cross-module-opt \
                 main.swiftmodule.summary module1.swiftmodule.summary \
                 -o merged-module.summary

# 4. Do Dead Function Elimination for 'module1' module using the combined module summary
$ sil-opt -emit-sil module1.sib \
          -module-summary-path merged-module.summary \
          --sil-cross-deadfuncelim

# 5. Do again for 'main' module.
$ sil-opt -emit-sil main.sib \
          -module-summary-path merged-module.summary \
          --sil-cross-deadfuncelim
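
(For intuition, the dead-function marking in step 3 above can be thought of as a liveness propagation over the merged call graph, starting from the externally reachable roots. This is a hedged sketch of that idea with made-up types; it is not the prototype's actual code.)

// Hedged sketch: worklist liveness over a merged call-graph summary.
struct MergedSummary {
    // For each function, the callees recorded in the per-module summaries.
    var callGraph: [String: [String]]
    // Entry points that must be assumed live: `main`, exported symbols,
    // and witness-table / vtable entries that may be referenced externally.
    var roots: [String]
}

func computeLiveFunctions(_ summary: MergedSummary) -> Set<String> {
    var live = Set<String>()
    var worklist = summary.roots
    while let fn = worklist.popLast() {
        // Skip functions we have already marked live.
        guard live.insert(fn).inserted else { continue }
        worklist.append(contentsOf: summary.callGraph[fn] ?? [])
    }
    return live
}

// Anything not returned by computeLiveFunctions(_:) can be flagged dead in the
// merged summary; the later per-module pass then drops those bodies.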

Module Summary file format

The module summary file consists of call graph information and virtual method table information.

func myPrint(_ text: String) { ... }
public protocol Animal {
  func bark()
}

public struct Cat: Animal {
  public func bark() { myPrint("mew") }
}

public struct Dog: Animal {
  public func bark() { myPrint("bow") }
}

public func callBark<T: Animal>(_ animal: T) {
  animal.bark()
}

For example, this Swift file would be summarized in terms of its call-graph edges and its witness-table/vtable entries (see the sketch below).
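
A hedged, hypothetical sketch of that information (the structure and names are illustrative only, not the prototype's actual serialized format):

// Illustrative model of what the summary records for the file above:
// call-graph edges plus witness-table entries. Not the real on-disk format.
struct CallEdge { let caller: String; let callee: String }
struct WitnessEntry { let conformance: String; let requirement: String; let witness: String }

let callGraph = [
    CallEdge(caller: "Cat.bark()", callee: "myPrint(_:)"),
    CallEdge(caller: "Dog.bark()", callee: "myPrint(_:)"),
    // callBark's `animal.bark()` would presumably be recorded against the
    // Animal.bark() requirement rather than a concrete callee.
]

let witnessTable = [
    WitnessEntry(conformance: "Cat: Animal", requirement: "Animal.bark()", witness: "Cat.bark()"),
    WitnessEntry(conformance: "Dog: Animal", requirement: "Animal.bark()", witness: "Dog.bark()"),
]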

I basically like the approach laid out here for bootstrapping. I'm not sure how well it will integrate with the Swift driver in the future, or whether splitting compilation units into .sib files is well supported today. For example, sil-opt often fails today when supplied with .sil produced by swift-frontend.

I think calling this "LTO" is a misnomer, as this seems like a feature that's internal to the Swift driver, not driven by the linker. Although I do think LLVM thin-LTO is a good architecture to emulate.

Reading between the lines, here's how I understand this proposal...

Given modules A and B...

Compile A

$ swift-frontend A1.swift A2.swift -whole-module-optimization -emit-module -emit-module-summary -emit-sib -o A.swiftmodule

Output: A.swiftmodule, A.swiftmodule.summary, A.sib

The three output file types are likely all generated during SIL module serialization, but we have the option of deferring the .summary and .sib output in the future if it's useful to run more passes on those.

A.sib must contain the additional SIL function bodies that are not exported for cross-module optimization. It may also contain a copy of exported function bodies, for example, if they have been further optimized after the module was serialized. A.sib is the same file format as .swiftmodule but does not include any AST-level type-information (A.sib is useless on its own). A.sib can be arbitrarily broken down into Ax.sib, Ay.sib, Az.sib either for parallelism or incremental builds.

Compile B

$ swift-frontend B1.swift B2.swift -whole-module-optimization -emit-module -emit-module-summary -emit-sib -o B.swiftmodule

Output: B.swiftmodule, B.swiftmodule.summary, B.sib

$ swift-frontend -merge-module-summary \
                 A.swiftmodule.summary B.swiftmodule.summary \
                 -o merged_module.summary

The summary merge step seems unnecessary, but it may save compilation time because each module does not need to "re-merge" the summaries as it imports them.

I'm not sure why the proposal calls this "-cross-module-opt".

Test and debug the SIL optimizer

$ sil-opt A.sib -emit-sil \
          -module-summary-path merged-module.summary \
          --sil-cross-deadfuncelim
  • Finds A.swiftmodule and B.swiftmodule in the include path

  • I think we currently need to specify the .sib file's parent
    .swiftmodule on the command line, but that seems silly. A.sib should
    know that it comes from A.swiftmodule

CodeGen A

$ swift-frontend A.sib -c -o A.o -module-summary-path merged-module.summary
  • Finds A.swiftmodule and B.swiftmodule in the include path

There seemed to be some confusion regarding the artifacts produced by the compiler. Here's my take on that...

It's useful to split information that has different dependencies, a different lifetime, or that needs to be individuated on a command line into separate files.

.swiftmodule: "what a module exports"

  • somewhat analogous to a combined header

  • produced by a single well-defined SIL serialization point in the
    optimizer pipeline (prior to dropping any semantics).

  • self-contained, AST, SIL-level declarations, and exported function
    body definitions.

  • may depend on information from function bodies that aren't included
    (at least as-is today). Ideally we would have a way of recording
    those dependencies on .swift files, and/or avoid introducing them, to
    support incremental cross-module optimization.

.swiftmodule.summary: "inclusive module summary"

  • augments .swiftmodule with summary information that's inclusive over
    the module's implementation

  • could (should?) be embedded within the .swiftmodule, but separating them
    allows for a single merged summary file

  • additional source of dependencies on .swift files. It's possible
    that updating a .swift file changes the summary but not the
    .swiftmodule

  • could potentially be emitted later in the pipeline to provide more
    refined summary

.sib/.sil: "SIL-level compilation unit"

  • somewhat analogous to .cpp/.bc/.ll

  • arbitrary subset of SIL function bodies for codegen within a
    module that can be merged or split. Never seen by other modules.

  • these may be emitted at any time during SIL optimization for testing and debugging.

  • .sib should (ideally) be isomorphic and interchangeable with .sil files

Thanks for re-organizing my proposal. That captures my thoughts perfectly!

In my opinion, the Swift compiler driver should focus on single module compilation and build systems like SwiftPM should drive these kinds of cross-module cooperation things.

Right, I know those issues. I'm sending patches to fix them now.

Yes, as you said, this emulates the LLVM thin-LTO architecture but does not perform it at link time :sweat_smile:
Do you have any ideas for a name for this optimization?

.sib/.sil and .swiftmodule.summary have the same lifetime because they depend on internal things that are not exported in the .swiftmodule. So I think it would be better to embed the summary info in .sib rather than in .swiftmodule.

My 2 cents.

Have a look at the ThinLTO video. It is slightly different from your proposal. The thin-link step actually does optimizations instead of merging the summaries.

Here is an example. It creates one summary per module (i.e. per translation unit):
http://lists.llvm.org/pipermail/llvm-dev/2019-January/128955.html

As far as I know, the thin-link phase merges summaries and computes dead symbols, and the LTO backend actually does the optimizations and codegen based on the merged summary and the computed dead flags.

My prototype implementation follows this approach.

It also does an analysis for cross-TU inlining and provides this information in the per-TU summaries.

E.g., is it beneficial to inline foo from Module A into Module B, depending on the size of foo and how frequently it is used in Module B?
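
(As a hedged illustration of that kind of heuristic; the summary fields and threshold below are invented for the example and are not taken from the ThinLTO implementation.)

// Illustrative only: a toy cross-module inlining benefit check based on
// callee size and how often the caller module uses it.
struct CalleeSummary {
    let instructionCount: Int       // size of foo in Module A
    let callSiteCountInCaller: Int  // how often Module B calls foo
}

func isCrossModuleInliningBeneficial(_ callee: CalleeSummary,
                                     sizeThreshold: Int = 40) -> Bool {
    // Small, frequently called functions are the best candidates.
    return callee.instructionCount <= sizeThreshold && callee.callSiteCountInCaller > 0
}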

Interesting. But I think similar things are already done with the current cross-module optimization based on .swiftmodule.

If you have the time, have a look at:

It gives detailed information about the cross-TU analysis and the overall architecture.

Feel free to ignore me.

I totally agree that the LTO backend does optimisations and codegen.

Thanks for your information!

Thanks for your work. I am looking forward to using LTO in Swift.

Yes.

Great. First step is having a working serialized SIL.

To me, this is just cross-module optimization. You could call it thin-cross-module-optimization if you want to distinguish it from the less-scalable approach.

I don't understand that. An important distinction between .swiftmodule and .sib is that the former needs to be loaded when it's imported by other modules, and the latter is purely used for codegen within a module. There's nothing cross-module about the .sib. The module summary is only useful to export information that's already in the .sib to other modules. Both the .swiftmodule and the summary can embed assumptions about code within the .sib files.

I'm perfectly fine using separate files anyway. I think that makes debugging easier.
