I am using local packages to structure my application code. These packages contain a lot of generated code, which greatly helps me write code, but a lot of it, roughly 90%, goes unused.
I couldn't find many resources on this, but I was wondering whether I can decrease my app size (which is fairly large, presumably because of the generated code) by moving the code from my local packages directly into my project.
I don't have experience with this. I read something about Whole Module Optimization, but I gather that isn't possible across packages, so I would expect the app size to decrease if I move the generated code: the compiler could then more easily detect that some code is unused and remove it.
Is this an example of premature optimization, or could it actually work? I don't have an app in production, so I can't test it myself.
Whole module optimization (WMO) is the default (for the release configuration) and has been for a long time. It works within a module and should, for example, remove any unused internal symbol.
Link time optimization (LTO) is less established. It works between modules and might, for example, remove an unused public symbol. You can search the forums for the term to track its evolving status.
So if the unused code in question is reachable from a public entry point, then what you are asking about is in the LTO domain. Eventually, when LTO is standard, your code should be fine the way it is. For now, if you have the ability to turn on LTO, you can try it and see how much of an effect it has. If you cannot turn on LTO, or it does not yet do a good enough job, then you have two choices:
- Split the code into smaller modules and only import the ones you need at any given time. That way the unused code never ends up in the build in the first place.
- Move it (via a Git submodule, symbolic link, or similar strategy) so that it lives directly in the application module, and remove public from all its declarations. That way it moves from LTO territory into WMO territory.
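A minimal sketch of the second option (the function names here are invented for illustration). Once a generated declaration lives in the app module with the default internal access, whole-module optimization can see the entire set of callers and prove that an unused declaration is dead:

```swift
// Before (in a package), the generated helper had to be public, so
// WMO could not prove it unused:
//     public func unusedHelper() -> Int { 42 }

// After moving it into the app module, it is internal by default.
// If nothing in the module calls it, the optimizer may strip it.
func unusedHelper() -> Int { 42 }   // candidate for dead-code elimination

func usedHelper() -> Int { 7 }      // kept: referenced below

print(usedHelper())
```

Whether unusedHelper is actually stripped depends on the build configuration; the point is only that internal visibility makes the analysis possible at all.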
P.S. When you are comparing things, make sure you are building in the release configuration. The debug configuration sacrifices a lot of optimization in exchange for compilation speed.
Whole module optimization only works within a module, so it can only optimize code within a package's targets. There is, however, a setting called cross-module optimization that can optimize code across several modules. Cross-module optimization is much more powerful than link-time optimization (the optimization that @SDGGiesbrecht discusses in their post).
To use cross-module optimization in an Xcode project, select your target in the project editor, select "Build Settings", scroll down to the "Swift Compiler - Custom Flags" section, then add -cross-module-optimization to the "Other Swift Flags" setting. Because code optimizations can take a while, I recommend enabling it only for release builds.
To use cross-module optimization when building from the command line, simply pass the -cross-module-optimization option like so:
swiftc code.swift -cross-module-optimization
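If the package is built with SwiftPM rather than Xcode, the flag can also be forwarded per target from the manifest. This is a sketch (the package and target names are invented), and note that using unsafeFlags prevents the package from being consumed as a dependency of other packages:

```swift
// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "MyApp",
    targets: [
        .target(
            name: "MyApp",
            swiftSettings: [
                // Forward the experimental flag to the Swift compiler,
                // but only for release builds, where optimization runs.
                .unsafeFlags(
                    ["-cross-module-optimization"],
                    .when(configuration: .release)
                )
            ]
        )
    ]
)
```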
It should be noted that while cross-module optimization can decrease your code size by removing unnecessary code, it also has the potential to increase your code size, since it can specialize generic constructs more aggressively. It would be a good idea to look at your app's size before and after enabling cross-module optimization to make sure it's helping. And if you're willing to sacrifice some performance for a smaller app, then compiling with -Osize might be a good idea too.
Can you verify this with a reference? To my knowledge, all it actually does so far is flag every symbol with @usableFromInline. If that is still true, then it is counterproductive for code size at the moment (though very useful for runtime performance).
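To illustrate why blanket @usableFromInline would be counterproductive for size (names invented for the sketch): the attribute exposes an internal symbol to inlinable code in other modules, so the compiler must keep that symbol in the binary even if nothing else references it directly.

```swift
// An internal helper, as generated code often is. Marking it
// @usableFromInline makes it part of the module's binary interface.
@usableFromInline
internal func generatedHelper(_ x: Int) -> Int { x * 2 }

// The inlinable body below may be copied into client modules, and those
// copies call generatedHelper, so its symbol can never be stripped.
@inlinable
public func doubled(_ x: Int) -> Int {
    generatedHelper(x)
}

print(doubled(21))
```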
Regardless, both cross-module and link time optimization are in the same boat regarding their experimental state and the fact that they will likely become the default eventually. So toy with them, see how well they work, and report back any issues you encounter to help us improve them.
I don't think -cross-module-optimization is very useful. Library authors often put a lot of thought into the placement of @inlinable, which means that APIs where it has been omitted are often things like debugging/introspection APIs, where the performance gain would not be worth the code size increase.
If a module benefits significantly from -cross-module-optimization, that usually means the library is missing @inlinable annotations.
Those who only vend their libraries as source and have no ABI concerns generally wish not to think about @inlinable, since everything can safely be inlined. Not only is marking everything @inlinable annoying, but those who vend their libraries both ways face a conundrum: a symbol that is safely inlinable from source may not be safely inlinable from a binary. If everything relies solely on @inlinable markers, then the same source cannot support both products.
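A small sketch of that conundrum (the type is invented): @inlinable exports the function body as part of the module interface. For source-only distribution this is harmless, since clients recompile against every change; for binary distribution it pins the implementation, because clients may have inlined an old copy of the body, and it drags stored state into the interface via @usableFromInline.

```swift
public struct Counter {
    // @inlinable bodies may only reference public or @usableFromInline
    // declarations, so even this internal stored property must be exposed.
    @usableFromInline
    internal var value = 0

    // Fine if clients always build from source; for a binary framework,
    // changing this body later would not reach clients that already
    // inlined the old version.
    @inlinable
    public mutating func increment() -> Int {
        value += 1
        return value
    }
}

var counter = Counter()
_ = counter.increment()
print(counter.increment())
```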
Given that directing clients to activate an experimental feature is a bad customer experience, I agree with you for now. But in time, I do believe it makes sense for SwiftPM & Co. to apply it by default to source dependencies. At that point, many of the @inlinable attributes we see today can be cleaned out and thrown in the dustbin. @inlinable was never supposed to be about optimization in the first place, but rather about ABI boundaries.