I understood that it cached them locally so that network activity is not needed again, but that it still clones again from the cache into the workspace. Have more changes been made since then that I didn’t notice? I ask because that is good for speed, but in terms of disk usage it is a step backward, since it involves an additional clone on top of what we had before.
If the edge cases involving simultaneous builds can be solved, it should be possible to massively reduce drive usage and continue to improve speed at the same time.
- [global cache]/Repositories/[package] would hold the repositories (see the sketch after this list).
  - When working from pins, any new clones are only made as shallow clones.
  - When resolving anew, any new clones are made as deep clones, and any existing shallow clones that are touched are turned into deep clones.
- [global cache]/Checkouts/[package]/[commit] would hold the working copies of the sources, and each would only be created the first time it is needed. There would only ever be one for any package‐commit combination, no matter how many packages elsewhere on the file system depend on it.
- [global cache]/Products/[package]/[commit] would hold the build artifacts for that commit (before any cross‐module optimizations). Just like the sources, there would only ever be one for any package‐commit combination, no matter how many packages elsewhere on the file system depend on it.
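To make the proposed layout concrete, here is a minimal sketch in Swift of how those three cache locations and the shallow/deep clone choice could be expressed. The `GlobalCache` type, its method names, and the cache root are hypothetical illustrations for this proposal, not existing SwiftPM API.

```swift
import Foundation

/// A sketch of the proposed global‐cache layout. All names here are
/// hypothetical; only the directory scheme comes from the proposal above.
struct GlobalCache {
    /// Root of the shared cache (assumed location, e.g. somewhere under the
    /// user's home directory).
    let root: URL

    /// [global cache]/Repositories/[package] — one clone per package.
    func repositoryDirectory(for package: String) -> URL {
        root.appendingPathComponent("Repositories")
            .appendingPathComponent(package)
    }

    /// [global cache]/Checkouts/[package]/[commit] — one working copy per
    /// package‐commit combination, created the first time it is needed.
    func checkoutDirectory(for package: String, commit: String) -> URL {
        root.appendingPathComponent("Checkouts")
            .appendingPathComponent(package)
            .appendingPathComponent(commit)
    }

    /// [global cache]/Products/[package]/[commit] — build artifacts for that
    /// commit, before any cross‐module optimizations.
    func productsDirectory(for package: String, commit: String) -> URL {
        root.appendingPathComponent("Products")
            .appendingPathComponent(package)
            .appendingPathComponent(commit)
    }

    /// Shallow clones suffice when working from pins; resolving anew needs
    /// full history. An existing shallow clone can later be deepened with
    /// `git fetch --unshallow`.
    func cloneArguments(resolvingAnew: Bool) -> [String] {
        resolvingAnew ? ["clone"] : ["clone", "--depth", "1"]
    }
}
```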
Dates of last use could be tracked, and the clear cache command could be parameterized to remove anything not touched in x amount of time, or to remove the oldest things until the cache has shrunk to x size.
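As a rough illustration of that cache‐clearing idea, here is a Swift sketch of the two pruning policies (remove anything not touched in x amount of time, or evict the oldest entries until the cache fits a size budget). `PrunePolicy`, `CacheEntry`, and `prune` are made‐up names for illustration; nothing like this exists today.

```swift
import Foundation

/// Hypothetical parameterization of the clear‐cache command.
enum PrunePolicy {
    case notTouchedWithin(TimeInterval)   // remove entries unused for x seconds
    case shrinkTo(bytes: UInt64)          // evict oldest until cache ≤ x bytes
}

/// One cached item (repository, checkout, or product directory).
struct CacheEntry {
    let path: URL
    let lastUsed: Date   // updated whenever the entry is used by a build
    let size: UInt64
}

/// Returns the entries that should be deleted under the given policy.
func prune(_ entries: [CacheEntry], policy: PrunePolicy, now: Date = Date()) -> [CacheEntry] {
    switch policy {
    case .notTouchedWithin(let maxAge):
        // Everything not touched within the last `maxAge` seconds is removed.
        return entries.filter { now.timeIntervalSince($0.lastUsed) > maxAge }
    case .shrinkTo(let limit):
        // Evict the least recently used entries until the total fits the limit.
        var total = entries.reduce(0) { $0 + $1.size }
        var victims: [CacheEntry] = []
        for entry in entries.sorted(by: { $0.lastUsed < $1.lastUsed }) {
            guard total > limit else { break }
            victims.append(entry)
            total -= entry.size
        }
        return victims
    }
}
```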
Maybe this would make a good GSoC project for someone?