In my efforts to analyze our build performance, I've been using the very helpful JSON files that
-stats-output-dir spits out. But I'm struggling to reconcile some of the numbers with my understanding of the compiler (which could be wrong) and with what I'm seeing in the logs from the network file system daemon that serves our source files.
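For anyone following along, here's a minimal sketch of how I'm pulling numbers out of those JSON files. The filename and counter name in the demo are fabricated so the snippet is self-contained; real stats files use the compiler's own counter names.

```python
# Sketch: load every JSON stats file from a -stats-output-dir directory.
# The demo filename and "wall_time_sec" key are made up for illustration.
import json
import tempfile
from pathlib import Path

def load_stats(stats_dir):
    """Return {filename: parsed JSON} for every stats file in the directory."""
    return {p.name: json.loads(p.read_text())
            for p in sorted(Path(stats_dir).glob("*.json"))}

# Demo with a fabricated stats file so this runs anywhere.
with tempfile.TemporaryDirectory() as d:
    Path(d, "stats-driver.json").write_text(json.dumps({"wall_time_sec": 266.666}))
    stats = load_stats(d)
    print(stats["stats-driver.json"]["wall_time_sec"])  # 266.666
```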
This is a WMO build, so I have stats from the driver invocation and the single frontend invocation. To be extra-safe, I also defined
SWIFTC_MAXIMUM_DETERMINISM to disable threading. (I also ran
sudo purge to blow away macOS's in-memory file cache; otherwise some of the file system calls never reach FUSE and don't show up in the log. I'm not sure that's relevant beyond making the build take longer.)
According to the stats,
- the driver started at Unix time 1562946109.388913 (8:41:49 AM) and wall clock time was 266.666 sec.
- the frontend started at Unix time 1562946200.311746 (8:43:20 AM) and wall clock time was 175.606 sec.
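The gap can be worked out directly from the two start times and wall-clock figures above. Notably, almost none of the driver's extra time falls after the frontend finishes:

```python
# Timestamps and wall-clock times taken from the stats files quoted above.
driver_start   = 1562946109.388913  # driver start (Unix time)
frontend_start = 1562946200.311746  # frontend start (Unix time)
driver_wall    = 266.666            # driver wall-clock time (sec)
frontend_wall  = 175.606            # frontend wall-clock time (sec)

pre_frontend = frontend_start - driver_start               # driver work before launching the frontend
unaccounted  = driver_wall - pre_frontend - frontend_wall  # driver time after the frontend exits

print(f"before frontend: {pre_frontend:.2f} s")  # ~90.92 s
print(f"after frontend:  {unaccounted:.2f} s")   # ~0.14 s
```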
So it looks like the driver is doing roughly 91 seconds of work before it launches the frontend. I looked at my file system log to see what files were being read during that window, and it's mostly module maps and header files. This module doesn't have a bridging header, so I think that rules out PCH generation.
I wondered whether the driver might be warming the module cache before invoking the frontend, but I didn't find any code that would do that, and the timestamps of all the files in the module cache fall after that 91-second gap (i.e., during the frontend's execution).
What is the driver doing with these headers that takes this long?