First off, props to the folks in the Swift Discord, especially those in #troubleshooting.
I was having trouble getting optimal performance out of a breadth-first search algorithm. It uses a bitvector to keep track of visited vertices; packing the visited flags into machine words this way tends to give better cache locality on large graphs.
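For context, the visited set is shaped roughly like this (a simplified sketch with made-up names, not the exact code; the relevant detail is that the word width comes from the size of Int):

```swift
// Simplified sketch of a word-packed "visited" bitvector (illustrative names).
struct BitVector {
    private var words: [UInt]

    init(count: Int) {
        // One bit per vertex, packed into machine words.
        // Int.bitWidth is the word width in bits (64 on most platforms).
        words = Array(repeating: 0, count: (count + Int.bitWidth - 1) / Int.bitWidth)
    }

    subscript(index: Int) -> Bool {
        get { words[index / Int.bitWidth] & (1 << (index % Int.bitWidth)) != 0 }
        set {
            if newValue {
                words[index / Int.bitWidth] |= 1 << (index % Int.bitWidth)
            } else {
                words[index / Int.bitWidth] &= ~(1 << (index % Int.bitWidth))
            }
        }
    }
}
```

Every visit test and mark divides by the word width, which is why I wanted that width to be a compile-time constant.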
In this case, the compiler proved that Int.bitWidth was constant and avoided the lazy initialization overhead. In contrast, it failed to prove that MemoryLayout<Int>.size * 8 was constant, so the lazy initialization overhead remained.
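A reduced example of the difference (the a/b names below match the later discussion; this is a sketch, not the exact code I was profiling):

```swift
// Two top-level globals that "should" be the same compile-time constant.
// Global `let`s in Swift are lazily initialized, so each access can go
// through a once-style check unless the optimizer can fold the value.
let a = Int.bitWidth                // folded to a constant, per the behavior described above
let b = MemoryLayout<Int>.size * 8  // lazy-initialization check remains, per the behavior described above

@inline(never) func useA() -> Int { a }
@inline(never) func useB() -> Int { b }
```

Built with -O, useA reduces to returning a literal, while useB still goes through the global's lazy accessor; that extra work is the overhead described above.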
The result is "obviously" constant, but apparently not to the compiler. Please file a bug and we'll see what the optimization experts have to say. Thanks!
If you want to look into this... warning: nerd snipe.
I think this is most likely a global-opt issue, but my memory might be wrong; it could also be due to a phase-ordering issue. I would run swiftc with the flag -Xllvm -sil-print-all, which prints the result of each optimization pass on each function, and look to see which pass gets rid of a but not b after it runs. We have similar tests for stuff like this here: