Some Combine cases that were always slow are now obscenely slow:
let actionsEnabledPublisher = Publishers.CombineLatest3($sendingAction, conversationState, $shouldDisableUserInput)
    .map { sendingAction, state, disableInput in
        sendingAction == false && state == .active && disableInput == false
    }
    .removeDuplicates()
    .eraseToAnyPublisher()
Previously this type-checked in under 500ms; now it takes over 5s. The fix is to add explicit types to map's closure parameters.
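To show the shape of the workaround without the surrounding project, here's a self-contained sketch (no Combine; combineLatest3 is a hypothetical stand-in that just applies the transform to three values, and ConversationState is an assumed enum):

```swift
// Hypothetical stand-in types and function, for illustration only.
enum ConversationState { case active, inactive }

func combineLatest3<A, B, C, R>(_ a: A, _ b: B, _ c: C,
                                transform: (A, B, C) -> R) -> R {
    transform(a, b, c)
}

// Inferred closure parameter types, as in the slow expression above:
let inferred = combineLatest3(false, ConversationState.active, false) {
    sendingAction, state, disableInput in
    sendingAction == false && state == .active && disableInput == false
}

// The workaround: explicit types on the closure parameters, so the
// type-checker doesn't have to infer them from the surrounding expression.
let annotated = combineLatest3(false, ConversationState.active, false) {
    (sendingAction: Bool, state: ConversationState, disableInput: Bool) in
    sendingAction == false && state == .active && disableInput == false
}

print(inferred, annotated) // true true
```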
Overall it's possible the total build time is slightly better? It's hard to tell with the operating system upgrade (downgrade?) to Tahoe, and Xcode also seems to be much worse about doing a ton of work before it even invokes the Swift compiler.
Overall build performance is about the same for my small project; I haven't tried the large one yet. I don't know whether these are new, but I did notice two things.
Lots of repeated clang module compilation for every package dependency.
And of course incremental builds are still horribly slow: a clean cached build for my larger project takes about 100s, while an incremental build after adding a single newline takes 140s.
It's the build visualizer in Xcode. View a build log from the Report navigator tab, and in the Editor Options menu (the lines button at the upper right) click Assistant; the visualizer should come up (sometimes it's bugged, as in the 26.4 beta right now). It's just barely good enough to be useful: there isn't nearly enough information visible to drive real build improvements, but it does help spot obvious issues, like repeated clang module building or slow Swift build preparation.
One thing that may be contributing to the discrepancy is the fact that prerelease toolchains are built with assertions enabled, which slows down compilation considerably. While I don't dispute that there could be real slowdowns between 6.2 and 6.3, I would be interested in type-checker benchmarks that compare a 6.2 toolchain with assertions enabled (such as the downloadable macOS toolchains), or (probably more work, as it requires a custom toolchain build) a 6.3 prerelease toolchain with assertions disabled.
@KeithBauerANZ could you please try two changes: use the same kind of literals everywhere, i.e. XCTAssertEqual(product1?.lightThemedAssets?.backgroundColor, .init(red: 244.0 / 255.0, green: 218.0 / 255.0, blue: 129.0 / 255.0, alpha: 1.0)), and in the second expression replace == false with !sendingAction? I think that should improve performance enough to let you remove the explicit type from the closure. Operator lookup is global, and == has a lot of overloads because of Equatable conformances, which the type-checker cannot always rule out easily because some of the types also conform to ExpressibleByBooleanLiteral.
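The two suggested rewrites can be sketched with a stand-in Color struct in place of the real platform color class (Color here is hypothetical, not the AppKit/SwiftUI type):

```swift
// Hypothetical stand-in for the real color type.
struct Color: Equatable {
    var red, green, blue, alpha: Double
}

// Before: 244 / 255.0 mixes an integer literal with a double literal,
// forcing the solver to consider non-default literal types.
let mixed = Color(red: 244 / 255.0, green: 218 / 255.0,
                  blue: 129 / 255.0, alpha: 1)

// After: every literal is spelled as a Double, so each operand has only
// one plausible default type.
let uniform = Color(red: 244.0 / 255.0, green: 218.0 / 255.0,
                    blue: 129.0 / 255.0, alpha: 1.0)

// And !flag instead of flag == false sidesteps the global == overload set.
let sendingAction = false
print(mixed == uniform, !sendingAction) // true true
```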
I don't believe Apple's shipped toolchains enable assertions, even in beta. The development toolchains might, but I've never been able to get them to build my iOS projects, so all of my comparisons have to wait for Xcode betas. Though technically, the version information doesn't say either way:
swift-driver version: 1.148.6 Apple Swift version 6.3 (swiftlang-6.3.0.119.2 clang-2100.0.119.1)
Target: arm64-apple-macosx26.0
We have a global style guide to use abc == false instead of !abc; people before me felt it was more readable. I personally dislike it, and previously set out to prove that it was causing slow compilation: I used SourceKit to modify the entire project, replacing all the occurrences (then manually fixed the few where the left-hand side was actually optional). At the time, it made roughly zero difference to compilation time. So if == false is that big of a problem in 6.3, it's certainly a dramatic regression from previous releases.
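The optional left-hand-side cases are the interesting wrinkle here: for a Bool?, == false still compiles (via Optional's == overload), but prefix ! does not, which is why those occurrences needed manual fixes after the mechanical rewrite. A minimal illustration:

```swift
let maybeFlag: Bool? = nil

// Compiles: the Optional<Bool> is compared against .some(false),
// so nil == false evaluates to false.
print(maybeFlag == false) // false

// Does not compile: prefix ! is not defined for Bool?.
// print(!maybeFlag) // error: cannot convert value of type 'Bool?'

// One possible manual rewrite, treating nil as "not false":
print(maybeFlag.map { !$0 } ?? true) // true
```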
As for adding explicit .0 onto literals that resolve to floating-point types: if we wanted to be Rust and require them, we should have done that from the start. Regardless, there aren't any 4-argument initializers for Optional<XYZ>, so this .init must clearly be a plain XYZ; the argument labels then identify the initializer uniquely, so there's no reason to chase every possible combination of types the literals could represent. It really doesn't feel like it "should" be a hard thing to type-check if you chase the resolution in a "sensible" order (obviously easier said than done, but given that literals and operators are an eternal source of pain, shouldn't the type-checker avoid expanding those like the plague?). Anyway, it's certainly a dramatic regression from 6.2, and it adds fuel to the fire for a stylistic ban on .init, which a bunch of people are already in favor of.
It might have been the case before, but it's not necessarily the case now, because all of the visible operators go into an overload set. So if some dependency of a framework you import makes their type Equatable, and that type is also ExpressibleByBooleanLiteral, that would cause more solving to happen. Some of the performance for things like this was supported by hacks we had in the solver; the problem with these hacks is that they make some code faster but other code slower, or even reject some valid code, because they are too aggressive. 6.3 was an effort to overhaul all that, so it's possible that we made some expressions slower that relied on the hacks before, but it opens a path forward with more principled improvements. We made a post about this some time ago.
I think the type-checker is doing a "sensible" thing today, because literals don't drive type-checking; calls do. In this expression we have an overload of / that accepts (Int, Int) -> Int and a lot of other ones that accept e.g. (Float, Float) -> Float, (Double, Double) -> Double, (CGFloat, CGFloat) -> CGFloat, (UInt{...}, UInt{...}) -> UInt{...}, etc., and all these types are ExpressibleBy{Integer, FloatingPoint}Literal. Since 1 and 1.0 are used together here in an application of /, there is a clash: 1's default type is Int but 1.0's is Double, and there is no (Int, Double) -> ... overload. So the solver ends up checking the other overloads, and they all produce sub-optimal solutions because they all use non-default literal types in some way. It's even worse for CGFloat, because it's interchangeable with Double, so the solver also needs to figure out how the result is used in order to avoid narrowing conversions.
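The default literal types and the clash described above can be seen directly:

```swift
// Unsuffixed literals pick their default types in isolation:
let a = 1    // IntegerLiteralType defaults to Int
let b = 1.0  // FloatLiteralType defaults to Double

// But there is no (Int, Double) -> ... overload of /, so here the solver
// must give the integer literal a non-default type; it ends up as Double
// via the (Double, Double) -> Double overload.
let c = 1 / 1.0

print(type(of: a), type(of: b), type(of: c)) // Int Double Double
```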
Hey Jon, does the Clang modules report or the Swift modules report in your build logs show a surprising number of module variants? Those reports could also give us some insight into why those module variants are being built.
In addition to changing the integer literals into double literals, another way to make it fast is to change .init into NSColor.
From looking at the -debug-constraints output, I can see the problem is that we don't bind the type of the .init before looking at the divisions. This is a known problem with "leading dot" inference and not a new regression, but in this specific expression, the overloaded divisions were previously resolved in 6.2 by a pre-processing step which is disabled by default in 6.3 (and gone on main).
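The effect of naming the type can be sketched without AppKit, using a stand-in struct (MyColor is hypothetical; the real code uses NSColor):

```swift
// Hypothetical stand-in for NSColor, for illustration only.
struct MyColor: Equatable {
    var red, green, blue, alpha: Double
    init(red: Double, green: Double, blue: Double, alpha: Double) {
        (self.red, self.green, self.blue, self.alpha) = (red, green, blue, alpha)
    }
}

let stored: MyColor? = MyColor(red: 244.0 / 255.0, green: 218.0 / 255.0,
                               blue: 129.0 / 255.0, alpha: 1.0)

// Leading dot: the base type of .init must be inferred from the other side
// of ==, so the divisions get type-checked before the base is known.
let viaLeadingDot = stored == .init(red: 244.0 / 255.0, green: 218.0 / 255.0,
                                    blue: 129.0 / 255.0, alpha: 1.0)

// Naming the type binds it immediately, before the divisions are solved.
let viaNamedType = stored == MyColor(red: 244.0 / 255.0, green: 218.0 / 255.0,
                                     blue: 129.0 / 255.0, alpha: 1.0)

print(viaLeadingDot, viaNamedType) // true true
```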
Can you share a self-contained test case for this one? I'll add both the above XCTAssertEqual example and this one to the test suite.
I see no such reports in the 26.4 build log; do I need to enable them in some way? I can see the clang modules in the log, but no report. The project does have explicitly built modules enabled. If there's some way you'd prefer, I can provide an xcresult bundle or a textual log.
Unfortunately this one defies self-contained-ness; the function it's in is massively complex. Whilst it is the CombineLatest3 ... .map chain that's "the problem" (it's in our style guide to explicitly type the parameters of a closure following a CombineLatest3/4 because it's so often a problem), it's not a problem in isolation; it needs significant surrounding complexity to become ambiguous enough to be slow. And in this case I'm not sure why it would be so much worse in 6.3 than 6.2 :(
What are the types of sendingAction, conversationState, and shouldDisableUserInput? This example type checks quickly, faster in 6.3 than 6.2 in fact, so there must be something missing:
import Combine

enum E {
    case active
}

func f<T: Publisher, U: Publisher, V: Publisher>(
    _ sendingAction: T, _ conversationState: U, _ shouldDisableUserInput: V)
    where T.Failure == U.Failure,
          U.Failure == V.Failure,
          T.Output == Bool,
          U.Output == E,
          V.Output == Bool {
    let actionsEnabledPublisher = Publishers.CombineLatest3(sendingAction, conversationState, shouldDisableUserInput)
        .map { sendingAction, state, disableInput in
            sendingAction == false && state == .active && disableInput == false
        }
        .removeDuplicates()
        .eraseToAnyPublisher()
}
Yes, the actual function is 150 lines creating 8 publisher chains, including several complex closures, one of which switches over ($0, $1) with 10 cases like case (.xyz, _):. It eventually assigns a couple of these chains to a couple @Published properties. I'm definitely not holding this up as an exemplar of "quick-to-compile" or necessarily even "a sensible way to write code".
That said, this all compiled in <500ms (on my M1) with 6.2, and takes >5s on the same machine with 6.3, and adding type signatures to that one CombineLatest3.map situation fixes the problem. So…
The contents of those other closures and the specific expressions that constructed those publishers don't matter for type checking the let actionsEnabledPublisher = ... expression, only the types of those publishers do.
You're right; if I comment out all the rest of the code in that function, replacing the locals that it depends on with explicitly-typed variables with = { fatalError() }() initializers, it's still slow to compile.
However, if I pull that code out into an empty Swift file and build it on the CLI, it's no longer slow to compile.
So I guess something else in the project or its dependencies is somehow slowing this down?
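The isolation technique described above can be sketched like this (the names and types are assumptions; the point is that only the declared types survive, not the expressions that originally produced the values):

```swift
// Assumed stand-in for the real conversation-state enum.
enum ConversationState { case active, inactive }

// The function is never called, so the fatalError initializers never run;
// they exist only to give each local an explicit type with no inferable
// structure behind it.
func isolatedRepro() {
    let sendingAction: Bool = { fatalError() }()
    let conversationState: ConversationState = { fatalError() }()
    let shouldDisableUserInput: Bool = { fatalError() }()

    // The expression under investigation now depends only on the declared
    // types above, not on how the original values were constructed.
    let actionsEnabled = sendingAction == false
        && conversationState == .active
        && shouldDisableUserInput == false
    _ = actionsEnabled
}

print("compiles") // the repro only needs to type-check, not run
```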