Support debug-only code

All right, it's a matter of aesthetics, and I understand, thanks! Runtime checks go beyond (my) expectations for the pitch, but I have no problem with them.

Is it possible to use the same isOptimized() name for the compile-time and run-time checks?

#if !isOptimized()   // compile-time configuration test
#endif

if !isOptimized() {  // runtime check
}

(I'm not suggesting that the compile-time check would invoke a stdlib function.)

Could you expand on some of the reasoning you present in the text? Specifically, I'm curious about the following:

  • Why have three notions of "debug" folded into one?

  • Why have both runtime functions and compile-time configuration tests, if the runtime functions are guaranteed to be optimized away? I understand that compile-time configuration tests would allow for the declaration of whole methods only in debug mode, for example, but is that something we should support at all?

  • Why create a single negated version of one function instead of promoting the existing underscored runtime functions to full public status?

  • When you say that Swift has "matured significantly" since Joe Groff's comments, in what ways has the serialization of SIL changed such that Joe's concerns no longer apply?

Because I believe most developers have one concept of "debug", which is "in-house development, where asserts and other private details like extra logging are guaranteed to be included."

Compile-time tests are guaranteed to exclude the code. Runtime checks are not. I included them by request.

Because items that are used in in-house development may not be App Store safe and must be removed at compile time for release.

I think you're putting a different spin on my words than I intended. I believe the groundwork was established by canImport. I remember the sticking point in that implementation was figuring out when the test would happen. The setTarget method is now called more than once, each time resetting existing flags and re-triggering those that test positive. We added our optimization flag into the SIL processing function so the test occurs sufficiently late. The optimization check is part of that final sweep because nothing knows about optimization flags until then. This prevented just "wrapping" any of the three runtime stdlib checks and was a significant headache. A simple "just use the stdlib runtime checks" was not possible until the SIL options were established.

Getting the prototype up and running was a big pain. Once again it increased my gratitude to @codafi, @rintaro, @jrose, and everyone who worked on SE-0075. I expect my approach will have to be carefully considered and possibly rewritten from scratch. My goal was to show that it could be accomplished before the proposal moved to a formal review stage, an important lesson from SE-0075. (Also, heaps of ditto thanks to @Graydon_Hoare for his help on SE-0190.)

I chose not to, as each check promises a different thing: full code exclusion versus a runtime test in normally compiled code.

That's fair. Does it fit with the current design of Swift to equate non-optimized with debug? If so, for what purpose does the stdlib distinguish between "fast assert" and "release assert," and might third-party users want to make that same distinction?

But, to be clear, the particular runtime functions at issue are guaranteed to be optimized away, are they not? If so, isn't it duplicative to have both compile time tests and runtime tests? What use cases require the existence of both?

I'm not sure I understand your reply here. The existing underscored runtime functions make that possible too, do they not?

if !_isDebugAssertConfiguration() {
  // This is removed at compile time for release.
}

Interesting. Would compile-time configuration testing negatively impact compile times?

No, they are not. They are just in-language access that promotes the concept of "is this debug or not" to public code.

It's one test just after the compile flags are parsed. It's only run once.

That's interesting. Joe Groff had written that the existing underscored facilities are "guaranteed to be constant-folded away before final code generation"--but that must have changed in the intervening time? If constant folding no longer takes place, it seems that it'd be best not to offer the runtime check at all if we can use a compilation conditional statement instead that works properly.

Sorry, I guess I should be more clear about the question. Given:

#if condition
  // A
#else
  // B
#endif

I know the compiler checks that both "A" and "B" are syntactically well-formed, but my understanding was that there's a significant amount of work that's not done for both "A" and "B" at compile time. If the compile-time optimization check is towards the end of SIL generation, does that mean that the compiler will be doing a lot more work for both "A" and "B"? Is it able to do any less work than a runtime check that is properly optimized away?

While I can't address this directly, I should note that attempting to use those functions today to omit code in release builds causes the emission of compiler warnings. This is presumably because after the constant-folding step, the compiler cannot tell the difference between my writing if _isDebugAssertConfiguration() and if false: it would like to warn me about the latter, and so warns me about the former, even though I clearly understood that the code wouldn't be run in release builds.
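
For example, this is the sort of pattern that draws the warning today (the logging call is just a placeholder):

if _isDebugAssertConfiguration() {
    // In an -O build this call constant-folds to false, so the compiler warns
    // that the body will never be executed, just as it does for if false { }.
    print("extra debug-only diagnostics")
}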

The advantage of a runtime check that does not constant fold is that the compiler does not have to become specially aware of it to avoid emitting those warnings.

With that said, I should stress that I'm largely uninterested in a runtime check. For my part, compile time checks would be perfect.

I'll review Erica's proposal shortly and provide any comments I have.

Generally I'm happy with Erica's proposal.

As noted before, I am not hugely motivated by runtime checks, so I don't consider that part of the proposal critical. In general, while I appreciate that runtime checks lead to more concise code, they tend to require special-case compiler behaviour if you want to silence compiler warnings about the constant folding they cause, and I'm not sure the trade-off is entirely worth it. Swift's @autoclosure allows us to construct a "runtime" version of the ternary expression.

For example, let's revisit Erica's example this way:

let urlString = isOptimized() ? releaseURLString : debugURLString
let url: URL = URL(string: urlString)!

// or even

let url: URL = URL(string: isOptimized() ? releaseURLString : debugURLString)!

The runtime check can be generated like this:

func isOptimised<T>(_ optimisedCase: @autoclosure () -> T, _ unoptimisedCase: @autoclosure () -> T) -> T {
    #if optimization(enabled)
        return optimisedCase()
    #else
        return unoptimisedCase()
    #endif
}

let url: URL = URL(string: isOptimised(releaseURLString, debugURLString))!

I certainly wouldn't be opposed to having the runtime check, but if the core team are concerned about the runtime check I think almost all uses can be derived from the compile-time one without too much difficulty.

Otherwise, I'm a great big +1 on this: it would solve a real pain point that tends to be hit in sufficiently complex applications.

(This all predates my involvement, so take this with a grain of salt).

I would exercise caution regarding "fast". Reading the code, it seems like it was created to enable -Ofast/-Ounchecked, which is probably not a generally recommended configuration.

(I apologize if the following balloons the scope of this pitch. Feel free to explicitly exclude it.)

I agree that most users have a Debug/Release dichotomy. However, a build with optimization and assertions is often orders of magnitude faster, and it's an important configuration for large apps. This is even more the case for library developers with long-running tests, who may want additional internal assertions present (even with optimizations) when testing, and separate external assertions whose presence depends on the user's configuration.

For example, the standard library has 3 kinds of assertions (copied from the programmer's manual entry):

(_sanityCheck is governed by the build define INTERNAL_CHECKS_ENABLED)

@lukasa is SwiftNIO in a similar boat?

I believe these are swift-dev facing, not developer-facing. Developers have preconditions (for argument hygiene) and assertions (for sanity/coherence checks). They can easily create a custom assert feature for optimized builds beyond precondition, for example:

func customAssert(   // illustrative name for a custom assert that is never compiled out
    _ condition: @autoclosure () -> Bool,
    _ message: @autoclosure () -> String = String(),
    file: StaticString = #file,
    line: UInt = #line
    ) {
    guard !condition() else { return }
    fatalError("\(file):\(line): \(message())")
}
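
Called just like the standard assert, but it traps in optimized builds too:

let requestCount = 3, maximumRequests = 8   // hypothetical values for illustration
customAssert(requestCount <= maximumRequests, "too many in-flight requests")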

That aside, I'm not sure if optimized debug builds are a big thing.

Not sure what you mean. This was just an example of a large Swift library that also cares about internal assertions.

They are absolutely essential for large libraries (and I assume large apps). For example, large C++ code bases that rely on inlining from templates see a multiple-X performance speedup in Optimized+Assert configurations, and only around 10% from suppressing assertions. Swift, which depends even more heavily on optimizations, sees a larger effect.

I'm not arguing that you should include this configuration (though I think it's more legitimate than a "fast" one). I just want to make sure we carefully consider whether we really want #if optimization(enabled) to conflate optimization with assertions being enabled, especially for library maintainers.

How would these conditions ("I am optimizing but want some kind of assert conditions to fire and need to add conditional "optimized debug" logging or other features") be detectable? The only thing I can think of is -O -Ddebug, because there's no way for Swift to figure this out, is there?
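
Roughly, something like this, built with swiftc -O -D OPTIMIZED_DEBUG (the flag and function names here are made up):

func didTransition(from old: String, to new: String) {
    #if OPTIMIZED_DEBUG
    print("FSM \(old) -> \(new)")   // extra logging survives -O because the flag is explicit
    #endif
}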

swiftc supports the -assert-config flag to override assertion behavior, but it's not well known or advertised. This hasn't been a huge deal so far, although part of that may be the relative immaturity of Swift's library ecosystem and IDE conventions. With cross-module inlining, eventual module stability, and larger Swift libraries in the future, the distinction starts to matter more.
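
For example, a sketch assuming a build invoked roughly as swiftc -O -assert-config Debug File.swift (the function itself is illustrative):

func consume(_ byte: Int) {
    // Optimizations are on, yet assert() still fires here, because
    // -assert-config Debug overrides the usual "asserts are off under -O" default.
    assert((0..<256).contains(byte), "byte out of range")
}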

Optimizations and assertions are somewhat orthogonal axes; it's just that the two extreme corners see the most usage in smaller code bases and IDE conventions. In contrast, the recommended "for development" build for any large code base I've worked on is optimizations+assertions (acknowledging that most of these were C++ and considerably larger than any Swift package I've seen). There could be a progressive disclosure of conditional compilation here.

Drat, I really don't mean to balloon the scope. I just want to make sure we don't accidentally gloss over the needs of current and future libraries. I'll otherwise try to stay out of the way ;-)

Not really.

In our case, any time we want an assertion to remain present in optimised builds, we use precondition. These preconditions generally cover invariants in our API that are not covered by the type system (e.g. "our API accepts Int but this should actually fit in a UInt16") or that provide explicit enforcement of what we believe to be invariants in the code (e.g. "this state machine should not receive input X in state Y because that would represent time travel").
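
For illustration (names are hypothetical, not actual SwiftNIO code), the first kind looks something like this:

func bind(toPort port: Int) {
    // The API takes Int for ergonomics, but the value must fit in a UInt16;
    // this precondition stays active in optimised builds.
    precondition(port >= 0 && port <= Int(UInt16.max), "port must fit in a UInt16")
    // ... proceed with a value known to be in range ...
}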

The issue we're bumping into is dealing with situations where we want to maintain more state about the system. As a hypothetical, if we take the FSM example again, one thing we could conceivably want to do is track state transitions in FSMs for debugging purposes (if you hit the precondition above, what path did you take to get there?). Such state tracking is expensive: an FSM state transition is normally very cheap, but building a string and doing an array insertion is very much not cheap, both in terms of computation and total system memory usage. As a result, it's highly unlikely that anyone would want to keep this kind of tracking around when performance is a big concern.

That said, I can see one major use case, which is when you are attempting to debug a behaviour that can only occur in extreme load conditions. Particularly with asynchronous code like that found in Swift NIO, it's very possible to encounter window conditions that require extremely busy applications to hit. In that situation, "asserts + optimisation" is valuable. We have in the past traced some issues using -assert-config and found that to be helpful. However, given that this is a low-level flag not currently supported by SwiftPM, I'd probably consider wanting to make this a bit more granular by using specific debug flags for more complex assertion tracking. Something like -D NIOTraceFSMState, or -D NIOTracePromiseAllocation.
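
As a sketch of what that might look like (the type and names are hypothetical, with the flag enabled via -D NIOTraceFSMState):

struct FrameStateMachine {
    private var state = "idle"
    #if NIOTraceFSMState
    private var transitionHistory: [String] = []   // expensive bookkeeping
    #endif

    mutating func transition(to newState: String) {
        #if NIOTraceFSMState
        transitionHistory.append("\(state) -> \(newState)")  // string build + array append
        #endif
        state = newState   // the ordinary transition itself stays cheap
    }
}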

Do you have any extensive testing inside SwiftNIO that's untenable without optimizations?

What is your beta testing strategy for future releases? I would imagine you'd want to distribute a beta with assertions enabled (catch bugs early) and optimizations enabled as well (otherwise unlivable for beta testers).

No, not currently.

We're a Swift Package Manager project, so we don't ship compiled binaries to anyone: we just ship source. That's why we needed this option in the first place: to enable us to provide better tracing for users when they run tests.

If we want to have this kind of beta support in the future, we can do it by means of a conditional compilation flag. Obviously if Swift adopted some generalised notion of this kind of "optimised-testing" mode we'd additionally want to support that with some kind of granularity, but right now we're happy enough with just the two modes.

@Michael_Ilseman What you're suggesting is that there are three common concrete scenarios:

  • Release code
  • Optimized code for beta testing that retains assertion tests
  • Unoptimized for development that retains assertion tests

If so, this proposal needs two configuration tests for compilation:

  • If optimized; and
  • If asserts can fire

And unfortunately, I don't know how to make the second half of your scope happen with my limited skills in C++.
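
For concreteness, the two tests might look something like this (both spellings are strawmen, not settled syntax):

#if optimization(enabled)
    // code that should ship only in optimized builds
#endif

#if assertion(enabled)
    // checks that should exist whenever asserts can fire,
    // independent of the optimization level
#endif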

That brings up the following questions:

  • Is there sufficient utility in checking for optimization for the adoption to be of use and have an impact on Swift users? You can already use -D DEBUG, even if it makes code look un-Swift-like
  • Is your use-case sufficiently widespread to require an SE proposal? Is it dominant compared to simple "assertions are not enabled"?
  • Is it okay that you can enable assertions and set a custom conditional compilation flag to get exactly what you want today without sugar?

My primary motivation in a nutshell: I want a conditional build configuration flag that lets developers safely exclude code that should not be in release builds. I am specifically motivated by the App Store scenario and less so by responsive betas.

I'd appreciate hearing your thoughts on these points.

You have separate compilation of modules even as a source distribution. For example, your users could choose to build SwiftNIO as Release even if their code is built in Debug. If SwiftNIO was unacceptably slow in Debug for their uses (as is the case for e.g. the standard library), they would choose Release. Of course, this is highly dependent on the particular project.

If you wanted them to be able to validate some assertions in Release mode, yes you can use conditional compilation flags. However, you now have to make sure such checks are not guarded by #if optimization(enabled), where "optimization" is a misnomer and may be an actively harmful name.

Sounds reasonable. Is this your plan for assertions and checks in @inlineable methods?


Not quite; the beta testing was just an example for illustration. I would state the second configuration as "Optimized code for development and/or testing".

A concrete example of this is the Swift project's README on Building Swift, where it is recommended to build as many things as possible with both optimizations and assertions enabled. All of those examples have assertions enabled in all components, as that is the default (even in "full" release). You have to additionally pass --no-assertions to disable them, and of course there are flags for selectively disabling assertions for different components.

This is the case if you wish to spell this #if optimization(enabled). If it were spelled more like (strawman) #if configuration(release), where what a "release" configuration means is specifically defined somewhere, then this distinction is present in that definition. We could then add granularity in the future, e.g. #if configuration(release, assertions), #if !configuration(assertions), etc.
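
For example, with the strawman spelling (expensiveInvariantHolds() is just a placeholder):

#if configuration(assertions)
    assert(expensiveInvariantHolds())   // present whenever asserts can fire,
                                        // whatever the optimization level
#endif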

My main worry is that:

#if !optimization(enabled)
assert(false)
#endif

is an assertion that can never fire, but knowing why it can never fire requires remembering that "optimization" is a minor misnomer.

It doesn't have to happen now, I just want to retain the flexibility to make this distinction in a world of large Swift libraries.

No, there is only something like _isDebugAssertConfiguration. I'm glad this pitch is trying to do something about it.

Not yet, but it could be more common in Swift's future. E.g., I would imagine this configuration to be embraced by libSyntax in the future, extrapolating from the history of other source tools and compilers.

I'm very happy to see any incremental progress here and subscribe to "the perfect is the enemy of the good" motto. And of course we will continue to evolve and deprecate over time. I guess I'm trying to detect design time-bombs wrapped inside "the good".

Yes, and I'm worried that this pitch might harm that ability by conflating assertions with optimizations. Example using -DEXTRA_CHECKS:

#if !optimization(enabled)
  slowChecks()
  logging()
  ...
#elseif EXTRA_CHECKS
  assert(...) // <-- Never checked!
#endif
  precondition(someFastCheck())

I agree, this should be the primary motivation here.
