Example uses & benefits of `@_effects(readnone)`

I'm curious if anyone's encountered a use of @_effects(readnone) which was significantly beneficial (presumably to performance, but for any other reason)?

It's pretty close to the intent of a hypothetical pure keyword, and I'm curious if it's worth dabbling with or if in practice it doesn't seem to help (e.g. perhaps the optimiser doesn't currently do anything useful with that knowledge?).

Phrases like
"has anyone encountered a use of <#keyword#>" and
"a hypothetical <#another keyword#>"
make me feel like we are engaged in the study of an ancient forgotten language, such as Pictish.

It's easy to exhibit an example where the optimizer takes advantage of it. If jawn(_: Int) -> Int is marked readnone, the compiler knows that it can eliminate the second of two identical calls (because it necessarily has the same result as the first), so it just does one call and doubles the result.
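A minimal sketch of that shape (the body of jawn and the caller are invented here purely for illustration):

```swift
// Hypothetical function: computes its result from the argument alone,
// so the readnone promise is plausibly honest here (Int is passed in
// registers). The underscored attribute is unchecked; the compiler
// simply trusts it.
@_effects(readnone)
func jawn(_ x: Int) -> Int {
    return x &* x &+ 1
}

func caller(_ x: Int) -> Int {
    // With readnone, the optimizer may fold these two identical calls
    // into one and double the result.
    return jawn(x) + jawn(x)
}
```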

Usual note about "these are underscored attributes, subject to change at any time" applies.


Right, it's easy to intuit why this might be a very valuable optimiser hint, but it's also technically verboten (as an underscored attribute) so I'm trying to judge if it's actually worth that trade-off (plus the time to actually apply it to relevant code), or if there's some unexpected reason it doesn't ultimately achieve much in real-world programs.

I only learnt of it today, but I've been wishing I could express function purity to the Swift compiler since basically the beginning, because I've seen countless examples where it did things inefficiently (mainly failure to promote loop invariant function calls out of loops, that sort of thing). I can't blame the compiler - it's not allowed to just assume purity. I'm happy to help it. If it actually helps.
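To illustrate the kind of pattern I mean (names invented), here's a sketch where a call is loop-invariant in practice, but the compiler can't hoist it without either seeing the body or being told it's pure:

```swift
// Hypothetical stand-in for a nontrivial but pure computation.
func expensiveKey() -> Int {
    return (1...1_000).reduce(0, &+)
}

func process(_ items: [Int]) -> [Int] {
    var out: [Int] = []
    for item in items {
        // Loop-invariant call: absent purity information (or visibility
        // into the body), the compiler must assume it could have side
        // effects and so re-evaluate it every iteration.
        out.append(item &* expensiveKey())
    }
    return out
}
```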

I don't expect anyone to investigate this for me, I just thought it worth putting the question to these forums in case somebody happened to recall a useful example.

At least one of the reasons it’s “private” is because it’s unchecked: if the function actually had important side effects, the compiler would do its optimizations nonetheless. And introducing a checked version of “pure” is hard—it’s a viral constraint on par with async and copyability, which means a bunch of existing APIs would need to be audited to make it generally useful. I don’t think Swift is ever going to go down that route, though (even if the stdlib and corelibs get that audit, many library authors wouldn’t bother), so maybe formalizing an @unsafePure is a reasonable path forward for those who can use it.


As an example of why this is especially hard in Swift, this function could validly be marked readnone:

func length(of buffer: ArraySlice<UInt8>) -> Int {
  return buffer.count
}

And this one cannot:

func length(of buffer: Array<Int>) -> Int {
  return buffer.count
}

…because ArraySlice stores its bounds inline, and Array does not. But short of reading performance guides, or the actual source, there’s not really a way to know this.

readonly is a little safer than readnone, you have to work a little harder to contrive an example where things go wrong. So maybe that’s where to start.
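As an illustration of the readonly shape (function invented; treat the annotation as a sketch under the assumption that merely reading the slice's storage is safe to promise, not as a vetted example):

```swift
// Reads memory (the slice's backing buffer) but modifies nothing, so
// readonly plausibly fits. readnone would not: the memory being read
// can change between calls.
@_effects(readonly)
func sum(of buffer: ArraySlice<UInt8>) -> Int {
    return buffer.reduce(0) { $0 &+ Int($1) }
}
```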


I assume it's also ABI-incompatible to remove the attribute, since that will invalidate assumptions made by previously compiled code. But not source-incompatible, of course.

And fully compatible to add it.

…yeah, but that's true of lots of other things too (@_noLocks, noasync, etc). I don't know that 'virality' is inherently a problem, as long as it's easy to start from the bottom and it doesn't impose any limitations (which declared purity doesn't, beyond the ABI commitment of course - if your package even has to care about that sort of thing).

In the case of function purity, you don't have to care at all if you're calling pure functions, so the predominant downside of 'virality' doesn't apply. And there's always withUnsafeImpure (or similar) escape hatches if weird situations arise where you can't mark a callee as pure even though you believe it is.

Or am I missing something in this regard?

There is the readnone vs readonly distinction, I guess… having too much nuance / 'degrees of purity' might become practically frustrating, if it's not easy to ignore.


Yeah, it's obviously a looser and therefore easier-to-not-screw-up constraint. But it's also even less clear to me how much difference that could make in practice. Traditionally (pre-concurrency checks, at least) I would think the compiler can't really make a lot of assumptions about the complete state of the program - anything could be readable under the readonly promise, right? - in order to know if a call is truly redundant, for example. Maybe with Swift's stronger ownership models and concurrency checks that's no longer such a problem, though?

In the sense of practical, local(ish) reasoning, I mean. The optimiser can't spend infinite resources trying to formally track everything, which I assume would be a practical issue here (unless it falls out of Swift's formal model that it actually can ignore most of the program for this purpose?).

Yeah, that's a good point. And even if one were to pursue some notion of a 'pure-compatible' type trait, in an attempt to address that by surfacing the attribute explicitly, I suspect that'd be its own can of worms for enforced validity, not to mention further binary-inflexibility concerns…

I’m confused as to why “count” being stored inline or not makes a difference. Why would reading a property of an Array, which is a COW type, have observable side effects? It’s unclear to me, even after reading the underscored attributes documentation:

“readnone” must not read any memory that can ever be modified, at least logically. That’s why it’s named that. One concrete optimization that’s performed here is that the function call can be hoisted out of a loop if none of its arguments have changed—not using value semantics, but using the actual system-level calling convention. Thus, you’d (theoretically) run into trouble in an example like the following:

var array: [UInt8] = []
for i in 1..<10 {
  print(length(of: array))
}

After inlining a bit, the compiler could prove that the array is never reallocated, and thus it’s safe to assume the reference to the backing buffer never changes, which means the raw, system-level argument to length(of:) never changes. Which means it’s “valid” to treat the result as unchanging as well and hoist the call out of the loop:

var array: [UInt8] = []
let $cachedResult = length(of: array)
for i in 1..<10 {
  print($cachedResult)
}

This is why these attributes are considered dangerous: they silently allow changes to the behavior of callers based purely on optimization, which is up to the compiler, and the only way to use them correctly is if those changes in behavior aren’t observable, or at least not in any way you care about.

(Why doesn’t the same logic apply to ArraySlice? Because the system-level arguments for ArraySlice include the start and end pointers, and that would change if you called append.)


In exploring this more, I noticed that the optimiser already does that even without the attribute, unless you annotate the function with @_optimize(none). When it's in the same file, at least. It seems the optimiser will look inside the function and act on the specifics of its implementation, even when the function isn't inlined.

Which wasn't exactly anticipated [by me] but is good to know - I had already hoped that @_effects(readnone) would be unnecessary outside of public module methods, and this seems to support that.

Right, and if I understand this older thread correctly, that's because readnone is really a low-level LLVM IR attribute which distinguishes register passing from memory passing - which presumably means arguments spilled to the stack would also break it?

It seems like - that older thread suggests that - readnone is impossible to use in a platform-portable and future-proof manner, because you can never really know the actual calling code that will be generated (e.g. some embedded or academic platform might not use any registers for argument passing, for example). Is that technically true?

Practically, I suppose it could be used with some confidence on existing platforms if you're very careful about the types and numbers of preceding parameters. Seems pretty fragile, though.

That earlier thread also noted that unspecialised generic functions pass the type metadata (pointers to witness tables, I assume?) on the stack, not in registers, so they also cannot use readnone. That's a pretty huge limitation for something that would otherwise be most useful for generic algorithms provided by 3rd party packages.

Real-world usage (e.g. involving generics, Self, etc)

Where does self play into readnone / readonly? I can't find a single example in any documentation or Swift Forum threads that use methods rather than free-standing functions.

Let's consider a non-trivial but very relevant example:

extension Comparable {
    func clamped(_ range: borrowing PartialRangeFrom<Self>) -> Self {
        if self < range.lowerBound {
            return range.lowerBound
        } else {
            return self
        }
    }
}

That's great if it's inlined, but it might not be (and maybe for good reason - maybe it's only used generically and the overhead of the generics handling bloats it significantly). It seems like this would be good to mark as readnone so that the compiler can omit redundant calls, since it is plainly pure in the conventional sense.

But first up: this is generic, since it's a default implementation on a protocol, so I guess readnone is flat-out banned?

Even if that's not the case, Self could be anything - a value type, a reference type, etc. It seems undefined what lowerBound does when invoked on range (type Self) - even just worrying about calling convention right now. Likewise the < operator, I suppose. So it seems like there's no way to know if everything is passed in registers - and therefore seemingly readnone is safe - or partly on the heap - and therefore readnone is not.


In contrast, readonly seems safe(ish) either way - assuming < and lowerBound are readonly too, of course.

But then that doesn't really buy you much, since it sounds like it merely lets the compiler omit the call if the result isn't used, which (IMO) isn't a common opportunity anyway. Given that, I'm struggling to imagine any situation in which it's worth using (given the downside that it's an underscored attribute with no guarantees going forward).


Also, does the optimiser look into @inlinable things from other modules, when making its decisions?

If so, that seems like it moots a lot of otherwise attractive cases for readnone / readonly - if you can make it @inlinable instead / anyway. For non-binary, open-source packages I'm curious why not just make everything @inlinable - that lets the optimiser still make good decisions based on what the code is actually doing, and it doesn't require any inlining so it doesn't matter how big the function is.

(I mean, it'll hurt compile times, sure, but that seems like the only notable downside)

Yes, theoretically you could be targeting a stack machine. Although in practice those usually have at least a top of stack register.

There's a few more exotic architectures with no registers, but I'm not sure if any of them have seen any actual use.

Yes, it does. This is part of why you can see this slightly unintuitive pair of annotations in a few places in SwiftNIO's codebase (added in #2050).
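The pairing looks roughly like this (function and names invented for illustration, not taken from SwiftNIO):

```swift
// @inlinable exposes the body to clients, enabling specialization and
// effects inference across module boundaries; @inline(never) tells the
// optimizer not to copy the body into call sites, keeping code size
// in check.
@inlinable
@inline(never)
public func firstOffset<C: Collection>(of element: C.Element, in collection: C) -> Int?
    where C.Element: Equatable {
    guard let index = collection.firstIndex(of: element) else { return nil }
    return collection.distance(from: collection.startIndex, to: index)
}
```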

I remain convinced that @inlinable has one of the least-helpful names of all the annotations in Swift. A more accurate name is @publicImplementation. The vast majority of optimizations that work within a module will work across them, so long as the method in question is @inlinable, including both effects inference (as discussed here) and the big hammer of specialization.

In many ways, being able to inline the implementation is only a third or fourth tier benefit, though when it matters it really matters (usually Collection index calculations and other cheap math).

I won't speak for the Swift team, but I've been asking for us to have a supported mode for this for many years. In plenty of circumstances compile times are close to immaterial: if you're about to ship or deploy something that is going to live a long time, it often doesn't matter if your build system takes 30 minutes to compile instead of 5. Obviously this is a bad tradeoff for debugging, and it's good to be able to turn it off, but Swift gains so many perf wins from @inlinable that it's hard not to want to see it everywhere.

(Sidebar: if you look at swift-asn1 and swift-certificates you'll notice that almost everything there is @inlinable. That's an attempt to get the compiler to see that we're only doing really simple data parsing, and so to encourage it to be aggressive about optimizing those code paths.)


one practical implication is that it basically leaves you unable to use the private access level, as @inlinable is "viral" in its own way. in my own experience this has proven to be a drag on project maintainability, but i’m not sure what the alternative is.
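Concretely, an @inlinable body can only reference declarations visible to clients, so a would-be private helper has to become @usableFromInline internal (sketch; names invented):

```swift
// Would prefer `private`, but an @inlinable body can't reference
// private declarations, so @usableFromInline internal is the floor.
@usableFromInline
internal func helper(_ x: Int) -> Int {
    return x &+ 1
}

@inlinable
public func api(_ x: Int) -> Int {
    return helper(x)
}
```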


When the source code for the package is open, access levels don't really matter. It can prevent folks from using private state that's generally unusable or unsafe from outside of the library, but folks can just fork the project and make everything public as they please.

no, it’s just easier in general to maintain and reason about code that uses more private formal access control instead of having everything be (formally) visible to everything else in the current module.


I understand why people want to use @_effects, given that we still don't have a whole-program compiler mode. But the concepts were blindly copied from C without being well-defined in Swift and without basic safeguards on their use. So, make sure you understand the language implementation at the SIL level, how the compiler models side effects, and how the SIL might realistically change in the future. These annotations were added to the most performance critical parts of standard library by the most experienced experts in Swift's implementation, and the result was some of the most heinous undefined behavior bugs I've ever seen, which weren't caught for years.

Simplest example I can come up with... this empty function has undefined behavior once it's annotated:

@_effects(readnone)
func foo<T>(t: consuming T) {}

(A consuming parameter means the function is responsible for destroying the value, and destroying a generic T can run a class deinit with arbitrary side effects, contradicting the readnone promise.)

Ah, thanks for pointing to that - I was already wondering if that combination was valid and would work as one might hope, which you seem to be saying it does.

That might be a better solution (for open-source packages) than any of the @_effects(…) stuff. It doesn't help for closed-source / binary-only packages, but fortunately for me I don't have any of those. :slightly_smiling_face:

To be clear, though: does specialisation still occur for @inlinable @inline(never)? I'm not sure if specialisation is a "type of" or "requires" inlining.

Tangentially, does @_alwaysEmitIntoClient have the same effect re. allowing specialisation? I know semantically it's a bit different, but I've been wondering if maybe that's the way to go sometimes (I guess the downside is that you might end up with duplicate implementations if multiple packages use such a symbol?).

Yes, I was confronted with that trade-off when I added @inlinable to some of my open-source packages last week. It doesn't seem like a big deal, but it is a loss - I prefer the semantic clarity of private (and fileprivate similarly), and it's a shame that has to be discarded partly unnecessarily¹ just to get good performance. :confused:

¹ Conceptually there's a distinction between "accessible from outside this module" and "accessible from other files within this module". I like enforcing that I can't reach into other files unintentionally, within the same package, by making things private rather than internal. It helps maintain localised reasonability, and prevents unintended coupling. I wish I didn't have to give that up in this case.

Yep, and this sort of thing is why I ultimately failed to add even a single @_effects(…) annotation to any of my packages last week, despite auditing them all thoroughly. I just couldn't find any situation where I was confident it was both correct and a meaningful benefit. readonly is a lot easier to apply than readnone - I could find virtually no real-world code where readnone seemed safe - but the benefit of readonly is pretty esoteric (if you're not using the result of a side-effect-free function call, don't call it, maybe? :laughing:).