I noticed that methods of Result, like Result.map, Result.flatMap, and so on, are not inlinable, even though the corresponding Optional methods are.
This topic wasn't discussed in the SE-0235 PR, but I think it would make sense to mark them as inlinable.
Is there some reason for them to be non-inlinable? Thanks.
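For context, a simplified sketch of the change I'm suggesting (not the actual standard library source; the signature follows SE-0235):

```swift
// Simplified sketch, not the actual standard library source.
// Adding @inlinable would let the optimizer copy the body into client code;
// the public entry point is still emitted by the library either way.
extension Result {
    @inlinable
    public func map<NewSuccess>(
        _ transform: (Success) -> NewSuccess
    ) -> Result<NewSuccess, Failure> {
        switch self {
        case .success(let success): return .success(transform(success))
        case .failure(let failure): return .failure(failure)
        }
    }
}
```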
No reason, other than I didn’t think about it and wasn’t told to do so. At this point I’m not sure it would be possible due to ABI stability.
I’m definitely not an ABI expert, but I think you can add the @inlinable attribute without breaking ABI stability. The compiler should still emit the symbol in libswiftCore, since even though it’s inlinable it isn’t always inlined.
I think we should have a checklist that we use to audit every API that goes into the standard library. We should not rely on proposal authors and reviewers to notice details like this that many Swift programmers never have to deal with. @Ben_Cohen what do you think about that? How can we prevent oversights like this from happening in the future?
It is always better to err on the side of caution by not marking something inlinable. There's a reason it's the default.
Inlining is sometimes an optimization and sometimes a pessimization. It can bloat user code. It can slow things down because it restricts what the library itself can optimize. It slows down compile time. You should only do it for a good reason.
That the similar Optional type does it is not a good reason. While the types and methods might be similar in nature, they are very different in use.
So the reason is "no-one has benchmarked to check if it's beneficial and not harmful". That would make for a great starter bug.
Marking something as @inlinable can be done after ABI stability. Think about what making something inlinable means: the implementation code might be placed into the caller and possibly optimized when the caller is next recompiled. Doing that alone won't break the ABI.
What is important is that Result is @_frozen – that is the key thing that allows changes to be inlined (because otherwise the inlined code needs to be able to handle any future changes, or cases that were absent in past versions), and it is tricky to do after ABI stability. We expect that in the future it will grow a versioning capability, at which point inlining will also need versioning to match.
There are other subtle ways inlinability can break the ABI, but they don't apply here. For example, if "private invariants" that the inlinable code was relying on change; and bear in mind inlinable code needs to back-deploy too, so it can't rely on new invariants that didn't always hold.
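To illustrate that point, a minimal sketch with a hypothetical frozen enum (written with the later @frozen spelling, and assuming a library built with library evolution enabled):

```swift
// Hypothetical library enum, not standard library code; assumes a module
// built with -enable-library-evolution.
// Because the enum is frozen, clients may be compiled assuming .success and
// .failure are the only cases that will ever exist, so the switch below can
// safely be inlined into client binaries built against this version.
@frozen
public enum Outcome<Value> {
    case success(Value)
    case failure(Error)

    // Safe to expose for inlining: it relies only on the frozen case layout,
    // not on any private invariant a future version might change.
    @inlinable
    public func map<T>(_ transform: (Value) -> T) -> Outcome<T> {
        switch self {
        case .success(let value): return .success(transform(value))
        case .failure(let error): return .failure(error)
        }
    }
}
```

If the enum were not frozen, clients switching over it would have to be prepared for unknown future cases (e.g. via @unknown default), which is exactly why inlining its implementation would not be safe.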
We don't. Every change made to the standard library like this is reviewed by at least one long-established standard library maintainer. In the case of this PR, several. This is especially important now that we've declared ABI stability, since it is critical to get right and there's no expectation that proposal authors have the level of knowledge to do this. Note that the requirement to provide an implementation is not the same as saying that implementation must be ready to merge.
This is good to know. It sounds like in this case you might add @inlinable, but not until benchmarking is done. That seems reasonable. I'm happy to know this was an intentional decision and not an oversight. Thanks for the reply!
So the reason is "no-one has benchmarked to check if it's beneficial and not harmful". That would make for a great starter bug.
Thank you for your answer. I understand the reasoning now. I'll measure the performance and propose this again. :)
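Roughly the kind of measurement I have in mind – a rough sketch, not the official Swift benchmark suite harness. I'd build with -O and compare runs with and without @inlinable on the methods under test, and watch code size too:

```swift
import Dispatch

// Rough micro-benchmark sketch, not the official Swift benchmark suite.
func measure(_ label: String, iterations: Int = 10_000_000, _ body: () -> Void) {
    let start = DispatchTime.now().uptimeNanoseconds
    for _ in 0..<iterations { body() }
    let elapsed = DispatchTime.now().uptimeNanoseconds - start
    print("\(label): \(Double(elapsed) / 1_000_000) ms")
}

let value: Result<Int, Error> = .success(21)

measure("Result.map") {
    _ = value.map { $0 &* 2 }
}
measure("Result.flatMap") {
    _ = value.flatMap { n -> Result<Int, Error> in .success(n &+ 1) }
}
```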
Could the compiler emit two versions of @inlinable functions? One that's optimized for when the function call is not inlined and another for when it is?

Could the compiler emit two versions of @inlinable functions? One that's optimized for when the function call is not inlined and another for when it is?
It sort of does this already, though I don't know to what extent. The problem Ben is referring to is when the function does get inlined – it might have to use extra indirection to refer to other types in the library. (This is mostly not the case around @_frozen types, though.)
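To make that concrete, a rough sketch of the layout distinction, with hypothetical types and assuming a library built with -enable-library-evolution:

```swift
// Hypothetical types; a sketch of resilient vs. frozen layout, assuming this
// module is built with -enable-library-evolution.

public struct ResilientConfig {
    // Layout is not part of the ABI. Code inlined into clients must reach
    // this storage indirectly (through resilient accessors), because a
    // future library version is allowed to reorder or add stored properties.
    public var retries: Int
    public init(retries: Int) { self.retries = retries }
}

@frozen
public struct FrozenConfig {
    // Layout is part of the ABI. Clients can be compiled to access the
    // storage directly, so inlined code doesn't need the extra indirection.
    public var retries: Int
    public init(retries: Int) { self.retries = retries }
}
```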

It sort of does this already, though I don't know to what extent.
Yeah, after the module file is written out with the serialized SIL in it, we strip the 'serialized' flags from functions and run some optimizations again. However, we don't re-run the entire pipeline.