Str.first? == "." performance

Profiling my code, I saw this rather prominently

     specialized Collection.first.getter


var isDummy:     Bool { asString?.first == "$" }
var asString: String? { if case let .atomic(s) = op { return s } else { return nil } }
enum OpCode : Equatable, Comparable, Hashable {
    case number(Double, String = "")
    case atomic(String)
    // …
}

In the original C++, str[0] == '$' is a trivial check, so now I'm wondering what I've done wrong in Swift to have evaded the fast path.

On further inspection, the String enum payload originated from a document parsed using:

 guard let range = str.range(of: pattern, options: [.regularExpression, .anchored], range: pos..<str.endIndex) else { return nil }
 pos = range.lowerBound
 nextPos = range.upperBound
 return String(str[range])

Does that make it an NSString? And is that going to impact performance? (And if so, can I force conversion to a native Swift String?)

Try slapping a makeContiguousUTF8() on there.


I'm not seeing any change in behavior with that. What can I do at runtime to inspect a String's backing store and assert that it is native?
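The standard library does expose isContiguousUTF8 alongside makeContiguousUTF8() for exactly this kind of runtime inspection. A minimal sketch (the bridged-string setup via NSString is just an illustrative assumption; whether the first print shows false depends on platform and bridging behavior):

```swift
import Foundation

// A string obtained by bridging from NSString may be lazily bridged,
// i.e. not backed by native, contiguous UTF-8 storage.
var s = NSString(string: "$dummy") as String

print(s.isContiguousUTF8) // may be false for a lazily bridged string

s.makeContiguousUTF8()    // forces a copy into native storage if needed
assert(s.isContiguousUTF8) // guaranteed true afterwards
```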

Does it perform differently if you use hasPrefix instead of first?
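That is, something like this trivial sketch (the value of str here is hypothetical):

```swift
let str = "$tmp0" // hypothetical example value

// hasPrefix works on the whole String rather than producing a Character.
print(str.hasPrefix("$")) // true
```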


Treating a String as a Collection is not always terribly performant. Try:

str.utf8.first == UInt8(ascii: "$")

I suspect a large part of the cost here comes from working in the Unicode-aware space of String. The UTF-8 view performs very close to Array, so it's much closer to the performance of C++ (assuming the String is a native Swift string).
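Applied to the isDummy check from the original post, that suggestion would read something like the following sketch (asString is stubbed here with a plain optional, since the full OpCode context isn't shown):

```swift
// Hypothetical stand-in for the asString computed property from the post.
let asString: String? = "$tmp0"

// Byte-wise check on the UTF-8 view: no grapheme breaking involved.
let isDummy = asString?.utf8.first == UInt8(ascii: "$")
print(isDummy) // true
```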


C++'s subscript gives you access to the leading char. Swift's equivalent spelling is str.utf8.first as @lukasa mentioned.

Swift's top-level subscript gives you the first Unicode extended grapheme cluster, meaning that it works for non-ASCII and complex entities like emoji.
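To illustrate the difference in what "first" means at each level (example string chosen here for illustration):

```swift
let s = "👨‍👩‍👧!"

// One Character can be built from several Unicode scalars:
// man + ZWJ + woman + ZWJ + girl form a single grapheme cluster.
print(s.first!)      // 👨‍👩‍👧
print(s.count)       // 2 Characters...
print(s.utf8.count)  // ...but 19 UTF-8 bytes (18 for the family emoji alone)
```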


The ICU stuff in that stack should be replaced with significantly faster code in 5.6.


No, same issues.

I am using Swift 5.6 but have not updated to macOS 12.3 yet.

You, sir, are a scholar and a gentleman! That, alone, improved this test case from 13.1 seconds to 6.0 seconds.


Ah, yeah; may need both


I wonder if it is possible for the obvious spelling to optimise to that (either by the compiler or the standard library).

Creating a character from an ASCII string literal should be essentially free. And comparing the character at any position (and certainly the first position) with that ASCII character is a simple byte comparison; no complex grapheme breaking needed. I don't think users should need to write such a cumbersome thing or require that level of Unicode understanding to get optimal performance for this.

EDIT: Ahhh, but there may be non-ASCII characters which are considered canonically equivalent to the ASCII character. So it may not be trivial.


Greek question mark has entered the chat.
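For the curious: U+037E GREEK QUESTION MARK canonically decomposes to the ASCII semicolon, so the two comparison styles disagree on it. A sketch:

```swift
let greekQuestionMark = "\u{037E}" // ; GREEK QUESTION MARK

// String comparison is canonical-equivalence-aware...
print(greekQuestionMark == ";")                          // true
// ...but the raw UTF-8 bytes are 0xCD 0xBE, not 0x3B.
print(greekQuestionMark.utf8.first == UInt8(ascii: ";")) // false
```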


Even so, I do agree someone should glance over the code and make sure we're not leaving low hanging fruit. Especially now that the grapheme breaking code is all in Swift in the stdlib, there may be shortcuts available that weren't before. @Alejandro do you know offhand if we have any fast "these definitely aren't canonically equivalent" predicates handy that we could use before falling back to full grapheme breaking?


Hmm, no; that would require knowing that we're doing a comparison up front. first is always going to have to get a grapheme. Tricky 🙂


.hasPrefix could be optimized that way.

AFAIK string.first takes unbounded time and unbounded space: you can fit the whole contents of War and Peace into a single character.

It's not unreasonable to consider the optimizer performing this kind of interprocedural optimization though, including targeted optimizations with knowledge of String and Unicode to improve performance of this use case. Given how important this kind of comparison to a constant ASCII value is, and how obscure the performant solution of dropping down to the .utf8 view is, it might be justified (in the same way the compiler knows a lot about Swift's array type).


I think you would start optimizing this by recognizing the comparison against a string that's a single ASCII character. That should be a very cheap operation — not as cheap as str[0] == c would be in C, but C is ignoring a lot of Unicode rules — because there aren't very many ASCII characters (perhaps none) that are equivalent to composed characters, so you should usually (perhaps always) be able to bypass full grapheme breaking: as soon as you detect combining characters, you know you don't have a match. And of course if this optimization is fed by statically recognizing a comparison against a single-character string, we should know the character as well.

Then you can further statically optimize applying that operation to various ways of producing substrings so that you avoid creating the substrings.


Here is a concrete example of the disagreement between the two comparisons:

let str = "$⃝" // $ + ◌⃝

print(str.utf8.first == UInt8(ascii: "$")) // true (like C++: str[0] == '$')
print(str.first == "$") // false