How does one add new fields to `struct Customer` inside a running Swift app? Can I somehow do that in a loop?

In my experience, Swift developers don't seem overly keen on analyzing this sort of "varying".
The time complexity of `Array.sort()` is documented to be O(n * log(n)) element comparisons and swaps/moves, where n is the count of the array, no matter what type `Element` is.
(The "element comparisons and swaps/moves" part is crucial, but it is not explicitly stated in the documentation. We are notably terrible at spelling out what units we use to measure these documented complexities; and the (implicit) units can differ wildly between operations, even on the same type. Sadly, this is not by accident: precisely spelling out the correct details appears to have the opposite of the intended effect. It scares some people into endlessly asking for ever more clarifying details, and sometimes even frightens them into trying to roll their own (generally much less efficient and/or incorrect) implementations. I keep hoping we can clarify things a little more some day, but so far I've been bitterly disappointed every time we try. To be honest, the current discussion is not helping, either.)
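To make the implicit unit concrete, here is a small sketch of my own (not from the documentation) that tallies the element comparisons `sort()` actually performs; `Counted` is a hypothetical wrapper type:

```swift
// Hypothetical wrapper type that tallies every comparison made on it.
struct Counted: Comparable {
    static var comparisons = 0   // global tally; fine for a quick demo
    var value: Int
    static func < (lhs: Counted, rhs: Counted) -> Bool {
        comparisons += 1
        return lhs.value < rhs.value
    }
    static func == (lhs: Counted, rhs: Counted) -> Bool {
        comparisons += 1
        return lhs.value == rhs.value
    }
}

var items = (0 ..< 1000).shuffled().map { Counted(value: $0) }
Counted.comparisons = 0
items.sort()
// The tally lands in the ballpark of n * log2(n) -- roughly 10,000
// for n = 1000, nowhere near n^2 = 1,000,000. But note the unit:
// element comparisons, not elapsed time or memory accesses.
print(Counted.comparisons)
```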
Your original `struct Customer` with a single `name` field is already enough to make the absolute time complexity of `customers.sort()` (expressed in terms of memory accesses) quite terrifying. `String` is an absurdly complex type that has to perform Unicode normalization in its `==` implementation; that algorithm itself involves sorting arbitrarily long Unicode scalar sequences, making its worst-case time complexity O(n * log(n)), where n is the size of the string in UTF-8 code units. (Regular, non-Zalgo cases are much faster, but unless Zalgo-style customer names are manually rejected, this is the lowest possible upper bound.) The worst-case cost of comparing Swift `String` instances grows superlinearly with the length of the input.
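As a quick illustration of why `==` has to normalize at all (this snippet is my own, not from the thread): the two strings below have different UTF-8 encodings, yet are canonically equivalent, so Swift must treat them as equal.

```swift
// Two canonically equivalent strings with different UTF-8 encodings.
let precomposed = "caf\u{E9}"     // "café" with U+00E9 LATIN SMALL LETTER E WITH ACUTE
let decomposed = "cafe\u{301}"    // "cafe" + U+0301 COMBINING ACUTE ACCENT

print(precomposed == decomposed)                          // true
print(Array(precomposed.utf8) == Array(decomposed.utf8))  // false
print(precomposed.utf8.count, decomposed.utf8.count)      // 5 6
```

Equality over the raw bytes would be O(n), but answering it correctly over canonically equivalent text is what drives the cost discussed above.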
To sort an array of `struct Customer` values, we need to repeat this n * log(n) normalization algorithm for every pair of customers that the array's sort decides to compare. This turns the absolute complexity of `[Customer].sort()` into some eldritch horror like O(m * log(m) * n * log(n)) memory accesses, where m is the count of the array, and n is the maximum length of a customer's name in bytes. If we double the size of the single `name` field, the worst-case complexity of each comparison (and, along with it, of the array sort itself) grows superlinearly. This is a highly unintuitive result, and I can easily imagine it ruining someone's day.
Compared to that, adding a bunch of additional fields to `struct Customer` seems like a trivial change to me. The growth rate as we add more fields (of the same size) matches what `sort` promises in its documentation, and what I would naively expect: if the items are twice as large, then comparing them will take at most twice as long, and so will sorting them in an array. There is no surprising superlinear growth.
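For concreteness, here is roughly the shape of the scenario; the extra fields below are made-up examples of mine, not from the original question:

```swift
// Assumed definition: the original type plus a couple of extra
// fixed-size fields. Sorting still compares by `name`, so the extra
// fields only add a constant amount of copying per swap/move --
// linear growth in struct size, exactly as the documentation promises.
struct Customer {
    var name: String
    var id: Int = 0
    var age: Int = 0
}

var customers = [
    Customer(name: "Zoë"),
    Customer(name: "Amélie"),
    Customer(name: "Bob"),
]
customers.sort { $0.name < $1.name }
print(customers.map(\.name))   // ["Amélie", "Bob", "Zoë"]
```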
None of this has any relevance to the idea of adding simple identity tests to types. Identity tests can be guaranteed to take O(1) time; indeed, that performance guarantee is the sole reason for their existence, as `Equatable` does not set any requirement on the complexity of `==`. Implementing identity tests by comparing the raw bits of a type's in-memory representation is a perfectly valid strategy (for some types), and, by definition, it always satisfies the O(1) complexity expectation.
No type should offer an `isIdentical(to:)` method unless (a) it is able to return a correct answer, and (b) it is meaningfully faster than `==`. `Array` can easily satisfy both criteria, while `InlineArray` would generally fail both.
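To illustrate why `Array` can satisfy both criteria, here is a hedged sketch of what such a test could look like; `isKnownIdentical(to:)` is a hypothetical name of my own, not a standard library method:

```swift
// Sketch of an O(1) identity test for Array, assuming no such API
// exists: two arrays sharing the same copy-on-write storage buffer
// are guaranteed equal without inspecting a single element.
extension Array {
    func isKnownIdentical(to other: [Element]) -> Bool {
        withUnsafeBufferPointer { a in
            other.withUnsafeBufferPointer { b in
                a.baseAddress == b.baseAddress && a.count == b.count
            }
        }
    }
}

let original = Array(repeating: "x", count: 100_000)
let copy = original   // copy-on-write: storage is shared, not duplicated

print(original.isKnownIdentical(to: copy))   // true, in O(1)
print(original.isKnownIdentical(to: ["x"]))  // false: distinct storage
```

A `false` answer here is inconclusive (distinct storage can still hold equal elements), which is exactly why such a test supplements `==` rather than replaces it. `InlineArray`, by contrast, stores its elements inline in each value, so two distinct variables never share storage, and no shortcut faster than elementwise comparison is available.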