[Starter Proposal] Add Stable Sort to StdLib

Hi all!

This pitch relates to SR-306, which as of yesterday was tagged as Starter Proposal by Ben Cohen.

This pitch consists of three parts: motivation, proposed steps for a solution, and proposed steps for an implementation.

Motivation

Some use cases require a stable sort. We should add a good implementation to the Standard Library, because:

  • The commonly known stable sorts have poor space complexity (e.g., classic MergeSort's O(n) auxiliary array), and among the fast (linearithmic, i.e. O(n log n)) ones, the best option is not obvious.
  • The potential users of stable sorts aren't experts on sorting, and shouldn't have to be, but the difficulty of the topic requires expertise to implement and verify one.

Proposed Steps for a Solution

This is how I think we could proceed:

  1. Discuss if this is necessary.
  2. Discuss what the best possible algorithm for this could be.
  3. I implement it.
  4. I formalize the proposal and send it for review.

Proposed Steps for an Implementation

If I implement it, I should do something like this:

  • Implement MergeSort as a placeholder.
  • Write tests for it: if the tests are correct for MergeSort, they will be correct for whatever we pick as the final algorithm.
  • Implement and test StableSort in place of MergeSort.
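The MergeSort placeholder could look something like this: a minimal top-down sketch (`mergeSorted` is a hypothetical name, not a proposed API). Stability comes from the tie-break in the merge step: on equal elements, the element from the left half is taken first, preserving the original relative order.

```swift
// Minimal top-down stable merge sort sketch (names are hypothetical).
func mergeSorted<T>(_ elements: [T], by areInIncreasingOrder: (T, T) -> Bool) -> [T] {
    guard elements.count > 1 else { return elements }
    let mid = elements.count / 2
    let left = mergeSorted(Array(elements[..<mid]), by: areInIncreasingOrder)
    let right = mergeSorted(Array(elements[mid...]), by: areInIncreasingOrder)

    var result: [T] = []
    result.reserveCapacity(elements.count)
    var i = 0, j = 0
    while i < left.count && j < right.count {
        // Only take from the right when it is strictly less:
        // on ties, the left half wins, which is what makes this stable.
        if areInIncreasingOrder(right[j], left[i]) {
            result.append(right[j]); j += 1
        } else {
            result.append(left[i]); i += 1
        }
    }
    result.append(contentsOf: left[i...])
    result.append(contentsOf: right[j...])
    return result
}
```

Tests written against this placeholder can check the stability property directly (e.g., sort key-value pairs by key and assert the ties keep their input order), and they would carry over unchanged to the final algorithm.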

That’s more or less it. Here are the topics I haven’t covered:

  • Naming (todo; bikeshed)
  • API (I guess it should mimic or extend the current one used for Sorting)
  • Optional admissible complexities, like quasilinear complexity (O(n log^k n) with constant k)

Now onto the discussion.

Should we use in-place MergeSort? Wikipedia says it’s O(n log^2 n). But I don’t know how widely it’s used; if it’s rarely used in practice, we shouldn’t adopt it.

I’d prefer to put into the StdLib an algorithm whose behavior and costs are well known, since it would be used in many real-world applications.

That’s it. Thank you for reading!
— Félix


Let’s not use an algorithm that’s worse than O(n log n). A possible choice is Timsort, which is O(n log n), stable, and well tested (it’s what Java and Python use).


Indeed, Timsort would be good in time complexity (in fact, it seems brilliant as an upgrade to Mergesort!). But its space complexity is O(n). Is that admissible for the StdLib?

Should we take this opportunity to make the default sort a stable sort, instead of adding it as an afterthought?

If we take a “safety first” view of Swift algorithms, then a stable sort is strictly safer than a non-stable one, and should be a non-breaking change (because nobody should be depending on the specific idiosyncrasies of how equivalently-ordered data ends up actually ordered in the results of a non-stable sort).
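To make the observable difference concrete, here is a small sketch: a stable result can be simulated over the existing `sorted(by:)` with the classic decorate-with-index trick (`Entry` is a hypothetical type for illustration, not part of any API).

```swift
// Stability is observable whenever equivalently-ordered elements are
// distinguishable by some field the comparator ignores.
struct Entry {
    let key: Int
    let label: String
}

let entries = [
    Entry(key: 1, label: "first"),
    Entry(key: 0, label: "second"),
    Entry(key: 1, label: "third"),
]

// Decorate-with-index: tie-break on the original position to simulate
// a stable sort on top of the (unspecified-stability) built-in sort.
let stable = entries.enumerated()
    .sorted { ($0.element.key, $0.offset) < ($1.element.key, $1.offset) }
    .map { $0.element }

// A stable sort by key must yield "second", "first", "third":
// the two key-1 entries keep their original relative order.
```

Because the current `sort` documents no ordering for ties, switching its implementation to one that always produces the stable result above should not break correct code.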

As for a specific algorithm, there are people more knowledgeable in this space than I, but what about considering Timsort? (edit: beaten!)



Also, y’all: bear in mind that Quasilinearity is very good.

The logarithm of anything feasible is very small. Take the largest possible array in a 64-bit address space: 2^64 elements. In that case, log n is 64, the largest binary logarithm we’d see without an extended addressable space.

Therefore, you can almost ignore the log^k n part of the complexity. In practice, it behaves like linear time with a larger constant.
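Numerically, the point above looks like this (a small sketch using Foundation’s `log2` and `pow`):

```swift
import Foundation

// Even for the largest 64-bit-addressable array, log2(n) is at most 64,
// so the log^k n factor is a modest constant multiplier, not real growth.
let n = pow(2.0, 64.0)   // 2^64 elements: the extreme case
let logN = log2(n)       // 64
let logSquaredN = logN * logN  // 4096: the k = 2 "constant"

print("log n = \(logN), log^2 n = \(logSquaredN)")
```

So an O(n log^2 n) algorithm on any realistic input is n times a factor of at most a few thousand, which is why quasilinear bounds are often acceptable in practice.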

Does this make sense to you?

If we get to something good that is O(n log^2 n) or O(n log^3 n) in time, I’d be more or less content :)

According to Wikipedia, there is a sort that is O(n log n), stable, and uses O(1) memory. I’ve never heard of it though, so maybe it has a large constant or something that makes it expensive?


🤔 that’s a good observation.

It sounds to me like we could take an implementation of Blocksort and test it against in-place Mergesort, and see if its constants are lower than the added log n cost of the Mergesort.
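That comparison could start from a tiny timing harness like the sketch below. The two candidates here are stand-ins (both just call the built-in `sorted()`); real Blocksort and in-place Mergesort implementations would be plugged in, and a serious benchmark would use the Swift benchmark suite rather than wall-clock `Date`.

```swift
import Foundation

// Hedged sketch of a head-to-head harness: time two candidate sort
// functions on the same shuffled input and check they agree.
func timeSort(_ label: String, _ sort: ([Int]) -> [Int], on input: [Int]) -> [Int] {
    let start = Date()
    let result = sort(input)
    let elapsed = Date().timeIntervalSince(start)
    print("\(label): \(elapsed)s")
    return result
}

let input = Array(1...100_000).shuffled()
// Stand-ins: replace with Blocksort and in-place Mergesort candidates.
let a = timeSort("candidate A", { $0.sorted() }, on: input)
let b = timeSort("candidate B", { $0.sorted() }, on: input)
```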

Just thought I’d mention that Rust uses a modified version of Timsort; here’s the implementation, and here’s their reasoning.


Hi Felix,

Thank you for picking this up! There are a couple of other threads on here that are also discussing this topic, but I think it’s fine to start a new thread as a pitch.

My take: I agree with others that the default sort in Swift should be stable, and we should introduce a sortUnstably (or some other less tortured name, though I think having it start with the word sort will help discoverability). Note: since Equatable implies substitutability, the Equatable version of sort doesn’t need to be stable, so it could forward on to the unstable version.

The current sort isn’t great for some use cases either, because it’s heavily recursive and so the optimizer has trouble with it sometimes (e.g. see SR-6224, this is also the reason we have to have two versions stamped out with gyb for the equatable and closure-parameterized versions). So even an algorithm that’s slightly worse on paper, but wasn’t recursive, could do better in some cases than what we have now.

I think allocating storage is fine, so long as the sortUnstably version doesn’t, for those that need to avoid that cost. Implementing a merge sort as a staging ground, like you suggest, is a great way to go about creating tests.

The main problem with a naive sort using extra storage on a generic MutableCollection is that you’ll have to copy elements into auxiliary storage and then copy them back, and that might incur lots of ARC traffic that in theory wouldn’t be necessary if we were moving the elements. In-place sorting based on swapAt ought to be able to avoid the ARC (though whether it does, I don’t know; you’d have to look at the generated code).

Be sure to benchmark using reference-counted types (like String, so long as it’s longer than a short-string optimization we might put in :) as well as trivial ones.
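The two movement patterns can be sketched like this, using reversal as a stand-in for a sort’s element movement (the method names are hypothetical, purely for illustration):

```swift
extension MutableCollection {
    // Pattern 1: auxiliary storage, as in a naive merge sort. On
    // reference-counted elements, building `copied` retains everything,
    // and writing back through the subscript adds more ARC traffic.
    mutating func reverseViaCopy() {
        let copied = Array(self)
        var i = startIndex
        for element in copied.reversed() {
            self[i] = element
            formIndex(after: &i)
        }
    }

    // Pattern 2: in-place swaps via swapAt, no auxiliary storage,
    // and no extra retain/release should be needed in principle.
    mutating func reverseViaSwap() where Self: BidirectionalCollection {
        guard !isEmpty else { return }
        var lo = startIndex
        var hi = index(before: endIndex)
        while lo < hi {
            swapAt(lo, hi)
            formIndex(after: &lo)
            hi = index(before: hi)
        }
    }
}
```

Whether pattern 2 actually avoids ARC in a given build is a question for the generated code, as noted above; the sketch only shows the shape of the two approaches.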

Note, discussion of the implementation itself should be done over on the std lib development forum rather than evolution.


String literals are also not reference counted and it’s very easy to accidentally benchmark only literals. If you have a benchmark, I can take a look and make sure they’re reference counted.



Hmm. A dynamic (not gyb-generated) name generator (name + lastName) should be enough to force ARC traffic, right?
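A sketch of that idea (the names are made up). One caveat worth noting: short runtime-built Strings can still be stored inline by the small-string optimization, so the sketch pads each result past the inline buffer to make sure the benchmark elements are actually heap-allocated and reference counted.

```swift
// Build benchmark inputs at run time so they aren't shared literals.
let firstNames = ["Ada", "Grace", "Edsger", "Barbara"]
let lastNames = ["Lovelace", "Hopper", "Dijkstra", "Liskov"]

var fullNames: [String] = []
for first in firstNames {
    for last in lastNames {
        // Padding defeats the small-string optimization: without it,
        // short results like "Ada Lovelace" may be stored inline and
        // never touch ARC at all.
        fullNames.append(first + " " + last + String(repeating: "!", count: 16))
    }
}
fullNames.shuffle()
// `fullNames` is now 16 distinct heap-allocated Strings in random order,
// suitable as a reference-counted sort benchmark input.
```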

That’s a good point to discuss. Given the possibility of a faster (but unstable) sort, why would you want the default to be stable?

My take is that a stable output is probably strictly better (or at least preferable) than an unstable one in all scenarios. Therefore, if possible, you’d want it to be the default.

Hmm. My uni must be too old-school in this regard. They’ve told me again and again that O(1) storage is a must.

Is O(n) storage cost admissible for sort? Maybe we can check with the Python and Rust communities to see how they’ve handled it.

I think the critical question is this: let’s say we have (optimal in time) sort and sortUnstably. sort uses O(n) space, is stable and is linearithmic in time. sortUnstably is linearithmic in time and constant in space. Is there a use case for a constant space, stable, and linearithmic (or quasilinear) sort function? If there is, should we address it?

Stability is strictly preferable, but only if it’s free. I would imagine the vast majority of real-world use cases do not care a bean about stability, so given a choice between a faster or more memory-efficient sort and a stable sort, developers would in most cases prefer the former.

In many common cases (e.g., arrays of scalars), it’s not really even meaningful to talk about stability from the API client’s perspective since any given 42 or “foxtrot” is as good as any other.


Do you have data that backs this up?

Scalars are Equatable, and as @Ben_Cohen mentioned above, Equatable implies substitutability, so I think the following approach would be a good compromise between performance and least surprise:

  • A sort function constrained over Equatable elements (which in reality would mean Comparable because we’d want to use the element’s own < operator for comparison) could be specialized as an unstable sort.
  • A sort function that takes a custom comparator predicate would be stable.
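A hypothetical sketch of that compromise (the names are placeholders, not a proposed API; stability in the closure overload is simulated here with the decorate-with-index trick over the existing sort):

```swift
extension Array where Element: Comparable {
    // Comparable elements are substitutable on ties, so this overload
    // is free to use an unstable (potentially faster) algorithm.
    func proposedSorted() -> [Element] {
        return sorted()
    }
}

extension Array {
    // A custom comparator can declare ties between distinguishable
    // elements, so this overload would guarantee stability.
    func proposedSorted(by areInIncreasingOrder: (Element, Element) -> Bool) -> [Element] {
        return enumerated()
            .sorted { lhs, rhs in
                if areInIncreasingOrder(lhs.element, rhs.element) { return true }
                if areInIncreasingOrder(rhs.element, lhs.element) { return false }
                return lhs.offset < rhs.offset  // tie: keep original order
            }
            .map { $0.element }
    }
}
```

A real implementation would of course use a genuinely stable algorithm rather than decorating with indices; the sketch only shows how the two overloads could make different guarantees.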

I think this is muddying the waters. As explained in the documentation for Equatable, identity is not equivalence. Those are two distinct concepts. We should not use conformance to Equatable (i.e., a notion of equivalence) to be a proxy for judging whether instances of that type have a notion of identity.

As far as I’m aware, Swift always provides the most correct implementation as the default or currency option. Those who wish to opt into more performance can explicitly invoke alternatives (e.g., unsafeAdding vs. +).


You have a point here.

Formally, what is it that makes stability “more correct”?

It seems to me that since it’s more predictable, it’s in a sense a better alternative. Removing a degree of freedom from the result should lead to a better usage experience.

I’m not sure how identity is relevant here? I wasn’t talking about it.

If a type conforms to Comparable then it is also Equatable. So if you’re sorting an array of items based on that Comparable conformance, two values that compare equal are by definition interchangeable, and if the ordering difference between a stable and an unstable sort would be a concern, then that type is not conforming to Equatable correctly, no?

Let’s say you have two variables that refer to reference types that conform to Equatable and compare equal; let’s store both in an Array and sort it. In this case, a stable vs unstable sort makes a difference: in an unstable sort, you cannot make any guarantees about which one comes first, while a stable sort will preserve their relative order.
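That difference can be made concrete with a small sketch (`Box` is a hypothetical class for illustration):

```swift
// Two reference-type instances that compare equal but are distinguishable
// by identity (===). A stable sort pins down which one comes first in the
// output; an unstable sort leaves that unspecified.
final class Box: Equatable {
    let value: Int
    init(_ value: Int) { self.value = value }
    static func == (lhs: Box, rhs: Box) -> Bool { lhs.value == rhs.value }
}

let a = Box(7)
let b = Box(7)
// Equal by the Equatable conformance, yet not the same object:
// a == b is true, a === b is false.

let boxes = [a, b]
// After a stable sort by value, boxes[0] === a is guaranteed.
// After an unstable sort, either object may come first.
```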

Yes, and my argument is that if you make those types equatable and then sort them based on that conformance, then you shouldn’t be caring about the order because you’re saying “sort these based on a conformance that defines that any equal elements are interchangeable.”


Your logic works for value types, because there isn’t any other way to check for “sameness” other than equality. Any value types that are equal are indistinguishable. For reference types, this logic falls apart, since there is a notion of identity that is separate from equality; i.e., you can distinguish them even if they compare equal. Stable sorts, by definition, deal with identity rather than equality, so I’m not sure why you’re bringing Equatable into this.
