More on the de facto linearity requirement on sequences and (especially) collections

Continuing the discussion from [Concurrency] AsyncSequence:

It's not the actual requirements of Sequence that are the problem. Just look at these names: first(where:), suffix(_:), dropFirst(_:), dropLast(_:), drop(while:), prefix(_:), prefix(while:), starts(with:by:), starts(with:), elementsEqual(_:by:), elementsEqual(_:), lexicographicallyPrecedes(_:by:), lexicographicallyPrecedes(_:), reversed(), and maybe flatMap(_:) and compactMap(_:); they make no sense unless element publication order is supposed to be significant. Add to that Collection.Index needing to be Comparable and Collection's use of index ranges.
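
For illustration, here's a quick sketch (made-up values; a Set's iteration order is seeded per process and can change between runs) of why those names presuppose a meaningful order:

```swift
// Sketch: first(where:) and prefix(_:) presuppose a significant order.
let ordered = [3, 1, 4, 5]
let unordered: Set = [3, 1, 4, 5]

ordered.prefix(2)                    // [3, 1]: the order is part of the value
unordered.prefix(2)                  // varies: whichever two elements hash first
ordered.first(where: { $0 > 1 })     // Optional(3): "first" carries information
unordered.first(where: { $0 > 1 })   // varies: some qualifying element
```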

Operations on the entire sequence, like flatMap and compactMap, are fine; the orderedness will simply be inherited from the source.
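
A trivial sketch of that inheritance (example values only):

```swift
// compactMap simply preserves the encounter order of its source.
["1", "x", "2", "y", "3"].compactMap(Int.init)   // [1, 2, 3]
Set(["1", "x", "2"]).compactMap(Int.init)        // [1, 2] or [2, 1], depending on hashing
```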

Collection.Index doesn't do much. At best it says that the same instance (not merely an equal one) will use the same traversal order every time. That's hard to violate, even intentionally.
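
For example (a sketch, values arbitrary), even Set exposes Comparable indices and supports index ranges, and they behave consistently for a given instance:

```swift
// Set's Index is Comparable, so index ranges work even though
// the element order itself carries no meaning.
let s: Set = ["a", "b", "c", "d"]
if let i = s.firstIndex(of: "c") {
    let front = s[s.startIndex..<i]   // whatever happens to precede "c"
    print(Array(front))               // contents depend on hashing, but stay
                                      // consistent for this particular instance
}
```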

Even the prefix/suffix and first/last APIs come in pairs, which means that even unordered collections can make use of them if we only enforce the rule that traversing a collection twice yields the same element order, something even Set and Dictionary comply with. The only real kickers right now are elementsEqual and lexicographicallyPrecedes.
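
To make the "same order on every traversal" point concrete, a small sketch (relying only on the fact that a single, unmutated instance traverses its indices the same way each time):

```swift
// prefix/dropFirst still partition a Set consistently,
// because every traversal of this one instance sees the same order.
let tags: Set = ["swift", "sequence", "collection", "order"]
let head = Array(tags.prefix(2))
let tail = Array(tags.dropFirst(2))
print(head + tail == Array(tags))   // true, whatever order hashing produced
```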

I have little hope that this thread will be more productive than the last discussion, and honestly it's really hard for me to understand some "arguments":
Most methods of Sequence are problematic for many types: either because expectations about order are not fulfilled, because it is not clear whether iteration consumes the sequence, or because the sequence might be (theoretically) infinite; many functions (like reversed or suffix) even fall into two of these categories at once. Just try to collect a list of functions which cause no problems with every kind of Sequence...
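
A quick sketch of two of those failure modes (the values and the AnyIterator-based single-pass sequence are just illustrative):

```swift
// 1. A (theoretically) infinite sequence: prefix(_:) is fine, but suffix(_:)
//    or reversed() would try to exhaust it and never return a result.
let powersOfTwo = sequence(first: 1) { $0 * 2 }
print(Array(powersOfTwo.prefix(5)))      // [1, 2, 4, 8, 16]
// print(Array(powersOfTwo.suffix(5)))   // never finishes

// 2. A single-pass sequence: iterating it once consumes it.
var n = 0
let singlePass = AnyIterator<Int> { n += 1; return n <= 3 ? n : nil }
print(Array(singlePass))   // [1, 2, 3]
print(Array(singlePass))   // []  (the shared state is already exhausted)
```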

Two Dictionaries can be equal and still produce completely different results for all the operations mentioned by @CTMacUser, and that is by design. elementsEqual is just worse than the rest because it can even hit you when operating directly with plain Sets or Dictionaries. In a generic context, on the other hand, the problem is much harder to spot: an algorithm that works perfectly fine with all "regular" Collections can still fail terribly when confronted with something like Set.
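
A sketch with Set (whether the orders actually differ in a given run depends on capacity, insertion history, and the per-process hash seed, so this only shows that nothing guarantees agreement):

```swift
// == compares contents; elementsEqual compares contents in encounter order.
var a = Set<Int>()
a.reserveCapacity(1_000)        // different storage size, likely different order
a.formUnion(1...20)
let b = Set(1...20)

print(a == b)                   // true: set equality ignores order
print(a.elementsEqual(b))       // not guaranteed to be true
```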

I agree that the mapping operations are somewhat special here, because they might actually just suffer from a limitation of the language: we cannot have map return the same container type as the object it was called on.
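
A sketch of that status quo (example values only):

```swift
// The general map from Sequence always hands back an Array...
let ids: Set = [1, 2, 3]
let doubled = ids.map { $0 * 2 }
print(type(of: doubled))        // Array<Int>, not Set<Int>

// ...whereas the standard library adds bespoke, type-preserving
// variants where it can, e.g. Dictionary.mapValues.
let scores = ["a": 1, "b": 2]
let bumped = scores.mapValues { $0 + 1 }
print(type(of: bumped))         // still Dictionary<String, Int>
```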

Alas, even if it were ever possible to solve all those issues, the best time for that passed long ago… my expectation is that only big changes like higher-kinded or move-only types (if they are ever introduced) could warrant such a break of compatibility.