I told the `Cursor` story above for two reasons:
- When `Sequence` does not fit the bill, because it lacks a feature (as in my example), or because it creates weird situations (as in `Stutter` and `SharedCountdown`), one has to wonder whether adopting `Sequence` was correct in the first place. So far, people write as if the bad consequences of an ill-advised adoption of `Sequence` should be charged to `Sequence` itself. I claim that charging the developer for not choosing the best tool for the job is another valid option on the table. `Stutter` and `SharedCountdown` are two examples of single-pass sequences and their weird behaviors; I guess they are meant to support the request for first-class single-pass sequences in the stdlib. My `Cursor` example comes to add "and what about throwing sequences?". The Set vs. Sequence thread came to add "and what about unordered sequences?". Other threads have come, and will come, asking "and what about infinite sequences?", etc. There are so many combinations that @Ben_Cohen had to write:
> This is not to say that the protocols are perfect, or not in need of further debate. The single/multi-pass nature of Sequence, and the way that SubSequence works, is still a topic worthy of exploration and potential change (so long as that change can be made in a source-(and soon ABI)-stable way). But revisiting naming based on theoretical concerns, or totally overhauling the hierarchy in a way that would increase complexity with little upside, is not useful and at this point should be considered a settled discussion.
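To make the single-pass weirdness concrete, here is a minimal sketch in the spirit of the `SharedCountdown` example mentioned above (the type and property names here are mine, not from that thread). Because the mutable state lives in a shared class instance, every `for` loop or `Array` initializer drains the same underlying countdown, so iterating "the same" sequence twice silently gives different results:

```swift
// Shared reference-typed state makes the sequence single-pass:
// copies of the struct all point at the same countdown.
final class CountdownState {
    var remaining: Int
    init(_ n: Int) { self.remaining = n }
}

// Hypothetical sketch of a single-pass sequence.
struct SharedCountdownSketch: Sequence, IteratorProtocol {
    let state: CountdownState
    func next() -> Int? {
        guard state.remaining > 0 else { return nil }
        state.remaining -= 1
        return state.remaining
    }
}

let seq = SharedCountdownSketch(state: CountdownState(3))
let first = Array(seq)   // consumes the shared state: [2, 1, 0]
let second = Array(seq)  // already drained: []
```

Nothing in the `Sequence` contract stops you from writing this, yet generic code such as `contains` followed by `map` will quietly misbehave on it, which is exactly the kind of situation that fuels the requests quoted above.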
And this is why I'd like to see more people consider my first point above: was adopting `Sequence` a good idea in the first place?