Inspired by the “Is a set a sequence?” thread, I tried some sequence-like operations on a Set, and noticed that the following valid program
let x = Set<String>(["a", "b", "c", "d"])
- prints 3 with the Xcode 9.3 default toolchain
- prints 1 with Development Snapshot 2018-03-28
- prints 3 with Development Snapshot 2018-04-06
What’s the intended behavior of this program, and why does the result differ between compiler versions?
EDIT: I did some more tests, and this program actually prints 1, 2 or 3 on repeated runs! But only when I use one of the development snapshots; with the default toolchain of Xcode 9.3, it seems to always print 3 …
This is because hash values of String on master now incorporate a random seed that changes on each execution of your program (see @lorentey’s recent post). Since the order of elements in a Set is affected by those hash values, you will get different results between runs.
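A small sketch of what this means in practice: within a single run, a value’s hash is stable, but on toolchains with randomized hashing the same value generally hashes differently the next time the program is launched.

```swift
// Within one process run, hashing is deterministic:
let h1 = "a".hashValue
let h2 = "a".hashValue
print(h1 == h2)  // true

// But across runs, this number will generally differ on toolchains
// that seed the hasher randomly at process startup:
print(h1)
```

(If I remember correctly, the randomization can be disabled for testing by setting the `SWIFT_DETERMINISTIC_HASHING=1` environment variable, though that is meant for debugging, not for relying on a particular order.)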
Given that you should never rely on a Set having a particular order, full stop, it doesn’t matter that that order can vary between runs.
The only thing you can rely on with a Set is that multiple passes over a Set will return the values in the same order on each pass (assuming no mutation is happening in-between).
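That guarantee is easy to demonstrate: two passes over the same unmutated Set visit the elements in the same (unspecified) order.

```swift
// The iteration order of a Set is unspecified and may differ between
// runs, but it is consistent across passes over the same instance
// as long as the Set is not mutated in between.
let s: Set<String> = ["a", "b", "c", "d"]
let firstPass = Array(s)
let secondPass = Array(s)
print(firstPass == secondPass)  // true
```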
Yes, but I still think it’s strange to have methods like split, prefix, reversed and last available on Set (and Dictionary).
Are there any other similar cases in the Standard Library, where a type conforms to a protocol (because it makes some sense, in a way, sort of) and thereby gets a bunch of methods that don’t make any sense at all given the semantics of that type?
These algorithms are still useful, and make sense, for Dictionary and Set, because those types have a defined ordering over multiple passes of the same instance. There are many algorithms that involve walking through a collection, and slicing it up in different ways, for a purpose that makes sense for a Set or a Dictionary as much as for anything else, and an implementation might make use of these methods.
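For instance (my example, not from the posts above): because multiple passes agree, prefix and dropFirst together cleanly partition a Set, which is genuinely useful for things like processing elements in batches.

```swift
let ids: Set<Int> = [10, 20, 30, 40]

// prefix(_:) gives an arbitrary-but-consistent sample in the Set's
// own iteration order:
let sample = ids.prefix(2)
print(sample.count)  // 2

// Since repeated passes visit elements in the same order,
// prefix + dropFirst partition the whole Set without overlap:
let rest = ids.dropFirst(2)
print(sample.count + rest.count == ids.count)  // true
```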
There are other examples of a similar nature elsewhere. For example, finding the “minimum” value of a String doesn’t “make sense” (especially given the way ASCII ordering works), but it still has a min method because Character is Comparable. This doesn’t mean we need to invent a Minmaxable protocol out of fear that someone might be confused by the presence of min and max on String.
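Concretely: String is a Collection of Comparable Characters, so it inherits min() and max(), however odd that reads for text.

```swift
// Character is Comparable, so String gets min()/max() for free:
print("banana".min()!)  // a
print("banana".max()!)  // n
```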
I cannot, but this is of little importance. It doesn’t matter that some algorithms apply to some sequences but not to others. Attempting to force all semantics into an elaborate taxonomy of protocols is neither useful nor practical, and at this stage the idea should probably go on the commonly rejected proposals list to avoid ongoing noise, IMO.
Thank you for taking the time to communicate your views on this. However, I think the community must be allowed – even encouraged – to discuss, question and explore possible modifications to the current protocol hierarchy.
For example: conforming to Sequence is currently the only way to make a type iterable in a for-loop. But it also brings a ton of methods that may or may not be relevant for the type in question. I’ve found myself avoiding Sequence (and thus for-loopability) because it would pollute my type with methods that conflict with its semantics, e.g. when iterating over the N-dimensional Int-vector indices of an N-dimensional Table type. Swift’s current range types are also not suitable for N-dimensional bounds, but it’s not obvious to me that they never should be.
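To illustrate the trade-off (a hypothetical minimal type of my own, not the actual Table type I mentioned): conforming to Sequence is all it takes to get for-loop support, but the full inherited API comes along with it.

```swift
// A hypothetical minimal 2-D grid of indices. Conforming to Sequence
// makes it for-loopable -- but also surfaces prefix, reversed, min,
// sorted, and all the other default Sequence methods.
struct GridIndices: Sequence {
    let rows: Int, columns: Int
    func makeIterator() -> AnyIterator<(Int, Int)> {
        var r = 0, c = 0
        return AnyIterator {
            guard r < self.rows else { return nil }
            defer {
                c += 1
                if c == self.columns { c = 0; r += 1 }
            }
            return (r, c)
        }
    }
}

// Visits (0,0), (0,1), (0,2), (1,0), (1,1), (1,2):
for (r, c) in GridIndices(rows: 2, columns: 3) {
    print(r, c)
}

// ...and the inherited API is available whether wanted or not:
print(Array(GridIndices(rows: 2, columns: 3)).count)  // 6
```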
Anyway, this belongs in the “Is a set a sequence?” thread, where I largely agree with @Tino. It reminds me of the initial discussion about making the first parameter in a function declaration follow the same rules as the others. I can’t find a source, but if I remember correctly it started with someone making critical remarks about the (then) special-cased first parameter. Those remarks were met with firm resistance, but in the end it led to a change that I think was greeted unanimously by the community.