The trouble with this and @gwendal.roue's `joined` suggestion is that they will have poor performance, due to the use of type erasure (unless the optimizer pulls a big rabbit out of its hat).
The good news is you can push the generic approach a bit further and eliminate the type erasure:
```swift
public struct Spliced<S1: Sequence, S2: Sequence> where S1.Element == S2.Element {
    private let _s1: S1, _s2: S2
    fileprivate init(_ s1: S1, _ s2: S2) {
        _s1 = s1; _s2 = s2
    }
}

extension Spliced {
    public struct Iterator {
        private var _i1: S1.Iterator, _i2: S2.Iterator
        fileprivate init(_ s1: S1, _ s2: S2) {
            _i1 = s1.makeIterator()
            _i2 = s2.makeIterator()
        }
    }
}

extension Spliced.Iterator: IteratorProtocol {
    public typealias Element = S1.Element
    public mutating func next() -> Element? {
        return _i1.next() ?? _i2.next()
    }
}

extension Spliced: Sequence {
    public typealias Element = S1.Element
    public func makeIterator() -> Iterator {
        return Iterator(_s1, _s2)
    }
}

public func splice<S1: Sequence, S2: Sequence>(
    _ s1: S1, _ s2: S2
) -> Spliced<S1, S2> where S1.Element == S2.Element {
    return Spliced(s1, s2)
}
```
This ought to produce pretty optimal code (I'm a bit nervous about the implementation of the `next` function; a stateful boolean indicating that the first sequence is already exhausted might be faster if `S1`'s `next` implementation has some overhead).
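For what it's worth, that stateful-boolean variant might look something like this (an unbenchmarked sketch; `FlagIterator` is a made-up name, and whether it actually wins depends on how costly `S1`'s iterator is after exhaustion):

```swift
// Hypothetical alternative iterator: once the first sequence runs dry,
// a flag lets next() skip it entirely instead of calling _i1.next()
// on every subsequent iteration.
public struct FlagIterator<S1: Sequence, S2: Sequence>: IteratorProtocol
where S1.Element == S2.Element {
    private var _i1: S1.Iterator
    private var _i2: S2.Iterator
    private var _firstExhausted = false

    init(_ s1: S1, _ s2: S2) {
        _i1 = s1.makeIterator()
        _i2 = s2.makeIterator()
    }

    public mutating func next() -> S1.Element? {
        if !_firstExhausted {
            if let x = _i1.next() { return x }
            _firstExhausted = true
        }
        return _i2.next()
    }
}
```

The `??` version is branch-free at the source level, but it pays for a call into `_i1.next()` on every element of the second half; the flag trades one extra boolean check for skipping that call.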
This allows heterogenous pairings:
```swift
let a: Array = [1, 2, 3]
let s: Set = [4, 5, 6]
for x in splice(a, s) {
    print(x)
}
```
NB that `Set` is unordered; you will get random ordering of the second half on each run due to Swift's randomized hash seeding.
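If you need a reproducible order, one workaround (my suggestion, not part of the technique above) is to splice a sorted snapshot of the set instead; `sorted()` returns an `Array`, which is itself a `Sequence`:

```swift
let s: Set = [6, 4, 5]
// sorted() materializes the Set into a stably ordered Array,
// so iteration order no longer depends on hash seeding.
let stable = s.sorted()
print(stable) // [4, 5, 6] on every run
```

You can then pass `stable` (or `s.sorted()` directly) to `splice` in place of `s`.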
You can use the same trick as with `zip` to splice more than two sequences:
```swift
let r = 7...9
// prints 45
print(splice(a, splice(s, r)).reduce(0, +))
```
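As a sanity check that nothing gets type-erased along the way, you can spell out the fully concrete type of the nested result (this is a condensed, self-contained restatement of the code above, so the snippet compiles on its own):

```swift
// Condensed copy of Spliced/splice from above, for a standalone snippet.
struct Spliced<S1: Sequence, S2: Sequence>: Sequence where S1.Element == S2.Element {
    let s1: S1, s2: S2
    struct Iterator: IteratorProtocol {
        var i1: S1.Iterator, i2: S2.Iterator
        mutating func next() -> S1.Element? { i1.next() ?? i2.next() }
    }
    func makeIterator() -> Iterator {
        Iterator(i1: s1.makeIterator(), i2: s2.makeIterator())
    }
}
func splice<S1: Sequence, S2: Sequence>(_ s1: S1, _ s2: S2) -> Spliced<S1, S2>
where S1.Element == S2.Element { Spliced(s1: s1, s2: s2) }

let a = [1, 2, 3]
let s: Set = [4, 5, 6]
let r = 7...9
// The nested result is a fully concrete type -- no existentials anywhere --
// so the optimizer can specialize next() all the way down.
let nested: Spliced<[Int], Spliced<Set<Int>, ClosedRange<Int>>> = splice(a, splice(s, r))
print(nested.reduce(0, +)) // prints 45
```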