It's not a bug. Measuring the length of the source before allocating
the destination array is usually a big win when compared to repeatedly
growing the array's memory and copying all its elements.
--
-Dave
Usually yes, but not in the case of a lazy filter. If the predicate contains
anything more than a dozen CPU instructions, the single-pass version is
faster.
...and predicates are often very lightweight. I don't know how to prove
which approach will be faster.
We often want the predicate to have side effects, but we cannot with the
current implementation: the side effects will be doubled.
Though I can't really endorse using a filter with side effects; that's
why we have the eager filter.
I also wonder if it's possible to allocate an array with the capacity of the
underlying collection (before all the lazy stuff) and shrink it at the end.
It might be possible, but right now we don't have a way to shrink memory
or efficiently find the length of the underlying collection, which could
itself be a lazy filter collection.
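To make the trade-off concrete, here is a minimal sketch (not the standard library's actual implementation; `doublePass` and `singlePass` are hypothetical names) contrasting the two strategies being debated:

```swift
// Double-pass: count the matches first, allocate exactly, then fill.
// The predicate runs twice per element, so any side effects are doubled.
func doublePass<C: Collection>(_ base: C, _ predicate: (C.Element) -> Bool) -> [C.Element] {
    var count = 0
    for x in base where predicate(x) { count += 1 }        // pass 1
    var result = [C.Element]()
    result.reserveCapacity(count)                          // exact capacity, no regrowth
    for x in base where predicate(x) { result.append(x) }  // pass 2
    return result
}

// Single-pass: append as we go. The array may regrow and copy, but the
// predicate (and any side effects) runs exactly once per element.
func singlePass<C: Collection>(_ base: C, _ predicate: (C.Element) -> Bool) -> [C.Element] {
    var result = [C.Element]()
    for x in base where predicate(x) { result.append(x) }
    return result
}

var calls = 0
let evens = doublePass(1...10) { calls += 1; return $0 % 2 == 0 }
print(evens)  // [2, 4, 6, 8, 10]
print(calls)  // 20 -- the predicate ran twice per element
```

The double-pass version pays the predicate cost twice to avoid regrowth copies, which is exactly why the relative weight of the predicate decides which one wins.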
···
on Sun Jun 19 2016, Антон Жилин <swift-evolution@swift.org> wrote:
I've written a simple wrapper sequence that should be optimized out, and
performed a couple of microbenchmarks. Code:
===begin===
struct WrapperSequence<S: Sequence> : Sequence {
    let base: S
    init(_ base: S) { self.base = base }
    func makeIterator() -> S.Iterator { return base.makeIterator() }
}

func predicate(num: Int) -> Bool {
    return true // replaced with each predicate listed in the results below
}

// or remove WrapperSequence to benchmark the plain range
let sequence = WrapperSequence(1...400000000).lazy.filter(predicate)
let array = Array(sequence)
===end===
Results:
1. Double-pass wins: num % 2 == 0
2. Single-pass wins: num % 2 == 0 && num % 3 == 0
3. Double-pass wins: num % 2 == 0 || num % 3 == 0
4. Double-pass wins: num % 2 == 0 || num % 3 == 0 || num % 5 == 0 || num % 7 == 0 || num % 11 == 0
5. Single-pass wins: num % 2 == 0 || num % 3 == 0 || num % 5 == 0 || num % 7 == 0 || num % 11 == 0 || num % 13 == 0
But I have to admit that the double-pass algorithm CAN be faster when one
knows what one is doing.
I conclude that we need an API to choose between the two algorithms, and
that single-pass should be the safe default.
- Anton
···
2016-06-19 19:56 GMT+03:00 Антон Жилин <antonyzhilin@gmail.com>:
I tested that behavior a bit more.
When `lazy` is called on a non-collection sequence that is then converted to
an Array, the sequence is iterated once; `underestimatedCount` is not used.
When `lazy` is called on a collection that is converted to an Array, the
`count` computed property is used instead. Lazy collection types implement
`count` by iterating and filtering the underlying collection.
I think there are two issues:
1. `underestimatedCount` should be required to be O(1). The Array initializer
should always call it, as it perfectly suits this purpose.
2. For collections, there should be a choice between using `underestimatedCount`
(the safe default) and `count`.
If getting the precise count requires computation (otherwise it would be in
`underestimatedCount`), but the programmer is sure that the computation is cheap
in a specific case, they can opt in explicitly.
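That choice could be sketched like this (hypothetical initializers, not a real standard-library API; the labels `singlePassFrom:` and `exactCountFrom:` are invented for illustration):

```swift
extension Array {
    // Safe default: one pass over the source; reserve only the O(1)
    // underestimate, growing later if it turns out to be too small.
    init<S: Sequence>(singlePassFrom source: S) where S.Element == Element {
        self.init()
        reserveCapacity(source.underestimatedCount)
        for x in source { append(x) }
    }

    // Explicit opt-in: the caller asserts that `count` is cheap here,
    // accepting a second traversal of the source.
    init<C: Collection>(exactCountFrom source: C) where C.Element == Element {
        self.init()
        reserveCapacity(source.count)  // may itself be O(n) for a lazy filter
        for x in source { append(x) }
    }
}

let multiples = Array(singlePassFrom: (1...10).lazy.filter { $0 % 3 == 0 })
print(multiples)  // [3, 6, 9]
```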
- Anton
···
Sun Jun 19 13:03:03 CDT 2016, Haravikk <swift-evolution@haravikk.me> wrote:
I think I agree that it's a bug; as stated in the proposal, the current
behaviour allows `.underestimatedCount` to consume the entire sequence,
even though sequences are potentially destructive (so it's only possible to
guarantee access to each element once). Actually, the fact that this hasn't
been discovered sooner suggests the current tests don't include the use of
destructive sequences, which may need to be considered too, as all Sequence
methods should work correctly with both destructive and non-destructive
sequences.
So yeah, I think having lazy sequences of this type return 0 for
`underestimatedCount` is the right solution for the time being.
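A small sketch of why that default matters, using a hypothetical single-pass sequence (the type name is invented for illustration): if `underestimatedCount` were computed by iterating, it would consume elements that can only be produced once, whereas a constant 0 leaves them all intact:

```swift
// A destructive (single-pass) sequence: elements can be produced only once.
final class SinglePassInts: Sequence, IteratorProtocol {
    private var remaining: [Int]
    init(_ values: [Int]) { remaining = values }
    func next() -> Int? {
        return remaining.isEmpty ? nil : remaining.removeFirst()
    }
    // Safe O(1) implementation: never touches the elements.
    var underestimatedCount: Int { return 0 }
}

let seq = SinglePassInts([1, 2, 3, 4])
_ = seq.underestimatedCount   // consumes nothing
let collected = Array(seq)    // every element is still available
print(collected)              // [1, 2, 3, 4]
```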