It depends on how much inline visibility there is; one thing Swift does quite well is peering into inlinable functions. If the compiler can see through into the implementation, it can determine that a set of calls is really just isomorphic to a memcpy in some cases and optimize accordingly. The benchmarks so far are quite promising, and right now they are limited more by the async/await runtime than by the Swift side of any implementation. That means that as we optimize the compiler further, in parity with the runtime being optimized, we will get some pretty fast code - to the point that more often than not it won't matter too much (even doing things byte by byte is pretty reasonable in some initial tests/benchmarks).
Personally, I would like to see async sequences hit on the order of a few million elements per second on a reasonably powerful desktop machine.
If we determine that it is worthwhile to add a convenience mechanism for faster raw buffer access, I would not claim that Array is the best path; instead, we could add a method along the lines of

```swift
func nextBuffer(_ apply: (UnsafeBufferPointer<Element>) throws -> Void) async rethrows
```

with a default implementation that just calls `next()`. That way we get the best of both worlds: a raw buffer dumping mechanism and a single-element accessor.
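A minimal sketch of that shape (the names `nextBuffer` and `maxCount` are illustrative here, not a pitched API): a batching implementation layered on a plain `AsyncIteratorProtocol`, where the fallback path just loops over `next()` and hands the caller a temporary contiguous view. A conformer that already holds contiguous storage could override this to skip the intermediate copy entirely.

```swift
// Sketch only: nextBuffer(maxCount:_:) is a hypothetical convenience,
// not an existing standard library API.
struct Numbers: AsyncSequence, AsyncIteratorProtocol {
    typealias Element = Int
    var current = 0
    let limit: Int

    // The ordinary single-element accessor.
    mutating func next() async -> Int? {
        guard current < limit else { return nil }
        defer { current += 1 }
        return current
    }

    func makeAsyncIterator() -> Numbers { self }

    // Default-style implementation: gather up to maxCount elements via
    // next(), then expose them as a temporary UnsafeBufferPointer.
    // Returns false once the sequence is exhausted. Only the apply
    // closure can throw, so rethrows is satisfied.
    mutating func nextBuffer(
        maxCount: Int = 1024,
        _ apply: (UnsafeBufferPointer<Int>) throws -> Void
    ) async rethrows -> Bool {
        var batch: [Int] = []
        while batch.count < maxCount, let element = await next() {
            batch.append(element)
        }
        guard !batch.isEmpty else { return false }
        try batch.withUnsafeBufferPointer(apply)
        return true
    }
}
```

A consumer would then drain the sequence buffer by buffer, e.g. `while await iterator.nextBuffer({ buffer in process(buffer) }) {}`, while types without a faster path still work through the `next()`-based fallback.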