i have a channel handler that contains code like the following:
/// Create a (generic) encoding view
var output:BSON.Output<ByteBufferView> = .init(
    preallocated: .init(context.channel.allocator.buffer(
        capacity: .init(message.header.size))))
/// Encode a message
output += message
/// Write the encoded ByteBuffer to the channel pipeline
context.writeAndFlush(self.wrapOutboundOut(ByteBuffer.init(output.destination)),
    promise: promise)
where
@frozen public
struct Output<Destination>:Sendable
    where Destination:RangeReplaceableCollection<UInt8>,
        Destination.Index == Int,
        Destination:Sendable
{
    public
    var destination:Destination
}
i do not want BSON.Output to remain generic, as this has caused a lot of (in hindsight, easily foreseen) friction with the optimizer. because the BSON library has no relationship to SwiftNIO, i wanted to standardize it on ArraySlice<UInt8>.
for the channel handler, this means ByteBuffer.init(output.destination) is no longer an efficient way to write data to the channel pipeline, because it will now always allocate a new buffer and copy all the memory. i find this hard to justify, because the data already exists, contiguously, in the original ArraySlice<UInt8> destination buffer.
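concretely, i imagine the non-generic version looking something like the sketch below. this is illustrative rather than committed code: i'm assuming the preallocated: initializer carries over unchanged, and that ByteBuffer.init(bytes:) is what i would reach for in place of the view-based initializer above.

/// Sketch of the non-generic output type, standardized on ArraySlice<UInt8>
@frozen public
struct Output:Sendable
{
    public
    var destination:ArraySlice<UInt8>
}

and in the channel handler:

/// Encode the message into the ArraySlice<UInt8>-backed output
var output:BSON.Output = .init(preallocated: [])
output += message
/// ByteBuffer.init(bytes:) allocates a fresh buffer and copies every byte
/// out of the slice, which is the second copy i want to avoid
context.writeAndFlush(self.wrapOutboundOut(ByteBuffer.init(bytes: output.destination)),
    promise: promise)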
how do i send a normal ArraySlice<UInt8> (or even an unsafe pointer representation of it) over the network without allocating a second buffer for every message?