Why must i build ALL of NIOCore just to say the name ByteBuffer?

i don't know why BSON uses ByteBuffer as its backing storage; only the author of BSON knows that. but i don't think ByteBuffer has the same baggage as Data, because NIOCore ultimately has to be linked into the final binary product anyway (i'm not sure there are any good use cases for BSON that don't involve NIO), whereas Foundation is not necessary and just adds bloat to the binary and consumes extra memory.

a dependency on NIOCore is annoying, because it takes a long time to compile, but it is not a blocker. a dependency on Foundation is a blocker.

i think this is analogous to assigning to Dictionary.values; the type system allows it but it really doesn't make sense to do, and doing it is considered a programming error.

so, i think over the years, "flexible layout" has evolved into "flexible layout with swiftian characteristics". that is to say, declaration-order layout has become the facts on the ground, even though we ceremonially preface it with "the compiler has the right to reorder struct fields in any way it sees fit". otherwise there would not be a right and a wrong way to list the fields in source. and if the compiler were ever to change that behavior, anything that blits buffers of things like RGBA<T>, Vector4<T>, OpenGL structures, etc. would degrade in performance to the point that it would be considered a bug.
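to make that concrete, here is a minimal sketch of the kind of blit i mean (RGBA is just an illustrative pixel type here, not from any particular library). the round trip only behaves as expected if the fields really are laid out in declaration order, with no reordering.

struct RGBA<T>
{
    var r:T
    var g:T
    var b:T
    var a:T
}

let pixels:[RGBA<UInt8>] = .init(repeating: .init(r: 255, g: 0, b: 0, a: 255),
    count: 256)
var destination:[UInt8] = .init(repeating: 0, count: 4 * pixels.count)
destination.withUnsafeMutableBytes
{
    (output:UnsafeMutableRawBufferPointer) in
    pixels.withUnsafeBytes
    {
        // a straight memcpy: the byte order of `destination` is whatever
        // field order the compiler chose for RGBA<UInt8>
        output.copyMemory(from: $0)
    }
}
// assuming declaration-order layout, the first pixel comes out as
// [255, 0, 0, 255]
print(destination[0 ..< 4])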

right, so my experience with binary protos is that most of them do actually respect alignment, and will try to buddy up integers in a way that matches their natural alignment boundaries. sometimes we get awkward shapes like the (UInt8, UInt64) in your example, but most of the time that is because the 8-byte integer is actually a fixed-length string or something, and we just took a shortcut making it a UInt64 when the designer of the proto intended for it to be something like (UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8).
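to illustrate with MemoryLayout (the type names here are just for illustration, and the numbers assume the usual 8-byte alignment for UInt64): the shortcut shape picks up seven bytes of interior padding, while the spelled-out byte tuple does not.

struct Shortcut
{
    var tag:UInt8
    var name:UInt64 // really an 8-byte fixed-length string
}
struct SpelledOut
{
    var tag:UInt8
    var name:(UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8)
}

print(MemoryLayout<Shortcut>.size)   // 16 (7 bytes of padding after `tag`)
print(MemoryLayout<SpelledOut>.size) // 9  (no padding, everything has alignment 1)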

so the problem then becomes getting users to feel confident that

@frozen
struct MyBinaryHeader 
{
    var a:UInt16 // bytes 0 ..< 2
    var b:UInt16 // bytes 2 ..< 4
    var c:UInt32 // bytes 4 ..< 8
}

has no interior padding, which requires them to understand the layout behavior of swift structs. but i do not think it is possible to do serious work with binary formats without being familiar with this in the first place; alignment and padding are pretty core concepts in binary serialization.
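for what it's worth, a user who wants that extra confidence can always check it mechanically; something like the following (just an illustration, not part of any proposed API) makes the assumption explicit:

assert(MemoryLayout<MyBinaryHeader>.size == 2 + 2 + 4,
    "MyBinaryHeader contains interior padding")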

that need to stay layout-aware is also why i named the strawman method readTrivial: so there would be a reminder every time someone uses it. i suppose whether that is sufficient is up to your discretion.
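for concreteness, this is roughly what i imagine the strawman looking like. the signature and the body here are mine, not an existing SwiftNIO API; it leans on readBytes(length:) and loadUnaligned(as:), which are real.

import NIOCore

extension ByteBuffer
{
    /// reads the raw bytes of a trivial, declaration-order-laid-out value
    /// from this buffer, or returns nil if not enough bytes are readable.
    mutating func readTrivial<T>(as _:T.Type = T.self) -> T?
    {
        guard let bytes:[UInt8] = self.readBytes(length: MemoryLayout<T>.size)
        else
        {
            return nil
        }
        return bytes.withUnsafeBytes
        {
            $0.loadUnaligned(as: T.self)
        }
    }
}

so buffer.readTrivial(as: MyBinaryHeader.self) copies the whole header out in one go, with the user taking responsibility for the layout and the byte order.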

i envisioned that the structs using the @BigEndian wrapper would merely be shims to quarry data out of a ByteBuffer without a performance hit. it adds work for the user, but reduces the amount of gybbed API in the library, since it's unlikely all sixteen overloads would be used in any particular deserializer.
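a sketch of the kind of shim i mean is below. the wrapper spelling is hypothetical and assumes the wire value is a fixed-width integer; the important property is that it stores nothing but the raw big-endian value, so the enclosing struct has the same layout as the header it mirrors.

@propertyWrapper
struct BigEndian<Value> where Value:FixedWidthInteger
{
    // stored exactly as it appears on the wire
    private var storage:Value

    init(wrappedValue:Value)
    {
        self.storage = wrappedValue.bigEndian
    }

    var wrappedValue:Value
    {
        get
        {
            .init(bigEndian: self.storage)
        }
        set(value)
        {
            self.storage = value.bigEndian
        }
    }
}

struct MyBigEndianHeader
{
    @BigEndian var a:UInt16
    @BigEndian var b:UInt16
    @BigEndian var c:UInt32
}

because the wrappers add no storage of their own, MyBigEndianHeader blits exactly like MyBinaryHeader does, and the byte swapping only happens when a field is actually read or written.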

oh my, i think you overestimate my influence on these forums and the community at large. i don't work at Apple (as i'm often reminded) and i'm not even part of the server group, so i certainly don't post anticipating anyone will take me very seriously. but, i understand why this is a concern for you, so i will keep that in mind in the future.

right, i am really struggling to think of any "good" solutions to this problem in the short-to-medium term, because i agree with you now that this is not SwiftNIO's fault, and i don't think it's BSON's fault either, since BSON is part of MongoKitten. so i think assessments like

are accurate but not constructive, because even if i had time to write a self-contained BSON library (which i do not), it wouldn't do any good: MongoKitten and MSD rely on their own framework-specific BSON implementations, which do use ByteBuffer as a currency type.