Why must i build ALL of NIOCore just to say the name ByteBuffer?

for better or for worse, ByteBuffer is starting to get entrenched in the server ecosystem (e.g. BSON). so any target that uses ByteBuffer, no matter how tangentially (many people seem to just be using it as a synonym for [UInt8], or to tack it on as a “convenience API” in a misguided effort to be helpful), now depends on NIOCore, which is a large framework and takes a while to compile.

is it feasible to deprecate this thing once and for all? and if not, could it be moved into its own module, and possibly its own repo?

2 Likes

This is definitely one of the big things I think needs to be extracted and refactored to better support the community.

3 Likes

tbh the whole type needs a redesign; the current API is quite silly. why do we even have “withVeryUnsafeMutableBytes(_:)”? a view is either unsafe or it isn’t.

and scrolling past endless methods that look like

mutating func readMultipleIntegers<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15>(endianness: Endianness = .big, as: (T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15).Type = (T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15).self) -> (T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15)? where T1 : FixedWidthInteger, T2 : FixedWidthInteger, T3 : FixedWidthInteger, T4 : FixedWidthInteger, T5 : FixedWidthInteger, T6 : FixedWidthInteger, T7 : FixedWidthInteger, T8 : FixedWidthInteger, T9 : FixedWidthInteger, T10 : FixedWidthInteger, T11 : FixedWidthInteger, T12 : FixedWidthInteger, T13 : FixedWidthInteger, T14 : FixedWidthInteger, T15 : FixedWidthInteger

does not seem ideal…

Reminds me of the situation with Foundation...

Sorry for a naïve question, but what advantages does the ByteBuffer type have compared to [UInt8]? If it's just a few extra methods, couldn't those be added as extensions on [UInt8]?

I don't understand how we could deprecate a type that we are actively using and for which there is no replacement.

I don't want to do this for expressly the reason that you're mad about the current situation: ByteBuffer isn't (and shouldn't be) a general-purpose currency type. Pushing this out into its own repo anoints ByteBuffer as currency type by acclamation, and makes it much harder for the Swift ecosystem to land on an actual currency type by strangling the need for one.

ByteBuffer has one explicit purpose: parsing and serializing network protocols. It is designed with a substantial API surface to make doing that as easy as possible. This goal is totally unrelated to the "I have an unstructured bag of bytes" situation. If you evaluate ByteBuffer's API surface from the idea that all you want is to shuffle bytes around without parsing them, yeah, it seems weird.

A general purpose bag-of-bytes currency type is useful, and I'd like to see folks line up behind the need for that thing. That is a substantially more valuable thread of discussion.


Ok, with that out of the way, I'd like to separately address the later comments. But first: it would be really helpful if your first response to seeing a type whose API seems needlessly complex to you was not to assume that the authors of that type are idiots. It's perfectly reasonable to ask why ByteBuffer is the way it is, as @tera has just done very reasonably, and there are definitely parts of the API that want to be redesigned, but when you write

my assumption is that you don't know what you're talking about and haven't bothered to find out. This is a bit weird to me: I'm right here, answering questions on the Swift forums, all the time. If you wanted to know why ByteBuffer was the way it was, you could just ask, instead of writing the above comment. When I read that comment, I read that as you saying that you think ByteBuffer doesn't do a good job at its core goal. That's laughable: I use it every day to do what it was designed to do, and it's phenomenal at it.

Relatedly:

The fact that you pulled a documentation link out means you must presumably have opened that page, which says, in the very first paragraph:

It’s marked as very unsafe because it might contain uninitialised memory and it’s undefined behaviour to read it.

This distinguishes it from withUnsafeReadableBytes, which vends you initialized memory. Put another way, this code is fine: buffer.withUnsafeReadableBytes { $0.first }. This code is not: buffer.withVeryUnsafeMutableBytes { $0.first }. We are attempting to capture that distinction without writing the much less handy function name withUnsafePotentiallyUninitializedMutableBytes. Is the name bad? Sure. Is the function necessary? Yes, we use it often.
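
To make the distinction concrete, here's a rough sketch against today's NIOCore API (the comments are illustrative):

import NIOCore

var buffer = ByteBufferAllocator().buffer(capacity: 128)
buffer.writeString("hello")

// Fine: vends only the readable region, which is always initialised.
let firstByte = buffer.withUnsafeReadableBytes { $0.first }
print(firstByte as Any) // Optional(104), i.e. "h"

// "Very unsafe": vends the whole storage, including bytes beyond the writer
// index that may never have been initialised. Writing into that region (and
// then moving the writer index forward) is the intended use; reading it is
// undefined behaviour.
buffer.withVeryUnsafeMutableBytes { raw in
    // e.g. hand `raw` to a C API that fills some bytes, then call
    // moveWriterIndex(forwardBy:) with however many were written.
    _ = raw
}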

I agree. Those methods exist to resolve the fact that variadic generics do not exist, so we have no more concise way to express this operation. I'm open to suggestions for how we could do this differently.

ByteBuffer has three very distinct properties that set it apart from [UInt8], and only one of them cannot be implemented on top of [UInt8].

The first is a pair of cursors, a read and write cursor. This allows you to implicitly load a sequence of bytes from wherever you have read up to, without needing to drop them from the front. This behaviour is trivially implemented on top of [UInt8], and is in fact pretty close to ArraySlice<UInt8>, though unlike ArraySlice<UInt8> you can "rewind" a read by moving the cursor backwards.
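
A quick sketch of the cursor behaviour against today's NIOCore API:

import NIOCore

var buffer = ByteBufferAllocator().buffer(capacity: 16)
buffer.writeInteger(UInt32(42))
buffer.writeInteger(UInt16(7))

let savedReaderIndex = buffer.readerIndex
let first = buffer.readInteger(as: UInt32.self)  // Optional(42); reader index advances by 4
let second = buffer.readInteger(as: UInt16.self) // Optional(7)

// Unlike ArraySlice, the read can be "rewound" by moving the cursor back.
buffer.moveReaderIndex(to: savedReaderIndex)
let again = buffer.readInteger(as: UInt32.self)  // Optional(42) once more
print(first as Any, second as Any, again as Any)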

The second, and much more important, behaviour is that ByteBuffer allows you to write to uninitialised memory. This has a number of powerful features. It makes appending to ByteBuffer extremely cheap, because we don't have to pre-initialise memory. It means that ByteBuffer semantically does have a capacity (Array's capacity is non-semantic), it allows temporarily "skipping" bytes and coming back to fill them in later, and it enables easy interop with C APIs without requiring expensive zeroing first.
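
The classic example of "skipping" bytes is a length-prefixed frame, sketched here against today's API: reserve the prefix, write the payload, then come back and fill the prefix in.

import NIOCore

var buffer = ByteBufferAllocator().buffer(capacity: 64)

// Remember where the length prefix lives and write a placeholder for it
// (you could equally moveWriterIndex(forwardBy: 4) and leave the bytes
// uninitialised), then write the payload.
let lengthFieldIndex = buffer.writerIndex
buffer.writeInteger(UInt32(0))
let payloadLength = buffer.writeString("hello")

// Come back and fill in the prefix without disturbing the indices.
buffer.setInteger(UInt32(payloadLength), at: lengthFieldIndex)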

The third, and final, behaviour is that ByteBuffer can grow using realloc. This is not something Array can do. The effect is that for large ByteBuffers, further growth can occur without needing to copy the existing bytes, because realloc can (sometimes) transparently attach a new page to the back of an existing one.

Finally, even if ByteBuffer did not have these differences, it would be a mistake to add these methods to [UInt8]. As @taylorswift observed, ByteBuffer has a large API surface for a specialised use-case. We should not burden all users of [UInt8] with this API surface.

What we'd like to do instead is to make it easy for users to pick the shape that fits them best. If you need to parse, it would be great if you could wrap a type into ByteBuffer without copying it. We can't do that today because ByteBuffer's API surface cannot be implemented on top of any other currency type in Swift. If we could, then @taylorswift's problem would be solved, because folks wouldn't need to express their API surface in terms of ByteBuffer in order to support zero-copy transformations to and from it.

32 Likes

my observation is that the toothpaste is out of the tube, because other libraries are already using ByteBuffer, and they are using it as an “unstructured bag of bytes”. i understand this is not what ByteBuffer is for and that it is frustrating to you as a library author that people treat it as such. but it seems unlikely that ByteBuffer will be able to shed this role without lots of people spending a lot of time refactoring their own libraries to not use ByteBuffer.

i never said that you were an idiot, but at a higher level, there is nothing wrong with being an idiot in the first place. i am an idiot who releases idiotic APIs on a regular basis. the world was built by idiots.

for a genius can only get dumber, but an idiot can only get smarter.

so, the thing i was trying to say in a very inelegant manner is that safety as a binary concept is very useful. when you try and add gradations to safety, it gets less useful.

perhaps: ByteBuffer might benefit from a nested ReadableBytes view type that exposes the initialized section of the buffer. taking a pointer to the buffer as a whole could then be understood to make no guarantees whatsoever about its memory state.

or perhaps this idea is idiotic and ByteBuffer cannot vend such a view, but i think it is at least worth thinking about possible shapes ByteBuffer might take besides the one it has now.

when i have run into this problem in the past, sometimes i later realized it was really because of a lack of vertical composition. i once had a parsing library that relied on dozens of gybbed tuple overloads, and what it really needed was to be able to delegate to a type that speaks a protocol (BinarySerializable?) they both understand. because sometimes when you have 16 tuple elements that means there is a struct that wants to come out and that struct wants to initialize itself from some raw input source.
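
to sketch what i mean (all of these names are made up, and here i just picked ByteBuffer as the input source):

import NIOCore

// strawman: a type describes how to parse itself from some raw input source,
// so the library never needs sixteen-element tuple overloads. (what the input
// source should actually be is the open question, as the reply below points out.)
protocol BinarySerializable
{
    init?(parsing buffer:inout ByteBuffer)
}

struct Header:BinarySerializable
{
    let version:UInt8
    let length:UInt32

    init?(parsing buffer:inout ByteBuffer)
    {
        guard let version:UInt8 = buffer.readInteger(),
              let length:UInt32 = buffer.readInteger()
        else
        {
            return nil
        }
        self.version = version
        self.length = length
    }
}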

some of these i knew about and some of these (like realloc) i did not. so thank you for the explanation, and i see now that suggesting we deprecate it was dumb. on the other hand, seeing as ByteBuffer has uses not specific to networking, wouldn’t that be a reason to make it more widely available in its own module?

I agree, and such an interface already exists as ByteBuffer.readableBytesView, which vends a ByteBufferView.

The problem with this interface is that it is not a substitute for a pointer-based one when it comes to C interop. It also has some performance limitations, as discussed below.

In general I agree. The intention with the API name is to encourage users to not use this function. There are very few cases where anyone should use it, but those cases do exist.

Yes, but the problem there is what the "raw input source" is. There are the following choices:

  1. RandomAccessCollection where Element == UInt8. This is the safest and most general approach, and also the slowest. Relies heavily on @inlinable to get anywhere near decent performance. Also suffers from the same problems as offering:
  2. [UInt8] or ArraySlice<UInt8>. Both good choices, but ByteBuffer can't give you one efficiently because it isn't backed by an array, so using this would force a copy. Additionally, you end up recursing: how do you get the integers out? ByteBuffer has helpers, but Array does not.
  3. Repeated calls to readInteger on ByteBuffer. This was the status quo, and it works ok! However, the Swift compiler ends up failing to optimise this as effectively as it could. In particular, the Swift compiler will repeatedly issue redundant bounds checks and check for uniqueness on each call. This was discussed in the PR that originally added the feature. This makes NIO slower than it needs to be.
  4. Raw pointers. ByteBuffer is a tightly performance tuned object: the only way to go faster is to use types that don't implement CoW and don't bounds check, and that means raw pointers. This is bad! We shouldn't force users to write unsafe code in order to parse packets efficiently. In general, NIO's position is that it should be possible to write fast code without ever needing to write unsafe code, and where possible we'll expose APIs to make that happen. Additionally, you still end up needing ByteBuffer's API surface to make the parsing work sensibly, unless you want to issue a bunch of memcpy calls directly and damn the bounds checking.

The solution we landed on was a compromise. Getting the Swift optimiser to work better will happen eventually, but we're not compiler engineers, so it's not a problem we can fix. However, we can write code that looks ugly but lets you efficiently express the loads you want to perform. These functions allow you to implement exactly the API surface you want: define your struct, then initialise it from a ByteBuffer using one of these calls.
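
For example, a sketch of that pattern with today's API (the FrameHeader type and its initialiser are purely illustrative):

import NIOCore

struct FrameHeader {
    var version: UInt8
    var flags: UInt8
    var length: UInt32
}

extension FrameHeader {
    // One call, so the compiler can emit a single bounds check and a single
    // uniqueness/CoW check for all three loads.
    init?(readingFrom buffer: inout ByteBuffer) {
        guard let (version, flags, length) = buffer.readMultipleIntegers(
            as: (UInt8, UInt8, UInt32).self
        ) else {
            return nil
        }
        self.init(version: version, flags: flags, length: length)
    }
}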

My ideal outcome is a declarative API type where you can describe the format of your packet and NIO can emit the appropriate calls to parse it as efficiently as possible. This would mean doing the minimal number of bounds checks (1 for fixed-size structures, otherwise n where n is the number of variable-length fields to be parsed), the minimal number of mutating accesses (1), the minimal number of CoW checks (1), and, in the case of simple types, successfully optimising down to a sequence of raw register loads. Unfortunately, we can't implement these with the language tools we have today: this is the best we've got.

As Swift improves its ability to optimise, the need for these APIs goes down. Until that point, though, ByteBuffer is following its design philosophy: parse network packets as fast as possible without forcing users to write unsafe code. Sometimes that means APIs that look ungainly, but that go fast.

I want to stress here: it wasn't dumb, it was under-informed. It's not a failing not to know why something is the way it is. The mistake is to assume that, because you cannot see why something is the way that it is, there must be no reason. We, the NIO developers, have made ourselves extremely available to the community, and we happily and candidly explain why our libraries are the way they are, including our opinions about what they fail at. We encourage you to ask us why we've made our choices before you decide that they're bad ones.

It depends on what the uses are. ByteBuffer is laser focused on what NIO needs. It has no API surface for streaming parsing (yet, though I have ideas on that front), nor does it make streaming serialization very easy, and it can't be backed by arbitrary pointers preventing use of mmap in parsing. This means it's not ideal for file parsing, though it's certainly better than many of the other types that are floating around.

Similarly, it is more prone to copying itself than some other types, because it has been relentlessly optimised to fit into 23 bytes so that it can safely be stuffed into an Any without allocating. This means that some slicing operations are forced to CoW, because even though the type can store up to 4GB of data, the lower bound of the slice is stored in a UInt24 and cannot exceed 16MB.

Also, as just noted, it has a maximum size limit of 4GB! For server use-cases that's fine, but for file-backed parsing it's an absolute non-starter.

What I really want is to have a way to make it easy to transform between the representations of a bucket of bytes that are useful in your specific context. This implies a low-level representation that can be "wrapped" by ByteBuffer or anything else that can provide the interface needed at the specific moment. That type then becomes a sensible currency type.

With that in mind, let me return to your first comment:

I want to actively provide headwinds on people using ByteBuffer as an arbitrary currency type. Users will have to refactor their code, but they're more likely to do that if we don't make it too easy to use ByteBuffer where a different type would be better.

15 Likes

Very interesting reading, thank you.

It feels like a name such as "NIOByteBuffer" or "NetworkPacketBuffer" would have been a better choice. I can see how people who can't use Foundation's Data and for some reason also can't use [UInt8] may arrive at a generically named "ByteBuffer", assuming it is, basically, "an analogue of Foundation's Data without the baggage of Foundation."

4 Likes

ByteBufferView has a withUnsafeBytes(_:); what advantages does ByteBuffer.withUnsafeReadableBytes(_:) have over ByteBuffer.readableBytesView.withUnsafeBytes(_:)?

because i think it would be easier to understand ByteBuffer itself to be some area of raw memory with no guarantees of memory state, and ByteBuffer.readableBytesView to be the "initialized" world where this guarantee exists. it would also make the API look more like the diagram in the docs:

+-------------------+------------------+------------------+
| discardable bytes |  readable bytes  |  writable bytes  |
|                   |     (CONTENT)    |                  |
+-------------------+------------------+------------------+
|                   |                  |                  |
0      <=      readerIndex   <=   writerIndex    <=    capacity

where ByteBuffer has multiple conceptual "regions".

(and while this would probably have to wait a while, it might make sense to follow the String precedent and shorten the name to readableBytes or even just readable to reduce the friction of using it. you would have to demote the readableBytes count to something like readableBytesCount, but i think the count is used less than the view.)

i am speculating here, but i am guessing that the readMultipleIntegers overloads exist because of endianness. because if there were no endianness, we could just unsafe-bitcast to a tuple. but then we would have to do .bigEndian everywhere to access the value.

i wonder if NIO could instead vend a property wrapper that reshuffles the bytes lazily:

@frozen
struct MyBinaryHeader
{
    @BigEndian
    var a:Int64
    @BigEndian
    var b:Int32
    // ...
}

and then we bitcast to the struct

let header:MyBinaryHeader? = buffer.readTrivial(as: MyBinaryHeader.self)
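
for illustration, the wrapper i have in mind would just be a trivial shim over the wire-order bit pattern (hypothetical, not an existing NIO API):

@propertyWrapper
struct BigEndian<Value> where Value:FixedWidthInteger
{
    // stores the raw bits exactly as they sit in the buffer
    private var storage:Value

    var wrappedValue:Value
    {
        get { Value(bigEndian: self.storage) }
        set { self.storage = newValue.bigEndian }
    }

    init(wrappedValue:Value)
    {
        self.storage = wrappedValue.bigEndian
    }
}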

a bit of a tangent, but even though i don't think NIO developers are making "bad choices", it's not the end of the world if one person "decides" you made a bad choice. because your library has many users and no single person has the authority to say a library is "bad" or not. it is just one data point in a range of possible reactions users can have to your API. :slight_smile:

right. but let me provide a perspective wearing my "user hat" as opposed to a "third party library developer hat" or "NIO developer hat". i totally understand why BSON relies on ByteBuffer. because BSON exists to support MongoKitten, and MongoKitten depends on SwiftNIO. so there is no additional cost to BSON itself using ByteBuffer, because it normally gets compiled as part of MongoKitten, where all of NIO is available to it.

but me, i want to write code that is one more step removed, code that interfaces with BSON, but doesn't directly interact with either MongoKitten or SwiftNIO. so there is a misalignment of incentives here. this is why i think making ByteBuffer hard to import in order to prevent its proliferation is not a good strategy, because the projects that need to use it do not always have the same exposure to SwiftNIO as the third party frameworks that require it.

I don't think it's fair to hold NIO responsible for how third parties abuse their framework. There's nothing stopping those library authors from extracting the ByteBuffer code into their own library and using it internally. The problem only begins when third parties want to use it as a currency type, and that's just not what it's designed for.

The NIO team isn't responsible for others' laziness.

10 Likes

Huh, I didn't know Array always needed to copy existing bytes when it grew. Do you know where that is documented in the compiler/stdlib? I'm really curious about why this is not already a feature.

I don’t know of any reason why Array wouldn’t be able to realloc when growing a uniquely-referenced buffer of trivially-movable elements. It’s blocked if those conditions don’t hold, but that’s not common.

It’s possible that the current implementation doesn’t do this, though.

2 Likes

Bear in mind that to get access to this type you must depend on the swift-nio package and import NIOCore, so this type's true name is NIOCore.ByteBuffer.

For the above reason, I don't think anyone thinks that: it's an alternative type to Data that has the baggage of NIOCore, which is exactly what prompted @taylorswift to start this thread in the first place.

None, but it predates the introduction of ByteBufferView. Additionally, there are a suite of related functions that ByteBufferView cannot model, including withUnsafeMutableReadableBytes, readWithUnsafeReadableBytes & readWithUnsafeMutableReadableBytes.
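
For instance, with the read-prefixed variants the closure reports how many bytes it consumed and the reader index moves by exactly that amount; there is no equivalent on ByteBufferView. A sketch, assuming the current signature:

import NIOCore

var buffer = ByteBufferAllocator().buffer(capacity: 16)
buffer.writeBytes([0x01, 0x02, 0x03, 0x04])

let consumed = buffer.readWithUnsafeReadableBytes { raw -> Int in
    // e.g. feed `raw` into a C parser and return how far it got;
    // here we pretend it consumed two bytes.
    return min(2, raw.count)
}
print(consumed, buffer.readableBytes) // 2 2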

The problem with this idea is that it makes manipulating the indices extremely painful. In particular, the "read" operation (consume some bytes and move the reader index) conceptually cannot occur on the ByteBufferView without allowing a BBV to be stored into ByteBuffer.readableBytesView, and that opens us up to awkward questions like "what happens if someone stores a BBV that was not derived from this ByteBuffer?".

Agreed in general: this is a legacy of the fact that ByteBuffer substantially predates ByteBufferView. However, I will say that ByteBuffer.readableBytes is used vastly more than ByteBufferView is.

No, they exist because unsafeBitCast is, y'know, unsafe. See my prior note about not requiring users to touch unsafe code to achieve their goals. And pre-empting the response that NIO could hide the unsafeBitCast, any API would have to constrain the type to be a tuple of FixedWidthInteger (for the reasons I'll outline after your next comment) in order to be safe, at which point we've reinvented this API.

There are four reasons we don't do that:

  1. Swift does not promise that structs defined in Swift have a specific layout, so performing this unsafe bitcast is not generally safe. If the type is frozen it's safer, but still not safe as tools versions may change the layout, and we can't force that as a generic constraint anyway.

  2. It forces users to understand structure padding. Imagine a 9 byte header made up of a single 8-bit flags field and a 64-bit integer. You may want to define this type as:

    @frozen
    struct MyBinaryHeader {
        var flags: UInt8
        var identifier: UInt64
    }
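    // MemoryLayout<MyBinaryHeader>.size is 16 here, not 9: 7 bytes of padding follow `flags`.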
    

    The problem here is that the size of this structure is 16 bytes, not 9. An unsafeBitCast to this will both read too many bytes and throw away 7 of the header bytes.

  3. There is no way to express a generic constraint that forces this to be a trivial type. Users will naturally be tempted to write a struct containing an Array, and an unsafeBitCast will therefore produce a wild pointer with a crash. Unacceptable for a safe API.

  4. Defining an @BigEndian property wrapper whose only effect is to change how NIO encodes something is, in my view, an abstraction violation. It pushes an implementation detail (this type is decoded from a ByteBuffer) into your API surface. I've never been a fan of this when people use it for Codable, and I don't love it here either.

The ideal spelling of this interface probably actually involves macros. However, that feature doesn't exist today, and waiting for it was unnecessary.

Yes, but it matters when someone who thinks that we've made a dumb choice publicly rants about how dumb our choice is. If I hadn't stepped in it is highly likely you'd have been unopposed, and so folks who haven't used our type would read your post and assume that our silence implies that you are right. It becomes a form of misinformation that encourages other people to draw the same incorrect conclusion as you.

Put another way, I don't care whether you think NIO is good or not, your opinion is your own. But I do care when a) you think it for reasons that are ill-informed, and b) you write posts that encourage others to agree with you. This is why I wanted you to begin by asking questions instead of making assertions: it makes the whole thread far less combative, with the side benefit that third party observers might learn something.

Yup, I agree this is a pain in the neck. I just don't think the proposed remedy is the right one. It implies that if we ever build something useful we need to factor it into its own library in case someone exposes it in their own API. We actually went through this earlier with CircularBuffer where we repeatedly received pressure to move it to a separate library, which we resisted.

Happily, with the release of swift-collections containing Deque, all pressure to make CircularBuffer generally available vanished. Deque is both a better type and available without NIO's baggage, so users have rapidly transitioned over to it. I'd like to see something similar happen with ByteBuffer.

One honest answer is that realloc is mostly a pretty niche feature. It doesn't guarantee it won't have to copy the bytes, it just tries not to. And as @John_McCall notes, there are a bunch of constraints as to when it can fire.

The current implementation doesn't do it because tail-allocated storage doesn't support it in Swift today. It's not impossible, just not currently supported. @weissi attempted to add support for this capability generally (in ManagedBuffer, which isn't quite the same) in https://github.com/apple/swift/pull/19421, which was reverted in https://github.com/apple/swift/pull/21874. None of this is impossible, but none of it works today.

As to why we didn't pursue adding the capability to Array: we needed a bunch of features that Array also doesn't support, and additionally, the bar for changing Array is really high! That data type is critical and load bearing, and even slight tweaks have a tendency to be catastrophic.

15 Likes

Right, I didn't mean to engage with the general topic of why NIO wouldn't use Array, just the narrow question about realloc.

To be clear, in case anybody reading this didn't follow the link, this was reverted because it's an addition to the public API of the standard library that would need to go through the evolution process. If someone wants this API, they're welcome to propose it.

Separately, Array could do reallocation as part of its implementation today, because that is not within the scope of evolution.

4 Likes

Corey, the amount of knowledge you dump in threads like this one is amazing. You and Johannes should write a book about NIO's design, and I would buy it! :muscle:

14 Likes

There is Netty in Action, which also explains a few of the Javaisms.

2 Likes

i don't know why BSON uses ByteBuffer as its backing storage; only the author of BSON knows that. but i don't think ByteBuffer has the same baggage as Data, because NIOCore ultimately must be linked into the final binary product anyways (not sure if there are any good use cases for BSON that don't involve NIO), whereas Foundation is not necessary and just adds bloat to the binary and consumes extra memory.

a dependency on NIOCore is annoying, because it takes a long time to compile, but it is not a blocker. a dependency on Foundation is a blocker.

i think this is analogous to assigning to Dictionary.values; the type system allows it but it really doesn't make sense to do, and doing it is considered a programming error.

so, i think over the years, "flexible layout" has evolved into "flexible layout with swiftian characteristics". that is to say, declaration-order layout has become the fact on the ground, even though we ceremonially preface that with "the compiler has the right to reorder struct fields in any way it sees fit". otherwise there would not be a right and a wrong way to list the fields in source. and if the compiler were ever to change that behavior, anything that blits buffers of things like RGBA<T>, Vector4<T>, OpenGL structures, etc. would degrade in performance to an extent that it would become a bug.

right, so my experience with binary protos is that most of them do actually respect alignment, and will try to buddy up integers in a way that matches their natural alignment boundaries. sometimes we have awkward shapes like (UInt8, UInt64) in your example, but most of the time that is because the 8-byte integer is actually a fixed-length string or something, and we just took a shortcut making it a UInt64 when the designer of the proto intended for it to be something like (UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8).

so the problem then becomes getting users to feel confident that

@frozen
struct MyBinaryHeader 
{
    var a:UInt16
    var b:UInt16
    var c:UInt32
}

has no interior padding, which requires them to understand the layout behavior of swift structs. but i do not think it is possible to do serious work with binary formats and not be familiar with this in the first place; alignment and padding is a pretty core concept to binary serialization.

which is why i named the strawman method readTrivial so there would be a reminder every time someone were to use it. i suppose whether this is sufficient or not is up to your discretion.

i envisioned that the structs using the @BigEndian wrapper would merely be shims to quarry data out of a ByteBuffer without a performance hit. it adds work for the user, but reduces the amount of gybbed API in the library, since it's unlikely all sixteen overloads would be used in any particular deserializer.

oh my, i think you overestimate my influence on these forums and the community at large. i don't work at Apple (as i'm often reminded) and i'm not even part of the server group, so i certainly don't post anticipating anyone will take me very seriously. but, i understand why this is a concern for you, so i will keep that in mind in the future.

right, i am really struggling to think of any "good" solutions to this problem in the short-medium term, because i agree with you now that this is not SwiftNIO's fault, and i don't think it's BSON's fault either because BSON is part of MongoKitten. so i think assessments like

are accurate but not constructive because even if i had time to write a self-contained BSON library (which i do not), it wouldn't do any good because MongoKitten and MSD rely on their own framework-specific BSON implementations that do use it as a currency type.

A reason I can think of is to avoid a copy. I.e. you already get a ByteBuffer from NIO, so it might make little sense to copy that into a Data just to please some API.

1 Like

Yeah. I see what you're driving at there, but I don't love that API shape. (It also interacts poorly with the fact that _modify remains underscored.)

I don't think it's a good idea to mistake "there is currently a wrong way to lay things out in source" for "there will always be a wrong way to lay things out in source". The compiler team have expressly reserved the right to change structure layout in Swift. They've made it clear a number of times. Designing APIs on the assumption that the compiler team would not take that option is a recipe for hurt later on.

Hmm, this is a bit of "he-said-she-said", but my experience is almost entirely the other way. By way of examples:

  • TLS 1.3 defines things like the record layer format:

    enum {
        invalid(0),
        change_cipher_spec(20),
        alert(21),
        handshake(22),
        application_data(23),
        (255)
    } ContentType;
    
    uint16 ProtocolVersion;
    
    struct {
        ContentType type;
        ProtocolVersion legacy_record_version;
        uint16 length;
        opaque fragment[TLSPlaintext.length];
    } TLSPlaintext;
    

    Note the UInt8 followed immediately by a pair of UInt16s.

  • HTTP/2 defines the frame header like so (numbers in parentheses are in bits):

    HTTP Frame {
        Length (24),
        Type (8),
        Flags (8),
        Reserved (1),
        Stream Identifier (31),
        Frame Payload (..),
    }
    

    Leaving aside UInt24 for a minute, the flags byte forces the stream identifier out of alignment (there are 5 bytes before it).

  • Protobuf relies heavily on using varints, which cannot be represented in fixed-width structures at all.

  • QUIC also relies heavily on varints.

  • WebSocket has an awkward frame format that can use either 7-bit, 16-bit, or 64-bit lengths. Leaving aside the difficulty of representing that in a fixed-width format, the two latter lengths will follow a leading byte, so they're unaligned.

  • SSH defines a wide range of messages, many of which contain variable-length fields, but some messages are unaligned even before you hit a variable length field, e.g. SSH_MSG_DISCONNECT:

    byte      SSH_MSG_DISCONNECT
    uint32    reason code
    string    description in ISO-10646 UTF-8 encoding [RFC3629]
    string    language tag [RFC3066]
    

    Here the field byte forces the field content out of alignment.

Those few examples cover almost all of the binary protocols NIO supports out of the box. The ability to load sequential integers without doing aligned loads is fairly critical to parsing (and serializing) network protocols, IMO.

Yeah, I'd never merge the API in that form. You could add the word unsafe and it might be acceptable, but I think it's not acceptable to have a safe API that will happily assist you in dereferencing a wild pointer. Without the word "unsafe" that method is just a CVE factory, and even with it I think it's a bit of an attractive nuisance.

4 Likes

even with @frozen? because right now, frozen layout is declaration-order layout, and if that were to change, wouldn’t library ABI break?