I've been thinking about this for a while, and I'm increasingly pessimistic about a "one size fits all" "container of bytes" type. We could definitely add one, but too many of the pre-existing solutions have frozen representations that make it impossible to back them with this type without breaking ABI.
Rather than trying to go back in time and centralise on a single basic "bucket o' bytes" type, I'm now convinced we should make it much easier for frameworks to accept whatever bucket the user happens to have.
As a concrete example, consider swift-protobuf. Here's a snippet from its README:
```swift
// Create a BookInfo object and populate it:
var info = BookInfo()
info.id = 1734
info.title = "Really Interesting Book"
info.author = "Jane Smith"

// Serialize to binary protobuf format:
let binaryData: Data = try info.serializedData()

// Deserialize a received Data object from `binaryData`
let decodedInfo = try BookInfo(serializedData: binaryData)
```
In this instance, swift-protobuf has serialisation and deserialisation defined against Data: serializedData always returns a Data, and .init(serializedData:) always takes a Data. This is a fine enough default, but it means that if you happen to either have something that's not a Data (such as a ByteBuffer, [UInt8], UnsafeRawBufferPointer, or some custom type) or need something that's not a Data, you will have to incur an extra heap allocation and an extra copy to move between the two representations.
(Author's note: yes, I am aware of Data.init(bytesNoCopy:). This makes life moderately easier on ingestion if you carefully hold your types just right, but there is no escape on the serialise side of things.)
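To make the cost concrete, here's a minimal sketch of the round trip when the bytes you have (or want) live in a `[UInt8]` rather than a `Data`:

```swift
import Foundation

// Some wire bytes we already have in a non-Data bucket:
let bytes: [UInt8] = [0x08, 0xC6, 0x0D]

// To feed them to a Data-only API we must copy into a fresh heap allocation:
let data = Data(bytes)

// And to get our preferred representation back out, we copy again:
let roundTripped = [UInt8](data)
assert(roundTripped == bytes)
```

Neither copy does any useful work; both exist only to satisfy the framework's choice of currency type.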
A better world would be one where we could define a common baseline interface that frameworks like protobuf can use, and then conform our existing bucket types to it. That would allow users to work with the data types they have and need, rather than be forced to transform to whichever ones the framework authors decided to privilege.
For deserialisation this almost exists already. Foundation has ContiguousBytes, which is borderline the correct answer to this problem. Its core operation is "give me a pointer to your initialised storage", which is enough to bootstrap most parsing operations.
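As a sketch of what that buys a framework, here's a deserialising initialiser written against `ContiguousBytes` instead of `Data`. Note that `Message`, `ParseError`, and the "parser" body are illustrative stand-ins of mine, not swift-protobuf API:

```swift
import Foundation

enum ParseError: Error { case truncated }

// Hypothetical message type; the body stands in for a real wire-format parser.
struct Message {
    var firstByte: UInt8

    // Generic over ContiguousBytes, so Data, [UInt8], ArraySlice<UInt8>,
    // UnsafeRawBufferPointer, and friends all work with no intermediate copy.
    init<Bytes: ContiguousBytes>(serializedBytes bytes: Bytes) throws {
        self.firstByte = try bytes.withUnsafeBytes { raw in
            guard let first = raw.first else { throw ParseError.truncated }
            return first
        }
    }
}

// The same call site accepts whichever bucket the caller already has:
let fromArray = try Message(serializedBytes: [0x08, 0x01] as [UInt8])
let fromData = try Message(serializedBytes: Data([0x08, 0x01]))
assert(fromArray.firstByte == fromData.firstByte)
```

The framework writes its parser once against `UnsafeRawBufferPointer`, and every conforming bucket type gets zero-copy ingestion.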
For serialisation things are a lot harder, mostly because many serialisation formats don't know ahead of time how much space they need. That forces the output type to support growth and reallocation. No such protocol exists today, though it could probably be defined with minimal effort.
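One possible shape for such a protocol, sketched under the assumption that "reserve capacity" plus "append raw bytes" is enough for most encoders (the protocol name and the demo serialiser are mine, not an existing API):

```swift
import Foundation

// Hypothetical protocol: a byte bucket a serialiser can grow and append to,
// leaving the reallocation strategy up to the conforming type.
protocol GrowableContiguousBytes {
    mutating func reserveCapacity(_ minimumCapacity: Int)
    mutating func append(contentsOf bytes: UnsafeRawBufferPointer)
}

// Data and [UInt8] already have members with these shapes,
// so conformance is free:
extension Data: GrowableContiguousBytes {}
extension Array: GrowableContiguousBytes where Element == UInt8 {}

// A serialiser writes into whichever bucket the caller supplies:
func serializeDemoMessage<Output: GrowableContiguousBytes>(into output: inout Output) {
    let payload: [UInt8] = [0x08, 0x01]  // stand-in for real encoder output
    output.reserveCapacity(payload.count)
    payload.withUnsafeBytes { output.append(contentsOf: $0) }
}

var data = Data()
serializeDemoMessage(into: &data)

var array: [UInt8] = []
serializeDemoMessage(into: &array)
assert(Array(data) == array)
```

Types with fancier storage (a ring buffer, a pooled allocation, NIO's ByteBuffer) could conform with whatever reallocation behaviour suits them.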
If we had native protocols along these lines, methods like serializedData and .init(serializedData:) could be implemented once on top of them, and all conforming types would get those implementations for free. Seems like a win!
All of this is somewhat orthogonal to the idea of a bytes literal. A bytes literal, to my mind, should probably just vend a static buffer from the binary. That would match nicely with the other proposals for include_bytes and friends, and it again lets us wrap the static data in whatever data type we want, rather than having to bless a single type.