Bit fields, like in C

I regret to inform you that sub-byte formats are alive and well, not because controllers can't handle higher bit-depth, but for data size reasons. They're not used as pervasively as they were, but 565 and 1555 are still fairly common for things like map tiles where you don't really need more bit depth and file size (or dirty memory size) is at a premium.

4 Likes

No problem. As said, I believe these formats can be dealt with.

In a modern byte addressed computer, the order of the bits is inconsequential because the bits in a byte are conceptually parallel. They don't have an order, they all come together, as it were.

As long as all of the components in a computer system agree on which bit represents which power of two, the physical ordering is not important. By convention, we usually label the bits from 0 to 7 with 7 being the most significant bit because that means the labels tell us which power of 2 the bit represents. By convention, we also tend to put the most significant bit on the left because that's how the Arabic number system works. The physical ordering of the flip flops in a computer doesn't have to match this at all, although putting them in ascending or descending order of significance is probably a good idea.

With those conventions in mind, I find your diagram confusing. If it's meant to represent the bits being transmitted over a serial line, it's somewhat misleading because there is nothing there that says when I receive bit G3 I have to put it in the most significant bit of a byte. I could just as easily fill in each byte from least significant to most significant bit and end up with a format like @mgriebling showed, which I guess is BGR 565.
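To make that concrete, here's a rough Swift sketch (just for illustration) of the two equally defensible ways of packing the same received bits into a byte:

// Two ways of filling a byte from a stream of bits; nothing in the
// stream itself says which one is correct. (Assumes exactly 8 bits.)
func packMSBFirst(_ bits: [UInt8]) -> UInt8 {
    // first received bit lands in bit 7, the last in bit 0
    return bits.reduce(0) { ($0 << 1) | ($1 & 1) }
}

func packLSBFirst(_ bits: [UInt8]) -> UInt8 {
    // first received bit lands in bit 0, the last in bit 7
    return bits.enumerated().reduce(0) { $0 | (($1.element & 1) << $1.offset) }
}

let msbFirst = packMSBFirst([1, 1, 0, 0, 0, 0, 0, 0])   // 0xC0
let lsbFirst = packLSBFirst([1, 1, 0, 0, 0, 0, 0, 0])   // 0x03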

All that said, I am not a fan of C bitfields. In fact, during my career programming in C I never used them. I've never had to program in a situation where memory is that critical, and if you have to translate to an over-the-wire protocol, bitwise operators and shifts offer more control.
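For example, packing and unpacking 565 with explicit shifts and masks pins down exactly which bits mean what, with nothing left to the compiler's discretion. A minimal Swift sketch, assuming red sits in the top five bits (swap the shift amounts for a blue-on-top layout):

// RGB 565 with red in bits 15...11, green in 10...5, blue in 4...0.
func packRGB565(r: UInt16, g: UInt16, b: UInt16) -> UInt16 {
    return ((r & 0x1F) << 11) | ((g & 0x3F) << 5) | (b & 0x1F)
}

func unpackRGB565(_ pixel: UInt16) -> (r: UInt16, g: UInt16, b: UInt16) {
    return (r: (pixel >> 11) & 0x1F, g: (pixel >> 5) & 0x3F, b: pixel & 0x1F)
}

let magenta = packRGB565(r: 0x1F, g: 0x00, b: 0x1F)   // 0xF81F
let components = unpackRGB565(magenta)                // (r: 31, g: 0, b: 31)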

1 Like

Bitfields are not essential but they let the compiler do some of the grunt work that otherwise needs to be done by the programmer. Compilers (usually) make fewer mistakes than programmers, and any code that doesn't have to be written is guaranteed not to have bugs in it.

That being said, bitfield functions (perhaps in the FixedWidthInteger protocol extension as shown above) would do the job almost as easily, although not quite as elegantly.
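Something along these lines, for example (a rough sketch with made-up names, not the extension posted earlier):

extension FixedWidthInteger {
    /// Extracts `width` bits starting at `lowBit`, where bit 0 is the least
    /// significant bit. Assumes 0 < width < Self.bitWidth.
    func bitField(lowBit: Int, width: Int) -> Self {
        return (self >> lowBit) & ((1 << width) - 1)
    }
}

let pixel: UInt16 = 0b11111_000000_11111          // 0xF81F
let top    = pixel.bitField(lowBit: 11, width: 5) // 0b11111
let middle = pixel.bitField(lowBit: 5, width: 6)  // 0b000000
let bottom = pixel.bitField(lowBit: 0, width: 5)  // 0b11111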

This is really not about efficiency (although the compiler could produce more efficient code) but rather about clarity of the code and hiding unimportant details, such as which bits hold the red graphics component. With high-level, well-defined bitfields, this is all possible (and I'm not talking about C bitfields, which are basically just macros).

Hmm, I thought you were talking about C bitfields (although they are not macros):

struct S {
    int one: 1;
    int two: 2, three: 3;
};

as @QuinceyMorris pointed out above, their layout was never standardised and compilers do them differently.

See this message; it shows how we can achieve something similar with Swift macros.

It's not representing serial transmission; it's showing the conceptual mapping between MSB ... LSB and the 0...7 "order" arbitrarily defined by the format. The bits are stored where they are; you don't get to put them into bytes as you receive them, and if you load a 16-bit pixel, you have to deal with the fact that the bits representing one of the fields are discontiguous, no matter how loudly you yell "bit order isn't real".
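To illustrate with a Swift sketch (the layout here is an assumption: RGB 565 stored big-endian in memory, so byte 0 is RRRRRGGG and byte 1 is GGGBBBBB):

let bytes: [UInt8] = [0b00000_111, 0b111_00000]   // red = 0, green = 0x3F, blue = 0
let loaded = bytes.withUnsafeBytes { $0.load(as: UInt16.self) }

// On a little-endian machine `loaded` is 0xE007, and green's six bits now
// sit in bits 15...13 and 2...0: no single shift-and-mask extracts them.
// Either reassemble the two pieces or byte-swap before extracting.
let greenPieces  = ((loaded & 0x0007) << 3) | ((loaded >> 13) & 0x0007)   // 0x3F
let greenSwapped = (UInt16(bigEndian: loaded) >> 5) & 0x3F                // 0x3F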

3 Likes

I think we are talking at cross purposes. C bit fields are definitely not like macros and your FixedWidthInteger extension is not like C bit fields (this is a good thing IMO). C bit fields are declared in structs to narrow the size of a member. For example, I might naively define the BGR 565 pixel as

#include <stdint.h>

struct BGR_565
{
    uint16_t blue: 5;
    uint16_t green: 6;
    uint16_t red: 5;
};

Unfortunately, I've already made a mistake, at least on an Intel Mac using clang. It's up to the compiler whether it puts the first bit field in the least significant bits or the most significant bits, and the above declaration has put red at the top and blue at the bottom on my machine with my compiler. Furthermore (as of C11), the compiler can choose a 16-bit storage unit for those fields because they fit, but it is allowed to choose a bigger one if it likes.

1 Like

Sorry about the confusion. In the post above, I talked about Ada bit fields and the rest of my discussion continued with that. Others chimed in about C bit fields so, yes, that was confusing.

Sorry, I don't understand what you are saying. Is bit 0 the most significant bit or the least significant bit? If bit 0 is the least significant bit, you have just drawn the pixel with the bits in reverse order according to normal convention.

If you are saying bit 0 is the most significant bit, then it still makes sense because it's clearly a little-endian formatted RGB value. That is, unless you are claiming that the bits of the individual fields are stored in reverse order.

"What 32 bit value should I write into B&W video memory to have the left-top pixel turned on?"

- offset: 0, value: 0x80000000
- offset: 0, value: 0x00000001
- offset: 0, value: 0x01000000
- offset: 0, value: 0x00000080
- offset: rowBytes * (height - 1), value: one of the above

I've seen most of these in practice, if not all.
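A quick sketch of why every one of those answers turns up; the names and parameters here are made up, but each combination of conventions reproduces one of the answers above:

// 1-bit-per-pixel framebuffer accessed as 32-bit words: where the top-left
// pixel lands depends on three independent conventions.
enum BitOrder { case msbFirst, lsbFirst }        // which bit of a byte is pixel 0
enum WordOrder { case bigEndian, littleEndian }  // how bytes map into a 32-bit word
enum RowOrigin { case topDown, bottomUp }        // whether row 0 is the top or the bottom

func topLeftPixelWord(bitOrder: BitOrder, wordOrder: WordOrder, rowOrigin: RowOrigin,
                      rowBytes: Int, height: Int) -> (offset: Int, value: UInt32) {
    let bitInByte: UInt32 = (bitOrder == .msbFirst) ? 0x80 : 0x01
    // the first byte of the row holds the leftmost pixels; where that byte
    // sits within the 32-bit word depends on the word's byte order
    let value = (wordOrder == .bigEndian) ? bitInByte << 24 : bitInByte
    let offset = (rowOrigin == .topDown) ? 0 : rowBytes * (height - 1)
    return (offset, value)
}

let answer = topLeftPixelWord(bitOrder: .msbFirst, wordOrder: .bigEndian,
                              rowOrigin: .topDown, rowBytes: 64, height: 480)
// (offset: 0, value: 0x80000000); the other combinations give the rest.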

2 Likes

Their whole point is that there is no "normal convention". Some formats use one order, some use another. In any event, this is entirely tangential, so... meh

1 Like