/// Defines bit-related operations such as setting/getting bits of a number
extension FixedWidthInteger {
    /// All-ones mask of `size` bits; `&-` avoids overflow when `size == Self.bitWidth`.
    private func mask(_ size: Int) -> Self { (Self(1) << size) &- 1 }

    /// Returns the bits in the `range` of the current number where
    /// `range.lowerBound` ≥ 0 and `range.upperBound` < Self.bitWidth
    public func get(range: ClosedRange<Int>) -> Int {
        precondition(range.lowerBound >= 0 && range.upperBound < Self.bitWidth)
        return Int((self >> range.lowerBound) & mask(range.count))
    }

    /// Returns the `n`th bit of the current number where
    /// 0 ≤ `n` < Self.bitWidth
    public func get(bit n: Int) -> Int {
        precondition(n >= 0 && n < Self.bitWidth)
        return ((1 << n) & self) == 0 ? 0 : 1
    }

    /// Logically inverts the `n`th bit of the current number where
    /// 0 ≤ `n` < Self.bitWidth
    public mutating func toggle(bit n: Int) {
        precondition(n >= 0 && n < Self.bitWidth)
        self ^= 1 << n
    }

    /// Sets to `0` the `n`th bit of the current number where
    /// 0 ≤ `n` < Self.bitWidth
    public mutating func clear(bit n: Int) {
        precondition(n >= 0 && n < Self.bitWidth)
        self &= ~(1 << n)
    }

    /// Sets to `1` the `n`th bit of the current number where
    /// 0 ≤ `n` < Self.bitWidth
    public mutating func set(bit n: Int) {
        precondition(n >= 0 && n < Self.bitWidth)
        self |= 1 << n
    }

    /// Replaces the `n`th bit of the current number with the low bit of `value` where
    /// 0 ≤ `n` < Self.bitWidth
    public mutating func set(bit n: Int, with value: Int) {
        value.isMultiple(of: 2) ? self.clear(bit: n) : self.set(bit: n)
    }

    /// Sets to `0` the bits in the `range` of the current number where
    /// `range.lowerBound` ≥ 0 and `range.upperBound` < Self.bitWidth
    public mutating func clear(range: ClosedRange<Int>) {
        precondition(range.lowerBound >= 0 && range.upperBound < Self.bitWidth)
        self &= ~(mask(range.count) << range.lowerBound)
    }

    /// Replaces the bits in the `range` of the current number with `value` where
    /// `range.lowerBound` ≥ 0 and `range.upperBound` < Self.bitWidth
    public mutating func set(range: ClosedRange<Int>, with value: Int) {
        self.clear(range: range)
        self |= (Self(value) & mask(range.count)) << range.lowerBound
    }
}
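A quick usage sketch (assuming the extension compiles as written; the values here are arbitrary):

var flags: UInt16 = 0b0000_0000_1010_0110
flags.get(bit: 2)                        // 1
flags.get(range: 4...7)                  // 0b1010 == 10
flags.set(bit: 0)                        // flags == 0b0000_0000_1010_0111
flags.clear(range: 4...7)                // flags == 0b0000_0000_0000_0111
flags.set(range: 8...11, with: 0b1101)   // flags == 0b0000_1101_0000_0111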
At the level of a gut reaction, I hate this idea. It raises all the questions that C bitfields raise, including:
How does this interact with per-platform endianness, when there are more than 8 bits?
What is the numbering order of bits within a single byte (left to right or right to left, and which is the left; alternatively, high order to low or low to high, and which is the high end)?
How does the top to bottom layout in source code relate to bit order within the memory that the bit fields occupy?
Is the source code order restrictive on the memory layout order?
What happens to multiple-bit fields when they cross byte boundaries?
When a series of bit fields is interrupted by another kind of struct member, can the compiler rearrange them for better memory packing?
When there's a multiple-bit field (e.g., 3 bits), how do you alias that to 3 single-bit fields?
etc.
Now, clearly, all of those questions could have absolutely definitive answers in a spec or proposal, but people are likely to have various preferences about what the answers should be, except maybe the answer, "It should behave exactly like C, for interoperability reasons." In that case, we're stuck with the terrible C ergonomics of bit fields.
I don't think you're wrong that bit field access is useful in plenty of use-cases, but I can't help feeling that it'd be better if each use-case had its own definition and implementation of the behavior appropriate to itself, rather than contaminating Swift with stuff that C already does badly.
Just to be clear, there is no "exactly like C" for these questions, because C makes almost all of these implementation-defined, and there's no consensus implementation choice.
fwiw, I found it very useful to map bit-ranges on UInt64 to Int enums, etc., and then use atomic operations available on UInt64, to encode complex state changes without locks or actors. Everything was static-constant, and on initialization it would catch layout errors.
It's not so much a use-case as a class of solutions available if the domain can be mapped using the API. Regardless, Swift already had all the language features needed.
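A rough sketch of that pattern, building on the get/set(range:) extension at the top of the thread and on ManagedAtomic<UInt64> from the swift-atomics package; the field names, bit ranges, and state machine below are made up purely for illustration:

import Atomics   // swift-atomics package

enum Phase: Int { case idle = 0, running = 1, draining = 2 }

struct SchedulerState {
    // Hypothetical layout: phase in bits 0...1, pending count in 2...17, generation in 18...49.
    static let phaseBits      = 0...1
    static let pendingBits    = 2...17
    static let generationBits = 18...49

    let bits: UInt64

    var phase: Phase { Phase(rawValue: bits.get(range: Self.phaseBits))! }
    var pending: Int { bits.get(range: Self.pendingBits) }

    func with(phase: Phase) -> SchedulerState {
        var copy = bits
        copy.set(range: Self.phaseBits, with: phase.rawValue)
        return SchedulerState(bits: copy)
    }

    // The ranges are static constants, so layout errors can be caught up front,
    // e.g. that the fields fit in 64 bits and don't overlap.
    static func validateLayout() {
        precondition(generationBits.upperBound < UInt64.bitWidth)
        precondition(phaseBits.upperBound < pendingBits.lowerBound)
        precondition(pendingBits.upperBound < generationBits.lowerBound)
    }
}

// Lock-free state transition via compare-and-exchange on the packed word.
let state = ManagedAtomic<UInt64>(0)

func beginDraining() {
    var current = state.load(ordering: .relaxed)
    while true {
        let old = SchedulerState(bits: current)
        guard old.phase == .running else { return }
        let (exchanged, original) = state.compareExchange(
            expected: current,
            desired: old.with(phase: .draining).bits,
            ordering: .acquiringAndReleasing)
        if exchanged { return }
        current = original
    }
}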
Not sure about your confusion. Except for wacky graphic formats (thanks Steve), bit 0 is usually the least significant bit in the same way that it is for the integer types. Even if you have a wacky use-case where the bits need to be transferred across a serial communications channel, the registers holding the data to be sent are organized (AFAIK) with bit 0 as the LSB. Top-to-bottom source ordering is irrelevant since the bit fields are defined by bit ranges.
For example, 0b100 is always interpreted as the number 4 and never as the number 1.
BTW, endianness these days is really a non-issue since most computers use 64-bit buses that transfer words rather than bytes. Internally, registers hold data, by convention, with the right-most bit being the LSB. Computers can emulate endianness for those who still believe this is a real concern (mainly for backward compatibility).
This is really, really not the case. There are lots of sub-byte image pixel formats and bitstream formats that define bits as being ordered from MSB to LSB. There are also lots that go from LSB to MSB. There are even two-byte pixel formats with little-endian bit order and big-endian byte order such that the bits of one of the color channels end up being discontiguous when viewed as a 16b field with "normal" ordering:
Bitfields and bitstream formats are a mess, and for any crazy thing you can think of that no one would ever do, there are tens of formats that did it, and you'll get stuck supporting one of them eventually.
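To make that concrete, here's one flavor of the problem sketched in Swift, using the get(range:) extension from the top of the thread: plain RGB565 written out as big-endian bytes but loaded as a little-endian UInt16, which leaves the six green bits split across both ends of the word. (This is just an illustration, not any particular named format.)

// Logical RGB565 value: red in bits 11...15, green in 5...10, blue in 0...4.
let logical: UInt16 = 0b10101_110011_01110    // R = 0b10101, G = 0b110011, B = 0b01110

// Stored big-endian, then read on a little-endian machine: the two bytes swap.
let loaded = logical.byteSwapped

// Viewed as a 16-bit field, green is now discontiguous: bits 0...2 and 13...15.
let green = (loaded.get(range: 0...2) << 3) | loaded.get(range: 13...15)
assert(green == 0b110011)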
No harm, no foul, and it is now in a more standard format, assuming bit 7 means the most significant bit. If bit 0 actually means the opposite, we can just flip the bit numbers. The graphic stream remains the same.
I recall working with some of these graphic formats, which are usually from cheap LCD controllers circa the 1980s. Most were lacking documentation, and what did exist was pretty marginal. These days graphic controllers support at least 8 bits per color, if not more.
I regret to inform you that sub-byte formats are alive and well, not because controllers can't handle higher bit-depth, but for data size reasons. They're not used as pervasively as they were, but 565 and 1555 are still fairly common for things like map tiles where you don't really need more bit depth and file size (or dirty memory size) is at a premium.
In a modern byte-addressed computer, the order of the bits is inconsequential because the bits in a byte are conceptually parallel. They don't have an order; they all come together, as it were.
As long as all of the components in a computer system agree on which bit represents which power of two, the physical ordering is not important. By convention, we usually label the bits from 0 to 7 with 7 being the most significant bit because that means the labels tell us which power of 2 the bit represents. By convention, we also tend to put the most significant bit on the left because that's how the Arabic number system works. The physical ordering of the flip flops in a computer doesn't have to match this at all, although putting them in ascending or descending order of significance is probably a good idea.
With those conventions in mind, I find your diagram confusing. If it's meant to represent the bits being transmitted over a serial line, it's somewhat misleading because there is nothing there that says when I receive bit G3 I have to put it in the most significant bit of a byte. I could just as easily fill in each byte from least significant to most significant bit and end up with a format like @mgriebling showed, which I guess is BGR 565.
All that said, I am not a fan of C bitfields. In fact, during my career programming in C, I never used them. I've never had to program in a situation where memory is that critical, and if you have to translate to an over-the-wire protocol, bitwise operators and shifts offer more control.
Bitfields are not essential, but they let the compiler do some of the grunt work that otherwise needs to be done by the programmer. Compilers (usually) make fewer mistakes than programmers, and any code that doesn't have to be written is guaranteed not to have bugs in it.
That being said, bitfield functions (perhaps in the FixedWidthInteger protocol extension as shown above) would do the job almost as easily although not quite as elegantly.
This is really not about efficiency (although the compiler could produce more efficient code) but rather about clarity of the code and hiding unimportant details, for example the bit assignment of the red graphic bits. With high-level, well-defined bitfields, this is all possible (and I'm not talking about C bitfields, which are basically just macros).
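For instance, a thin wrapper over the extension above keeps the bit assignments out of calling code entirely; the 5-6-5 layout and names here are just for illustration:

struct Pixel565 {
    var raw: UInt16 = 0

    // Callers read and write channels by name; only this type knows the bit positions.
    var red:   Int { get { raw.get(range: 11...15) } set { raw.set(range: 11...15, with: newValue) } }
    var green: Int { get { raw.get(range: 5...10) }  set { raw.set(range: 5...10,  with: newValue) } }
    var blue:  Int { get { raw.get(range: 0...4) }   set { raw.set(range: 0...4,   with: newValue) } }
}

var pixel = Pixel565()
pixel.red = 21
pixel.green = 51
pixel.blue = 14
// pixel.raw == 0b10101_110011_01110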
It's not representing serial transmission, it's showing the conceptual mapping between MSB ... LSB and the 0...7 "order" arbitrarily defined by the format. The bits are stored where they are, you don't get to put them into bytes as you receive them, and if you load a 16b pixel, you have to deal with the fact that the bits representing one of the fields are discontiguous, no matter how loudly you yell "bit order isn't real".
I think we are talking at cross purposes. C bit fields are definitely not like macros and your FixedWidthInteger extension is not like C bit fields (this is a good thing IMO). C bit fields are declared in structs to narrow the size of a member. For example, I might naively define the BGR 565 pixel as
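// One plausible shape for that naive declaration (reconstructed from the description
// below; the exact field types are an assumption): 5/6/5 fields, blue declared first.
struct BGR565 {
    unsigned int blue  : 5;
    unsigned int green : 6;
    unsigned int red   : 5;
};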
Unfortunately, I've already made a mistake, at least on an Intel Mac using clang. It's up to the compiler whether it puts the first bit field in the least significant bits or the most significant bits, and the above declaration has put red at the top and blue at the bottom on my machine with my compiler. Furthermore (as of C11), the compiler can choose to use a 16-bit-sized value to put those fields in because they will fit, but it is allowed to choose a bigger-sized value if it likes.
Sorry about the confusion. In the post above, I talked about Ada bit fields and the rest of my discussion continued with that. Others chimed in about C bit fields so, yes, that was confusing.
Sorry, I don't understand what you are saying. Is bit 0 the most significant bit or the least significant bit? If bit 0 is the least significant bit, you have just drawn the pixel with the bits in reverse order according to normal convention.
If you are saying bit 0 is the most significant bit, then it still makes sense because it's clearly a little-endian-formatted RGB value. That is, unless you are claiming that the bits of the individual fields are stored in reverse order.
Their whole point is that there is no "normal convention". Some formats use one order, some use another. In any event, this is entirely tangential, so... meh