Three questions about the bitWidth type and instance properties

There are two properties called bitWidth on FixedWidthInteger. One is a type property:

/// The number of bits used for the underlying binary representation of
/// values of this type.
///
/// The bit width of an `Int` instance is 32 on 32-bit
/// platforms and 64 on 64-bit platforms.
public static var bitWidth: Int { get }

The other one is an instance property:

/// The number of bits in the binary representation of this value.
public var bitWidth: Int { get }

This instance property is listed here as a default implementation of the above type property.

[1] Can a type property have a default implementation that is an instance property?

Also, it seems to me that the fact that this is an instance property rather than a type property, together with its current documentation, makes it easy to (mis)interpret as “The minimum number of bits required to represent this value in binary form”, which would mean:

a.bitWidth == type(of: a).bitWidth - a.leadingZeroBitCount
// Or perhaps more intuitively:
a.bitWidth == String(a, radix: 2).count

But as it turns out, the instance property is always the same as the type property, i.e.

a.bitWidth == type(of: a).bitWidth
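
A quick check confirms this; here is a minimal sketch using Int8 so that the widths are platform-independent:

let a: Int8 = 5
Int8.bitWidth              // 8
a.bitWidth                 // 8, not 3
String(a, radix: 2).count  // 3, the “minimum width” reading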

So, given that the type property is (for some reason) duplicated as an instance property, [2] why hasn’t the formulation in the documentation for the type property simply been reused for the instance property? It would be much clearer:

/// The number of bits used for the underlying binary representation of
/// values of this type.
public var bitWidth: Int { get }

And, finally, [3] what motivates the instance property when it’s already available (more logically) as a type property?

I mean, there is, for example, a similar type property, .significandBitCount (on BinaryFloatingPoint). Not only has it not been duplicated as an instance property, but there is even an instance property called .significandWidth which is very similar to my misinterpretation of .bitWidth above:

/// The number of bits required to represent the value's significand.
/// ...
public var significandWidth: Int { get }

That is, the result of this instance property changes depending on the particular value / instance, just as you’d expect from an instance property.

Given this, and returning to FixedWidthInteger, I think it would have made more sense if the type property .bitWidth had been called .bitCount and the instance property .bitWidth had meant “the minimum number of bits required to represent the value in binary form”.

The instance property is required for every BinaryInteger. A BinaryInteger can be a fixed-width integer, or each instance of a conforming type could have a different width (which would be the case for a BigInt type). Therefore, this cannot be a static property for obvious reasons.

The static property is required for FixedWidthInteger types. As the name suggests, conforming types have a fixed bit width. Therefore, the static property bitWidth is always equal to the instance property bitWidth. We are therefore capable of writing a default implementation so that end users only need to implement one of these properties. It is important to have the static property in addition to the instance property, because we need to write generic code that considers the bit width of fixed-width integer types without instantiating an integer every time we do so.
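
To illustrate that last point, here is a minimal sketch of such generic code (maxHexDigits is a made-up name for this example); it needs the width of a fixed-width integer type without ever having an instance in hand:

func maxHexDigits<T: FixedWidthInteger>(_ type: T.Type) -> Int {
    // Four bits per hexadecimal digit, rounded up.
    (T.bitWidth + 3) / 4
}

maxHexDigits(UInt16.self)  // 4
maxHexDigits(Int64.self)   // 16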

The documentation is correct as-is and very deliberately chosen:

  • The instance bitWidth is literally the number of bits used for the underlying binary representation of the value. The underlying binary representation of an arbitrary-width integer may be sign-magnitude rather than two’s complement; the number of bits used to represent the value may differ between these and other exotic underlying representations.
  • The documentation stresses “underlying binary representation” because most other methods (such as bitwise AND, &) have semantics that operate on the two’s complement representation of the value regardless of the actual underlying representation. This is important because an expression such as 12 & 24 must give the same result whether it’s of type Int or BigInt (see the sketch after this list).
  • (By contrast, for a fixed-width type, the bit width is fixed, by definition, and we don’t need to wonder about the semantics of that property due to differences in underlying representation.)
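
Here is a minimal sketch of that guarantee using only standard-library types; a BigInt type would be expected to produce the same results:

let a: Int = 12 & 24    // 8
let b: UInt8 = 12 & 24  // 8
// Two’s-complement semantics also pin down results involving negatives:
let c: Int8 = -4 & 7    // 4, because -4 is ...11111100 in two’s complement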

The names of significandBitCount and significandWidth are also deliberately chosen for FloatingPoint:

  • Some floating-point types have an implicit leading significand bit and others do not, so significandBitCount may or may not be equal to the number of bits in memory used to store the significand. Therefore, it’s not the bit width of the significand as represented in memory.
  • You’ll notice that significandWidth is defined as “the number of bits required to represent the value’s significand” and not “the minimum number of bits required to represent the value’s significand”: some floating-point types permit multiple representations of the same value, and significandWidth is specifically the number of bits required to represent the significand as it is actually represented, not the minimum possible number of bits in a significand of that value. It is, literally, the width of the significand, not the bit width (because there may be leading zero bits in the significand), not the minimum width, and not the minimum bit width.
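
A small illustration of the distinction, with the values I’d expect for Double noted in comments:

Double.significandBitCount  // 52: a property of the type, not of any value
(1.0).significandWidth      // 0: the significand 1.0 needs no fractional bits
(1.5).significandWidth      // 1: the significand is 1.1 in binary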

For a fixed-width integer, it’s trivially easy to compute the minimum number of bits required to represent the value in binary form: it’s bitWidth - leadingZeroBitCount. But when we write “bit width” in Swift, we’re talking about the number of bits actually occupied in memory, and this usage is consistent throughout.
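
For instance, a quick check of that identity on a positive value:

let n: UInt8 = 0b0001_0110          // 22
n.bitWidth - n.leadingZeroBitCount  // 5
String(n, radix: 2).count           // 5 (“10110”)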

Thank you for the very informative reply. It all makes sense. But I suspect I’m not the only one who will have to check the actual behavior of the .bitWidth instance property in order to know which of the two possible interpretations of its documentation (at least, the two I saw before your explanation) it actually implements.

If there had been¹ an instance property that implemented the other (mis)interpretation, i.e. “The minimum number of bits required to represent the value in binary form”, what would it have been called?

¹ Since it isn’t trivial to compute for an arbitrary-width integer, it could have been an instance property of BinaryInteger.

There is one, but it’s not public :) It’s spelled _binaryLogarithm() + 1 and it’s available on BinaryInteger.

(For reasons beyond the scope of our discussion here, _binaryLogarithm() is generally more useful than _binaryLogarithm() + 1.)

The method is probably useful enough that it could pass the bar for becoming a public API, but I dread the bikeshedding and haven’t proposed it. For end users, though, it’s not really more convenient than bitWidth - leadingZeroBitCount.
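
For fixed-width integers, that end-user spelling could be wrapped up as follows. This is only a sketch: minimumBitWidth is a hypothetical name, not a standard-library API, and the extension is restricted to unsigned types to sidestep what “minimum width” should mean for negative values.

extension FixedWidthInteger where Self: UnsignedInteger {
    /// Hypothetical helper: the minimum number of bits required to
    /// represent this value in binary form (0 for zero, by this convention).
    var minimumBitWidth: Int { bitWidth - leadingZeroBitCount }
}

(22 as UInt8).minimumBitWidth  // 5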

This instance property is listed here as a default implementation of the above type property.

[1] Can a type property have a default implementation that is an instance property?

The documentation nonsensically(?) has the type-level property listing the instance-level version as its default implementation, while the actual code (in Apple’s GitHub repository) has the default implementation going the other way around.

This is definitely a bug in the documentation—thanks for pointing that out!
