How to find rounding error in floating-point integer literal initializer?

My hope was that we would eventually "productize" the concept into a library type. Note that it cannot support normal integer operations, though; it is not designed to be a runtime arbitrary-precision type.

It's an oversight that the ABI isn't documented. It is quite simple:

// Terminology:
//   chunkWidth := sizeof(size_t) * CHAR_BIT.
//   bit i of an integer is the bit set by (1 << i)
struct IntLiteral {
  // Chunks of the integer value.
  // Chunks individually have native endianness, but the chunks
  // are in little-endian order; i.e. data[i] represents bits
  //   (i*chunkWidth) ..< ((i+1)*chunkWidth)
  // Negative numbers are stored in two's complement, and
  // the last chunk is sign-extended.
  // The length of this array is:
  //   (bitWidth + chunkWidth - 1) / chunkWidth
  size_t *data;
  // Bit 0 is whether the value is negative.
  // Bits 1..<8 are reserved and currently always set to 0.
  // Bits 8..<chunkWidth are the bit-width of the integer,
  // i.e. the minimum number of bits necessary to represent
  // the integer including a sign bit.
  size_t flags;
};
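
As a sketch of how a consumer might unpack these fields (the helper names and the CHUNK_WIDTH macro are mine, purely for illustration; they are not part of any runtime interface):

#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

#define CHUNK_WIDTH (sizeof(size_t) * CHAR_BIT)

// Illustrative accessors; not part of the actual ABI surface.
static inline bool intLiteral_isNegative(const struct IntLiteral *lit) {
  return lit->flags & 1;                  // bit 0: sign
}
static inline size_t intLiteral_bitWidth(const struct IntLiteral *lit) {
  return lit->flags >> 8;                 // bits 8..<chunkWidth: bit width
}
static inline size_t intLiteral_chunkCount(const struct IntLiteral *lit) {
  // Ceiling division, matching the array-length formula above.
  return (intLiteral_bitWidth(lit) + CHUNK_WIDTH - 1) / CHUNK_WIDTH;
}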

So suppose we wanted to represent the number -182716:

  182716 == 0x2c9bc == 010 1100 1001 1011 1100 (note leading zero)
 -182716            == 101 0011 0110 0100 0100 (note leading one)

In both cases, the bit width is 19. If we were using 4-bit chunks (purely for exposition), the chunks array for -182716 would look like the following; note the sign-extension of the final chunk:

  0100, 0100, 0110, 0011, 1101
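
Under the layout above, the flags word for this value would be (19 << 8) | 1: the bit width in bits 8 and up, plus the sign bit.

Here is a minimal sketch that reproduces the chunk decomposition (keeping the 4-bit chunk width for exposition; it assumes an arithmetic right shift for negative signed values, which every mainstream compiler provides):

#include <stdio.h>

int main(void) {
  const long value = -182716;
  const int bitWidth = 19;   // minimum bits, including the sign bit
  const int chunkWidth = 4;  // exposition only; really sizeof(size_t) * CHAR_BIT
  const int chunkCount = (bitWidth + chunkWidth - 1) / chunkWidth;

  // Emit the chunks in little-endian order. Shifting the signed value
  // and masking gives two's complement chunks, and the final chunk
  // comes out sign-extended automatically.
  for (int i = 0; i < chunkCount; i++) {
    unsigned chunk = (unsigned)(value >> (i * chunkWidth)) & 0xFu;
    printf("%u%u%u%u%s", (chunk >> 3) & 1u, (chunk >> 2) & 1u,
           (chunk >> 1) & 1u, chunk & 1u,
           i + 1 < chunkCount ? ", " : "\n");
  }
  return 0;
}

This prints 0100, 0100, 0110, 0011, 1101, matching the array above.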