SE-0425: 128-bit Integer Types

Hello Swift community,

The review of SE-0425 "128-bit Integer Types" begins now and runs through March 19th, 2024.

Reviews are an important part of the Swift evolution process. All review feedback should be either on this forum thread or, if you would like to keep your feedback private, directly to me as the review manager via the forum messaging feature or email. When contacting the review manager directly, please put "SE-0425" in the subject line.

Try it out

While these will be implemented in the standard library, for the purposes of experimentation, they are on a branch of swift-numerics. To try these types out, use the branch int128:

.package(
  url: "",
  branch: "int128"
),

and add Int128Demo as a dependency:

.target(name: "MyTarget", dependencies: [
  .product(name: "Int128Demo", package: "swift-numerics")
])

This branch requires a recent nightly toolchain to build.

What goes into a review?

The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of Swift. When writing your review, here are some questions you might want to answer:

What is your evaluation of the proposal?
Is the problem being addressed significant enough to warrant a change to Swift?
Does this proposal fit well with the feel and direction of Swift?
If you have used other languages or libraries with a similar feature, how do you feel that this proposal compares to those?
How much effort did you put into your review? A glance, a quick reading, or an in-depth study?

More information about the Swift evolution process is available at

the apple/swift-evolution repository on GitHub.

Thank you,
Doug Gregor
Review Manager


The [U]Int128 types conform to Codable; however, many existing encoders and decoders do not support such large integer types. They are therefore encoded as a pair of 64-bit integers. This pair is always in little-endian order, regardless of the endianness of the architecture.

I think this is probably the best representation possible for JSON and plists, but perhaps we should let the coders decide? That is, I think we should add the following protocol requirements to the standard library:

public protocol KeyedEncodingContainerProtocol {
  mutating func encode(_ value: Int128, forKey key: Key) throws
  mutating func encode(_ value: UInt128, forKey key: Key) throws
  mutating func encodeIfPresent(_ value: Int128?, forKey key: Key) throws
  mutating func encodeIfPresent(_ value: UInt128?, forKey key: Key) throws
  // And matching changes to KeyedEncodingContainer<Key>
}

public protocol KeyedDecodingContainerProtocol {
  func decode(_ type: Int128.Type, forKey key: Key) throws -> Int128
  func decode(_ type: UInt128.Type, forKey key: Key) throws -> UInt128
  func decodeIfPresent(_ type: Int128.Type, forKey key: Key) throws -> Int128?
  func decodeIfPresent(_ type: UInt128.Type, forKey key: Key) throws -> UInt128?
  // And matching changes to KeyedDecodingContainer<Key>
}

public protocol UnkeyedEncodingContainer {
  mutating func encode(_ value: Int128) throws
  mutating func encode(_ value: UInt128) throws
  mutating func encode<T: Sequence>(
    contentsOf sequence: T
  ) throws where T.Element == Int128
  mutating func encode<T: Sequence>(
    contentsOf sequence: T
  ) throws where T.Element == UInt128
}

public protocol UnkeyedDecodingContainer {
  mutating func decode(_ type: Int128.Type) throws -> Int128
  mutating func decode(_ type: UInt128.Type) throws -> UInt128
  mutating func decodeIfPresent(_ type: Int128.Type) throws -> Int128?
  mutating func decodeIfPresent(_ type: UInt128.Type) throws -> UInt128?
}

public protocol SingleValueEncodingContainer {
  mutating func encode(_ value: Int128) throws
  mutating func encode(_ value: UInt128) throws
}

public protocol SingleValueDecodingContainer {
  func decode(_ type: Int128.Type) throws -> Int128
  func decode(_ type: UInt128.Type) throws -> UInt128
}

And give them all default implementations that encode the two halves of the value in the way you describe. That way if some specific coder has a good way to splat 128 bits of integer data straight onto the disk, it’s free to do so, but by default coders will represent them as two 64-bit integers.
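A sketch of what such a default implementation might look like. Since the real [U]Int128 types aren't in release toolchains yet, this uses a hypothetical `UInt128Pair` struct as a stand-in for the two-word layout; the encoding shape (an unkeyed container holding the low word, then the high word) is one plausible reading of the proposal, not the confirmed wire format:

```swift
import Foundation

// Hypothetical stand-in for UInt128 as a pair of 64-bit words.
struct UInt128Pair: Equatable {
    var low: UInt64
    var high: UInt64
}

// A default Codable implementation in the spirit of the proposal:
// encode the two halves, low word first, into an unkeyed container.
extension UInt128Pair: Codable {
    init(from decoder: Decoder) throws {
        var container = try decoder.unkeyedContainer()
        low = try container.decode(UInt64.self)
        high = try container.decode(UInt64.self)
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.unkeyedContainer()
        try container.encode(low)
        try container.encode(high)
    }
}

// Round-trip through JSONEncoder, which has no native 128-bit support.
let value = UInt128Pair(low: .max, high: 1)
let data = try! JSONEncoder().encode(value)
let decoded = try! JSONDecoder().decode(UInt128Pair.self, from: data)
```

A coder that can splat 128 bits directly would simply override the dedicated `encode(_: UInt128)` requirement instead of falling back to this pairwise form.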



I second @beccadax's suggestion regarding Codable - I like having a good path to a future where 128-bit integers are just like all the other integers, not hampered by weird behaviours for legacy reasons.

Sadly my most common pain-point for things-which-are-intrinsically-128-bit-integers-but-are-not-implemented-that-way is Duration, which this proposal won't fix. But at least it will make my hacks and workarounds a little less painful, by letting me easily translate from Duration to UInt128 instead of having to use Double and all the precision problems that entails.
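For a concrete sense of the workaround being described, here is a sketch of totalling a duration's attoseconds as a 128-bit value held in a (high, low) pair of `UInt64`s, avoiding `Double` and its precision loss. The component values are made up for illustration; `multipliedFullWidth(by:)` and `addingReportingOverflow(_:)` are existing `FixedWidthInteger` API:

```swift
// Made-up duration components: 5 s + 0.25 s of attoseconds.
let seconds: UInt64 = 5
let attoseconds: UInt64 = 250_000_000_000_000_000

// seconds × 10^18 can exceed 64 bits, so use the full-width multiply,
// which yields the 128-bit product as a (high, low) word pair.
var (high, low) = seconds.multipliedFullWidth(by: 1_000_000_000_000_000_000)

// Add the attosecond remainder into the low word, carrying into the high word.
let (sum, carry) = low.addingReportingOverflow(attoseconds)
low = sum
if carry { high += 1 }
// 5.25 s → 5_250_000_000_000_000_000 attoseconds, which still fits in `low`.
```

With a native `UInt128` this entire dance collapses to ordinary multiplication and addition.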

But there are other cases too where I'll be able to use 128-bit integers instead of Doubles (or BigInts), which will improve performance and avoid the problems & complexity of floating-point.

Formatting & localisation

What about support in Foundation itself? e.g. IntegerFormatStyle (and a bunch of other types & protocols) have hard-coded enumerations of the integer types, and so will need 128-bit versions manually added.

Localised formatting currently doesn't work correctly even for the existing UInt64 type… although this is partly fixed in the new new Foundation, that has no ETA on actually shipping on Apple platforms. Will the old new Foundation be fixed in the interim?


Good to see this! :+1:t2:

The actual API of the types is uninteresting

I don’t agree at all with that statement.

For example, the Int128/UInt128 types look exactly like the existing 8-, 16-, 32- and 64-bit integer types, but differ in ways that aren't mentioned and that are, to me, very much interesting for such a fundamental type, e.g.

  • division and multiplication don't seem to be supported? Neither truncating nor overflowing variants (with or without carry/remainder)

  • are all the same binary operations supported on the type? Bit shifting, rotations? Init with bitPattern.

  • endianness? Is swapping endianness supported?

  • conversions to/from other types, e.g. Double?

I assume some of these are included in the listed protocol conformance but the majority isn’t.

My main point is that I’m missing clearly spelled out differences in the proposal between the new type and the existing ones, and that I would like for that set to be as small as reasonably possible.

Apart from that it’s a strong +1 from me!
A quick read


  • is the new type considered a primitive/trivial type (I forgot the actual name for it)?

  • does it work the same way as the existing types when interacting with the Swift pointer/buffer APIs? What are the edge-cases there?


+1 from me on the overall addition!

This pair is always in little-endian order, regardless of the endianness of the architecture.

The little-endian-ness of this decision seems off to me — traditionally, anything serialized should generally appear in network byte order for interoperability. Sure, everything can byte swap nowadays, but I would argue most would simply assume to see serialized data with the high bytes first?
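To make the convention concrete: the existing `bigEndian` property is how Swift code puts a fixed-width integer into network byte order today (the value below is arbitrary):

```swift
// Traditional network byte order for a 64-bit word.
let host: UInt64 = 0x0102_0304_0506_0708
let wire = host.bigEndian  // identity on big-endian hardware, byte swap on little-endian

withUnsafeBytes(of: wire) { bytes in
    assert(bytes.first == 0x01)  // the high byte travels first
}
```

Whatever order the serialized pair uses, a conforming `[U]Int128` would presumably get the same `bigEndian`/`littleEndian` API from `FixedWidthInteger`, so callers could normalize explicitly.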

+1 to @beccadax's proposal to let coders decide, and I would add to that the JSON coders should offer options for how this should be represented, decreasing friction with existing systems that might prefer to receive the number as a decimal string for instance.


We would like to be layout-compatible with _BitInt(128) on all platforms, but given the currently-murky state of the layout of those types, it makes the most sense to guarantee compatibility with the widely-used but non-standard __[u]int128_t and find mechanisms to make _BitInt(128) work once its ABI has been finalized on Swift's targeted platforms.

What, if any, compatibility do you have in mind for Windows/MSVC++'s __int128 and unsigned __int128 types? They have about the same level of support on Windows as __[u]int128_t have on POSIX-y systems.

[U]Int128 will not bridge to NSNumber.

How will it bridge to Objective-C as a scalar? As __[u]int128_t? I assume so given the paragraph I quoted above, but it's not explicitly stated. Will it bridge on ARM64_32?


All of these operations are supported (rotation isn't defined for any of the normal integer types in the stdlib, but is defined in the IntegerUtilities module of Numerics). They fall out of the listed protocol conformances:

  • division and multiplication, including truncating and overflowing variants, with and without carry/remainder, are protocol requirements and/or extensions of Numeric, BinaryInteger, and FixedWidthInteger
  • the various bitwise operations (excluding rotation) are protocol requirements and/or extensions defined on BinaryInteger and FixedWidthInteger.
  • endianness operations are defined on FixedWidthInteger
  • conversions to/from integer types are requirements of Numeric and BinaryInteger. Conversions to/from binary floating-point types are defined on BinaryInteger.

Yes, it's POD, and yes, it acts exactly like any other POD type w.r.t. pointers and buffers.
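Since the operations above fall out of protocol conformances, generic code constrained to `FixedWidthInteger` would apply unchanged to a conforming `[U]Int128`. A small sketch, shown with `UInt64` because it compiles on any current toolchain (the function name `summary` is made up for illustration):

```swift
// All four operations here are FixedWidthInteger requirements, so any
// conforming type — including a future [U]Int128 — gets them for free.
func summary<T: FixedWidthInteger>(_ x: T, _ y: T) -> (product: T, overflow: Bool, swapped: T, zeros: Int) {
    let (p, o) = x.multipliedReportingOverflow(by: y)
    return (p, o, x.byteSwapped, x.leadingZeroBitCount)
}

let r = summary(UInt64(1) << 32, UInt64(1) << 32)
// (1 << 32)² overflows UInt64; byteSwapped moves the single set byte
// from position 4 to position 3 (0x...01_00000000 → 0x...01000000).
```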


To be clear, they are not in little-endian (nor big-endian) byte order. They are a pair of [U]Int64s, which are then encoded however the encoder handles them (maybe as strings, maybe in little-endian order, maybe in big-endian order, maybe something else). We could swap them around, but it makes more sense to adopt @beccadax's proposal anyway.


Foundation changes are outside the scope of proposals before the LSG (Foundation has its own API review process). I would expect that we'd be happy to consider a proposal to add support once the standard library type has been approved.

The "new Foundation" shipped in Apple's OS releases last fall, and bug fixes that land upstream on the open-source repository will generally forward into the OS releases.


Great! Thanks!

I think it’s the way the proposal is worded with regard to protocol conformances that led me in that direction. I see now that the FixedWidthInteger conformance covers most of them.

I would suggest a section where all the differences from the existing fixed-width integer types are listed explicitly, such as the differences in Codable and NSNumber behaviour.

I’m also not sure about the value in specifying it in terms of a __uint128_t in Clang and maybe more so the part about not doing _BitInt(), it feels like implementation details are leaking through.

Again, looking forward to having this in Swift


It's not "specified in terms of" either one. This is a Swift type; it is not defined as a C type. It is layout-compatible with __[u]int128_t on 64-bit platforms, and will generally be layout-compatible with _BitInt(128) as well, assuming C compiler devs and platform ABI maintainers manage to fix the current situation.

We're giving it the ABI that we believe is most appropriate. One consideration in that decision is compatibility with 128-bit C types, but it is not the only consideration.


It's clang's view of imported C headers that matters. I believe that clang supports these types on Windows and that they are 16-byte aligned. If so, we would simply import them automatically.

That's right. If/when _BitInt(128) gets its ABI fixed, we could look into switching to the "standard" spelling for platforms where it works (but it might have to be behind a C23 check, while __[u]int128_t is unconditionally available where supported, so maybe not). arm64_32 has __[u]int128_t, so there's no issue there.


Will the 128-bit integer types conform to the SIMDScalar protocol?

Codable may need default implementations for RawRepresentable<{U}Int128>, so that enums and option sets are supported.


There is no existing Swift-supported target with a significant set of 128-bit integer operations on SIMD vectors, nor are 128-bit integers commonly used for data types frequently represented with small vectors (like position, velocity, color, ...), so there is not much reason to provide this conformance. There is no technical limitation preventing it, however, so it can be added in the future if a use case ever comes forward.


I think the answer is yes, but for clarity, since {U}Int128 is a fixed-width integer type, will it have the automagic power to be the raw type of an enum with this proposal?
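For reference, here is the shape of that "automagic" derivation with `UInt64`, the widest raw type available today; if the same machinery extends to {U}Int128 (which the proposal doesn't explicitly confirm), the identical pattern would allow raw values beyond 64 bits. The enum itself is made up for illustration:

```swift
// Raw-value enum over the widest existing fixed-width integer type.
// The compiler synthesizes init?(rawValue:) and the rawValue property.
enum Mask: UInt64 {
    case low  = 1
    case high = 0x8000_0000_0000_0000  // top bit of a 64-bit word
}
```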


As a minor aside: it's not strictly necessary to add [U]Int128 to the Codable protocols in order for encoders and decoders to treat them specially. Like all other types that don't fall into the "Codable primitives" set, they can be handled in the generic method variants by inspecting the generic type.

Not at all to say we shouldn't do this, just to pose the question of whether we consider these types foundational enough to be considered "primitives" that every Encoder/Decoder should theoretically handle. (e.g., Float and Double are primitives, but Float80 is not, because we didn't expect most encoders and decoders to have appropriate representations for Float80 by default)

I don't feel strongly one way or another, but if an encoder does have a way to splat 128-bit values directly into a stream, it at least isn't blocked on the addition of expanding these methods.


What's the best way to conditionally use [U]Int128 when available? (to support older Swift versions)
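One possible approach (a sketch, not an official answer): gate on the compiler version with `#if compiler`, and provide a two-word fallback so the same API compiles everywhere. Note that on Apple platforms the native types presumably also carry OS availability, so an additional `if #available` check may be needed there; that part is an assumption:

```swift
#if compiler(>=6.0)
// Assumed: a toolchain whose stdlib ships [U]Int128.
func wideSum(_ a: UInt64, _ b: UInt64) -> (high: UInt64, low: UInt64) {
    let s = UInt128(a) &+ UInt128(b)
    return (UInt64(truncatingIfNeeded: s >> 64), UInt64(truncatingIfNeeded: s))
}
#else
// Fallback for older toolchains: carry the overflow bit by hand.
func wideSum(_ a: UInt64, _ b: UInt64) -> (high: UInt64, low: UInt64) {
    let (low, carry) = a.addingReportingOverflow(b)
    return (carry ? 1 : 0, low)
}
#endif

let r = wideSum(.max, 1)  // UInt64.max + 1 carries into the high word
```

Both branches expose the same signature, so call sites don't need any conditionals of their own.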


Is there another (better?) example that follows Float80 behaviour?

<Assuming Float80 is not deprecated and is useful for something>
This is a good example. If Float80 is not Codable (e.g. because 80 bits is too much for the standard coders?), why do we go to the trouble of making Int128 Codable? Or, vice versa, if Int128 is Codable, shouldn't we do the same for Float80?
</Assuming Float80 is not deprecated and is useful for something>

We should also "spell out" / consider a simpler approach: make Int128 Codable, where values that fit into 64 bits encode normally and values outside the 64-bit range simply fail to encode. This would be in line with the existing precedent of Float/Double.nan, which is not codable to/from JSON by default.

Float80 doesn't conform to Codable mostly because there hasn't really been any demand for it. It's a very seldom-used type with very niche applications, and even less so in overlap with the domain of the type of code that uses Codable.

There's nothing preventing Float80 from adopting Codable — we would just need to pick semantics for it in terms of the existing primitives, and it would "just work".

"In terms of the existing primitives" is the key here: the concrete encode[U]Int[8/16/32/64…]/encodeFloat/… requirements define a set of types that every encoder and decoder must be able to handle "natively" in some way, even if they need to make some affordances for certain values (e.g., JSON supporting nan indirectly). Float80, then and now, is not such a crucial type that it warrants an encodeFloat80() method.

Float16 comes to mind as a useful type which does adopt Codable, but wasn't added as a primitive (e.g., there's no encodeFloat16). It's not a perfect analogue to the [U]Int128 types because its domain is smaller, so at least every Float16 value can be encoded as a Float/Double.

There's no reason for [U]Int128 to not conform to Codable—it's as easy as encoding an unkeyed container of two [U]Int64—the question is whether it should be a primitive, with explicitly-expected support via the methods that @beccadax brings up. We can definitely make it Codable without introducing those methods.

There's no real need or benefit to do this — because encoders already support 64-bit values, and support unkeyed containers, there isn't an actual limitation on encoding [U]Int128. (Terminology-wise, I also wouldn't say that Float/Double.nan aren't Codable; it's expected that the full range of representable values be handled for primitives. JSONEncoder specifically requires you to tell it how you want to encode these values, though it could have also made some decision for you by default.)
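To illustrate the JSONEncoder point: `.convertToString` is the existing opt-in strategy for the non-conforming values, and the string spellings below are caller-chosen, not defaults:

```swift
import Foundation

// JSONEncoder can't represent nan/infinity in plain JSON, but the values
// are still Codable: you opt into a lossless string form instead.
let encoder = JSONEncoder()
encoder.nonConformingFloatEncodingStrategy =
    .convertToString(positiveInfinity: "inf", negativeInfinity: "-inf", nan: "nan")

let data = try! encoder.encode([Double.infinity, -.infinity, .nan])

// Decoding needs the matching strategy with the same spellings.
let decoder = JSONDecoder()
decoder.nonConformingFloatDecodingStrategy =
    .convertFromString(positiveInfinity: "inf", negativeInfinity: "-inf", nan: "nan")
let back = try! decoder.decode([Double].self, from: data)
```

An encoder could take the same options-based approach for [U]Int128, e.g. offering a decimal-string representation for systems that choke on large numbers.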
