128-bit operations in Swift?

I'd like to implement one of the pseudorandom number generators in the PCG family that has a 128-bit state (for a 64-bit output) and whose implementation is described using 128-bit operations.

What are my options for doing this in Swift? (I see no obvious 128-bit uint types or operations, but perhaps there's some less-obvious way to do it via SIMD or something like DoubleWidth?)

It should work (ideally as efficiently as possible) on common recent iOS and macOS architectures.


If I have no option but to roll my own little struct UInt128 { … } with &+, &*, >>, &, |, ^, etc., what are your recommendations/thoughts?

The easiest thing currently is probably to define a small C header library and then wrap those operations in some Swift sugar, since clang has the __uint128_t extension:

// uint128.h
#include <stdint.h>

// The type that you'll extend to define your Swift sugar.
struct UInt128 {
  uint64_t lo;
  uint64_t hi;
};

// Helper function to go from the Swift-visible type to __uint128_t.
static inline __attribute__((always_inline))
__uint128_t swiftToC(struct UInt128 swift) {
  return (__uint128_t)swift.hi << 64 | swift.lo;
}

// Helper function to go from __uint128_t to the Swift-visible type.
static inline __attribute__((always_inline))
struct UInt128 cToSwift(__uint128_t c) {
  return (struct UInt128){ .lo = (uint64_t)c, .hi = (uint64_t)(c >> 64) };
}

// The operations are also static inline, so the header can be included
// from multiple translation units without duplicate-symbol errors.
static inline struct UInt128 uint128_add(struct UInt128 a, struct UInt128 b) {
  return cToSwift(swiftToC(a) + swiftToC(b));
}

static inline struct UInt128 uint128_sub(struct UInt128 a, struct UInt128 b) {
  return cToSwift(swiftToC(a) - swiftToC(b));
}

static inline struct UInt128 uint128_mul(struct UInt128 a, struct UInt128 b) {
  return cToSwift(swiftToC(a) * swiftToC(b));
}

static inline struct UInt128 uint128_and(struct UInt128 a, struct UInt128 b) {
  return cToSwift(swiftToC(a) & swiftToC(b));
}

static inline struct UInt128 uint128_or(struct UInt128 a, struct UInt128 b) {
  return cToSwift(swiftToC(a) | swiftToC(b));
}

static inline struct UInt128 uint128_xor(struct UInt128 a, struct UInt128 b) {
  return cToSwift(swiftToC(a) ^ swiftToC(b));
}

static inline struct UInt128 uint128_rshift(struct UInt128 a, unsigned int b) {
  return cToSwift(swiftToC(a) >> b);
}

static inline struct UInt128 uint128_lshift(struct UInt128 a, unsigned int b) {
  return cToSwift(swiftToC(a) << b);
}

and then:

// UInt128.swift
extension UInt128 {
  static func &+(a: UInt128, b: UInt128) -> UInt128 { return uint128_add(a, b) }
  // ...
}

This saves you from needing to implement the arithmetic by hand. Obviously, you can also implement it all in Swift without too much fuss, and the codegen will be pretty decent; it's just slightly more work (and more opportunity to get carry propagation subtly wrong).
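For reference, the by-hand version is not much code. Here is a minimal sketch (the type name and layout are mine, not a proposed API) of wrapping add and multiply on two 64-bit halves, using the standard library's overflow-reporting and full-width primitives:

```swift
// A minimal sketch of hand-rolled 128-bit wrapping arithmetic.
// `UInt128Sketch` is a hypothetical name, not a standard library type.
struct UInt128Sketch {
    var hi: UInt64
    var lo: UInt64

    static func &+ (a: UInt128Sketch, b: UInt128Sketch) -> UInt128Sketch {
        // Add the low halves, then propagate the carry into the high half.
        let (lo, carry) = a.lo.addingReportingOverflow(b.lo)
        let hi = a.hi &+ b.hi &+ (carry ? 1 : 0)
        return UInt128Sketch(hi: hi, lo: lo)
    }

    static func &* (a: UInt128Sketch, b: UInt128Sketch) -> UInt128Sketch {
        // Full 64x64 -> 128 product of the low halves...
        let (high, low) = a.lo.multipliedFullWidth(by: b.lo)
        // ...plus the cross terms, which only affect the high half mod 2^128.
        let hi = high &+ a.lo &* b.hi &+ a.hi &* b.lo
        return UInt128Sketch(hi: hi, lo: low)
    }
}
```

The carry handling above is exactly the kind of detail that's easy to get subtly wrong, which is why the C route is less error-prone.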

Another low-effort option would be to crib DoubleWidth<UInt64> from validation-test/Prototypes/. It's been a while since I've looked at it, though, and I can't remember how aggressively it's been optimized.

The lowest-effort, and probably best, solution (even if less fun) is to just wrap the C implementation of PCG =)


Hehe, yes of course. (But part of the goal here is to write it all in Swift.)

What are the reasons for not including a UInt128 in the Standard Library though?

I mean:

The goal of the Swift project is to create the best available language for uses ranging from systems programming, to …

I wouldn't be too surprised if it was already on @scanon's personal roadmap somewhere. Along with other nice numerical things.


I do know that there is a BigInt library. I personally used it to implement the RSA algorithm for a Discrete Mathematics course a few years ago, back in the Swift 1 or 2 days (it's been so long I can't even remember which version of Swift it was). It looks like that library has had a significant number of updates since then; you can check it out to see whether it might work for you, so you don't have to implement your own 128-bit integer.

If you're really going for performance, then that library may or may not meet your needs. It was pretty slow when I used it, but that was a long time ago, and both Swift and that library have evolved a lot since then.

As with most missing features, we simply haven't gotten to it.

FWIW, UInt128 is not a type that is often (ever?) needed in systems programming (in over a decade working on math libraries for Apple's OS team, I never needed it). It would be wonderful to have for stuff like PCG and when UInt64 isn't quite big enough, but it's definitely not a dealbreaker for systems work. Actually, does any systems language other than Rust have it? =)

You definitely do not want to use a general-purpose bigint library for this task.


Technically, the int type in Python 3 is "unbounded". While not specifically 128-bit (there is still a maxsize based on the system word size), ints in Python 3 have arbitrary precision.

There are tons of languages with an arbitrary-precision int type. There are relatively few languages with a 128-bit int type, and fewer still that are suitable for use as systems languages (Python definitely is not a systems language, nor does it aspire to be).

Well, C/C++ (on 64-bit targets I guess)?

Only as a compiler-specific extension. It's not in the language.

On the other hand, Objective-C, being de facto defined by the Clang implementation, has it. (j/k)

In general, I agree, 128 bits isn't used for "math" but it certainly is used for "counting". I've heard of UInt128 being used on premium routers/switches (for byte/bit counters) and Sun thought 128 bits was the right number of bits for measuring the size of a ZFS storage pool. I'm sure there are other examples.

Personally, and when it comes to "math", I've used UInt128 for OS work, but only for intermediate calculations. If the OS provided an API like so, then I wouldn't have needed UInt128. If I might put my C hat back on for a second:

// return value = number * numerator / denominator
// *overflow = (number * numerator / denominator) >> 64
// *remainder = number * numerator % denominator
uint64_t umuldiv64(uint64_t number,
                   uint64_t numerator,
                   uint64_t denominator,
                   uint64_t *overflow,
                   uint64_t *remainder);
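For what it's worth, the core of that API can already be expressed in Swift on 64-bit integers. A hedged sketch (the function name just mirrors the C prototype above; the implementation is mine, and the `overflow` out-parameter is omitted because `dividingFullWidth(_:)`'s behavior is undefined when the quotient overflows 64 bits):

```swift
// Sketch of a umuldiv64-style helper using Swift's full-width primitives.
func umuldiv64(_ number: UInt64, _ numerator: UInt64,
               _ denominator: UInt64) -> (quotient: UInt64, remainder: UInt64) {
    // 64x64 -> 128 multiply, keeping the high half of the product.
    let product = number.multipliedFullWidth(by: numerator)
    // 128 / 64 divide with remainder. Undefined if the quotient
    // doesn't fit in 64 bits, so callers must bound their inputs.
    return denominator.dividingFullWidth(product)
}
```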

As @scanon suggested, this structure worked for me:

typedef union {
    __uint128_t value;
    struct {
        uint64_t lo;
        uint64_t hi;
    };
} UInt128;

UInt128
uint128_add(UInt128 a, UInt128 b) {
    UInt128 r = { a.value + b.value };
    return r;
}

Complete project: paiv/swift-pcg-random on GitHub (PCG Random for the Swift language)
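For readers following along: once you have 128-bit arithmetic for the state update, the rest of the generator the original question describes (the XSL-RR 128/64 output permutation of pcg64) needs only 64-bit operations. A pure-Swift sketch, with the state held as two 64-bit halves (the function name is mine, and the LCG state-advance constants are omitted; see the PCG reference implementation for those):

```swift
// XSL-RR 128/64 output permutation: xorshift the high half of the
// 128-bit state into the low half, then rotate right by the state's
// top six bits. (Sketch only; the state advance itself is not shown.)
func pcgXslRr(hi: UInt64, lo: UInt64) -> UInt64 {
    let folded = hi ^ lo   // low 64 bits of state ^ (state >> 64)
    let rot = hi >> 58     // top 6 bits of the 128-bit state
    // Rotate right; Swift's non-masking shifts make the rot == 0 case safe.
    return (folded >> rot) | (folded << ((64 &- rot) & 63))
}
```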


Now there is a third-party implementation of UInt128 that works pretty well for me:


One place where I found Jitsusama's UInt128 library useful was operations on IPv6 addresses, which are 128 bits. My firewall rule analyzer and CIDR trainer / calculator both use it heavily.

Apple's network framework has an IPv6Address type, but writing extensions on that did not meet all my needs.

I think it's coming into stdlib sooner rather than later.
