Filling buffers using RandomNumberGenerator

I'd like to propose 2 minor additions to RandomNumberGenerator:

1. Filling buffers

Currently, RandomNumberGenerator has a single requirement, which returns a UInt64 of random bits. This is despite the fact that the runtime function, swift_stdlib_random, is able to fill a buffer with an arbitrary number of random bytes.

All of the underlying sources of randomness (arc4random_buf, BCryptGenRandom, the Linux getrandom syscall, the BSD getentropy syscall, and reading from /dev/urandom) support filling arbitrarily-sized buffers with random bytes.

By limiting RandomNumberGenerator to 64 bits per call, code which needs more randomness than that requires multiple calls (and potentially multiple syscalls), making it more expensive than calling the above functions (or even the standard library's own runtime function) directly.

So I propose we add a requirement which does that:

protocol RandomNumberGenerator {
  // Existing API
  mutating func next() -> UInt64

  // Added:
  mutating func next(into buffer: UnsafeMutableRawBufferPointer)
}

This function will have a trivial default implementation, which simply fills the buffer using bytes from the existing UInt64 requirement.
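
For illustration, such a default might look roughly like this (the pitch doesn't spell out the implementation; the chunking logic and the little-endian store here are my own choices):

```swift
extension RandomNumberGenerator {
  // Sketch of a possible default: serve the buffer 8 bytes at a time from
  // next(), pinning each word to little-endian so the byte stream doesn't
  // vary by platform, and use the low-order bytes of one final word for
  // any remainder.
  public mutating func next(into buffer: UnsafeMutableRawBufferPointer) {
    var offset = 0
    while offset + 8 <= buffer.count {
      buffer.storeBytes(of: next().littleEndian, toByteOffset: offset, as: UInt64.self)
      offset += 8
    }
    if offset < buffer.count {
      var word = next()
      while offset < buffer.count {
        buffer[offset] = UInt8(truncatingIfNeeded: word)
        word >>= 8
        offset += 1
      }
    }
  }
}
```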

2. Static member syntax

We now have static member syntax for common protocol conformances. We should use it for SystemRandomNumberGenerator:

extension RandomNumberGenerator where Self == SystemRandomNumberGenerator {

  static var system: Self {
    get { SystemRandomNumberGenerator() }
    set { /* No-op, throwaway instance. */ }
  }
}

If you're adding support for generating random values to custom types, it's quite common to add 2 overloads: one with a user-specified RNG, and one which uses SystemRandomNumberGenerator. This change results in a very slight QoL improvement when writing those overloads:

extension MyType {

  static func random() -> Self {
    random(using: &.system)
  }

// Used to be:
// static func random() -> Self {
//  var rng = SystemRandomNumberGenerator()
//  random(using: &rng)
// }

  static func random<RNG>(using rng: inout RNG) -> Self where RNG: RandomNumberGenerator {
    /* ... */
  }
}

Also, credit to @benrimmington and @Nevin for pointing this out during the review 3 years ago. From what I can see, the point was accepted but not acted on; I think we should reconsider it.


Filling is a no-brainer that I've been ready to pitch for a while now. (Though "next" is a weird spelling for it; it's unfortunate, for a number of reasons, that we used "next" for the existing requirement as well. I call it "fill", which I think is more natural.)

There's an add-on piece that I was planning to include in that pitch: a buffered random source that wraps another RNG, so that existing API written against next() -> UInt64 can benefit from the new fill() without needing to adopt it. The implementation there is a straw man, but it communicates the basic idea; on macOS it makes the system RNG nearly as fast as the dumb LCG in the benchmark suite, even if clients never use fill().
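
For reference, here's a straw-man sketch of that idea. The protocol name, the clamping policy, and the copy-out refill are my own assumptions for a self-contained example, not the actual pitch code:

```swift
// Hypothetical stand-in for the pitched bulk requirement.
protocol BulkRandomNumberGenerator: RandomNumberGenerator {
  mutating func next(into buffer: UnsafeMutableRawBufferPointer)
}

// Straw man: refill a block of words with one bulk call, then serve
// next() from the block until it runs dry.
struct BufferedGenerator<Base: BulkRandomNumberGenerator>: RandomNumberGenerator {
  var base: Base
  private var words: [UInt64]
  private var index: Int

  init(_ base: Base, capacity: Int = 512) {
    self.base = base
    self.words = Array(repeating: 0, count: Swift.min(capacity, 512))
    self.index = self.words.count  // empty: refill on the first draw
  }

  mutating func next() -> UInt64 {
    if index == words.count {
      // Copy out to sidestep overlapping-access issues; a real
      // implementation would manage its storage more carefully.
      var block = words
      block.withUnsafeMutableBytes { base.next(into: $0) }
      words = block
      index = 0
    }
    defer { index += 1 }
    return words[index]
  }
}
```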

Static member syntax I can take or leave. It's fine, I guess, but doesn't actually add much from my perspective. If people like it I have no objection.


Is there a reason that BufferedGenerator clamps the buffer size to a maximum of 512, rather than simply having a default value of 512?

It seems to me that if someone intentionally chooses a larger value, it’s probably because they want a larger buffer.

By 512 you've amortized most of the benefit even for the slowest generators¹, so you don't really go faster by making it bigger, and making it too large trashes your caches, since the whole point of using it is to use it in hot loops. 512 * 64 bits (4 KiB) leaves "most" of the L1 cache available for other purposes on most modern hardware.

Even if someone thinks they want a larger buffer, they are probably wrong, unless they really know what they're doing (in which case, they can always use fill()). Particularly dangerous is that you will often go a few percent faster in microbenchmarks, while tanking macroperformance in the use cases that matter most. It's an attractive nuisance that I would like to protect users from.

¹ since writing that comment, I've experimentally verified that it does hold up pretty well as an upper bound on a variety of hardware, in line with the first principles argument.


I've been waiting for this since it was mentioned in the original review. Please and thank you!


fill is definitely more natural (see: the title of this thread), +1

My hope is that, now that static member syntax is part of the language and we've sort of blessed this idea of "common conformances" with a special syntax, we might one day allow them as default parameters and infer generic types.

// In the dream world...
extension MyType {

  static func random(using: some inout RandomNumberGenerator = &.system) -> Self {
    // You only need to write one function.

Imagine if you needed a Clock and an RNG: you'd need to write entry-points for each permutation of (custom/system RNG) vs (custom/system Clock). Static member syntax is still a very new language feature, but I think it has the potential to be quite powerful when it comes to eliminating tedious generics overloads. Until that day, it's a minor QoL improvement.

Also, buffered random sources look really great! It fits the theme of generating larger quantities of randomness, so it makes sense to include in a brief proposal (there have been quite a few small, ad-hoc improvement proposals recently, so I was hoping this could join them).


Went ahead and implemented this for chacha-rng-swift. I added

mutating func fill<Output>(_ output: inout Output)
where Output: MutableCollection, Output.Element: FixedWidthInteger & UnsignedInteger

in analogy to func next<T>() -> T. There should probably be a variant with upperBound.
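
As a sketch of what that shape allows, here's my own illustration built on the standard library's generic next() (this is not the chacha-rng-swift implementation):

```swift
extension RandomNumberGenerator {
  // Illustration: one full-width draw per element. For narrow element
  // types this discards the unused high bytes of each 64-bit draw.
  mutating func fill<Output>(_ output: inout Output)
  where Output: MutableCollection, Output.Element: FixedWidthInteger & UnsignedInteger {
    var i = output.startIndex
    while i != output.endIndex {
      output[i] = next()  // the stdlib's generic next<T>() -> T
      i = output.index(after: i)
    }
  }
}
```

Usage would look like `var bytes = [UInt8](repeating: 0, count: 32); rng.fill(&bytes)`.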

This implies a chosen byte order or platform-dependent behavior. I’m not sure whether Rust’s rand is little-endian in general (it used to be), but rand_chacha is.

In general, an invocation of fill(_:) will throw away some bytes. A BufferedGenerator need not do this—will it mirror fill’s behavior or will it make use of all generated bytes?

Hear, hear!

Some related ideas:

  • randomElements(range:using:) and randomElements(range:count:using:): Sequence/Collection generators analogous to repeatElement(_:count:).
    Replacing, e.g.,

    var rng = SystemRandomNumberGenerator()
    sequence(first: .random(in: .min ... .max, using: &rng) as UInt8,
             next: { _ in .random(in: .min ... .max, using: &rng) })
      .prefix(count)
  • Initializers for Collection. I’ve written

    var rng = SystemRandomNumberGenerator()
    let array: [UInt8] = (0..<count).map { _ in .random(in: .min ... .max, using: &rng) }

    a hundred times.

  • Appending to RangeReplaceableCollection.
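
For instance, the appending idea could look something like this (the method name and shape are my guesses, not an actual proposal):

```swift
extension RangeReplaceableCollection where Element: FixedWidthInteger & UnsignedInteger {
  // Hypothetical convenience: append `count` random elements in one call.
  mutating func appendRandom<RNG: RandomNumberGenerator>(count: Int, using rng: inout RNG) {
    reserveCapacity(self.count + count)
    for _ in 0..<count {
      append(rng.next())  // the stdlib's generic next<T>() -> T
    }
  }
}

// Usage:
var rng = SystemRandomNumberGenerator()
var bytes: [UInt8] = []
bytes.appendRandom(count: 16, using: &rng)
```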

The standard library up to this point only has an unseeded CSPRNG, which is necessarily non-reproducible; you can't observe byte ordering of a non-reproducible random source, so this hasn't been a concern.

However, this becomes a concern for downstream algorithms that consume the results of reproducible (seeded) pseudo-random sources that want to get the same results on all platforms. I think I'm of the belief that a random source should produce the same byte stream regardless of endianness. Defining fill for a raw buffer dodges this to some extent, though algorithms that transform that bytestream into integers still have to deal with it eventually. But this problem is already present for next() -> UInt64, so it isn't a new concern, merely something that should be tightened up eventually.
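
For what it's worth, downstream algorithms can already sidestep the host's byte order by pinning the interpretation when they read integers out of the stream; a small sketch:

```swift
// Reading a word from a reproducible byte stream platform-independently:
// pin the interpretation (here, little-endian) instead of relying on the
// host's native byte order.
let stream: [UInt8] = [0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]
let word = stream.withUnsafeBytes {
  UInt64(littleEndian: $0.loadUnaligned(as: UInt64.self))
}
// word == 0x0807060504030201 on both little- and big-endian hosts.
```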

Need not, but it's generally still more efficient to do so. I did an implementation that allowed clients to draw any number of bits, which makes things like generating small integers or booleans use (much) less randomness, but the extra overhead and bookkeeping makes this not too attractive.


Well, the idea is that it would fill the buffer with random bytes, not random integers. This mirrors the existing next() -> UInt64 requirement, which returns "a value from a uniform, independent distribution of binary data", and is careful to direct people looking for random integers elsewhere.

Since endianness only affects how a certain sequence of bytes are interpreted numerically, I don't think it would apply to fill, or that fill would introduce platform-specific behaviour if implemented using the existing requirement. Even if you used it to fill a buffer which you then yielded values from, the buffer would still produce the same bytes in the same order; it doesn't care about the numeric interpretation.

We have this one (I think, depending what you mean by it); you can just take a slice and sample random elements from that.
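
Concretely, my reading of that suggestion, using only existing stdlib API:

```swift
// Sampling elements from a range using the existing Collection API.
var rng = SystemRandomNumberGenerator()
let range: ClosedRange<UInt8> = 10...20

// Five independent draws from the range's elements.
let samples = (0..<5).map { _ in range.randomElement(using: &rng)! }
```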


Should the BufferedGenerator and other stdlib RNGs wait until move-only types are supported?

(There were some recent pull requests for an -enable-experimental-move-only feature.)

BufferedGenerator doesn't exactly depend on move-only types, I don't think; you would like a buffered generator wrapping a move-only generator to be move-only, but there's lots of places in the standard library where one would like to add that sort of constraint, so it certainly won't be the only type in that boat if/when they are added.
