Improving the UI of generics

AnyObject-the-"protocol" doesn't actually have any run-time representation. In a world with any AnyObject, there is no such type AnyObject. At the implementation level, you can think of it more like a guarantee about the type, like "fits in 64-bits" or "can be copied using memcpy", albeit one that shows up a lot more often than either of those.

In general, class-bound existentials are not compatible with AnyObject because they also carry their conformance information alongside the class pointer. @objc existentials are the exception to this since they don't use conformance information to invoke requirements.

I'm still not fully following, so I apologize up front for more questions.

  • How does weak var p: (/* any */ P & AnyObject)? work then?
  • Why does the protocol composition make the existential class-bound, which is what allows weak?

One other unrelated question:

  • Are metatypes reference types or just a special kind of types with reference semantics?

To hopefully address this part: Metatypes are reference types, but they are not objects—they are a separate kind of reference type. Unlike objects, metatypes are not reference-counted; once allocated, a metatype is never deallocated. Also unlike objects, metatypes aren't always pointer-sized; they have a zero-byte "thin" representation used when the type can be statically known, which keeps the compiler from wasting a register passing them to static functions and initializers which can't be overridden.

Metatypes do have ObjectIdentifiers and can be compared with the === operator, but if you look in the standard library, you'll notice that there are separate overloads to handle them. So they look very object-like, but they're not actually objects—they're a structural type which happens to be pointer-sized in its thick representation.
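A quick illustration of that object-like-but-not-object behavior (a minimal sketch, run as a top-level script; the class names are made up):

```swift
// Metatypes have ObjectIdentifiers and compare by identity,
// yet they are not objects: there is exactly one immortal metatype per type.
class Base {}
class Derived: Base {}

let t1: Base.Type = Derived.self
let t2: Base.Type = Derived.self

print(t1 == t2)                                      // true: same metatype every time
print(ObjectIdentifier(t1) == ObjectIdentifier(t2))  // true
print(t1 == Base.self)                               // false: Derived.self != Base.self
```

Note that the stdlib provides the dedicated metatype overload of == here; there is no Equatable conformance involved.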


Thanks for clarifying that @brentdax. This explains why they don’t work with Objective-C associated objects. But in theory, a Swift-native implementation of associated objects could support metatypes, right?

Your question is a little ambiguous. Do you mean:

  • Store a metatype into an object's associated objects? Probably; I assume we'd allow any instance, not just object instances, so metatypes would be supported.
  • Store associated objects in a metatype? I'm a little more doubtful. Metatypes don't have the side allocation we think we'd store associated objects in, so we'd need a different implementation. But what's the point? You can store the "associated objects" in a static dictionary instead; that's not sufficient for objects because you want the associated objects to be released/deallocated when the object they're attached to is destroyed, but metatypes are immortal, so they don't have this concern.
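To illustrate the static-dictionary approach for metatypes (a hypothetical sketch, not a real API; all names here are made up): because metatypes are immortal, keying storage by ObjectIdentifier needs no cleanup logic at all.

```swift
// Hypothetical "associated objects" for metatypes, backed by a static
// dictionary. No deallocation hook is needed, since metatypes never die.
enum MetatypeAssociations {
    private static var storage: [ObjectIdentifier: [String: Any]] = [:]

    static func set(_ value: Any, forKey key: String, on type: Any.Type) {
        storage[ObjectIdentifier(type), default: [:]][key] = value
    }

    static func get(_ key: String, on type: Any.Type) -> Any? {
        storage[ObjectIdentifier(type)]?[key]
    }
}

struct Point {}
MetatypeAssociations.set("geometry", forKey: "domain", on: Point.self)
print(MetatypeAssociations.get("domain", on: Point.self) as? String ?? "nil")
```

(For real object instances this wouldn't be sufficient, for exactly the release-on-deallocation reason described above.)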

The approach you describe is exactly what I was referring to. It would be nice to have a single API for this that also works with objects, so that both metatypes and objects could be used in contexts that require associated information.

I didn't remember that the first case did work, but I believe that in that case the weak makes the value inside the existential be the weak pointer. The compiler has to know it's an existential up front, though; compare with:

struct WeakWrapper<Ref: AnyObject> {
  weak var ref: Ref?
}

protocol P {}
class C {}

let x = WeakWrapper<C>() // okay
let y = WeakWrapper<P>() // rejected, obviously
let z = WeakWrapper<P & AnyObject>() // also rejected

A very recent example I encountered is random number generation.

While there is a protocol RandomNumberGenerator in Swift, it unfortunately is defined like this:

public protocol RandomNumberGenerator {
    mutating func next() -> UInt64
}

Being tied to UInt64 makes it useful as a low-level back-end for higher-level RNGs, but rather useless for direct use in most real-world situations, where you'd want to random-sample from Int, Float, Bool, or the like.

Luckily there are individual methods on Float and the like, sprinkled all over the stdlib, which are defined along the lines of this:

static func random(in range: Range<Float>) -> Float 

While this is nice for situations where your code is very tightly specified, bound to concrete types and you're only interested in uniform distributions, it ends up being rather useless when one or more of the following criteria are met …

  • … you need to randomly sample from a type provided as generic argument
  • … you actually care about correctness and want to write unit tests, without having to write ad-hoc RNG wrappers for each of those methods (the static func is especially problematic in tests).
  • … you need your values to be sampled from a non-uniform distribution (e.g. Gaussian)
  • … you need your execution to be deterministic (by seeding the RNG), like for testing

And unless you're just doing casual coding, prototyping, or anything else where correctness, generality, or re-usability isn't actually important, you can be rather certain that at least one of the above will apply to the code you're writing.

As such I find the random(in:) functions in the stdlib to be more of an anti-pattern than a solution. They lure you into writing code that ends up hard to maintain and test, and impossible to decouple later on.
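To make the testability point concrete: the stdlib does offer using: overloads that accept a custom generator, but you have to thread one through every call site yourself. A minimal sketch with a seedable generator (the constants are the standard SplitMix64 ones; the wrapper struct itself is an illustration, not a stdlib type):

```swift
// A deterministic, seedable generator conforming to the stdlib protocol.
struct SplitMix64: RandomNumberGenerator {
    var state: UInt64
    mutating func next() -> UInt64 {
        state &+= 0x9E37_79B9_7F4A_7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58_476D_1CE4_E5B9
        z = (z ^ (z >> 27)) &* 0x94D0_49BB_1331_11EB
        return z ^ (z >> 31)
    }
}

var rng1 = SplitMix64(state: 42)
var rng2 = SplitMix64(state: 42)
let a = Float.random(in: 0..<1, using: &rng1)
let b = Float.random(in: 0..<1, using: &rng2)
print(a == b) // same seed => same value, so a test can be deterministic
```

This works, but it only addresses the determinism bullet; the generic-sampling and non-uniform-distribution bullets still have no stdlib answer.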

If we however had a way to implement a protocol multiple times, for specific types each (i.e. generically) we could expand the existing "back-end" into something like this:

public protocol RandomNumberGenerator {
    mutating func next() -> UInt64
}

extension RandomNumberGenerator {
    mutating func sample<T, D: Distribution<T>>(from distribution: D) -> T {
        distribution.sample(from: &self)
    }

    mutating func sample<T, D: Distribution<T>>(from distribution: D, within range: Range<T>) -> T {
        // ...
    }
}
Next we would add a generic(!) Distribution protocol like this:

public protocol Distribution<T> {
    func sample<R: RandomNumberGenerator>(from rng: inout R) -> T
}

… which would open up the possibility of user-land swift packages providing implementations of all kinds of distributions (Bernoulli, Beta, Binomial, Categorical, Cauchy, Chi, Chi-Squared, Dirichlet, Discrete-Uniform, Erlang, Exponential, Fisher-Snedecor, Gamma, Geometric, Hypergeometric, Inverse-Gamma, Log-Normal, Multinomial, Normal, Pareto, Poisson, Students, Triangular, Uniform, Weibull, just to name a few).

The stdlib would then provide a default distribution that would sample from a numerically uniform distribution, and with a range appropriate to the given type T.

public struct DefaultDistribution {
    // ...
}

extension DefaultDistribution: Distribution<Bool> {
    func sample<R: RandomNumberGenerator>(from rng: inout R) -> Bool {
        // ...
    }
}

extension DefaultDistribution: Distribution<Int> {
    func sample<R: RandomNumberGenerator>(from rng: inout R) -> Int {
        // ...
    }
}

extension DefaultDistribution: Distribution<Float> {
    func sample<R: RandomNumberGenerator>(from rng: inout R) -> Float {
        // ...
    }
}

Ergonomics could then be greatly improved by using it in a convenience extension like this:

extension RandomNumberGenerator {
    mutating func random<T>() -> T
        where DefaultDistribution: Distribution<T>
    {
        return self.sample(from: DefaultDistribution())
    }

    mutating func random<T>(range: Range<T>) -> T
        where DefaultDistribution: Distribution<T>
    {
        return self.sample(from: DefaultDistribution(), within: range)
    }
}

This would allow us to …

  • … randomly sample from a type provided as generic argument
  • … effortlessly write unit tests, without having to write ad-hoc RNG wrappers, like before.
  • … sample from non-uniform distributions (e.g. Gaussian)
  • … have one's execution be deterministic, assuming seedable RNGs are made available.

In other words it would solve all the pain points listed above for the existing and limited API.

I don't see a way to build a similarly flexible (and efficient!) implementation without multiple conformances to a single protocol (as in "generic protocol").

The key here is being able to combine N random sources with M distributions, yielding up to N × M combinations from just N + M implementations, with zero run-time or dynamic-dispatch overhead thanks to generic protocol conformance.

cc @DevAndArtist


A multi-parameter protocol feels a bit overkill for this, and would make type inference of the element type independent of the distribution, which seems inconvenient. It seems like a better design to me to treat the distribution type as an associated type, and not try to pile all the unrelated kinds of distribution (integer and floating-point uniform distributions are different things!) onto conformances on one type.

An arguably even more expressive way of factoring randomness would be to treat the generator as an infinite Sequence of raw words, and the distributions as transformations on top, like the "composable randomness" design from


that would be... just tremendous (as in super cool ^^)

I only wish to do this

Where do I contribute to achieve this?

How would that be different from Collection where Element: Geometry?

The difference is that I'm not able to run that code in a playground because I always get a LLDB RPC server crash.

Have you already reported that crash? If not, that would be a great first step.

I like the idea of some Protocol, but in practice it's very limiting. I'm really interested in reverse generics. If I understand correctly, this will greatly aid in compacting long lazy sequence chains, right?

Yeah, with generic constraints on associated types, so that you could say Element == SomeType for your opaque sequence, many sequence transformers could be expressed without exposing their concrete implementation types.
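A sketch of what that enables, assuming a compiler that supports constrained opaque result types (spelled here with the Sequence<Int> shorthand; doubled is a made-up example function):

```swift
// The concrete LazyMapSequence type stays hidden; callers only learn
// that the result is some Sequence whose Element is Int.
func doubled(_ xs: [Int]) -> some Sequence<Int> {
    xs.lazy.map { $0 * 2 }
}

let result = Array(doubled([1, 2, 3]))
print(result) // [2, 4, 6]
```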


Awesome, that'd be so great! :smiley: What's the current dependency graph of generic UI features in your mind? I.e. reverse generics before/after constraint shorthands, the any keyword has to come before generic variadics, etc.

Something I can relate to as a mere Swift user. Thanks for approaching this technical topic from a wider user-centric perspective and for the candour! :pray:t2:

... on that same note: Would there be no way to simplify the Protocol<.Assoc> shorthand even more? Like so:

func concatenate<T>(a: some Collection<T>, b: some Collection<T>) -> some Collection<T>

Couldn't Swift figure out that Collection is not an existential but has an associated type? Of course, associated types would need to be declared in a similar fashion so that they have a defined order, in case there are multiple ones. But I'd love the consistency with generic notation :blush:.

Collection has multiple associated types, so it wouldn't be inherently obvious which one you're referring to here, but I think it does make sense to provide a way for protocols to specify which associated types are most likely to be bound in an existential type. As a strawman, maybe this declaration syntax:

protocol Collection<Element> { ... }

could declare Element as an associated type that also behaves as a positional parameter in generic argument lists.


I really don't remember the example that I wanted to show you during the upthread discussion. However, recently someone on a Slack workspace wanted to use protocol-style Self, which gets substituted with the current subclass on classes. The issue was that Self behaves differently on classes, and the compiler emitted an error.

An ideal description of the constraint would be this 'generic protocol':

protocol P<A, B> {
  typealias Next<T> = P<T, Self>
  init(a: A, b: B)
}

extension P {
  func next<U>(value: U) -> any Next<U> {
    .init(a: value, b: self)
  }
}

I know there is resistance against generic protocols, but I still think we shouldn't just build something else in their place in a way that would make them impossible to achieve later.

Personally I would highly appreciate generic protocols as a high-level feature, because when someone knows what they're doing, why not allow the full natural generic expressiveness of the language?!
