Is it an explicit design goal to not allow type-level functions/computation in Swift?

AFAICS anything analogous to the following is not possible in Swift:

enum _0 {}
enum _1 {}

struct XOR<A, B> {}
extension XOR where A == B { typealias Result = _0 }
extension XOR where A != B { typealias Result = _1 }

Is it / will it ever be possible to write such type-level functions / computation mechanisms (of which XOR is the simplest example I could come up with) in Swift?

If not, what is the rationale behind that design decision?

1 Like

You might want to check out this thread.

Thanks, I’ve read that thread before, but it doesn’t answer my question, which can be reformulated (to not use !=) like this:

Why doesn’t/shouldn’t eg this work:

enum _0 {}
enum _1 {}
struct XOR<A, B> {}
extension XOR where A == _0, B == _1 { typealias Result = _1 }
extension XOR where A == _1, B == _0 { typealias Result = _1 }
extension XOR where A == _0, B == _0 { typealias Result = _0 }
extension XOR where A == _1, B == _1 { typealias Result = _0 }

?

The example that you give here seems to be more of a namespacing issue than a fundamental limitation.

If you try this in the REPL, you'll see that Result becomes visible in XOR without any constraints as soon as the first extension is declared:

Welcome to Apple Swift version 5.2.4 (swiftlang-1103.0.32.9 clang-1103.0.32.53).
Type :help for assistance.
  1> enum _0 {} 
  2. enum _1 {} 
  3. struct XOR<A, B> {} 
  4. extension XOR where A == _0, B == _1 { typealias Result = _1 }
  5> XOR.Result.self
$R0: _1.Type = _1

There's a whole other question of how these namespaces could be resolved, i.e. how do you resolve to a specific Result with given constraints? Would it look like (XOR where A == _0, B == _1).Result.self?

I don't have a good suggestion for that syntax, but even if there was one, it obviously would require a significant amount of effort to extend the type checker to allow this and also to push it through the evolution process.

I personally think that these things should be implemented properly as dependent types, and there's a significant body of research and even more or less practical implementations of that in Scala (where it looks quite ugly in my opinion) or Haskell with extensions. There are also more elegant, but slightly more academic, approaches such as Agda, Idris etc, and in a purely educational setting pi-forall and The Little Typer with its Pie language are fun to play with. I'd hope that someone with substantial experience in that area could adapt it to Swift and pitch it as the next step in the generics evolution. It could even get some traction, as dependent types are briefly mentioned in the Generics Manifesto with a tentative "maybe".

3 Likes

I'm just saying that I think it's strange that while this works:

struct S<A> {}
extension S where A == Int { static func foo() -> Int { return 0 } }
extension S where A == Bool { static func foo() -> Int { return 1 } }
print(S<Int>.foo()) // 0
print(S<Bool>.foo()) // 1

this doesn't:

struct S<A> {}
extension S where A == Int { typealias R = String }
extension S where A == Bool { typealias R = Double } // ERROR: Invalid redeclaration of R
print(S<Int>.R.self) // I would've expected this to print "String".
print(S<Bool>.R.self) // I would've expected this to print "Double".

I mean, how come the typealias is a redeclaration while the static func is not?
IMHO this behavior is inconsistent and unintuitive, because A can clearly never be both Int and Bool for the same S<A>.

It's equally strange that XOR.Result even compiles (without specifying XOR's A and B), as you showed above.


Not sure I understand, but simply XOR<_0, _1>.Result.

The way this example would be used is of course something like this:

struct SomeOtherTypeLevelFunc<A, B, C, D> {
    typealias Result = SHL<MUL<C, D>, XOR<A, B>>.Result
}

EDIT: Ah, you mean how do I specify for SomeOtherTypeLevelFunc that its type parameters have to be _0 or _1 and not some other types?

As we can see from that example, typealiases declared in constrained extensions are available in the namespaces of unqualified generic types, not in their fully qualified versions. Quite possibly someone somewhere already relies on this behavior, so making that consistent with member function declarations would be a breaking change, which would have to go through the Swift evolution process I suppose.

It's also a good question whether you'd like it to be consistent with member functions and not static functions, which are also available in the namespace of the unqualified generic type:

  7> extension XOR where A == _0, B == _1 { static func blah() {} }
  8> XOR.blah
$R2: () -> () =

(I totally go out on a limb here with "unqualified" and "qualified" terminology. In the functional programming world they'd be called "unapplied type constructors", but I'm not sure what would be a good name when referring to a similar thing in Swift).

Digging into this a bit more, it looks like a single static function (XOR.blah in my example) is resolved from the unqualified generic type XOR, while multiple static functions with the same name (S.foo in your example) require disambiguation with generic type parameters. I do agree that typealiases could be consistent with that behavior, so this seems more like a technical rather than a fundamental limitation.

Even if/when that is resolved, I still think that calling this "type-level computation" would be a stretch; how would you implement your MUL and SHL generics then?

Note that I'm not trying to actually do this (it was just a simple example, but somewhat equivalent in function to actual use cases), but I guess that if we can implement a NAND then the rest would be possible too (in some awkward way). One recent example of an actual use case is this:

(I'm of course not thinking of representing eg scalar floating point values at the type level, in case the bit-operation example seems to suggest something like that :slight_smile: but rather I'd like to be able to "compute" the types (for the multivectors), and only the ones actually needed for a given program, at compile time rather than runtime, to make it an acceptably efficient GA implementation without code generation, ie replace code generation with generics (expand quoted post for details). But this is getting off topic).

I do think the (still unanswered) question of this thread is clear in the OP and the title.

1 Like

I'm getting close in this GitHub branch. I do get a compact representation: a 3-vector on a 5D algebra (Vector3_5D) only contains the expected 10 coefficients. And the computation works pretty well with specialisation, though I don't have a reference to compare it with.

With that said, I still need to do something unsightly :see_no_evil::

public typealias Vector1D = Mixed<Vector0 , Vector0>
public typealias Vector2D = Mixed<Vector1D, Vector1D>
public typealias Vector3D = Mixed<Vector2D, Vector2D>

So Swift's type system does have a lot of room to grow. Things like the dependent types @Max_Desiatov mentioned would help a lot.

2 Likes

A reference could be some corresponding geometric code/benchmarks written "traditionally", with eg homogeneous 3D using SIMD4<Float> etc?

1 Like

NAND won't help, as a lot of other things are still missing. To work on numbers at the type level you'd still need support for recursion with termination checks and/or induction. The other languages that allow this don't use NAND or gates or binary logic at all for their foundations of type-level functions. I really do recommend checking out how they do this; then, as I mentioned in the other thread, manually translating their simplest code to Swift will expose the next steps in getting things off the ground.

It is answered in the Generics Manifesto: with dependent types you'll get type-level functions, and with regards to dependent types there's no explicit design goal to (dis)allow them.

The idea with extensions is interesting overall, but even if typealiases were clearly disambiguated between extensions by the type checker, it still wouldn't help without the recursion as I mentioned above. For example, you could start with the classic natural numbers example from 90% of all available introductions to dependent types:

protocol Nat {}
struct Zero: Nat {}
struct Next<N: Nat>: Nat {}

protocol NonZero {
  associatedtype MinusOne: Nat
}
extension Next: NonZero {
  typealias MinusOne = N
}

typealias One = Next<Zero>
typealias Two = Next<One>
typealias Three = Next<Two>
typealias Four = Next<Three>

struct Sum<N1: Nat, N2: Nat> {}
extension Sum where N2 == Zero {
  typealias Result = N1
}

extension Sum where N2: NonZero {
  typealias Result = Sum<Next<N1>, N2.MinusOne>.Result
}

With this you'd expect Sum<Two, Two>.Result.self to print Next<Next<Next<Next<Zero>>>>, but the problem is with the last extension on Sum, in which the Result typealias does not resolve recursively (and doesn't type check). Maybe there's a way to avoid recursion here, but it would rely on some other primitive type checker behavior allowing Sum's Result to be defined by induction for all cases.
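For contrast, the same recursion is fine at the value level today; it's specifically the type-level typealias that can't recurse. A minimal sketch (my own variant, adding a value requirement to the Nat protocol above):

// Same Peano structure, but with a value-level requirement; this does compile,
// because the recursion happens through protocol-witness dispatch rather than
// through a typealias.
protocol Nat { static var intValue: Int { get } }
struct Zero: Nat { static var intValue: Int { 0 } }
struct Next<N: Nat>: Nat { static var intValue: Int { N.intValue + 1 } }

print(Next<Next<Next<Next<Zero>>>>.intValue) // 4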

Don't get me wrong, I'm not saying it shouldn't be possible. If something like Sum above worked, it wouldn't be relegated to a niche of SIMD libraries. It's highly valuable to operate on "type-level numbers" in everyday applications. Imagine something like this:

struct Vector<Element, Length: Nat> {}

extension Vector {
  // guarantees at compile time that the returned vector is not empty
  func append(_ e: Element) -> Vector<Element, Next<Length>>
}

extension Vector where Length: NonZero {
  // no optionals here!
  var first: Element

  // this always works as the type checker won't allow it on empty vectors 
  func dropFirst() -> Vector<Element, Length.MinusOne>
}

extension Vector {
  // guarantees that appended non-empty vectors are not empty
  func append<L: Nat>(
    contentsOf: Vector<Element, L>
  ) -> Vector<Element, Sum<Length, L>>
}

2 Likes

Yes, a (nice) static / fixed size array type is another real world example that I've been frustrated by not being able to define, ie something like:

struct StaticArray<Element, Index: StaticArrayIndex> {
    var storage: SomeCurrentlyImpossibleWayOfDecidingStorageFrom<Element, Index>
}

(MemoryLayout<StaticArray<Element, Index>>.stride should be Element's stride times the number of elements.)
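For comparison, what one can do today is spell out the storage per size by hand; a small sketch (StaticArray4 is a made-up name, not from this thread):

// Hand-written fixed-size storage for one specific size; this is exactly the
// per-size boilerplate that type-level computation could eliminate.
struct StaticArray4<Element> {
    var storage: (Element, Element, Element, Element)
}

// For Float, MemoryLayout<StaticArray4<Float>>.stride is 4 * MemoryLayout<Float>.stride.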

The common key concept here is …Deciding… ie a way to make choices / compute an output based on one or more inputs at the type level.

The question here is whether Swift has taken an explicit stance against giving the type system this capability. But I guess the answer is that it might be included via dependent types.

1 Like

Yes, that would help in avoiding OOB errors too, but in practice most array indices are only known at run time. For this to become practical you need a way to convert a run time value to something that's guaranteed to be a certain natural number at compile time. And that again requires the language to be carefully designed for this from the ground up. I think it could still be possible to add this to Swift in the future, but needs a lot of work and research on how other people do this and how they avoid common pitfalls.
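The closest one can get today, as far as I can tell, is wrapping the runtime value in an existential, which loses exactly the compile-time guarantee we're after. A rough sketch (my own), reusing the Nat/Zero/Next types from above:

// Sketch only: lift a runtime Int (assumed >= 0) into the Peano types, but the
// result is an existential Nat.Type, so the type checker no longer knows which
// specific number it is.
func lift<N: Nat>(_ n: Int, _ accumulated: N.Type, _ body: (Nat.Type) -> Void) {
    if n == 0 {
        body(N.self)
    } else {
        lift(n - 1, Next<N>.self, body)
    }
}

lift(3, Zero.self) { print($0) } // prints Next<Next<Next<Zero>>>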

1 Like

Not for x y z w (index 0 1 2 3) of vectors. Or the row (0) and column (1) of a Table type index. Or the x (0), y (1), width (2), height (3) indices of a Rectangle type etc etc.

I have a (non-production quality) Vector type with a type-level index that I've been using on and off for some years.

Here


protocol VectorIndex {
    static var count: Int { get }
    static func withAll(_ op: (Self) -> Void)
    var value: Int { get }
}

enum VectorIndex1 : VectorIndex {
    case i0
    static var count: Int { return 1 }
    static func withAll(_ op: (VectorIndex1) -> Void) {
        op(i0)
    }
    var value: Int { return 0 }
}
enum VectorIndex2 : VectorIndex {
    case i0, i1
    static var count: Int { return 2 }
    static func withAll(_ op: (VectorIndex2) -> Void) {
        op(i0)
        op(i1)
    }
    var value: Int {
        switch self {
        case .i0: return 0
        case .i1: return 1
        }
    }
}
enum VectorIndex3 : VectorIndex {
    case i0, i1, i2
    static var count: Int { return 3 }
    static func withAll(_ op: (VectorIndex3) -> Void) {
        op(i0)
        op(i1)
        op(i2)
    }
    var value: Int {
        switch self {
        case .i0: return 0
        case .i1: return 1
        case .i2: return 2
        }
    }
}
enum VectorIndex4 : VectorIndex {
    case i0, i1, i2, i3
    static var count: Int { return 4 }
    static func withAll(_ op: (VectorIndex4) -> Void) {
        op(i0)
        op(i1)
        op(i2)
        op(i3)
    }
    var value: Int {
        switch self {
        case .i0: return 0
        case .i1: return 1
        case .i2: return 2
        case .i3: return 3
        }
    }
}


protocol Vector {
    associatedtype Element
    associatedtype Index: VectorIndex
    init(elementForIndex: (Index) -> Element)
    subscript(index: Index) -> Element { get set }
}



struct V1<T>: Vector {
    typealias Index = VectorIndex1
    var a: T
    init(elementForIndex: (Index) -> T) {
        a = elementForIndex(.i0)
    }
    subscript(index: Index) -> T {
        get {
            return a
        }
        set {
            a = newValue
        }
    }
}

struct V2<T>: Vector {
    typealias Index = VectorIndex2
    var a, b: T
    init(elementForIndex: (Index) -> T) {
        a = elementForIndex(.i0)
        b = elementForIndex(.i1)
    }
    subscript(index: Index) -> T {
        get {
            switch index {
            case .i0: return a
            case .i1: return b
            }
        }
        set {
            switch index {
            case .i0: a = newValue
            case .i1: b = newValue
            }
        }
    }
}

struct V3<T>: Vector {
    typealias Index = VectorIndex3
    var a, b, c: T
    init(elementForIndex: (Index) -> T) {
        a = elementForIndex(.i0)
        b = elementForIndex(.i1)
        c = elementForIndex(.i2)
    }
    subscript(index: Index) -> T {
        get {
            switch index {
            case .i0: return a
            case .i1: return b
            case .i2: return c
            }
        }
        set {
            switch index {
            case .i0: a = newValue
            case .i1: b = newValue
            case .i2: c = newValue
            }
        }
    }
}

struct V4<T>: Vector {
    typealias Index = VectorIndex4
    var a, b, c, d: T
    init(elementForIndex: (Index) -> T) {
        a = elementForIndex(.i0)
        b = elementForIndex(.i1)
        c = elementForIndex(.i2)
        d = elementForIndex(.i3)
    }
    subscript(index: Index) -> T {
        get {
            switch index {
            case .i0: return a
            case .i1: return b
            case .i2: return c
            case .i3: return d
            }
        }
        set {
            switch index {
            case .i0: a = newValue
            case .i1: b = newValue
            case .i2: c = newValue
            case .i3: d = newValue
            }
        }
    }
}

extension V2 { var tuple: (Element, Element) { (a,b) } }
extension V3 { var tuple: (Element, Element, Element) { (a,b,c) } }
extension V4 { var tuple: (Element, Element, Element, Element) { (a,b,c,d) } }
extension V1: Equatable where Element: Equatable {}
extension V2: Equatable where Element: Equatable {}
extension V3: Equatable where Element: Equatable {}
extension V4: Equatable where Element: Equatable {}
extension V1: Hashable where Element: Hashable {}
extension V2: Hashable where Element: Hashable {}
extension V3: Hashable where Element: Hashable {}
extension V4: Hashable where Element: Hashable {}


extension Vector where Index == VectorIndex1 {
    init(_ e0: Element) {
        self.init { (i) -> Self.Element in
            return e0
        }
    }
}

extension Vector where Index == VectorIndex2 {
    init(_ e0: Element, _ e1: Element) {
        self.init { (i) -> Self.Element in
            switch i {
            case .i0: return e0
            case .i1: return e1
            }
        }
    }
}

extension Vector where Index == VectorIndex3 {
    init(_ e0: Element, _ e1: Element, _ e2: Element) {
        self.init { (i) -> Self.Element in
            switch i {
            case .i0: return e0
            case .i1: return e1
            case .i2: return e2
            }
        }
    }
}

extension Vector where Index == VectorIndex4 {
    init(_ e0: Element, _ e1: Element, _ e2: Element, _ e3: Element) {
        self.init { (i) -> Self.Element in
            switch i {
            case .i0: return e0
            case .i1: return e1
            case .i2: return e2
            case .i3: return e3
            }
        }
    }
}



extension Vector {
    mutating func set<V>(from other: V)
        where V: Vector, V.Index == Index, V.Element == Element
    {
        Index.withAll { (i) in
            self[i] = other[i]
        }
    }

    init(repeating value: Element) {
        self.init { _ in value }
    }

}

extension Vector where Element: ExpressibleByIntegerLiteral {
    init(repeating value: Element.IntegerLiteralType) {
        let e = Element(integerLiteral: value)
        self.init { _ in e }
    }
}
extension Vector where Element: ExpressibleByFloatLiteral {
    init(repeating value: Element.FloatLiteralType) {
        let e = Element(floatLiteral: value)
        self.init { _ in e }
    }
}



extension Vector where Index == VectorIndex4, Element: SIMDScalar {
    var simd: SIMD4<Element> {
        return .init(self[.i0], self[.i1], self[.i2], self[.i3] )
    }

    init(simd: SIMD4<Element>) {
        self.init(simd.x, simd.y, simd.z, simd.w)
    }
}

It optimizes well and is of practical use.
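A small usage sketch (my own, not from the original code) of how the pieces fit together:

// Construct a 3-element vector, mutate it via the type-safe index, and read it back.
var v = V3<Float>(1, 2, 3)
v[.i2] = 30
print(v.tuple) // (1.0, 2.0, 30.0)

// Index.withAll enumerates all valid indices, so loops can't go out of bounds.
VectorIndex3.withAll { print($0.value) } // 0, 1, 2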

1 Like

You did mention this in the other thread as well. I'm not sure I follow, since I think we more or less know what other languages have been doing. That's why we try to do the same, and get stuck in the limitations of Swift's type system, hence questions like this popping up. For one, if I'm mimicking what C++ templates do, dependent types would appear to be what's missing.

Perhaps you mean to explore a few different designs to see which one suits future Swift best? That would be very interesting.


If anything, the most foundational thing missing would be the ability to branch, as @Jens has mentioned. Right now the only place where the same reference refers to different types would be in a protocol's associatedtype, but it's still not nearly enough to get anything beyond combinatorial logic to work. Having generic associatedtypes would already be enough to do XOR, and should get us into combinatorial logic, but not much beyond that. But of course, we could just leap to dependent types. So there are a lot of ways to expand the current type system, which is perhaps what you meant by translating?
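To make that concrete, here is a purely hypothetical sketch of what XOR could look like with generic associatedtypes, loosely modeled on the Generics Manifesto idea (this is not valid Swift today, and StaticBool/True/False are names I'm inventing):

// Hypothetical syntax: associated types cannot take generic parameters in current Swift.
protocol StaticBool {
    associatedtype Not: StaticBool
    associatedtype Xor<Other: StaticBool>: StaticBool
}

enum True: StaticBool {
    typealias Not = False
    typealias Xor<Other: StaticBool> = Other.Not   // true XOR x == not x
}

enum False: StaticBool {
    typealias Not = True
    typealias Xor<Other: StaticBool> = Other       // false XOR x == x
}

// True.Xor<False> would then resolve to True, entirely at the type level.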

1 Like

I realize I've been wanting to do / asking about this for four years now : P.

Yes, exactly this. Dependent types is more of an umbrella term, same as "functional programming". There are a lot of different flavours, and we need to understand which flavours are the most interesting and could be adapted to Swift in the most practical way. For example Idris has type classes, but Agda doesn't, and type classes match Swift's protocols quite closely. Idris 1 also has some support for uniqueness types (AKA linear types), but in Idris 2 (which is still in preview) it's re-formulated as multiplicities; both linear types and multiplicities could play well with Swift's Ownership Manifesto. And all of this would require massive changes to the type checker, maybe almost a rewrite. And you'd also want type checking to stay performant and to still produce readable error messages.

Yes, sorry I wasn't clear. By "translating" I mean looking at the docs and tutorials of Idris and other languages (possibly even simpler educational languages like Pie, to make things easier) and then manually trying to write that in Swift and seeing what's the most critical thing missing at the moment. Like with the natural numbers example above, where the lack of recursive typealiases or some other similar feature becomes more obvious.

1 Like

I wish generic associated types, as described in the Generics Manifesto, would get implemented. They would solve the problems we've been talking about here, perhaps not in the best way, but at least they seem more likely to happen within a reasonable time (a year or two?) than eg dependent types, because the latter need much more planning / discussion / design. Please correct me if I'm wrong.

fwiw, you can mix'n'match type-as-variable with constant folding:

protocol StaticBoolean { static var value: Bool { get } }
...
enum XOR<A: StaticBoolean, B: StaticBoolean> { static var value: Bool { A.value != B.value } }

It is enough to affect branching decisions (and the optimiser seems to be doing well on constant folding and dead code elimination so far), though not enough to affect type layout.
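Filling in the pieces the snippet elides, a sketch of how it composes (True/False are assumed names standing in for the "..." above):

enum True: StaticBoolean { static var value: Bool { true } }
enum False: StaticBoolean { static var value: Bool { false } }

// XOR already provides `value`, so it can conform too and nest inside itself.
extension XOR: StaticBoolean {}

// The whole condition folds to a compile-time constant, so the dead branch
// can be eliminated by the optimizer.
let flag = XOR<True, XOR<False, True>>.value // false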

1 Like

I think if we're going to reintroduce Bools, Ints and other types into a static form for the type system, we might as well add generic value parameters.
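For illustration, generic value parameters would presumably look something like C++'s non-type template parameters; a purely hypothetical sketch (invented syntax, nothing like this compiles today):

// Invented syntax: a value, not a type, as a generic parameter.
struct StaticArray<Element, let count: Int> {
    // storage sized from `count` at compile time
}

typealias Float4 = StaticArray<Float, 4>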