Compile-time generic specialization

Where is T defined? What happens if you replace "T" with "Tensor<S>"?

- Dave Sweeris

···

On Feb 20, 2017, at 12:23, Abe Schneider via swift-evolution <swift-evolution@swift.org> wrote:

However, if I define an operation on the Tensor:

class SomeOp<S:Storage> {
    typealias StorageType = S
    var output:Tensor<S>
    
    init() {
        output = Tensor<S>(size: 10)
    }
    
    func apply() -> Tensor<S> {
        let result = T.cos(output)
        return result
    }
}

let op1 = SomeOp<FloatStorage>()
let result3 = op1.apply() // calls default `cos` instead of FloatStorage version

So one question I have is: why doesn’t the correct version of `cos` get called? Before, it was because there wasn’t a vtable available to figure out which function to call. However, in this case, since the function is defined in the class, I would assume there would be one (I also tried variants of this with an accompanying protocol and with non-static versions of the function).

I can get `SomeOp` to work correctly if I create specializations of the class:

extension SomeOp where S:FloatStorage {
    func apply() -> Tensor<S> {
        let result = T.cos(output)
        return result
    }
}

extension SomeOp where S:IntStorage {
    func apply() -> Tensor<S> {
        let result = T.cos(output)
        return result
    }
}

However, this doesn’t seem like a good design to me, as it requires copying the same code for each StorageType introduced.

Sorry, I forgot to copy in its definition:

typealias T<S:Storage> = Tensor<S>

As a quick sanity check I changed all `T.` syntax to `Tensor<S>` and got the same behavior.

Thanks!
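
For later readers: the behaviour above is expected. Overloads of `cos` that live outside a protocol are resolved statically, so inside the generic `SomeOp<S>` the compiler can only ever pick the unconstrained default, no matter how `T` is spelled. The usual way out, without writing one `SomeOp` extension per storage type, is to make `cos` a requirement of the `Storage` protocol so the call dispatches through the conformance. A minimal sketch, assuming `Storage` and `Tensor` are shaped roughly like this (their real definitions aren't quoted in this thread):

// Sketch only: stand-ins for the real Storage/Tensor definitions.
protocol Storage {
    // Making the element-wise op a protocol requirement lets generic code
    // reach the right implementation through the witness table.
    static func cos(_ input: Tensor<Self>) -> Tensor<Self>
}

struct Tensor<S: Storage> {
    init(size: Int) { /* storage allocation elided */ }
}

struct FloatStorage: Storage {
    static func cos(_ input: Tensor<FloatStorage>) -> Tensor<FloatStorage> {
        // the accelerated Float implementation would go here
        return input
    }
}

class SomeOp<S: Storage> {
    var output: Tensor<S>

    init() {
        output = Tensor<S>(size: 10)
    }

    func apply() -> Tensor<S> {
        // Resolved through S's conformance, so SomeOp<FloatStorage>().apply()
        // reaches FloatStorage.cos even without any specialization.
        return S.cos(output)
    }
}

With this shape the per-storage `apply` extensions become unnecessary, because the choice of `cos` is made by the conformance rather than by overload resolution.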


Hey. Really old topic, but it looks like the best place to turn to.

I just built a hobby framework based on the assumption that specialization can eliminate deeply composed, type-based if-else trees, but I found out today that this is not the case.

What I tried to do:

public protocol ActionProtocol {}


public protocol ErasedReducer {
    
    associatedtype State
    
    func apply<Action : ActionProtocol>(_ action: Action,
                                        to state: inout State)
    
}

public protocol Reducer : ErasedReducer {
    
    associatedtype Action : ActionProtocol
    func apply(_ action: Action,
               to state: inout State)
    
}

public extension Reducer {
    
    @inlinable
    func apply<Action>(_ action: Action,
                       to state: inout State)
    where Action : ActionProtocol {
        // The crux of the design: downcast the type-erased action and
        // silently ignore actions this reducer does not handle.
        guard let action = action as? Self.Action else {
            return
        }
        apply(action, to: &state)
    }
    
}

public struct ClosureReducer<State, Action : ActionProtocol> : Reducer {
    
    @usableFromInline
    let closure : (Action, inout State) -> Void
    
    public init(_ closure: @escaping (Action, inout State) -> Void) {
        self.closure = closure
    }
    
    @inlinable
    public func apply(_ action: Action, to state: inout State) {
        closure(action, &state)
    }
    
}

public extension ErasedReducer {
    
    func compose<Next : ErasedReducer>(with next: Next) -> ComposedReducer<Self, Next>
    where Next.State == State {
        ComposedReducer(c1: self, c2: next)
    }
    
}

public struct ComposedReducer<C1 : ErasedReducer, C2 : ErasedReducer> : ErasedReducer where C1.State == C2.State {
    
    @usableFromInline
    let c1 : C1
    @usableFromInline
    let c2 : C2
    
    @inlinable
    public func apply<Action>(_ action: Action,
                              to state: inout C1.State) where Action : ActionProtocol {
        c1.apply(action, to: &state)
        c2.apply(action, to: &state)
    }
    
}

This is the core of what I'm doing. I build a large "registry" of handlers for actions that extract their required info using guard-let downcasts. The only difference from what has been discussed in the middle of this thread is that I compose this registry in a bottom-up way.
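
To make the shape of that registry concrete, here is a toy bottom-up composition using the types above (CounterState, Increment and SetName are made-up stand-ins for the real state and action types):

struct CounterState {
    var count = 0
    var name = ""
}

struct Increment : ActionProtocol {}
struct SetName : ActionProtocol { let name: String }

let increment = ClosureReducer<CounterState, Increment> { _, state in
    state.count += 1
}
let setName = ClosureReducer<CounterState, SetName> { action, state in
    state.name = action.name
}

// The registry is a tree of ComposedReducers, built bottom-up.
let registry = increment.compose(with: setName)

var state = CounterState()
registry.apply(Increment(), to: &state)         // increment handles it, setName's guard fails
registry.apply(SetName(name: "hi"), to: &state) // setName handles it, increment's guard fails

Every apply call walks the whole tree, so whether those failing guards can be compiled away is exactly the specialization question below.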

The guard statement can be fully resolved ahead of time by specialization, and this actually works for trivial compositions of, e.g., two primitive reducers. However, it does not work for the app reducer in this example project when compiled with optimizations: the disassembly still shows the dynamic casts, and the debugger still stops at breakpoints on the return statements in the else blocks of the guards. I even tested this on a local branch, in a test suite where I isolated the app reducer from the store.

Is generic specialization somehow sensitive to deep nesting and the full length of the composed functions? It would surprise me quite a bit, as it is often claimed that SwiftUI makes heavy use of this to optimize deeply nested conditional view hierarchies.

Edit: originally, I asked about this here, but there I'm really talking to myself at the moment.

I'm not necessarily against replying to old posts in general, but when the posts are old enough to have been sent by email, it's best to start a new thread and link to the old one. If only so that we don't flood people's inboxes.

Having said that, while Swift does try to specialize when possible, it isn't guaranteed. If a function is complicated enough, Swift will happily leave you with just the fully generic version. SwiftUI largely gets around this by using opaque types.
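
Loosely, what "using opaque types" means in this setting (a sketch only, reusing the toy reducers from the example above): the deeply nested ComposedReducer<...> type never has to be written out, but it remains statically known to the compiler, unlike a value erased into an existential box.

func appReducer() -> some ErasedReducer {
    increment
        .compose(with: setName)
}

The flip side is that the opaque result hides the associated State type from callers, so this shape mostly pays off when, as in SwiftUI, the framework itself consumes the result generically.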

Sorry about that inbox thing. This thread is the best one that came up on Google.

The issue is: if the optimizer were smart enough, it could see that fully specialized functions would actually lead to a much smaller function body in this instance. Say you register 10 ErasedReducers in a tree of ComposedReducers, each responding to a different action type. Without specialization but with inlining, the composed function becomes a big mess of attempted dynamic downcasts, of which 9 fail. With specialization and inlining, you would instead get 10 different functions, each consisting of one successful downcast and a rather short implementation, and the downcast could even be resolved at compile time. It would really be great if the compiler could look at the specialized version before deciding that the fully specialized function would be too big. Or if @_specialize became public and got a parameter that unconditionally forces specialization for every call that can be seen in the context from which the function is compiled (dynamic libs: inside the dynamic lib; static libs: at all known call sites).
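
For reference, the underscored attribute that exists today already lets you request specific specializations at the declaration site. A minimal sketch, reusing the toy CounterState/Increment types from above; @_specialize is unofficial and its spelling may change, and the wrapper method is made up just to carry the attribute:

extension ClosureReducer {
    // Asks the compiler to emit a pre-specialized entry point for this concrete
    // State/Action pair in addition to the generic one. Underscored attribute,
    // not officially supported; shown only to illustrate the idea.
    @_specialize(where State == CounterState, Action == Increment)
    public func applySpecialized(_ action: Action, to state: inout State) {
        apply(action, to: &state)
    }
}

In real code the attribute would have to go on the actual apply declarations, once per concrete combination, which is more or less the enumeration that the post above wants the compiler to perform on its own.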
