Closure vs function performance and protocols

Ever since the release of the Parsing library, I’ve been thinking about the performance of closures vs. functions/methods and its implications for the Composable Architecture.

I haven’t done any formal measurements, but I can imagine a case where a very deep hierarchy of views has a similarly deep top-level Reducer, and hence a very deeply nested set of closures. That could lead to the same issue seen in the Parsing episodes, where deeply nested closures perform worse than nested method calls.

The solution there was instead to make Parser a protocol. I wonder if the same could be done for the Reducer? I imagine it would look something like...

protocol Reducer {
  associatedtype State
  associatedtype Action
  associatedtype Environment

  func run(
    state: inout State, 
    action: Action, 
    environment: Environment
  ) -> Effect<Action, Never>
}

I actually think this might not change the ergonomics all that much. Some static and middleware-esque Reducers would need concrete implementations, many extensions could now live on the protocol instead of the concrete Reducer type, and you’d probably need a type-erased AnyReducer<State, Action, Environment> wrapper. But it could, theoretically, improve performance in some situations?
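Just to illustrate, here's a minimal sketch of what that type-erased wrapper could look like, assuming the protocol above (the AnyReducer name and details are illustrative, not an existing API; Effect is the library's type):

struct AnyReducer<State, Action, Environment>: Reducer {
  private let _run: (inout State, Action, Environment) -> Effect<Action, Never>

  init<R: Reducer>(_ reducer: R)
  where R.State == State, R.Action == Action, R.Environment == Environment {
    // Capture the concrete reducer and forward to its run method.
    self._run = { state, action, environment in
      reducer.run(state: &state, action: action, environment: environment)
    }
  }

  func run(
    state: inout State,
    action: Action,
    environment: Environment
  ) -> Effect<Action, Never> {
    _run(&state, action, environment)
  }
}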

This all hinges on a lot of theoreticals, but it’s been on my mind, and I wanted to bring it up for discussion. I wouldn’t be surprised at all if Brandon and Stephen have already thought about it and made a decision one way or the other. :sweat_smile: In any case, curious to hear y’all’s thoughts!

3 Likes

I played with this idea some more this weekend, and I quite like some of the directions it's led me, as I've found some more potential upsides!

One pretty big one: we could get rid of the Environment generic entirely.

Since Reducers could have any implementation as long as they implement this run method, we can just include any dependencies directly on our Reducer. Consider the following:

struct FeatureState {
  // properties and such
}

enum FeatureAction {
  case buttonTapped
  // etc
}

struct FeatureReducer {
  var persistence: PersistenceClient
  var uuid: () -> UUID
  // etc

  func run(
    state: inout FeatureState, 
    action: FeatureAction
  ) -> Effect<FeatureAction, Never> {
    switch action {
    case .buttonTapped:
      // do some mutation of the provided state
      return .none
    }
  }

  func helperMethod() {
    // potentially run some other logic
  }
}

(That helper function could also be inside some other dependency that wraps other dependencies, of course, in this version or in the current TCA.)

There are some other upsides of this as well, mostly having to do with flexibility and approachability.

  • One less generic, making this type a bit less intimidating and easier to reason about for those less familiar with the library.
  • It may appear that making the dependency properties mutable could be dangerous, but if the Reducer is a struct, this won't be possible, since run would have to be mutating. If you wanted to make this possible, you could make a class-based Reducer. You could perhaps even, when it's available, make an actor-based Reducer!
  • In my experience, many developers are more comfortable with protocols than with the closure-based approach that TCA tends to use, so this might seem more appealing to them.
  • In a similar vein, that uuid closure could even be a method, or this reducer could itself be a protocol if you wanted. This is probably what devs used to "protocol-oriented" programming would be more likely to expect from this sort of structure.
    protocol FeatureReducerProtocol: Reducer {
      var persistenceClient: PersistenceClient { get }
      func newUUID() -> UUID
    }
    
    You could even provide some default implementations.
    extension FeatureReducerProtocol {
      func newUUID() -> UUID {
        UUID()
      }
    }
    

This could also be made backwards-compatible with earlier versions if the protocol version were named ReducerProtocol, though that could end up getting a little more complicated.
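As a very rough sketch of that backwards-compatibility idea (this assumes the existing struct keeps an accessible way to invoke its wrapped closure, e.g. an unlabeled run(_:_:_:) method; all details here are illustrative):

// The protocol from the top of the thread, renamed so it doesn't collide
// with the existing closure-based Reducer struct.
public protocol ReducerProtocol {
  associatedtype State
  associatedtype Action
  associatedtype Environment

  func run(
    state: inout State,
    action: Action,
    environment: Environment
  ) -> Effect<Action, Never>
}

// The existing struct could then conform by forwarding to its stored closure.
extension Reducer: ReducerProtocol {
  public func run(
    state: inout State,
    action: Action,
    environment: Environment
  ) -> Effect<Action, Never> {
    // Assumes the struct exposes an unlabeled run(_:_:_:) (or equivalent)
    // that calls the wrapped closure.
    self.run(&state, action, environment)
  }
}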

In some ways this would be a pretty drastic break from current iterations of the library, so I don't expect it to be adopted for TCA. I still wanted to put forth these ideas, though, in case they could be useful in convincing others to adopt a more unidirectional, composable, testable, etc architecture in their codebases! :smiley: I'm also still, of course, open to critique and discussion.

1 Like

Abstractly, ComposableArchitecture's Reducer is just a function (inout State, Action, Environment) -> Effect<Action>.

An alternative formulation is (Environment) -> (inout State, Action) -> Effect<Action>, which we fully evaluate in two steps. First, at startup, we pass it an Environment, and get back a function (inout State, Action) -> Effect<Action>. We pass this simpler function to the Store, so the Store doesn't need the Environment. This is essentially what your protocol-based design does, but we can also do it with a struct-based design.
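A small sketch of that two-step evaluation as a helper (the applyEnvironment name is just for illustration; Effect is the library's existing type):

func applyEnvironment<State, Action, Environment>(
  _ reducer: @escaping (inout State, Action, Environment) -> Effect<Action, Never>,
  _ environment: Environment
) -> (inout State, Action) -> Effect<Action, Never> {
  // Capture the Environment once, at startup, and return the simpler
  // function that the Store would hold on to.
  { state, action in reducer(&state, action, environment) }
}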

Concretely, ComposableArchitecture's Reducer has this interface:

public struct Reducer<State, Action, Environment> {
    public init(_ reducer: @escaping (inout State, Action, Environment) -> Effect<Action, Never>)
}

And Reducers are defined like this (examples taken from isowords):

extension Reducer {
  public static func leaderboardResultsReducer<TimeScope>() -> Self
  where
    State == LeaderboardResultsState<TimeScope>,
    Action == LeaderboardResultsAction<TimeScope>,
    Environment == LeaderboardResultsEnvironment<TimeScope>
  {

    Self { state, action, environment in
      switch action {
      case .dismissTimeScopeMenu:
        state.isTimeScopeMenuVisible = false
        return .none

    ...
}

public let leaderboardReducer = Reducer<
  LeaderboardState, LeaderboardAction, LeaderboardEnvironment
>.combine(

  cubePreviewReducer
    ._pullback(
      state: OptionalPath(\.cubePreview),
      action: /LeaderboardAction.cubePreview,
      environment: { _ in CubePreviewEnvironment() }
    ),

  Reducer.leaderboardResultsReducer()
    .pullback(
      state: \LeaderboardState.solo,
      action: /LeaderboardAction.solo,
      environment: {
        LeaderboardResultsEnvironment(
          loadResults: $0.apiClient.loadSoloResults(gameMode:timeScope:),
          mainQueue: $0.mainQueue
        )
      }
    ),

   ...
)

Here's an Environment-free alternative interface:

public struct Reducer<State, Action> {
   public init(_ reducer: @escaping (inout State, Action) -> Effect<Action, Never>)
}

With this alternative interface, we would write Reducers as follows, passing the Environment when we create each Reducer:

extension Reducer {
  public static func leaderboardResultsReducer<TimeScope>(
    environment: LeaderboardResultsEnvironment<TimeScope>
  ) -> Self
  where
    State == LeaderboardResultsState<TimeScope>,
    Action == LeaderboardResultsAction<TimeScope>
  {

    Self { state, action in
      switch action {
      case .dismissTimeScopeMenu:
        state.isTimeScopeMenuVisible = false
        return .none

    ...
}

public func leaderboardReducer(
  environment: LeaderboardEnvironment
) -> Reducer<LeaderboardState, LeaderboardAction> {
  .combine(

    cubePreviewReducer(environment: CubePreviewEnvironment())
      ._pullback(
        state: OptionalPath(\.cubePreview),
        action: /LeaderboardAction.cubePreview
      ),

    Reducer.leaderboardResultsReducer(
      environment: LeaderboardResultsEnvironment(
        loadResults: environment.apiClient.loadSoloResults(gameMode:timeScope:),
        mainQueue: environment.mainQueue
      )
    )
    .pullback(
      state: \LeaderboardState.solo,
      action: /LeaderboardAction.solo
    ),

    ...
  )
}

I don't recall if PF ever discussed this in the videos. Exercise 2 of episode 91 discusses another alternative Reducer formulation: (inout Value, Action) -> (Environment) -> Effect<Action>. In this formulation, the Store still has to know about the Environment, but it much more strongly encourages the Reducer to be side-effect free, because all State mutation has to happen before the Reducer has access to the Environment.
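Here's a minimal sketch of that formulation, with made-up CounterAction and CounterEnvironment types (Effect is the library's type); note how all state mutation happens before the Environment is ever available:

enum CounterAction {
  case incrementTapped
  case factResponse(String)
}

struct CounterEnvironment {
  // Hypothetical dependency that produces an effect for a given count.
  var fact: (Int) -> Effect<CounterAction, Never>
}

let counterReducer: (inout Int, CounterAction) -> (CounterEnvironment) -> Effect<CounterAction, Never> = { state, action in
  switch action {
  case .incrementTapped:
    // All state mutation happens here, before the Environment exists.
    state += 1
    let count = state
    // The Environment only shows up inside the returned function.
    return { environment in environment.fact(count) }

  case .factResponse:
    return { _ in .none }
  }
}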

1 Like

It's something we've definitely wanted to explore more! We just haven't had the opportunity to dive deep into it yet. We don't imagine the performance gains to be as critical as they were for parsers, where the average parser can grow to become much more complex than the average app reducer, and where a parser may call thousands of parsers under the hood, but an app reducer may call just a dozen or so reducers. That isn't to say performance improvements wouldn't be welcome, though!

We're interested more generally in what the protocol will unlock, including @mayoff's suggestion.

If anyone explores this topic further we'd love to know what you find out!

2 Likes

So I decided to implement this for fun... there are some interesting tradeoffs! A lot of the usual stuff when it comes to protocols.

I actually kind of like the lack of an explicit Environment, though; removing it leaves some room for flexibility while still keeping the option of constraining dependencies to live in the reducers. It also simplifies some signatures and increases the complexity of others.

I'm pretty sure I made some errors adapting the TicTacToe app (or one or more of the reducers...), as it doesn't seem to function properly....

I'm not sure if I would prefer this over the simple struct version; the removal of the Environment generic is the one thing that I find really interesting, but I'm not sure how feasible it would be to implement this on the current struct version. In any case, it was a fun exercise!

If you'd like to explore, you can check out my fork of TCA here and its Reducers folder: https://github.com/junebash/swift-composable-architecture/tree/main/Sources/ComposableArchitecture/Reducers.

I also attempted to adapt the Todos and TicTacToe examples: swift-composable-architecture/Examples at main · junebash/swift-composable-architecture · GitHub

I also added some Lens-y protocols because I wanted to be lazy and overengineer one of the Reducer implementations. :sweat_smile: swift-composable-architecture/ReadWritable.swift at main · junebash/swift-composable-architecture · GitHub

1 Like

Awesome work! We'll take a peek soon :smile:

I've done some experiments with redux-like architectures based on protocols before. Here's one finding that I'd like to share:

Don't make Action an associatedtype. Instead, have run be a generic function over Action (something which you can't express with a struct-based design).

Why? Well, because this enables generic specialization. Each time in your program that you send a concrete action to your generic reducer, a new run function will be generated where the Action type is already known. Hence, downcasts are known in advance to succeed or fail and therefore it is known in advance which base reducers should actually react to the particular action -- you achieve static dispatch!

In order to implement base reducers, one would have a BaseReducer protocol that actually does have Action as an associatedtype and downcasts the generic Action to the associatedtype in a default implementation.

Not sure if Composable Architecture is a great candidate to adopt this design, because Effect<Action, Never> as a return type may change the variance of the Action type (no idea if it's a covariant, contravariant, or invariant type parameter of Effect).

Hmmm, I'm not sure I follow you entirely, but from what I do understand, I think that would be a pretty drastic change in the structure of reducers and the library as a whole. The transformation to protocols I used above is more in line with how they adjusted their base Parser type from a value type to a protocol (type generics correspond directly to protocol associated types). Could you maybe provide some examples with what you mean and how it would function?

Now again, I don't know about Effect<.,.>, that's very library specific. So my example will be based on "pure" reducers found in other Redux-like libraries.

protocol Reducer {
  associatedtype State
  associatedtype Environment
  func run<Action>(state: inout State, action: Action, environment: Environment)
}

protocol BaseReducer: Reducer {
  associatedtype Action
  func run(state: inout State, action: Action, environment: Environment)
}

extension BaseReducer {
  func run<T>(state: inout State, action: T, environment: Environment) {
    guard let action = action as? Action else { return }
    run(state: &state, action: action, environment: environment)
  }
}

struct ComposedReducer<R1: Reducer, R2: Reducer>: Reducer
where R1.State == R2.State, R1.Environment == R2.Environment {

  let r1: R1
  let r2: R2

  func run<Action>(state: inout R1.State, action: Action, environment: R1.Environment) {
    r1.run(state: &state, action: action, environment: environment)
    r2.run(state: &state, action: action, environment: environment)
  }
}

The individual reducers and the composed reducers now have all the types available in advance. So when you call

topLevelReducer.run(state: &appState, action: "Hello, World!", environment: appEnvironment)

the compiler will generate a new function via generic specialization. Say, your reducer hierarchy looks like this:

struct Reducer1: BaseReducer {
  func run(state: inout AppState, action: Int, environment: AppEnvironment) { ... }
}

struct Reducer2: BaseReducer {
  func run(state: inout AppState, action: [Double], environment: AppEnvironment) { ... }
}

let topLevelReducer = ComposedReducer(r1: Reducer1(), r2: Reducer2())

Then, the function generated for you by the compiler in the case action: "Hello, World!" will look like this (I'm paraphrasing):

func run(state: inout AppState, action: String, environment: AppEnvironment) {

  if let action = action as? Int { // will never succeed
    r1.run(state: &state, action: action, environment: environment)
  }

  if let action = action as? [Double] { // will never succeed
    r2.run(state: &state, action: action, environment: environment)
  }

}

Since the compiler knows in advance that the downcasts won't succeed, it can (and in optimized builds will) eliminate those calls - and actually the entire reducer call, as it is empty.

Edit: That's just an example of how protocol-based approaches can lead to improved performance and how you can achieve static dispatch (which is sometimes desirable), especially in the case of highly unbalanced reducer trees where the action you want to trigger is deeply nested and all other reducers don't respond to the request and just do unnecessary checking of whether they should react.

I'm aware that this introduces other conceptual problems, e.g. you may accidentally call reducers with actions that have absolutely no effect. It's cool that those calls will be completely optimized away, but one may ask if it is legit to even make such calls.

Edit Edit: The extreme case that no reducer in your hierarchy responds to an action can be mitigated by marking valid actions in your app with a marker protocol AppAction and wrapping your store's dispatch<Action> function into a constrained dispatchSafe<A : AppAction> function. You'd still lose out on "action hierarchies" that correspond to your reducer hierarchies (unless you have different marker protocols for different parts of your reducer hierarchy and write some dispatchSafe<A : Whatever> boilerplate).
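As a rough sketch of that mitigation, building on the Reducer protocol above (the Store and dispatch names here are hypothetical, not the library's API):

protocol AppAction {}

final class Store<R: Reducer> {
  private var state: R.State
  private let environment: R.Environment
  private let reducer: R

  init(initialState: R.State, reducer: R, environment: R.Environment) {
    self.state = initialState
    self.reducer = reducer
    self.environment = environment
  }

  // The unconstrained entry point accepts any value as an action.
  func dispatch<Action>(_ action: Action) {
    reducer.run(state: &state, action: action, environment: environment)
  }

  // The constrained wrapper only accepts actions marked as valid for this app.
  func dispatchSafe<A: AppAction>(_ action: A) {
    dispatch(action)
  }
}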

1 Like