Cross-cutting concerns - how to manage them

I've been refactoring a small app of mine, a poker blinds timer, to use TCA and have really been enjoying it so far. I've managed to extract the core game timer behaviour into a separate module, but have decided to leave very app-specific concerns in the main app target itself.

One of these concerns is sound effects, as they seem very UI-specific. I was wondering what the recommended approach would be to handle something like this, where you want to potentially hook into actions from various parts of your app. As far as I can see, the options are:

  1. Handle them in your main reducers, returning them as a side effect merged with any other side effects - I'm not a big fan of doing this as the main logic could be in a separate module or target and they shouldn't really need to know anything about sound effects.

  2. Handle them in a separate reducer of their own - this is the approach I'm taking at the moment.

  3. Handle them in a higher-order reducer that decorates other reducers - this is an approach I'm considering but am unsure if it's the right one.

The problem I'm finding with approach 2 is that it means having a reducer that acts on app state/actions, which means potentially ignoring a lot of actions you aren't interested in. The other thing that bothers me is that its implementation is tightly coupled to the order in which you arrange your reducers in the app reducer.

For example: I have a play/pause button on my main game clock screen and the play/pause logic is stored in GameState and handled by a togglePaused action. This is all in a separate module from the main app. The behaviour of this action is simple:

  1. Toggle the play/pause state.
  2. Depending on the new state, either return a new game clock effect (effectively a timer) or an effect that cancels the timer.
case .togglePaused:
    state.isPaused.toggle() // (1) toggle the play/pause state
    // (2) either cancel the running clock or start a new one
    if state.isPaused { return Effect.cancel(id: GameClockId()) }
    return .gameClock(environment: environment)

I also want to handle this action in my sound effects reducer so I can play a different sound effect depending on whether the button paused or unpaused the game:

case .game(.togglePaused):
    // (the player call below is reconstructed; the original snippet was truncated here)
    if state.currentGame.isPaused {
        return environment.soundEffectPlayer.play(.pause).fireAndForget()
    } else {
        return environment.soundEffectPlayer.play(.play).fireAndForget()
    }

This works, but it only works as implemented because the reducer runs after the main game reducer. If the reducers were swapped so that the sound effect reducer ran first (before the state was updated) the logic would be flipped.
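To make the ordering dependency concrete, here's a sketch of how the app reducer might be combined (the names `soundEffectsReducer`, `gameReducer`, and the pullback key paths are assumptions based on the snippets above; this assumes TCA's pre-ReducerProtocol `Reducer.combine`, which runs its reducers in order):

```swift
// soundEffectsReducer sees state AFTER gameReducer has toggled `isPaused`.
// Swapping the two entries would flip the paused/unpaused logic above.
let appReducer = Reducer<AppState, AppAction, AppEnvironment>.combine(
    gameReducer.pullback(
        state: \.currentGame,
        action: /AppAction.game,
        environment: { $0.game }
    ),
    soundEffectsReducer // must come second for the branch above to be correct
)
```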

It seems that an advantage of making this a higher-order reducer is that the order of execution would be explicit within the sound effect reducer itself, because it would be responsible for calling the underlying reducer when it chooses. However, I would then need to take care of collecting any effects from the underlying reducer and merging them with my own, potentially making the code more complicated than it needs to be.
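A higher-order reducer along these lines might look like the following sketch (the `run` method comes from the pre-ReducerProtocol TCA `Reducer` struct; `soundEffectPlayer` and the `SoundEffect` cases are assumed names):

```swift
extension Reducer
where State == AppState, Action == AppAction, Environment == AppEnvironment {
    // Wraps `self`, making the ordering explicit: the underlying reducer
    // always runs first, then the sound is chosen from the updated state.
    func playingSoundEffects() -> Reducer {
        Reducer { state, action, environment in
            let effects = self.run(&state, action, environment)
            switch action {
            case .game(.togglePaused):
                let sound: SoundEffect = state.currentGame.isPaused ? .pause : .play
                // Merge our sound effect with whatever the wrapped reducer returned.
                return .merge(
                    effects,
                    environment.soundEffectPlayer.play(sound).fireAndForget()
                )
            default:
                return effects
            }
        }
    }
}
```

The effect-merging the post worries about is the `.merge` call: it's a one-liner here, but every decorated action needs the same care.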

Which approach should I be taking? Or is there a better one I've not considered?

Sounds are side effects the same as any other; I'm not sure what you gain by treating them differently. Why not simply trigger the sound in the reducer that handles the relevant action? That way you don't have to think about sequencing across different reducers; you can simply say, e.g., when the user runs into an alien, decrease lives and play the explosion sound. Put another way, you've already encapsulated the concern of how sounds are played in the environment; it's simply a matter of triggering them at the appropriate times.

Sounds are part of the application UI, so I'm not sure why I'd want to mix those concerns in with my business logic? Is it not reasonable to try and keep them separate?

It's certainly reasonable to separate your UI and business logic. Something I think is a little misleading is that most examples of reducers (including Redux) tend to put business logic in the reducers. There are a number of issues with this: it's hard to reuse the business logic across actions, and, as you've noticed, you end up with UI and business logic mixed together.

The signature for a reducer is, in words, "when this happens update the state to xyz and trigger these side effects". If you use two reducers you lose the single source of truth for that relationship. Now if you're reading the code you aren't able to know for sure what happens when a reducer handles an action because there may be others involved as well.

Rather than splitting UI/business logic into separate reducers, I'd suggest moving your business logic into the State/Model. In the case of your example the business logic is delegated to the model anyway, in which case I'd be very happy with:

case .togglePaused:
    state.isPaused.toggle() // dispatch business logic

    return .concatenate( // side effects (player call reconstructed from the garbled original)
        state.isPaused
            ? Effect.cancel(id: GameClockId())
            : .gameClock(environment: environment),
        environment.soundEffectPlayer
            .play(state.isPaused ? .pause : .play)
            .fireAndForget()
    )

The difference in approaches becomes more obvious when there's more business logic triggered by the side effect/action. What I'm suggesting is that instead of adding more and more logic to the action handler you move that logic into the state so that there's always only a single line of business logic dispatching. This allows you to perform an arbitrary amount of business logic whilst keeping everything encapsulated within your State.
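Sketching that suggestion (the method name and the extra rules are illustrative): the model owns the rules, and the reducer only dispatches and sequences effects:

```swift
struct GameState: Equatable {
    var isPaused = false

    // All pause-related business rules live on the model, so the reducer
    // keeps a single line of dispatch no matter how much logic grows here.
    mutating func togglePaused() {
        isPaused.toggle()
        // ...any additional rules triggered by pausing/unpausing go here
    }
}
```

The reducer case then stays at a single line of business logic, `state.togglePaused()`, regardless of how much the model grows.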

Thanks for your feedback. I'm open to being convinced that this is the way forward, although I'm not there yet.

What led me in the direction I've currently taken is the episode on Point-Free where they moved a cross-cutting concern (the "activity feed") into a reducer of its own: having the original activity feed logic spread across other reducers meant there was no single place to look to understand where things get added to the activity feed. [1]

Moving this all into one reducer made it easier to see the activity feed logic in one place, and I guess my thinking is the same here - by having a single sound effects reducer I can see in one place what triggers sounds in my app. It also made it easier to test the other reducers as I no longer needed to concern myself with orthogonal effects (even though I can obviously just stub them out by passing in a null sound effect player in the environment).


I have to admit I saw that episode and wasn't convinced that it was the right approach. I've been using this architecture with Elm for a few years and I'd make the same arguments there as I did regarding splitting out sounds.

At a very practical level, what we're talking about is how to find code when we want to change it. In the case of a game, I can easily imagine wanting to change how the game responds to a particular event. I can also imagine wanting to see a list of all the sounds used in the game, but that's already available in the enum passed to your play function. I'm not sure when I'd want to see all of the triggers for all sounds, though. In my experience there's great value in having everything that happens in response to an event in one place, and I'd need a very good reason to sacrifice that.

Keeping things practical, you mentioned that you found it easier to test the reducers, but haven't you just traded a single test which covers your game's response for two separate tests that only partially capture your game's response?

We're talking about a functional architecture which has its roots in mathematics. I tend to think of functional code as an equation written on a page: a succinct expression of how one thing maps to another. This is why I favour keeping the response to an action in one place. That's not to say complexity shouldn't be abstracted, but I believe the abstraction should happen in the environment and business domain rather than where you dispatch side effects.

In the end it is just code organisation so there's a lot of subjectivity here. Hopefully that's a useful perspective at least.
