I wanted to share a little more context for one of the reasons why I think the ability to chain Observables is important. The pitch states that one of the problems with Published is that it triggers unnecessary view updates.
"The @Published attribute identifies each field that participates in changes in the object, but it does not provide any differentiation or distinction as to the source of changes. This unfortunately results in additional layouts, rendering, and updates."
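To make the quoted problem concrete, here is a minimal sketch (hypothetical types, Combine-era API): a view that reads only one @Published property is still reevaluated when a different @Published property changes, because every @Published mutation feeds the same objectWillChange publisher.
import SwiftUI
import Combine

// Hypothetical example: two unrelated @Published properties on one object.
final class LegacyMouseModel: ObservableObject {
    @Published var position: CGPoint = .zero   // changes on every mouse move
    @Published var title: String = "Mouse"     // changes rarely
}

struct TitleView: View {
    @ObservedObject var model: LegacyMouseModel
    var body: some View {
        // Reads only `title`, but objectWillChange fires for every @Published
        // mutation, so this body is also reevaluated whenever `position` changes.
        Text(model.title)
    }
}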
I agree. However, the pitched proposal has the same problem but from a slightly different angle.
For example, imagine an Observable that emits rapid-fire user input events:
// Stands in for a real source of rapid-fire mouse events.
// (Conforms to the pitched Observable protocol; tracking machinery omitted for brevity.)
final class MouseObservable: Observable {
    var position: CGPoint = .zero
}
If we want a view that switches its content when the mouse crosses a 100-pixel threshold, we could do this:
struct MouseView: View {
    @State private var mouse = MouseObservable()
    var body: some View {
        if mouse.position.x > 100 {
            ViewA()
        }
        else {
            ViewB()
        }
    }
}
However, this means that the view body will be reevaluated every time the mouse position updates. Every pixel. X or Y. This contradicts the quoted aim of avoiding unnecessary ‘additional layouts, rendering, and updates’.
Even if we were to create a computed property:
extension MouseObservable {
    var isXGreaterThan100: Bool { position.x > 100 }
}
And use that in our view instead – the very ‘additional layouts, rendering, and updates’ that we are trying to avoid still get triggered!
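For illustration, here is the same view reading the computed property. The updates still fire because isXGreaterThan100 reads position under the hood, and position is what access tracking records as the dependency:
struct MouseView: View {
    @State private var mouse = MouseObservable()
    var body: some View {
        // The computed property still reads `position`, so access tracking
        // records a dependency on `position` and re-runs this body for every
        // pixel of movement, even when the Bool result is unchanged.
        if mouse.isXGreaterThan100 {
            ViewA()
        }
        else {
            ViewB()
        }
    }
}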
If there were an Observable API that offered a synchronous, trailing-edge, didSet-based update, it would be possible to work around this using an intermediary Observable. The intermediary can perform direct observation of the mouse and only update its own properties when there’s a relevant change:
final class ViewModel: Observable {
    private let mouse = MouseObservable()
    var isXGreaterThan100 = false

    init() {
        reobserveMouse()
    }

    func reobserveMouse() {
        // ⚠️ This API differs from the one proposed.
        // It uses a 'trailing edge' observation rather than the
        // 'leading edge' observation specified in the proposal.
        withObservationTracking {
            if isXGreaterThan100 != (mouse.position.x > 100) {
                isXGreaterThan100.toggle()
            }
        } didChange: { [weak self] in // trailing edge
            self?.reobserveMouse()
        }
    }
}
Now, isXGreaterThan100 only gets updated when the condition actually changes. In addition, as it’s called synchronously, it’s guaranteed to be in the same animation transaction as any other properties dependent on the mouse position. View invariants are maintained. And of course, the unnecessary ‘additional layouts, rendering, and updates’ are avoided.
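For completeness, the view now holds the intermediary rather than the mouse directly, so its body only re-runs when isXGreaterThan100 actually flips:
struct MouseView: View {
    @State private var model = ViewModel()
    var body: some View {
        // The only tracked access here is `isXGreaterThan100`, which the
        // intermediary only mutates when the threshold condition changes.
        if model.isXGreaterThan100 {
            ViewA()
        }
        else {
            ViewB()
        }
    }
}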
Also, I’m not necessarily wedded to the didSet mechanism being part of the withObservationTracking API – just something that facilitates this capability. Arguably a simpler synchronous mechanism (vs. the previously proposed async mechanisms) for this would be more ergonomic:
final class ViewModel: Observable {
    private let mouse = MouseObservable()
    var isXGreaterThan100 = false
    private var observation: ObservationToken?

    init() {
        // ⚠️ Hypothetical API: observe(_:edges:) is not part of the pitched proposal.
        observation = mouse.observe(\.position, edges: [.initial, .didChange]) { [weak self] in
            guard let self else { return }
            if self.isXGreaterThan100 != (self.mouse.position.x > 100) {
                self.isXGreaterThan100.toggle()
            }
        }
    }
}
In summary, my feeling is that, if the aim of this API is to provide control over how and when view updates are triggered for SwiftUI, we need an API that gives us some control over how and when those observation events are fired. We need to be able to get initial, willSet and didSet values so that we can properly orchestrate property updates, and crucially, we need to receive them synchronously so that we can tie into the current view update cycle, event loop, and animation transaction.
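To make that last point concrete, here is a rough sketch of what a synchronous, edge-configurable observation API could look like. Everything in it (ObservationEdges, ObservationToken, observe(_:edges:_:)) is hypothetical and only illustrates the shape of the capability, not a specific proposal:
import Observation // assumes the pitched Observable protocol is in scope

// Hypothetical: which edges of a change the caller wants to be told about.
struct ObservationEdges: OptionSet {
    let rawValue: Int
    static let initial   = ObservationEdges(rawValue: 1 << 0) // fire once with the current value
    static let willSet   = ObservationEdges(rawValue: 1 << 1) // leading edge, before the mutation
    static let didChange = ObservationEdges(rawValue: 1 << 2) // trailing edge, after the mutation
}

// Hypothetical: cancelling (or deallocating) the token ends the observation.
final class ObservationToken {
    func cancel() {}
}

extension Observable {
    // Hypothetical: synchronously invokes `handler` on the requested edges
    // whenever the property at `keyPath` changes.
    func observe<Value>(
        _ keyPath: KeyPath<Self, Value>,
        edges: ObservationEdges,
        _ handler: @escaping () -> Void
    ) -> ObservationToken {
        // Deliberately unimplemented; this sketch only shows the signature.
        fatalError("Hypothetical API sketch")
    }
}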