[Second review] SE-0395: Observability

That's a good start, but note that it already gets us quite close to the case where bar is just calling foo on its own.

I didn't have anything fancy in mind.
Perhaps this:

func foo(_ x: BinaryTree) { ... }

func bar(_ x: inout BinaryTree, side: BinaryTree.Side) {
  mutateAtWill(&x[side])
  if predicate(x) {
    foo(x)
  }
}

I think you can generalize the pattern:

var s: S = ...
func foo() { someObservation(s) }
func bar(_ p: inout T) { someMutation(&p) }

becomes

func bar(_ s: inout S, path: WritableKeyPath<S, T>) {
  someMutation(&s[keyPath: path])
  if someObservationTrigger(s) {
    someObservation(s)
  }
}
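
To make the shape concrete, here is a minimal, self-contained sketch of that pattern. Counter, Stats, and the trivial mutation/trigger/observation functions are hypothetical stand-ins, not anything from the proposal:

struct Stats { var total = 0 }
struct Counter { var stats = Stats() }

func someMutation(_ value: inout Int) { value += 1 }
func someObservationTrigger(_ c: Counter) -> Bool { c.stats.total.isMultiple(of: 10) }
func someObservation(_ c: Counter) { print("total reached \(c.stats.total)") }

func bar(_ c: inout Counter, path: WritableKeyPath<Counter, Int>) {
  someMutation(&c[keyPath: path])
  if someObservationTrigger(c) {
    someObservation(c)
  }
}

var counter = Counter()
bar(&counter, path: \Counter.stats.total)
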
1 Like

We can, but what's that trigger in practice? Is it the same as

if oldValue != value {
    oldValue = value
    ...

or what?

Obviously we don't want to write triggers manually: they were a constant source of errors, with the trigger and the observation getting out of sync because we'd change one and forget to change the other.

Frankly this starts resembling what we had in UIKit:

func bar() {
    // change model.s somehow
    if true { // simple trigger
        label.text = model.s.a.b[123].c["key"] // manual UI update
    }
}

Whereas for an Observation client like SwiftUI we don't want to write any trigger manually:

func bar() {
    // change model.s somehow
}

struct MyView: View {
    var model = ... // observation magic here
    var body: some View {
        Text(model.s.a.b[123].c["key"])
    }
}
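
For concreteness, a minimal sketch of what that "observation magic" could look like with the @Observable macro under review (AppModel and title are hypothetical names, and the model is assumed to be handed to the view from elsewhere):

import SwiftUI
import Observation

@Observable final class AppModel {
    var title = ""
}

struct MyView: View {
    var model: AppModel
    var body: some View {
        Text(model.title) // the read is tracked automatically; no manual trigger
    }
}
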
2 Likes

I have no horse in that race. I'm just trying to add my own contribution to Dave's comment:

The moment you can name a variable without having it be an argument of your function (or a copied capture), you get reference semantics. Whether or not it's a bad thing is a different question.

If the heart of the problem is to find better ways to write and maintain observation triggers, then I'm sure there's a way to do it in a MVS world and that may be a very interesting research question. Otherwise, I may be tempted to conclude that observation is incompatible with a value-oriented model.

5 Likes

I suspect your example might be misleading. Let's assume this scenario: there are a textfield and a label in the UI. The label displays the text the user enters in the textfield, in upper case.

The pseudo code in UIKit:

func bar() {
   label.text = textfield.input.uppercased()
}

The pseudo code for data model in SwiftUI:

struct Model {
   var inputText: String {
       didSet {
           displayedText = inputText.uppercased()
       }
   }
   var displayedText: String
}

So you need to implement how a change propagates in your data model even in SwiftUI. What SwiftUI automates is propagating data-model changes to the view.

I think Dave's comment is about how to implement the change in the data model: using an observer or a value-based algorithm (I understand the latter as a data-processing flow). In this simple example the observer works fine and an algorithm would be overkill, but in my experience the observer approach doesn't scale.

But why does the observer approach work for SwiftUI? I think it's just used to track changes, not to process them. My guess is that SwiftUI processes changes in a functional way too.
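
For the simple example above, the value-based alternative would be to derive the label text instead of propagating it, e.g. (a sketch):

struct Model {
   var inputText: String = ""
   // Derived on demand, so it can never get out of sync with inputText.
   var displayedText: String { inputText.uppercased() }
}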

I think it's worth considering moving the onChange call to after the subscription cleanup in the withObservationTracking function.

    let values = list.entries.mapValues { $0.addObserver {
      onChange() // move from here
      let values = state.withCriticalRegion { $0 }
      for (id, token) in values {
        list.entries[id]?.removeObserver(token)
      }
      // to here
    }}

It currently works without issues because ObservationRegistrar.State.willSet cancels the subscription for a given keyPath. However, moving the onChange call would make this code more robust.

EDIT: There are two points at which the subscription is cancelled: in withObservationTracking and in ObservationRegistrar.State.willSet. One of them is redundant, e.g. if you move onChange you could remove the cancel call in willSet.
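
For clarity, this is what the closure from the snippet above would look like after the move (the same code, only reordered; it is lifted from the implementation, so it is not runnable on its own):

    let values = list.entries.mapValues { $0.addObserver {
      let values = state.withCriticalRegion { $0 }
      for (id, token) in values {
        list.entries[id]?.removeObserver(token)
      }
      onChange() // now fires after the subscriptions have been cleaned up
    }}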

@Philippe_Hausler Are you and the team aware of any performance issues in the current implementation?

After using things for a few weeks and migrating some model code from Combine and ObservableObject to @Observable I decided to benchmark both approaches and was surprised to see the @Observable version was about 10x slower, and spent 10% of its time in access(_ keyPath:) via _swift_getKeyPath(pattern:arguments:).

I know @Joe_Groff has mentioned in the past that there are still performance wins to be made with key paths:

I'm curious if this performance is to be expected long term, if improvements are in the pipeline, or if it'd be helpful to dive deeper into the problem.
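
For anyone who wants to reproduce this kind of comparison, here is a rough sketch of such a micro-benchmark. The model types, iteration count, and workload are made up (this is not the benchmark referred to above), it only compares instrumented reads against plain stored-property reads, and the numbers depend heavily on the build configuration:

import Observation
import Combine

@Observable final class ObservedModel {
    var value = 0
}

final class LegacyModel: ObservableObject {
    @Published var value = 0
}

let iterations = 100_000
let observed = ObservedModel()
let legacy = LegacyModel()
let clock = ContinuousClock()

// Every read of an @Observable property goes through the synthesized
// access(keyPath:) call, which is where the key-path cost shows up.
let observableReads = clock.measure {
    for _ in 0..<iterations {
        _ = observed.value
    }
}

let publishedReads = clock.measure {
    for _ in 0..<iterations {
        _ = legacy.value
    }
}

print("@Observable:", observableReads, "@Published:", publishedReads)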

16 Likes

The issue with key paths is a known sore spot. This is definitely something that should be explored more, not per se in how observation uses them but, as you found out, in the intrinsics for building key paths, which have a performance implication.

Quite honestly, key paths are the right tool for the job here - they just need to be more refined. However, it is a small hot spot in comparison to the savings of avoiding excessive rendering. The savings from avoiding unwanted rendering vastly outweigh the cost of forming key paths (unless you are in a severely degenerate case).

Any improvements needed to remedy the impact would likely yield distinct wins in many other areas too - so from an engineering standpoint I would think it is something we should consider devoting resources to. If someone is interested in delving into that part of it, I would be happy to share my findings as well as some ideas on how we could reduce the cost to a practically insignificant impact (but perhaps on another thread than the review of observation).

3 Likes

Can you clarify how forming keypaths avoids “excessive rendering”, and what this means for an app that doesn’t use observation for “rendering” (which I presume refers to SwiftUI-style invalidation)? Will non-SwiftUI use cases suffer a net perf regression by switching to @Observable?

@Philippe_Hausler Why did you use thread locals instead of task locals?

The execution of the closure needs to be guaranteed from start to end, meaning that the values must be changed sequentially and non-asynchronously. Furthermore, thread locals are accessible from non-async contexts: e.g. when the property getter (which cannot be forced to be async) is accessed.

That means that to accomplish the tracking we need a scope local (aka thread local), and cannot force non-async getters to reach out to a task local. Plus there is an extra cost on the rendering side that was quite delicate; thread locals in that case compile down to a single machine instruction on some architectures.

Within the scope of the initial task, the changes will occur sequentially. Within the scope of a nested task, the parent value will be shadowed, not overridden. So I don't see where an inconsistency may occur. Am I wrong?

Task locals are also accessible from synchronous contexts. In such cases, they make use of what is called fallback storage and behave similarly to thread locals.
https://github.com/apple/swift/blob/main/stdlib/public/BackDeployConcurrency/TaskLocal.cpp#L79
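
A minimal sketch of that (the Tracking enum and its task-local value are hypothetical and unrelated to the actual Observation implementation):

enum Tracking {
    @TaskLocal static var accessList: [String]? = nil
}

// A plain synchronous function can still read the task-local value.
func trackedRead() -> Int {
    if Tracking.accessList != nil {
        print("tracking is active")
    }
    return 42
}

Tracking.$accessList.withValue([]) {
    _ = trackedRead()
}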

I agree on this. But I also have a pitch in mind to allow more efficient usage of task-local storage. Specifically, one downside of the current task-local system is the requirement to use heap-allocated objects as keys. In many cases this could be avoided.

Well, I'm not entirely sure I can help you there, because I have an issue with the word “magically.” When stuff happens magically that usually means local reasoning has been lost… so I'm opposed to magic in programming. The whole point of value semantics is to retain local reasoning.

If you're willing to settle for non-magic, you could take the approach SwiftUI, Photoshop, the desktop software I developed way back in prehistory—and I'm sure countless other systems—take to this problem:

  1. Make your data structure a value
  2. Use CoW so similar values of the data structure created by small mutations share lots of storage
  3. Make it efficient to compare parts of the data structure for equality and describe the differences between one value and another (didn't Foundation add collection diffing facilities some years ago?)
  4. Take a snapshot of your data structure before making a series of changes
  5. Compare the snapshot with the new state to see what changed and do whatever you need to do in response to those changes.

(This is super convenient if you need to be able to undo changes, because your undo history can just be a series of snapshots)

This is just one way to do it. I'm sure you can think of others.

Notice that steps 3 and 5 involve first-class algorithms that allow you to describe the behaviors of the system coherently rather than as a scattered set of observation actions.
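
As a concrete illustration of those steps (a toy sketch; the Document type is hypothetical, and a real system would use more sophisticated CoW storage and diffing):

struct Document: Equatable {
    var title = ""
    var lines: [String] = [] // Array storage is CoW, so snapshots are cheap
}

var document = Document(title: "Notes", lines: ["a", "b"])

// 4. Snapshot before a series of changes (shares storage with `document`).
let snapshot = document

document.lines.append("c")
document.title = "Notes v2"

// 5. Compare the snapshot with the new state and respond to what changed.
if document.title != snapshot.title {
    print("title changed to \(document.title)")
}
for change in document.lines.difference(from: snapshot.lines) {
    print("line changed:", change) // stdlib collection diffing (SE-0240)
}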

And this is exactly the key. More generally, reference semantics doesn't scale because it undermines local reasoning.

6 Likes

I think the discussion of "why not use observation?" has been very interesting, but this review is certainly not going to result in the removal of reference semantics from Swift. And I highly doubt that, even if the Language Workgroup decides to reject this proposal, SwiftUI, Combine, and Foundation will deprecate all existing forms of observation. So it might be a good idea to split out these posts into a new thread and limit the discussion here to the question of how well this specific revision of the proposal satisfies the assumed goal of adding observation to the standard library.

1 Like

Good idea to move the thread, but, lest my intentions be misinterpreted: the discussion has been about "whether observation fits with value semantics and how to handle the same kinds of problems with pure value semantics," and nobody's suggesting removing reference semantics from Swift. I hadn't planned to say anything in this review and I've just been answering the questions people ask me. Not saying you explicitly said otherwise, but that seems like the implication.

1 Like

I'm finding that property wrappers don't compose with the Observable macro. For example:

@propertyWrapper struct Wrapper<Value> {
	var wrappedValue: Value
}

@Observable class Model {
	@Wrapper var foo = "bar"
}

This results in two compiler errors, "Property wrapper cannot be applied to a computed property" and "Invalid redeclaration of synthesized property '_foo'". Wondering if this is a known issue and if it's a bug or an expected limitation of the macro.

That is an expected limitation.

Will it be possible to support property wrappers in the future? Not being able to use them would be a pretty significant and unfortunate constraint.

4 Likes

Also just stumbled across an inability to use the following construct:

@Observable final class HealthModel {
#if targetEnvironment(simulator)
  private var heartRateSimulationTimer: Timer? = nil
#endif
...
}

which results in: error: cannot find type '_heartRateSimulationTimer' in scope when trying to build for a simulator device. Is this also expected?

Reported as FB12581867

2 Likes

That is a bug for certain - and not expected. It is on my short-list of things to look into remedying.

2 Likes