SE-0475: Transactional Observation of Values

In that case, if the isolation is not specified at the call site, it will be scheduled as if nil were passed to the isolation parameter internally. That means the closure (having no particular isolation) will only be able to capture Sendable things, and should therefore be safe by construction.

That wouldn't be horrible; it would mean that we have parity with the Observed.untilFinished method - however, I don't have a good name for that handy. Filling in those underscores as an alternative might go a long way toward seeing if that is a viable direction.

Names like this were brought up in the initial pre-pitch, however there were objections to calling it "Values" since the returns from the closure may not be "value types"... I'm not fully in that camp myself, but that objection was pretty strongly posed, and I can appreciate that confusion like that could exist.

In that case the construction would be in error if the closure was not called on the main actor (and consequently safe for its later access), because the initializing closure is not asynchronous and inherits the calling isolation - that means that to use something @MainActor, you have to construct one of these with the isolation of the @MainActor.
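A minimal sketch of that constraint, assuming the proposed type (spelled Observed at this point in the review) and a hypothetical @Observable model:

```swift
import Observation

@Observable @MainActor
final class Person {
  var name = "Alice"
}

// The initializing closure is synchronous and inherits the calling
// isolation, so constructing this from a @MainActor context means the
// closure is @MainActor-isolated and may safely read main-actor state.
@MainActor
func nameUpdates(for person: Person) -> Observed<String, Never> {
  Observed { person.name }
}
```

Constructing the same thing from a nonisolated context would be flagged, since the closure would then read main-actor state without main-actor isolation.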

I think that if someone wants to propose a synchronous didSet system, I would be quite willing to collaborate or be a point of reference, since those are explorations I have done. I have particularly focused a lot of the designs for Observation in general on leaving that carveout for didSet available - because I believe it has uses. I think those use cases are perhaps a bit more niche. However, I am quite willing to have a concrete set of uses presented that makes a compelling case that it is not as niche as previously reasoned.

2 Likes

I think that if someone wants to propose a synchronous didSet system, I would be quite willing to collaborate or be a point of reference, since those are explorations I have done. I have particularly focused a lot of the designs for Observation in general on leaving that carveout for didSet available - because I believe it has uses.

Thanks – I appreciate your openness to the idea and the thought you’ve put into leaving space in the system – it would be great to see where you got to with this.

I think those use cases are perhaps a bit more niche. However, I am quite willing to have a concrete set of uses presented that makes a compelling case that it is not as niche as previously reasoned.

While I understand the desire to frame these use cases as niche, I’d argue that the need for synchronous, selective propagation of observable changes is actually representative of a broad and recurring class of UI observation scenarios – especially when dealing with high-frequency or noise-prone data.

The elapsedTime example I gave isn't an isolated occurrence; it's illustrative of a more general pattern that I find myself encountering time and again:

  • Any data source that produces values more frequently than the UI can/should update will benefit from a layer that can observe changes, derive state, and throttle or coalesce updates before marking a view as dirty (and triggering a diff in the case of SwiftUI).
  • This includes domains like media playback, sensor data (e.g. accelerometers, gyroscopes), camera previews (auto-exposure values like shutter speed, etc.), audio streams, and real-time data feeds (e.g. telemetry, health tracking).

In all of these contexts, synchronous didSet-style observation can be used to:

  1. Transform and filter raw data into derived properties (e.g. roundings, thresholds, meaningful ranges),
  2. Control when a property is considered meaningfully “changed” for UI redraws or synchronising animations,
  3. Build layered or composable observables, where one observable object wraps and observes another emitting changes only when semantically necessary,
  4. Remove reliance on delegates or callbacks, promoting consistency and allowing for more composable designs.

This is a foundational capability for building predictable, performant, and modular UIs - whether working with SwiftUI's diffing engine or in conjunction with some other UI framework.
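For what it's worth, points 1-3 can be sketched with today's @Observable macro alone - the synchronous hook here is just an ordinary method called from the raw source's update path (the type names and rates are hypothetical):

```swift
import Observation

// Wraps a high-frequency time source and only republishes when the
// derived value changes meaningfully, so observers are notified (and
// views marked dirty) far less often than the source updates.
@Observable
final class PlaybackViewModel {
  // Changes at most once per second, even if the source ticks at 60 Hz.
  private(set) var displaySeconds: Int = 0

  // Called synchronously from the raw source's update path
  // (e.g. a periodic time observer on a media player).
  func rawElapsedTimeDidChange(_ elapsed: Double) {
    let rounded = Int(elapsed.rounded(.down))
    if rounded != displaySeconds {   // filter: only meaningful changes
      displaySeconds = rounded       // the only mutation observers see
    }
  }
}
```

The transform/filter happens synchronously at the point of mutation, which is exactly the didSet-style layering described above.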

6 Likes

I never worked too closely on JS infra stacks, but AFAIK the original Flux and Redux implementations that shipped around ReactJS did leverage synchronous "did set" semantics for marking components as dirty when application state changed. The actual code to then transform that dirty flag into a new component computation might be an asynchronous operation from the POV of the UI infra system, but that's a different problem than what I believe is brought up here.

Very high-frequency updates are historically one of the long-standing pain points of declarative and asynchronous UI layout engines. ReactJS evolved tools to help work around this over time, but it traditionally was not the original problem that ReactJS was built to solve.

There have been concerns raised around the naming of the type/API as proposed and the LSG is interested in hearing a bit more exploration in that regard. Several reviewers have suggested that spelling this as Observed._____ would be appropriate, but as Review Manager I would encourage reviewers to suggest (and argue for) concrete options rather than mere placeholders. It is difficult to evaluate feedback of this type that doesn't provide a compelling alternative.

In another vein, some reviewers have suggested that the type name itself ought to be different. The LSG is interested in hearing additional ideas in this category as well.

To give time for this additional discussion, the review of this proposal is extended until one week from today, through May 6th. Thank you to everyone who has participated so far!

1 Like

Thanks, Freddy.

I want to tease apart two different questions about the naming here. The first is a very high-level, almost philosophical, design question about styles of observation and whether it's reasonable to prefer one by giving it such a broad name as "Observed". That's been talked about in this review already; I'm not totally sure where I stand on this yet, and I would be interested in seeing more discussion about it. The other question is just about style and consistency, and in this post I want to focus on that.

I don't think there's any precedent in the standard library for an API that looks like Observed { foo.bar }. This is calling an initializer for the type Observed<T>. Now, we generally expect type names to be descriptions of their values. Adjectives can be okay as names for generic types, but that's because we expect them to typically be written together with a type argument, such that the whole phrase then works as a description of the value. For example, an Optional<String> really is an optional string, so it's a good description. But with Observed, we have two problems:

  • First, we don't expect the type argument to be written out in the primary idiomatic use of this type; we end up with just Observed by itself, and it looks weird.
  • Second, an Observed<Int> is not any kind of Int. It's a stream of values produced by evaluating a function that returns an Int, with some observation magic along the way. So it's not a good description of the value at all.

I think a more conventional way to design this API would be to use a function, named with a lowercase verb phrase as usual, that returns a type whose name describes what it is. So that would be something more like:

func observe<T>(operation: @escaping @isolated(any) () -> T)
  -> AsyncObservationSequence<T>

or perhaps AsyncObserver<T> or something along those lines.

11 Likes

The proposed Observed initialiser does remind me of the Task initialiser:

let result = Task {
    try await Task.sleep(nanoseconds: NSEC_PER_SEC)
    return true
}

I do like the expressiveness of a global function.

1 Like

I worry that a global function will end up underused (like sequence) because it’s harder to discover. But maybe there is a way to ensure these APIs get the attention they deserve?

1 Like

I like this name, although it looks too general. I think if there's the possibility of future async observation sequences with a similar API but different behavior, the best name is right in the title: TransactionalObservationSequence.

1 Like

Based on my own experience designing observation apis, I would be OK to read and write:

let names = AsyncObservationSequence.tracking {
  "Hello \(person.name)"
}

This can be extended with similar names or arguments:

// Start with the current value on the first call to `next()`
let names = AsyncObservationSequence.trackingWithInitialValue {
  "Hello \(person.name)"
}

// Some policy that avoids some kind of undesired dropped values
let names = AsyncObservationSequence.tracking(bufferingPolicy: ...) {
  "Hello \(person.name)"
}

I'll remind readers that the "coalescing (1)" of the proposed sequence is probably not the "coalescing (2)" that many users expect.

With the proposed sequence, it's completely possible that an application user stares at a screen that displays an obsolete value for an infinite amount of time. If some new change never happens, the "coalescing (1)" can't fix this situation.

Many developers would call this a bug, and it lies in the fact that the proposed sequence does not implement "coalescing (2)", the coalescing that never leaves an iterator with an obsolete value for too long (async subtleties come into play here, but basically the iterator should get the latest value as soon as possible, even if the iterator was idle when the transaction for that change was completed).
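To make the distinction concrete, here is a minimal sketch of "coalescing (2)" semantics using AsyncStream's .bufferingNewest policy - this is my illustration of the alternative behavior, not something the proposal offers:

```swift
// A stream that retains only the most recent unconsumed value.
let (stream, continuation) = AsyncStream.makeStream(
  of: Int.self,
  bufferingPolicy: .bufferingNewest(1)
)

continuation.yield(1)
continuation.yield(2)
continuation.yield(3)  // 1 and 2 are coalesced away; only 3 is retained

let consumer = Task {
  // The iterator was idle during the yields above, yet it still
  // receives 3 immediately - the latest value is never lost, which
  // is the property "coalescing (2)" guarantees.
  for await value in stream {
    print(value)  // 3
    break
  }
}
```

Under "coalescing (1)", by contrast, an idle iterator that missed those yields suspends until a further change happens - which may never come.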

3 Likes

I agree that the name Observed is a bit of an odd duck; however, during the initial drafting and in the first pitch no real alternatives stood out. Naming it Async*Sequence seems very redundant; we don't name types after their protocol unless there is some good reason to do so - either they are in the default import and need specificity, or they conflict with a well-known name that is either in the default import or in a very common import (such as Foundation).

One name that did come up during the initial design was Observation, but... that suffers the fate of names that clash with the module name. So that directly is out, but there is a potential name of AsyncObservation; it is less redundant than also slapping a "Sequence" on the tail, but more specific than saying this is a synchronous didSet-based observation (which has been raised again as a desire) - so it does provide some specificity.

The global function has the problem of not being extremely discoverable from autocomplete, but would likely be spelled as func observe<Element, Failure: Error>(operation: @escaping @isolated(any) () throws(Failure) -> Element) -> some AsyncSequence<Element, Failure> & Sendable instead. That would make the public surface area of the interface just two functions: observe and observeUntilFinished. This was a considered avenue, but was only slightly disfavored because this specific AsyncSequence is a bit special in its role/behavior: this one is OK to store, and having a concrete type makes it more approachable for that.

8 Likes

I have no particular attachment to the exact names I suggested, just to the way they follow the existing API naming style.

That said, I have to note that I ended up suggesting AsyncObservationSequence by first asking the question "Well, how do we name existing AsyncSequence implementations in the standard library" and immediately finding AsyncCompactMapSequence, AsyncDropFirstSequence, AsyncDropWhileSequence, AsyncFilterSequence, AsyncFlatMapSequence, AsyncMapSequence, AsyncPrefixWhileSequence, and of course the Throwing versions of all of the above. So I don't entirely buy the "we don't name types by their protocol" claim. But I can see how those might be in their own category in a way.

8 Likes

With my review manager hat off, I do like that this top-level function approach allows us to pull apart "what is a good, pithy way for us to declare an observed expression" and "what is a good name for describing the sequence of values that is produced". In many cases I think there's enough overlap for initializers to serve double duty, but I think there's a decent argument that between the inference of generic arguments and the omission of argument labels for trailing closures, we'd be better off splitting these two concerns apart.

I wouldn't hate if users had to write something like:

AsyncObservation { ... }

but given that users will already have to import Observation to get these features at all, I think it's okay to use top-level names that we might otherwise think of as too general.

I also kind of like Observations as a 'double-duty' name here—while I'm not sure if we have precedent in the standard library for this sort of plural naming, it's in line with the AsyncBytes types in URL and FileHandle.

3 Likes

Following up here on some of the feedback - after some thinking, I agree the naming is a bit odd as Observed, and Observations is a decent alternative that I think is acceptable.

Additionally, there is a behavioral tweak that should be added to avoid sequences that will never produce a value; the change is somewhat subtle - it alters the initial-value production rule to apply to all creations of iterators. This means that if two iterators are created before a value is produced, then they both fetch the initial value. Practically speaking, this is the "pump priming" behavior now applied per iterator, based on whether that iterator has started or not. This ensures that any iteration will always start from a value produced on the isolation of the closure, and then observe subsequent changes from that closure. That does not change the expected convergence to eventual consistency that currently exists in the proposal, but starts everything off on an equal footing no matter whether there is one iterator or many. This does, however, have a potential impact if there are additional side effects from calling that closure.

The proposal was updated for both of these changes, and an additional section was added to the example behaviors to illustrate an exaggerated case where there may have been an issue in the past but which is now consistent with expectations.

5 Likes

Thanks, @Philippe_Hausler!

Since we are right at the close of the review period I am going to extend the review for another week to allow consideration of these late-breaking changes, through May 13th.

4 Likes

I see an issue mentioned earlier in this thread that seems to have gone unanswered. There is also a second scenario I would like to see addressed in the behavioral notes in order to make the expected behavior entirely clear to potential users.

Dependency changes while processing values break iteration

I feel the issue raised in SE-0475: Transactional Observation of Values - #9 by jamieQ - iteration breaking down when the producer outpaces the consumer - seems to have been skipped over; at least I didn't see it addressed in this thread or in the proposal.

If the behavior (both intended and implemented) is one of eventual consistency, then I think it's important the behavioral notes illustrate that.

The current example in the proposal shows the last value having been skipped. I think this particular behavior will create issues: eventual consistency implies that the latest value will always be received, and that is not what is happening here.

If the last value is skipped because the consumer is aborted before receiving it, then I'd argue this should be mentioned explicitly; but it's also a bad example of eventual consistency, as we don't see the latest value reach the consumer.

So I think the second example in the behavioral notes should be updated so that a value other than the last one is skipped.

I think we should also show an actual example of eventual consistency between consumers, meaning a case where Consumers A & B do not receive all values, but both receive the last one.
For example, Producer produces 0, 1, 2, 3, 4, 5, 6, 7. Consumer A receives 0, 1, 2, 4, 6, 7. Consumer B receives 0, 2, 3, 4, 6, 7.

In particular, I think it should be impossible for the implementation to silently drop the last value (or a later one).

Initial value delivery for consumers starting iteration at different points in time

The second point I haven't seen mentioned up to now is how this intends to deal with consumers starting iteration at different points in time. Naively (and assuming no value gets skipped because of an outpaced consumer), I would expect this kind of scenario:

  1. Initial value: 0
  2. Consumer A starts iterating (receives 0)
  3. Producer sets 1, 2, 3, 4 (Consumer A receives 1, 2, 3, 4)
  4. Consumer B starts iterating (receives 4, because it is the initial value at that point)
  5. Producer sets 5, 6, 7, 8 (Consumers A and B both receive 5, 6, 7, 8)

Is this what happens? I think it would be important to provide an example of this in the behavioral notes, if only to emphasize that there is no buffering/replay going on beyond the current initial value.

3 Likes

It was answered to some extent by the examples, but unfortunately I cannot enumerate all potential examples (and the listed behaviors in the proposal are perhaps a bit long in the tooth already).

The major issue to consider here is that the observation framework cannot and does not offer semantic atomicity, nor does it offer any mechanism to avoid blocking systems that prevent forward progress, so it is at the whims of how the program behaves. Concretely: if you do something unsafe like access a property outside of a lock from multiple isolations, that is not made any less dangerous than it was before; likewise, if you block asynchronous work with a semaphore, bad things will happen. It is clearly good practice to avoid situations like that.

Moving on to the behavior to make it clearer: the AsyncSequence Observations behaves as outlined in the list below.

  1. The construction - this captures, but does not immediately execute, a closure for generating new emissions of values. That closure presumes it will be called on the same isolation (if specified) as it was created on; if there is none, then it will be executed wherever (read: the global executor).
  2. The first time next is called on an instance of an iterator, withObservationTracking is called on the captured isolation, with the closure captured on initialization inside it. That means that the first time next is called on an iterator, it will call over to the isolation and produce a value without awaiting a change. This ensures ALL iterators get at least one value.
  3. Subsequent calls to next will await the willSet previously set up by tracking, which is presumed to be safe by the enclosing @Observable type. Once that fires, it will enqueue a call to the previously captured isolation. That means all further willSets (or didSets) will be subsumed until the next suspension point. Then, on that isolation, it will call withObservationTracking and return the element produced.
  4. The cycle then repeats back to step 3 until termination.
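A rough sketch of that cycle using the public withObservationTracking API (heavily simplified, and my own illustration rather than the actual implementation: the real type transfers to the captured isolation, handles cancellation and failures, and does not buffer the way AsyncStream does):

```swift
import Observation

// Hypothetical illustration of steps 2-4: evaluate under tracking,
// emit the element, and re-arm a one-shot willSet callback.
func track<Element>(
  _ emit: @escaping @Sendable () -> Element
) -> AsyncStream<Element> {
  AsyncStream { continuation in
    @Sendable func produce() {
      // Any @Observable property read inside `emit` is registered
      // for a single onChange (willSet) callback.
      let element = withObservationTracking {
        emit()
      } onChange: {
        // Changes after this fire are coalesced until tracking is
        // re-armed by the next call to produce(). The real type
        // enqueues this onto the captured isolation rather than
        // recursing directly.
        produce()
      }
      continuation.yield(element)
    }
    produce()  // step 2: the first value is produced without awaiting a change
  }
}
```

The one-shot nature of onChange is what makes re-arming (and the timing of the re-arm relative to further mutations) the crux of the behavioral questions in this thread.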

This means that we have a few fallout behaviors:

  1. All instances of Observations will always get at least 1 value, no matter the mutations that occur.
  2. All cases will eventually reach a value (if a mutation validly occurs), provided nothing is stalled out by usage of things like semaphores - as long as the rest of the program makes forward progress, the mutation events will too.
  3. Values produced are always accessed in the scope of the isolation of their construction (provided one is specified). That means that any value obtained this way will always be as safe as it is anywhere else in the program.
  4. Crossing boundaries will still be somewhat tricky, since that relies on the behavior produced by the constructing object and its sendability.

The previous examples that showed a "last" value being dropped are examples to show the minutiae, and not per se the "last-ness" of the value. It is more that a value might be coalesced into the next change.

Comparatively, this approach gives a very comprehensively useful version of the willSet transactions, but it should also be understood that it is not aiming to be a buffering system that provides perfect replication of all values. That is a different tool with differing design considerations. No system like this can be universal; there are variations that could be designed differently while still following the same behavioral constraints provided by the type.

In short, the rules that this proposal follows grant an ability to make a system very similar to SwiftUI without using SwiftUI. There are potential future directions for other approaches - not directly part of the future directions of this particular pitch - that could include things like buffering, custom coalescing, didSet-triggered events, etc.

I did put up a small extracted version of this here; I think playing with it might give some insight. As with any software it may have bugs, but so far this implementation has been quite robust and seems to follow the rule-set that I set forth earlier in this response.

Your outline of events is an accurate example of how the initial value exists.

4 Likes

to make some of the prior questions & concerns about the current implementation more concrete, here is a test case added to the sample implementation package demonstrating how the production-outpaces-consumption scenario can render the sequence non-functional[1] (example reproduced below):

import Observation
import ObservationSequence
import Testing

@Observable
@MainActor
final class N {
  var value = 0

  func increment() { value += 1 }

  var squares: Observations<Int, Never> {
    Observations { self.value * self.value }
  }
}

// adjust for different sequence behaviors
let productionRate = Duration.milliseconds(250)
let consumptionRate = Duration.milliseconds(500)

@MainActor
@Test(.timeLimit(.minutes(1)))
func testproducerOutpacingConsumerBreaksObserved() async {
  let numbers = N()
  let squares = numbers.squares

  let maxIters = 10
  var observedValues: [Int] = []

  // enqueue iteration to consume sequence
  let consumingTask = Task { @MainActor in
    for await square in squares {
      print("observed value: \(square)")
      observedValues.append(square)
      try? await Task.sleep(for: consumptionRate)

      if numbers.value >= maxIters {
        break
      }
    }
    print("consumer completed")
  }

  while numbers.value < maxIters {
    print("producer incrementing value to: \(numbers.value + 1)")
    numbers.increment()
    // if production outpaces consumption, the sequence breaks
    // and no longer produces any subsequent values despite the
    // 'data source' continuing to change
    try? await Task.sleep(for: productionRate)
  }

  // wait for consumer to complete
  _ = await consumingTask.value

  #expect(true)
}

/*
test log output:

◇ Test testproducerOutpacingConsumerBreaksObserved() started.
producer incrementing value to: 1
observed value: 1
producer incrementing value to: 2
producer incrementing value to: 3
producer incrementing value to: 4
producer incrementing value to: 5
producer incrementing value to: 6
producer incrementing value to: 7
producer incrementing value to: 8
producer incrementing value to: 9
producer incrementing value to: 10
✘ Test testproducerOutpacingConsumerBreaksObserved() recorded an issue at ObservationsTests.swift:23:2: Time limit was exceeded: 60.000 seconds
*/

with this setup, the sequence only ever emits a single value and is then 'stuck' awaiting subsequent observation callbacks that never occur. swapping the timing rate parameters causes all values to be consumed, but the behavior in general is non-deterministic.

i think what is missing from the behavioral description above is clarification on how the sequence will handle observation-tracked changes occurring while an iterator is processing an element and has not yet awaited the next 'willSet' trigger.


  1. i had to make some minor alterations to get the package to work with Xcode 16.0/macos 14, but i think those shouldn't affect the substance of the issue ↩

5 Likes

Unfortunately, I think the current implementation is subject to a serious race condition and shouldn't be merged.

Any changes that occur in the producer while the iterator is between calls to next are lost. I described the scenario in which this occurs during the pitch phase, here, but you can also reproduce the same problem by adding a try? await Task.sleep(for: .seconds(1.0)) after the print line in the first example in the Behavioral Notes section of the proposal - all values after the first will be lost, and no matter how long the iterator waits, it will never receive another value.

I think this is the same problem discussed by @jamieQ in the previous reply.

The proposal mentions values that "slip between isolations" – which would be fine if these were simply aggregated. But from the point an iterator is constructed, it should always know that some kind of value change has occurred, so that when next is called again, it can take the "prime the pump" path. Instead, the iterator is completely unaware and potentially every value in the sequence after the first may be lost.

I'm not sure how you'd maintain tracking of value changes between calls to next, but without it, I can't see this being usable. It is just a deadlock waiting to happen.
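One hedged sketch of what "tracking changes between calls to next" could look like - a latch that the onChange callback sets and that next() would consult before suspending (the names and the Mutex-based shape are mine, not the proposal's):

```swift
import Synchronization

// Shared between the observation onChange callback and the iterator.
// If a change lands while the consumer is busy processing the previous
// element, the latch records it, so the next call to next() takes the
// "prime the pump" path instead of suspending forever.
final class ChangeLatch: Sendable {
  private let dirty = Mutex(false)

  // Called from onChange (willSet), on whatever isolation it fires.
  func markChanged() {
    dirty.withLock { $0 = true }
  }

  // Called by next(): returns true (clearing the flag) if a change
  // occurred since the last element was produced; next() only awaits
  // a fresh onChange when this returns false.
  func consumeChange() -> Bool {
    dirty.withLock { value in
      defer { value = false }
      return value
    }
  }
}
```

The subtle part is making the arm/check/suspend sequence race-free, which is presumably where the implementation bug mentioned below lived.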

8 Likes

FYI, this is an implementation bug and I have a fix posted. Again, that is well beyond the proposal part of this review and is just an implementation detail (an important one), and I think this discussion is kind of detracting from the design considerations of the review. If you would like, posting more tests like this would definitely aid in tracking down any other remaining bugs.

3 Likes

I was asked privately if the recent proposal updates have addressed my concerns.

I welcome the addition of the initial value, yes :+1:

I'm not against the proposal or the proposed sequence. It may have its use cases. But I will probably never use it, and I will strongly advise against it. Blog posts will welcome the new observation sequence of the stdlib, but sooner or later, bug reports will flourish: "observation does not work!" People won't know how to fix the bugs. And the stdlib authors will be entitled to discard the bug reports with "behaves as expected". A feedback deadlock.

No. A SwiftUI view never remains stuck displaying an obsolete value.


To understand the problem, consider that the proposed sequence has no buffer. That's written in the proposal. This was mentioned in the review thread. Let's draw conclusions:

When the next() method of an iterator is called, the only value it can get immediately is the initial value. Good. After this initial value has been delivered, calling next() cannot get the last known value, because that value was not buffered. It can only suspend until a future change happens. Ergo, the last known value is never delivered, and the last value returned by next() is an obsolete one. The application user is staring at an obsolete value on screen until the value changes again (which may happen, or not).

All you have to do to enter this scenario is modify the value while the consumer task is busy handling a value (i.e. is not awaiting in next()):

let names = Observations { person.name }
for await name in names {
    await display(name)
}
  1. next() is initially called, and returns the initial name.
  2. display() is called with the initial name (and has not returned yet).
  3. name is modified.
  4. display() returns.
  5. next() is called, and awaits the next value.

At this point, the app is displaying the obsolete initial value, and will never display anything else until the name is modified again. This goes against most expectations, will thus trigger bug reports, and that's why I call this a bug.

To simplify my demonstration, I assumed everything has the same isolation, and used await display(name) so that the value is modified concurrently with its usage. A precise study of for await name in names { display(name) } (without await), in various isolation contexts, is left to the careful reviewer.


In the end, please DO NOT ship this proposal as it is. It is an exemplar of a missed target.

My suggestions are:

  1. Keep the proposed sequence, but warn that it has nothing in common with SwiftUI and document clearly how to avoid the bug (if it is even possible).
  2. Fix the bug right away, and adjust the proposed sequence so that it buffers the last known value. Challenge the Sendable conformance of the sequence if it goes against the runtime expectations I have outlined.
  3. Challenge the multiple iterations of the same sequence. I'm not sure this is needed, and once shipped, you won't be able to remove it in order to apply the desired adjustments efficiently.
13 Likes