SE-0475: Transactional Observation of Values

Hello Swift community,

The review of SE-0475: Transactional Observation of Values begins now and runs through April 24th, 2025.

Reviews are an important part of the Swift evolution process. All review feedback should be either on this forum thread or, if you would like to keep your feedback private, directly to me as the review manager by forums DM or by email. When contacting the review manager directly, please put "SE-0475" in the subject line.

Trying it out

Toolchains including the implementation of this feature are available for macOS, Windows, and Linux.

What goes into a review?

The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of Swift. When writing your review, here are some questions you might want to answer:

  • What is your evaluation of the proposal?
  • Is the problem being addressed significant enough to warrant a change to Swift?
  • Does this proposal fit well with the feel and direction of Swift?
  • If you have used other languages or libraries with a similar feature, how do you feel that this proposal compares to those?
  • How much effort did you put into your review? A glance, a quick reading, or an in-depth study?

More information about the Swift evolution process is available at:

swift-evolution/process.md at main · swiftlang/swift-evolution · GitHub

Thank you for your participation!

Freddy Kellison-Linn
Review Manager

12 Likes

Looking pretty good to me, but I am trying to wrap my head around what this means:


let names = Observed { person.firstName + " " + person.lastName }

Task.detached {
  for await name in names {
    print("Task1: \(name)")
  }
}

Task.detached {
  for await name in names {
    print("Task2: \(name)")
  }
}

In this case both tasks will get the same values upon the same events. This can
be achieved without needing an extra buffer since the suspension of each side of
the iteration are continuations resuming all together upon the accessor's
execution on the specified isolation. This facilitates subject-like behavior
such that the values are sent from the isolation for access to the iteration's
continuation.

So, unless I have a fundamental misunderstanding of "physics", something like this either

  1. drops values
  2. applies back-pressure (ie: somehow limit the amount of changing)
  3. buffers

I don't see how it would back-pressure (you can set stuff on the source freely whenever your "isolation" gets scheduled), and the proposal says there is no buffering.

That would imply that this sentence

In this case both tasks will get the same values upon the same events.

should probably read "the same events mostly, sometimes things do be dropping, depends on scheduling and threading, decisions had to be made".

Or am I not seeing something? There is no guarantee that each observer makes it back to .next() in time, right?
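To make the three options in my list concrete, here is a sketch using AsyncStream's buffering policies (purely illustrative; this is not how Observed is implemented, just stdlib machinery that exhibits the "buffer" vs. "drop to newest" behaviors):

```swift
// Illustrative only: AsyncStream buffering policies model two of the
// three options. The producer runs to completion before the consumer
// starts, so intermediate values either buffer or get coalesced away.
func collect(policy: AsyncStream<Int>.Continuation.BufferingPolicy) async -> [Int] {
    let stream = AsyncStream<Int>(bufferingPolicy: policy) { continuation in
        for v in 1...3 { continuation.yield(v) }  // three rapid "changes"
        continuation.finish()
    }
    var seen: [Int] = []
    for await v in stream { seen.append(v) }
    return seen
}

let buffered  = await collect(policy: .unbounded)          // option 3: [1, 2, 3]
let coalesced = await collect(policy: .bufferingNewest(1)) // option 1: [3]
```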


EDIT: ok, it seems I jumped the gun a bit ; ) there is a whole section further down explaining the details. I shall digest the whole thing first, please excuse my trigger-happiness.

1 Like

I’m -1 on this proposal as it currently stands, as I’m not convinced asynchronous sequences will produce the expected behaviour, particularly in the context of UI updates and animation transactions.

The core problem this proposal attempts to address is important, and I’m very much in favour of solving it. However, packaging changes into a transaction without solving the timing issue only gets us halfway there.

This system would introduce a two-tier observation model:

  1. Synchronous observation on the leading (willSet) edge.

  2. Asynchronous observation on the trailing (didSet) edge.

For synchronising with the underlying animation engine, it’s critical that willSet is called first, so the UI engine can snapshot the before state of the interface. This enables it to animate to the after state at the end of the current event loop cycle.

The problem is that asynchronous sequences guarantee delivery after the current loop cycle. This makes the exact moment that observations arrive indeterminate—regardless of whether they’re batched in a transaction or not.

The result for end users will be broken invariants, unpredictable updates, and out-of-sync or janky animations.

What I’d really like is access to didSet behaviour that is synchronous and immediate, so I can respond to changes and update dependent properties inside the same UI transaction. That’s the model Combine (and other Rx-style frameworks) has supported for good reason.
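For reference, plain Swift property observers already deliver on the same call stack, which is the shape of behaviour I'm asking for here. A minimal sketch (the Person type and onChange callback are hypothetical, not from the proposal):

```swift
// Property observers fire synchronously, before the setter returns —
// no run-loop hop, so dependent state can be updated in the same
// UI transaction.
final class Person {
    var onChange: ((String) -> Void)?
    var firstName: String = "Jane" {
        willSet { /* a UI engine could snapshot the "before" state here */ }
        didSet { onChange?(firstName) }  // runs before the assignment returns
    }
}

var log: [String] = []
let p = Person()
p.onChange = { log.append($0) }
p.firstName = "John"
// log already contains "John" on the very next line — synchronous delivery.
```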

9 Likes

In the Behavioral Notes section, I read:

This case dropped the last value of the iteration [...]

I don't see anything in the sample code that stops the iterating task. I find it hard to imagine any situation where an observer missing the last value is acceptable.

I have no problem with dropped values. But dropping the last one means that the observing part of the program remains stuck with an obsolete value. This looks like a failed observation to me, a betrayal of user expectations, a serious bug.

Maybe it is possible to use the observation api in a way that guarantees that this problem is avoided. But I would suggest amending the proposal or its implementation so that it can never happen in the first place.


When designing the GRDB api for SQLite observation, I also wrote behavioral notes, in the section ValueObservation Behavior. I took care of mentioning:

ValueObservation may coalesce subsequent changes into a single notification.

This "coalesce" word was chosen so that the reader understands that even if some values may be dropped, an observer that does not explicitly stop observing will never remain stuck with an obsolete value.

It was my "duty", as an api implementor, to come up with a mental model of the api user, and to make sure the implementation fits the expectations that come with this mental model.

In summary:

  • Coalescing changes is OK.
  • Leaving an observer stuck with an obsolete value is a serious issue IMHO.
11 Likes

This proposal does not aim to address synchronous, immediate didSet behaviors; that is a different set of requirements and a different use case. Neither precludes the other.
I don't think either this or any didSet system, asynchronous or synchronous, will ever solve 100% of all cases; there are certain usage patterns for certain cases. This proposal aims to solve a good amount of non-SwiftUI uses. There are other existing solutions, like property observers (adding willSet and didSet to your properties); this proposal does not remove those and is additive to their existence.

The "dropping behavior" is practically this: values themselves are not per se dropped; rather, it is the willSet that may not be serviced in time. The value is still coalesced toward eventual consistency, so an observer will never be "stuck" with an obsolete value (unless the consumer is somehow blocking execution and never calls next, in which case it is on them to get unstuck).
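A toy model of what "coalesced to eventual consistency" means here (this is my own sketch, not the Observed implementation): intermediate values may be skipped while the consumer is busy, but the last value written is always delivered once the consumer catches up.

```swift
// A coalescing "latest value" box: only the newest undelivered value
// survives, so a catching-up consumer can skip intermediates but can
// never end up stuck on a stale one.
struct CoalescingBox<Value> {
    private var latest: Value
    private var dirty = false
    init(_ v: Value) { latest = v }
    mutating func send(_ v: Value) { latest = v; dirty = true }
    mutating func take() -> Value? {
        guard dirty else { return nil }  // nothing new to deliver
        dirty = false
        return latest                     // newest value wins
    }
}

var box = CoalescingBox(0)
box.send(1); box.send(2); box.send(3)  // consumer was busy; 1 and 2 coalesce away
let v = box.take()      // 3 — eventual consistency, not stuck on 1
let again = box.take()  // nil — no phantom redelivery
```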

1 Like

That’s fair – and I completely agree that this proposal targets a different set of requirements. But I do think it’s worth highlighting a practical concern: in the absence of a complementary solution for the synchronous/immediate case, many programmers will reasonably expect Observable to support that use case out of the box, especially in UI or animation-driven contexts.

From the outside, an Observable type implies a reactive model. In that model, users typically expect change notifications to be:

  • Deterministic in delivery timing,
  • Delivered during the same run loop, and
  • Safe to react to immediately, in ways that maintain invariants or animate coherently.

Because this proposal introduces trailing-edge asynchronous delivery, it breaks those expectations in subtle ways – and that can be a foot gun, particularly when used with UI frameworks or state machines that rely on fine-grained state coordination.

Absolutely – but those solutions don’t scale well to multi-consumer cases, which are exactly what an Observable abstraction is designed to support. In fact, many developers reaching for Observable will be doing so because they want something more composable and scalable than didSet or delegates.

To be clear: I think this proposal solves a real problem. But I worry that if it lands without a complementary solution for the immediate/synchronous case, it may create confusion or disappointment for programmers who expect Observable to behave more like Combine or Rx-style systems. Ideally, the language would offer both edge-triggered sync observations and async coalesced transactions – clearly documented and differentiated.

8 Likes

I want to understand how the proposal ensures that (1) mutations on the observed object and (2) element emission happen in the same isolation domain. I think this is crucial to avoid the tearing problem.

The current API accepts an @isolated(any) non-async closure:

public struct Observed<Element: Sendable, Failure: Error>: AsyncSequence, Sendable {
  public init(
    @_inheritActorContext _ emit: @escaping @isolated(any) @Sendable () throws(Failure) -> Element
  )
}

so the user can write such code:

final class Object: Sendable {
    let a = Mutex<Int>(0)
    let b = Mutex<Int>(0)
}

let object = Object()
func observe() {
    _ = Observed { @MainActor in
        object.a.withLock { $0 } + object.b.withLock { $0 }
    }
}
   
nonisolated func mutate() {
    object.a.withLock { $0 += 1 }
    object.b.withLock { $0 += 1 }
}

Here the mutation is nonisolated while the emission is main-actor isolated. It is then possible for the observation to read a mid-mutation (torn) object.

This example does not leverage any unsafe language features. Am I missing something here?

If a and b are intended to be conjoined, then their mutations should be conjoined as well in a Sendable type. Sendable does not have any real restriction that prevents this (because the issue is semantic, not syntactic), and such code violates the expectations of Sendable types. Observed does not change that this is a bug, since it does not add any additional synchronization mechanism; all it brings is its own internal synchronization and transactionality.

1 Like

i think the proposal provides some interesting ideas that are probably useful in certain contexts, but the guarantees the type aims to provide around value delivery still seem uncomfortably vague. additionally, the implementation still has some undesirable behaviors and it's unclear if/how they will be handled.

the following are, in no particular order, some further thoughts on the design and current implementation. some are similar to those raised in the pitch thread and implementation PR, but i think are worth repeating here:

Multi-consumption is prone to race conditions

the proposal states that if there are multiple consuming Tasks, then:

... tasks will get the same values upon the same events.

with the current implementation, this seems like behavior that cannot in general be upheld. suppose we have an Observed closure that produces increasing integer values v_i, and two iterators, I_1 and I_2, that both consume the sequence's values. assume the source closure invocations and iteration all occur on the same actor (let's say @MainActor). the following sequence of events can occur:

  1. I_1 & I_2 both suspend awaiting a 'will set' event
  2. the Observed input value increments to v_1, scheduling resumption of the iterators
  3. a second increment of the input to v_2 is asynchronously scheduled on the source isolation
  4. I_1 resumes and reads v_1
  5. the Observed input value increments to v_2
  6. I_2 resumes and reads v_2

so in this case, the two iterators will see different values, despite the fact that all events take place on the same isolation, and they were initially suspended awaiting the same 'will set' trigger.

Multi-consumer initial value delivery

currently only one iterator will get the initial value of the sequence. this was raised a couple times in the pitch thread, but the proposal doesn't clarify what the expected behavior is.

Dependency changes while processing values breaks iteration

in the current implementation, if an iterator's consumer is processing a value returned by next() and the withObservationTracking change handler fires before the consumer installs and awaits its next 'will set' continuation, then, due to the 'one-shot' nature of the change handler, the sequence effectively breaks and the iterator remains suspended indefinitely. subsequent changes to the Observed object go unseen because the 'chain' of observations is broken in this case. the proposal obliquely touches on this in the 'behavioral notes' section with the 'producer outpaces consumer' example. it suggests that all values should eventually be seen, but this isn't how the implementation works today, and further clarity on how this is intended to be handled would be good.

IMO this is probably the most serious issue with the current implementation, since it's fairly easy to cause (even inadvertently), and can result in an iterator being entirely non-functional.
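a generic sketch of the one-shot hazard described above (names hypothetical; this is deliberately not the Observed implementation, just the abstract shape of the bug): the handler is consumed on first fire, and any change landing before it is re-installed is simply lost.

```swift
// A one-shot change handler, like withObservationTracking's onChange:
// it fires at most once and must be re-armed by the consumer.
final class OneShotSource {
    private var handler: (() -> Void)?
    func onNextChange(_ h: @escaping () -> Void) { handler = h }
    func change() {
        let h = handler
        handler = nil  // one-shot: consumed by the first change
        h?()
    }
}

let source = OneShotSource()
var fired = 0
source.onNextChange { fired += 1 }
source.change()                     // seen: fired becomes 1
source.change()                     // lost: no handler installed yet
source.onNextChange { fired += 1 }  // re-armed too late; the iterator
                                    // analog would now hang awaiting a
                                    // 'will set' that already happened
```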

Use of @isolated(any) may be exploiting a compiler bug

sort of a tangential and implementation specific observation, but the original design was changed to have the Observed closure be marked @isolated(any). this makes sense as it is supposed to capture a fixed isolation upon initialization. however, it is a synchronous closure, and as such should not be callable within the context of a withObservationTracking block, as @isolated(any) functions must generally be awaited. the implementation gets around this today by seemingly relying on a compiler bug[1] that allows an isolation dropping function conversion to take place when calling the function through Result(catching:).


  1. see A few @isolated(any) function conversion questions ↩︎

1 Like

If a and b are intended to be conjoined then their mutations should be as well in a Sendable type. Sendable does not have any real restrictions that prevent this (because the issue is semantical not syntactical) and it violates the expectations of Sendable types.

I really doubt this.

If the proposed APIs rely on this interpretation of "correctly implemented" Sendable types, I assume there'll be countless anti-pattern use cases in production.

I understand that Sendable alone is not enough to guarantee logical correctness, but if, at the end of the day, it is still the developer's job to add some locking mechanism, like this:

final class Object: Sendable {
   struct InternalState { 
       var a: Int 
       var b: Int
   }
   func withLock<R>(_ mutations: (inout InternalState) -> R) -> R { /*...*/ }
}

let object = Object()
func observe() {
    _ = Observed { @MainActor in
        object.withLock { state in
            state.a + state.b
        }
    }
}

nonisolated func mutate() {
    object.withLock { 
        $0.a += 1 
        $0.b += 1
    }
}

then the concept of suspension points does not contribute much toward solving the tearing problem, which negates the following rationale:

Tearing is ... Swift has a mechanism for expressing the grouping of changes together: isolation ... Swift concurrency enforces safety around these by making sure that isolation is respected.

Furthermore, in the above code there are still chances for tearing at a finer granularity:

mutate()
// <- sometimes, the closure passed to `Observed.init` could run in between
mutate()
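Under this locking model, the only way to keep a multi-step mutation invisible to the Observed closure is to perform all of it in a single critical section. A self-contained sketch of that point (Object2 is a hypothetical name, and NSLock stands in for the Mutex of the earlier example):

```swift
import Foundation

// Both logical mutations happen under one lock acquisition, so no
// reader running withLock can observe the intermediate state between
// the two "mutate()" calls of the original example.
final class Object2: @unchecked Sendable {
    struct InternalState { var a = 0; var b = 0 }
    private let lock = NSLock()
    private var state = InternalState()
    func withLock<R>(_ body: (inout InternalState) -> R) -> R {
        lock.lock(); defer { lock.unlock() }
        return body(&state)
    }
}

let obj = Object2()
obj.withLock {
    $0.a += 1; $0.b += 1  // first logical mutation
    $0.a += 1; $0.b += 1  // second — no observer can run in between
}
let sum = obj.withLock { $0.a + $0.b }  // always sees a consistent 4
```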