Coming back to this concrete example:
I'd like to suggest (again, but in more detail this time) that the problem here is not limited to the concurrency in the system - i.e. the problem is not that you made two requests and they ran out of order due to task scheduling. The problem this example expresses is inherent to most networking operations: perhaps the user is in the middle of a handover between the cellular and WiFi networks, or some data needs to be retransmitted due to packet loss. Even if you place scheduling restrictions on each task, so that requests are guaranteed to be made in the order [A, B], the responses may still arrive in the order [B, A], and your database overwrites fresher data with stale data.
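To make the race concrete, here's a minimal sketch of the kind of code this describes. Everything in it - `Profile`, `fetchProfile()`, `Database` - is a hypothetical stand-in, not anyone's real API:

```swift
import Foundation

// Hypothetical stand-ins just for this sketch.
struct Profile { var name: String; var updatedAt: Date }
func fetchProfile() async throws -> Profile { fatalError("stands in for a real network call") }

actor Database {
    var latest: Profile?
    func save(_ profile: Profile) { latest = profile }   // naive "last write wins"
}
let database = Database()

// The requests are *issued* in the order [A, B]...
func refresh() {
    Task {
        let a = try await fetchProfile()   // request A
        await database.save(a)
    }
    Task {
        let b = try await fetchProfile()   // request B
        await database.save(b)
    }
    // ...but nothing above constrains the order the *responses* arrive in.
    // If A's response is delayed (a retransmission, a WiFi handover, ...),
    // B's fresher data is saved first and then overwritten by A's stale data.
}
```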
Serial queueing is the most conservative, most extreme attempt to solve this problem. It performs one network request at a time and waits for the response to arrive before making the next request, so there is no risk of responses arriving out of order. However, it also goes against all the advice I've ever seen about making network requests: it means more load for servers, because each client either holds its connection open for longer or keeps disconnecting and reconnecting, and it makes it difficult to take advantage of multiplexed protocols such as HTTP/2 and HTTP/3. For mobile devices, it means keeping power-hungry radios awake, or at higher power states, for longer. It's also incredibly slow - imagine if, when you visited a website, every resource on the page was loaded through a serial queue. Page load times would be orders of magnitude slower.
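For concreteness, this is roughly what that strategy amounts to, reusing the hypothetical `fetchProfile()` and `database` stand-ins from the sketch above:

```swift
// The next request is not even started until the previous response has
// arrived and been handled, so responses can never be applied out of order.
func refreshSerially() async {
    for _ in 0..<2 {                                   // request A, then request B
        if let profile = try? await fetchProfile() {   // wait for this response...
            await database.save(profile)               // ...and apply it...
        }
    }                                                  // ...before starting the next request.
}
```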
But at least the data evolves in a predictable way now, right? Ah, well... I said that with a serial queue, responses arrive in order. That is not the same as saying there is no risk of overwriting fresh data with stale data. Even if the system makes requests in the order [A, B], and even if the responses arrive in that same order [A, B], a later response does not necessarily give you a more recent version of the data, due to effects such as caching by the remote systems which process the request. In other words, even if you serialise everything on the client side, you can still overwrite fresh data with stale data. So even if we're willing to pay the performance and power costs of all this, we still can't guarantee order.
So even serialisation cannot entirely solve this problem.
I get that this is just an example, but what I'm trying to show here is that for many kinds of systems, trying to impose order locally is just a losing battle. In my experience, it is better to accept the system as it is rather than fight it.
So how do you deal with data that can arrive out of order, or where you might unexpectedly see past values? IMO, the best place to tackle that is at the data model layer.
The freshness of the data isn't a property of how or when you made the request - in general, the two are completely unrelated. Freshness is a kind of metadata, and what freshness information you have, and how you process it, tends to be highly application-specific.
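To make that concrete, here is a minimal sketch of what handling freshness at the data model layer could look like: each record carries freshness metadata supplied by the server (an `updatedAt` timestamp here, though it could just as well be an ETag, a version counter, or a vector clock, depending on the application), and the store refuses to replace newer data with older data. As before, all of the names are hypothetical:

```swift
import Foundation

// A record that carries its own freshness metadata, supplied by the server.
struct Profile {
    var name: String
    var updatedAt: Date
}

actor Database {
    private var stored: Profile?

    func save(_ incoming: Profile) {
        // Arrival order no longer matters: a stale response - one that arrived
        // late, or was served from a cache - can never overwrite fresher data.
        if let current = stored, current.updatedAt >= incoming.updatedAt {
            return   // incoming data is not newer than what we already have
        }
        stored = incoming
    }
}
```

With something like that in place, it no longer matters whether the requests were concurrent, serialised, retried, or answered from a cache - a stale response simply has no effect.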