Solving the mutating slice CoW problem

Hey, thanks so much for writing all this down, and so clearly! Most of it makes complete sense to me and I have no questions or even nits to pick for the most part, but there are a couple, and I got lost towards the end…

I had to google OSSA and found this and this, neither of which is obviously final. Wish I had known about that document at the time, but in fairness I probably would have been too busy to absorb it. Is there an official version somewhere?

Theorem 1: for any reference-counted type, checking uniqueness requires apparent mutability of the reference.

Here I'd like to suggest we use the term apparent possible mutation (or just apparent mutation, as you use later) of the reference, since mutability is a type-system concept. IIUC it's not the mutability of the reference in that sense you're talking about, but the compiler's inability to prove that the reference isn't actually mutated, right?
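For anyone following along, here's a minimal sketch of what I understand "apparent mutation of the reference" to mean in practice: the standard library's isKnownUniquelyReferenced takes its argument inout, so you must present the reference as mutable just to ask the question, even though nothing is ever actually modified. (The Buffer class and variable names here are mine, just for illustration.)

```swift
// A plain Swift class so we have a reference to check.
final class Buffer {
    var storage: [Int] = []
}

var a = Buffer()
// Asking the question requires `&a`: an apparent (possible) mutation
// of the reference, even though the check never modifies it.
let wasUnique = isKnownUniquelyReferenced(&a)    // true: one strong reference

let b = a                                        // a second strong reference
let stillUnique = isKnownUniquelyReferenced(&a)  // false: now shared
_ = b
```

Note that the check wouldn't even compile on a `let` binding; the reference has to be (apparently) mutable.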

Anyway, in light of how I understand(?) that, I got lost here:

Axiom 3: for any CoW type, mutability implies uniqueness.

It took me a long time to connect this “mutability” to the “apparent mutability” above (which I hope means I'm headed down the right path); before that I didn't have a clue. But even after rephrasing it as “apparent mutation of the reference” I can't make sense of it. It only seems axiomatically true if I read it as “mutation of the buffer referred to by the reference,” which is a very different thing, and now I'm pretty sure I'm just guessing. Can you help me out with this one?
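To make my reading concrete, here's the standard CoW mutation pattern as I understand it (CoWArray and Storage are hypothetical names, not anything from your post): every mutating path first ensures the buffer is uniquely referenced, copying if it isn't, so any actual mutation of the buffer happens only through a unique reference. That's the only sense in which "mutability implies uniqueness" seems axiomatic to me.

```swift
// Hypothetical class holding the shared buffer.
final class Storage {
    var values: [Int]
    init(_ values: [Int]) { self.values = values }
}

// Hypothetical CoW value type wrapping it.
struct CoWArray {
    private var storage = Storage([])
    var values: [Int] { storage.values }

    mutating func append(_ x: Int) {
        // Before writing, make the buffer unique: copy if shared.
        if !isKnownUniquelyReferenced(&storage) {
            storage = Storage(storage.values)   // the "copy" in copy-on-write
        }
        // By the time we actually mutate the buffer, it is unique.
        storage.values.append(x)
    }
}

var a = CoWArray()
a.append(1)
var b = a      // `a` and `b` share storage
b.append(2)    // triggers the copy; `a` is unaffected
```

So under this reading, "mutation implies uniqueness" is an invariant the pattern establishes about the buffer, not a fact about the reference itself.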

The only way out of the rules I've put forth above would be a language level calling convention that provides for a single reference argument to be net released more than once (without corresponding retains). I'm comfortable stating that we will never have such a convention.

Agreed, we won't. But it's not obvious to me that we couldn't come up with a new set of rules, provably preserving the semantics of existing code, while making some new things possible. I realize it's speculation on the order of “couldn't we solve this with some [magic] language feature?” but I would like to understand how fundamentally you hold your statement to be true.

Regarding your conclusions, they're pretty encouraging. The excess retain/release on a buffer that is mutable but never actually mutated under rather bizarre circumstances does not seem like it could have much impact on real programs. And, not that I'm counting on it, but this seems like one of those places where—with a different set of rules like what I referred to above—it might be possible for the compiler to prove that the operations could be eliminated.

It's a little disturbing to me that under the current rules, the fix depends on an opaque, possibly-retaining, possibly-releasing, possibly-mutating, possibly-escaping use of the reference, and very disturbing that isUniquelyReferenced depends on the same opacity. It just seems intuitively like we must be robbing the compiler of more important information than necessary around these operations that are so crucial to performance and are in fact pure, and that it would be better to give them some kind of special status in the rules. Am I wrong to be concerned?

Final question: when you wrote,

Let's keep those issues in separate posts.

Did you mean “in separate threads,” or would you be happy to deal with those issues in follow-up posts here?

Thanks again!
-Dave