Solving the mutating slice CoW problem

But Michael, it's irrelevant what you were suggesting. Meeting your demand to “prove that there aren't additional language features that would make this unnecessary” means proving something about an infinite set of unknown possible language features. And please don't say I'm just taking you too literally; solving this problem is going to require precise communication. If that's not your actual condition for accepting burning something into the ABI, what is it, exactly?

Even just limited to your suggested features, I think the condition is pretty unreasonable—how does one prove that a given tool couldn't be used to solve some problem? But I'll try anyway…

  • The problem under discussion is how to make the use of existing APIs (roughly, subscripts and swapAt) more efficient. These APIs have to work in existing code on types that don't use any new features, so the only way new features could be used to solve these problems is inside the implementation of the APIs concerned.
  • Move-only types don't allow new semantics to be implemented; they just allow us to enforce certain semantics with the type system. It's easy to see by doing a mental code transformation: just make the hypothetical move-only type copyable, take the code that would be in its destructor, and instead call it explicitly in the right places. Similarly, struct destructors don't add the ability to create new semantics; they just allow certain things to be done implicitly rather than explicitly. But we don't care how explicit the standard library's implementation needs to be in order to solve this problem.
  • I still don't know enough about your “inout locals” idea to be sure, but in the only reasonable implementation of such a feature that I can imagine, they don't add any new semantics either: any “region of code” in which you could create an inout local could instead have been coded as a call to a helper function with a regular inout argument (both transformations are sketched just below).
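
To make the two transformations concrete, here's a minimal sketch. The names are made up for illustration, and none of this is meant to be standard-library code:

```swift
// 1. Move-only types: whatever a hypothetical move-only wrapper would do in
//    its deinit can be written today as an explicit call on a copyable type,
//    made at exactly the points where the move-only value would have died.
struct SliceAccessToken {                        // hypothetical name
  func endAccess() { /* the cleanup a move-only deinit would have performed */ }
}

// 2. "Inout locals": any region of code that would bind an inout local can
//    instead be written as a helper taking an ordinary inout parameter.
func mutateSlice(_ slice: inout ArraySlice<Int>) {
  slice.reverse()
}

var a = [1, 2, 3, 4, 5]
mutateSlice(&a[1..<4])   // what a hypothetical `inout slice = a[1..<4]` region would express
```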

Most of all, I don't think any of these features get at what I see as the fundamental issue:

When an array has been inout-sliced:

  1. we want to allow the slice buffer to be mutated in-place without copying, but
  2. we can't allow the buffer to be freed if the slice is assigned or resized.

Today, 1. requires uniqueness and 2. requires non-uniqueness at the point of mutation, which at least proves to me that uniqueness is insufficient information to use as the sole criterion for making decisions about whether a mutation can be done in-place. That's why my solution adds a bit.
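
Concretely, the pattern in question looks something like this (just an illustration of the scenario, using today's Array and ArraySlice):

```swift
var a = [5, 3, 1, 4, 2]

// While the mutable subscript access below is in progress, the buffer is
// referenced by both `a` and the yielded slice.  So:
//
//  1. mutating the slice's elements in place needs the buffer to pass the
//     usual CoW uniqueness check, which it can't while `a` also holds it
//     (today that's why the slice's elements get copied); and
//  2. if the body instead resized the slice (say, a[1..<4].removeAll()),
//     the shared buffer must *not* be freed, because `a` still owns the
//     elements outside the slice, so the buffer must not look unique.
a[1..<4].sort()
```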

I actually think I know how to solve the problem without relying on the ability to observe refcounts > 2, which I think is at the core of @Andrew_Trick's objection. But it still needs a bit to store the information that an array buffer is being inout-sliced.¹ As of today I don't see a way around adding dynamic information, because the ultimate mutation being applied to the slice may not be visible to the compiler, so the extra information can't reach the site of the mutation statically. That's about as close as I can come to a proof that ABI changes are needed, so maybe I did it after all. Q.E.D.?
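
Just to give the shape of that idea, and nothing more, here's a deliberately simplified sketch of how such a bit could feed the in-place decision alongside the existing uniqueness check. Every name is made up, and this isn't how a real implementation would be laid out:

```swift
// Hypothetical sketch only: `BufferHeader`, `isInoutSliced`, and
// `canMutateElementsInPlace` are illustrative names, not real stdlib API.
struct BufferHeader {
  var isInoutSliced: Bool   // the proposed extra bit: set while the buffer
                            // is inout-sliced, cleared when the access ends
}

func canMutateElementsInPlace(
  _ header: BufferHeader, isUniquelyReferenced: Bool
) -> Bool {
  // Ordinary CoW fast path: a uniquely referenced buffer.
  if isUniquelyReferenced { return true }
  // Not unique, but currently inout-sliced: the *elements* may still be
  // mutated in place; anything that could free or replace the buffer
  // (assigning or resizing the slice) still has to copy.
  // (A real scheme has more cases to handle, e.g. copies of the slice made
  //  during the access; this only shows where the extra bit would plug in.)
  return header.isInoutSliced
}
```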


¹ I suspect it will be hard to demonstrate, because ARC isn't making all the optimizations I think Andy told me it should (“if it's doing its job”), but I'll try to code it. [Update] Yeah, it's as I suspected: the compiler eagerly issues a retain for the slice, and no optimizations we have today seem to be able to move that retain even as far as the yield.
