RFC: Deque: should it be self-slicing?

SwiftNIO's CircularBuffer is a deque too, but it uses opaque indices and is self-slicing, which can be a rather nice property.

I was wondering what the reasons were for making Deque not self-slicing. Sure, self-slicing with Int indices is bad design, but self-slicing with opaque indices worked quite well for us. Deque not being self-slicing is actually somewhat of a problem when rewriting CircularBuffer on top of Deque, because it makes slicing O(n).

Doesn’t self-slicing break the O(1) append/prepend guarantee?

var d = Deque(0..<3)
d[0...1].append(5)     // This is not O(1)

I don't see why it can't still be amortized O(1) (which is all that is guaranteed/required).

Johannes, can you clarify why this happens? It is not at all obvious to me, but I assume that I'm just not seeing something.

Sorry, I phrased this badly. Implementing CircularBuffer (which is self-slicing) on top of Deque makes CircularBuffer's slicing O(n), because I'll need to make a Deque from a Slice<Deque>.
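A hedged illustration of the cost being described here, using Array as a stand-in for Deque (the names and values are made up; the cost structure is analogous): taking a slice shares storage, but rebuilding a standalone collection from a slice copies its elements.

```swift
// Sketch only: Array stands in for swift-collections' Deque here.
var deque = [0, 1, 2, 3, 4]
let slice = deque[1...3]    // O(1): the slice shares storage with `deque`
let rebuilt = Array(slice)  // O(slice.count): elements are copied out
// rebuilt == [1, 2, 3]
```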

Are you suggesting that Deque should provide O(1) inserts at arbitrary positions?

What goes wrong if you back it by a Slice<Deque>?


No. I'm suggesting that a self-slicing deque can have O(1) amortized append/prepend.


That’s equivalent to (amortized) O(1) inserts, because one can write:

extension SelfSlicingDeque {
  mutating func fastInsert(_ newElement: Element, at index: Index)  {
    self[..<index].append(newElement)
  }
}
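For what it's worth, Array already behaves this way in today's Swift, so the equivalence is easy to check (a sketch under the assumption that a self-slicing Deque would have the same writeback behavior):

```swift
// Array's range subscript has a setter, so appending to a mutable slice
// writes the result back into the base -- an insert in disguise.
var a = [1, 2, 3]
a[0..<1].append(9)  // same effect as a.insert(9, at: 1)
// a == [1, 9, 2, 3]
```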

Ah! This explains my confusion over in RFC: Deque: should it have an opaque index?.

Even though I was a fan of it for a while, I don't think self-slicing is desirable in general -- see my post in the other topic for my reasoning. (I think I actually first heard it from you, based on your experience with ByteBuffer. :see_no_evil:)

Luckily, it is possible to implement the original CircularBuffer semantics on top of Deque. Given the documented semantics below, we don't even need to keep the index values stable across count-changing mutations, so there isn't any reason to have indices contain the exact same values before/after a push/pop. We can simply have them store Deque indices.

/// - note: Every index is invalidated as soon as you perform a length-changing operation on the `CircularBuffer`
///         but remains valid when you replace one item by another using the subscript.

(Although if there was one, we could probably emulate that by maintaining a bias value that gets subtracted from CircularBuffer indices to convert them to a Deque index. It gets tricky because the bias can grow arbitrarily large, but that can be handled by careful use of wrapping arithmetic. The Comparable conformance of Index would get a lot more tricky, unless the Index would contain the bias too -- but then it wouldn't be constant anymore, and in fact it would be just a rather pointless & weird decomposition of the underlying Deque offset.)
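To make the wrapping-arithmetic point concrete, here is a hypothetical sketch (the `bias` name and all values are made up for illustration): even when the accumulated bias has wrapped past `UInt.max`, overflow subtraction still recovers the right underlying offset modulo 2^64.

```swift
// Hypothetical: the bias grows (and wraps) freely; subtracting it with
// the overflow operator still yields the correct Deque offset.
let bias: UInt = .max - 15       // i.e. 0xFFFF_FFFF_FFFF_FFF0
let externalOffset: UInt = 0x20  // stable index handed out to the user
let dequeOffset = externalOffset &- bias
// dequeOffset == 0x30
```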

Here is one draft of how it can be done:

(You can keep Index 8 bytes wide by narrowing the offset value and the version number.)

import DequeModule

struct CircularBuffer<Element> {
  var _base: Deque<Element> = []
  var _start: Int = 0
  var _end: Int = 0
  var _version: Int = {
    var rng = SystemRandomNumberGenerator()
    return Int(truncatingIfNeeded: rng.next())
  }()

  struct Index: Comparable {
    var _offset: Int
    var _version: Int

    static func == (left: Self, right: Self) -> Bool {
      precondition(left._version == right._version, 
        "Indices from different collections aren't comparable")
      return left._offset == right._offset
    }

    static func < (left: Self, right: Self) -> Bool {
      precondition(left._version == right._version, 
        "Indices from different collections aren't comparable")
      return left._offset < right._offset
    }
  }

  var startIndex: Index { Index(_offset: _start, _version: _version) }
  var endIndex: Index { Index(_offset: _end, _version: _version) }
  var count: Int { _end - _start }
  
  func index(_ i: Index, offsetBy delta: Int) -> Index {
    Index(_offset: i._offset &+ delta, _version: i._version)
  }

  subscript(_ i: Index) -> Element {
    get { 
      precondition(i._version == _version && _start <= i._offset && i._offset < _end, "Invalid index")
      _base[i._offset] 
    }
    _modify { 
      precondition(i._version == _version && _start <= i._offset && i._offset < _end, "Invalid index")
      yield &_base[i._offset] 
    }
  }
   
  mutating func append(_ item: Element) {
    _version &+= 1 // Note: this leads to stricter index validation than in the original
    _base.insert(item, at: _end) 
    _end += 1
  }

  mutating func removeLast() -> Element {
    precondition(count > 0)
    _version &+= 1 // Note: this leads to stricter index validation than in the original
    _end -= 1
    return _base.remove(at: _end) 
  }

  mutating func prepend(_ item: Element) {
    _version &+= 1 // Note: this leads to stricter index validation than in the original
    _base.insert(item, at: _start)
    _end += 1
  }

  mutating func removeFirst() -> Element {
    precondition(count > 0)
    _version &+= 1 // Note: this leads to stricter index validation than in the original
    _end -= 1
    return _base.remove(at: _start)
  }
}

(Being forced to implement range-replacing mutations on slices is another great reason to dislike self-slicing. :wink:)


Appending to a tiny slice in the middle of a huge self-sliced CircularBuffer is still an amortized O(1) operation -- in this case, the constant factor simply includes the size of the hidden parts that precede/follow the slice. (Which remains constant no matter how many times you append (or prepend) to the slice.)

Yes, this does mean that the big-omicron notation can be rather misleading in practice. (Despite that, it's still useful -- things would be even worse if appending to a slice took linear time.)

Edit: This does not apply in cases like this one, where we aren't operating on the same slice instance:

for foo in someValues {
  deque[range].append(foo)
}

This is a quadratic loop. (We aren't currently(?) able to implement in-place slice mutations in Swift, so a copy-on-write copy is made in every iteration. Yes, it does pain me every time I think of it. In a nutshell, the problem is that we have no way to differentiate between range replacements and range mutations, so we don't know whether we need to retain a reference to the original storage buffer before vending the mutable slice.)
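One hedged workaround for such a loop (again with Array standing in for Deque): splice the new elements in with a single bulk range replacement, so only one copy-on-write copy of the storage is made.

```swift
var deque = [0, 1, 2, 3, 4]
let someValues = [10, 11, 12]
// Instead of `for foo in someValues { deque[1..<3].append(foo) }`,
// insert all the new elements with one linear-time operation:
deque.replaceSubrange(3..<3, with: someValues)
// deque == [0, 1, 2, 10, 11, 12, 3, 4]
```

(Note the loop version re-takes the slice each iteration and so keeps inserting at the same position, producing a different element order; the point here is avoiding the per-iteration storage copy, not exact equivalence.)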


You'll definitely get a proliferation of index checks (in Slice and then again in Deque) and last time I checked the modify accessor also broke. It'll also add 2 words to the wrapping data structure. I've tried backing things by Slice<Base> a few times but it never really ended up working. I should've probably been more diligent in filing bugs.


Right, in my implementation I also just store Deque indices, and that works well. And yes, as you did, you can store another start/end property, but you'll need to remember to check it, which doubles your amount of index checking. In other words, implementing CircularBuffer on Deque makes it slower than necessary. (It'll probably still be faster than the current CircularBuffer, because we never wrote the code to actually base it on UnsafeMutableBufferPointer; currently it's still backed by Array.)
