Response Throttling and Disconnection

I have two semi-related needs for my testing server.

  1. Throttle a large download to a particular speed so that the test client has the opportunity to cancel the download and resume, to test resume handling from a byte range. Perhaps the best I can do is return the file in small segments?
  2. During such a large download, disconnect from the client on the server side, in order to trigger an error on the client and test download resumption after error.

I haven't been able to find APIs to enable either one, though I think I could respond with delayed segments in the first case.

You can probably hook into FileIO's readFile(at:chunkSize:onRead:) and effectively reimplement streamFile with some copy and paste. That would let you add extra logic in onRead to either hack in a thread sleep to slow the download down, or check a condition and return a failed future, which should trigger a client disconnect... I think.
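The pacing arithmetic behind that thread-sleep hack is simple: to cap throughput at a target rate, sleep chunkSize / rate seconds after each write. A Foundation-only sketch of just the math (the helper name is mine, not a Vapor API):

```swift
import Foundation

// Pacing math for the thread-sleep approach (helper name is my own,
// not part of Vapor): to cap throughput at `targetBytesPerSecond`,
// sleep this long after writing each chunk.
func delayPerChunk(chunkSize: Int, targetBytesPerSecond: Int) -> TimeInterval {
    TimeInterval(chunkSize) / TimeInterval(targetBytesPerSecond)
}

// e.g. 64 KiB chunks capped at 1 MiB/s -> 1/16 s between writes
let delay = delayPerChunk(chunkSize: 64 * 1024, targetBytesPerSecond: 1024 * 1024)
```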

Upon consideration and a first implementation, most of what I want seems simpler to build with generated data rather than an actual file. That way I don't need to deploy a big file alongside my executable and (eventually) other resources.

What I have so far doesn't accomplish either of my original goals (throttling, interruptibility) but works for random data downloads as well as resumption using URLSessionDownloadTask.

app.on(.GET, "download", ":count") { request -> Response in
    guard let totalCount = request.parameters.get("count", as: Int.self), totalCount <= 10_000_000 else {
        return .init(status: .badRequest)
    }
    
    let formatter = DateFormatter()
    formatter.locale = Locale(identifier: "en_US_POSIX")
    formatter.timeZone = TimeZone(secondsFromGMT: 0)
    formatter.dateFormat = "EEE, dd MMM yyyy HH:mm:ss zzz" // HH, not hh: HTTP dates use 24-hour time
    let lastModified = formatter.string(from: Date(timeIntervalSinceReferenceDate: 0))
    
    let response: Response
    
    if let range = request.headers.range {
        let byteCount = range.ranges.reduce(0) { result, value in
            switch value {
            case let .start(value):
                return result + (totalCount - value)
            case let .tail(value):
                return result + value
            case let .within(start, end):
                return result + (end - start + 1) // HTTP byte ranges are inclusive on both ends
            }
        }
        
        let buffer = request.application.allocator.buffer(repeating: UInt8.random(in: .min ... .max), count: byteCount)
        response = Response(status: .partialContent, body: .init(buffer: buffer))
        response.headers.contentRange = .init(unit: .bytes, range: .within(start: (totalCount - byteCount), end: totalCount - 1)) // inclusive end offset
    } else {
        let buffer = request.application.allocator.buffer(repeating: UInt8.random(in: .min ... .max), count: totalCount)
        response = Response(body: .init(buffer: buffer))
        response.headers.add(name: .acceptRanges, value: "bytes")
    }
    
    response.headers.replaceOrAdd(name: .contentType, value: "application/octet-stream")
    response.headers.add(name: .lastModified, value: lastModified)
    
    return response
}
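One subtlety in the range arithmetic: RFC 7233 byte ranges are inclusive on both ends, so a "start-end" range spans end - start + 1 bytes. A standalone sketch of the computation (the enum is a simplified stand-in I wrote to mirror Vapor's HTTPHeaders.Range.Value, not the real type):

```swift
// Simplified stand-in for Vapor's HTTPHeaders.Range.Value
// (assumption: the real type carries these same three cases).
enum RangeValue {
    case start(Int)        // "start-" : from offset to end of resource
    case tail(Int)         // "-count" : last `count` bytes
    case within(Int, Int)  // "start-end" : inclusive on both ends
}

func byteCount(of range: RangeValue, totalCount: Int) -> Int {
    switch range {
    case let .start(value):
        return totalCount - value
    case let .tail(value):
        return value
    case let .within(start, end):
        return end - start + 1 // inclusive per RFC 7233
    }
}
```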

I'd like to reduce the amount of data I need to send to make the test reliable (a local transfer is so fast that I need 5 MB+ to allow for reliable cancellation). So I figure I can stream chunks of the body over time to accomplish the throttling I want, giving the client more time to cancel cleanly. I also figure your failed-future approach may work during that body stream. What do you think?

This actually turned out to be fairly easy given my previous work with delayed stream bodies. After turning off the automatic chunking I'd previously requested, I can throttle the overall response and easily inject an error that the client properly interprets to create download resume data. Passing an optional query parameter lets me control whether the error is produced during the response.

var buffer = request.application.allocator.buffer(repeating: UInt8.random(in: .min ... .max), count: totalCount)
response = Response(body: .init(stream: { writer in
    var bytesToSend = totalCount
    let segment = max(totalCount / 10, 1) // avoid a zero-length segment for tiny counts
    request.eventLoop.scheduleRepeatedTask(initialDelay: .seconds(0), delay: .milliseconds(1)) { task in
        guard bytesToSend > 0 else { task.cancel(); _ = writer.write(.end); return }
        
        if shouldProduceError, bytesToSend < (totalCount / 2) {
            task.cancel()
            _ = writer.write(.error(URLError(.networkConnectionLost)))
            return
        }
        // The final segment may be short when totalCount isn't evenly divisible.
        let length = min(segment, bytesToSend)
        _ = writer.write(.buffer(buffer.readSlice(length: length)!))
        bytesToSend -= length
    }
}))
response.headers.add(name: .acceptRanges, value: "bytes")
response.headers.replaceOrAdd(name: .contentLength, value: "\(totalCount)")
response.headers.remove(name: .transferEncoding)

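Since totalCount / 10 truncates, the segments only cover the whole body if the final chunk absorbs the remainder. A quick sanity check of that arithmetic in isolation (the helper name is mine, for illustration only):

```swift
// Split `totalCount` bytes into roughly `parts` chunks. Integer division
// truncates, so the final chunk picks up whatever remainder is left.
func chunkLengths(totalCount: Int, parts: Int) -> [Int] {
    let segment = max(totalCount / parts, 1) // never a zero-length segment
    var lengths: [Int] = []
    var remaining = totalCount
    while remaining > 0 {
        let length = min(segment, remaining)
        lengths.append(length)
        remaining -= length
    }
    return lengths
}
```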
The range response from earlier remains unconditional; only the full-download branch is throttled and interruptible.