Performance issues with Foundation.Data

I like the idea of Foundation's Data type a lot (the equivalent of a RawPointer for managed data storage), but every time I try to use it, it ends up making everything really slow.

The benchmark used is available here. Make sure to compile it with -O. Most of the printouts are there to make sure the optimizer doesn't remove things; they can generally be ignored and were removed from the output posted here.

First, simply allocating a Data takes 2.5 times as long as allocating an array.
Here's a comparison of the time it takes to allocate 2^20 collections ([UInt8] or Data), each holding a 32-byte payload:

> time ./DataSpeedTest alloc array 20
        0.23 real         0.19 user         0.03 sys
> time ./DataSpeedTest alloc data 20
        0.62 real         0.54 user         0.07 sys

If you look at the memory allocations in Instruments, you'll see that one [UInt8] storing 32 bytes allocates 8 bytes on the stack and 64 bytes on the heap, while a Data allocates 24 bytes on the stack plus, on the heap, both a 96-byte Foundation._DataStorage and a 48-byte payload.

Personally, I find this the most reasonable of the issues, since Data does support custom deallocators and such, but it does mean that if you want a container for an object that's just a few bytes of raw data, Data is not the container for you.
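Those custom deallocators are genuinely useful, to be fair. A minimal sketch of adopting a malloc'd buffer without copying (the fill value here is arbitrary):

```swift
import Foundation

// Wrap malloc'd memory in a Data without copying; Data calls free()
// on the buffer when the last reference is released.
let count = 32
let raw = malloc(count)!
memset(raw, 0x61, count) // fill with ASCII 'a'
let wrapped = Data(bytesNoCopy: raw, count: count, deallocator: .free)
assert(wrapped.count == 32 && wrapped.first == 0x61)
```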

The rest of the tests are performed on a collection holding a 2^26-byte (64 MB) payload of repeated ASCII 'a's. If you want to run with a different size, supply the log2 of the count you want as the final argument to the program.

Simply looping over a Data is much slower than over an array (the test runs a for loop that counts and sums the collection's contents):

> time ./DataSpeedTest for array
        0.08 real         0.05 user         0.02 sys
> time ./DataSpeedTest for data
        0.51 real         0.48 user         0.03 sys
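The actual benchmark is in the linked project; the kind of loop being measured looks roughly like this (names here are illustrative, not the benchmark's own):

```swift
import Foundation

// Count and sum a collection of bytes; the same generic code runs
// against both [UInt8] and Data, only the speed differs.
func countAndSum<C: Collection>(_ bytes: C) -> (count: Int, sum: UInt64)
    where C.Element == UInt8 {
    var count = 0
    var sum: UInt64 = 0
    for byte in bytes {
        count += 1
        sum &+= UInt64(byte)
    }
    return (count, sum)
}

let payload = [UInt8](repeating: UInt8(ascii: "a"), count: 1 << 10)
assert(countAndSum(payload) == countAndSum(Data(payload)))
```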

This carries over to generic functions like reduce:

> time ./DataSpeedTest reduce array
        0.08 real         0.05 user         0.03 sys
> time ./DataSpeedTest reduce data
        0.48 real         0.44 user         0.03 sys

...and to String.init(decoding:as:):

> time ./DataSpeedTest string array
        0.26 real         0.20 user         0.05 sys
> time ./DataSpeedTest string data
        1.05 real         0.99 user         0.05 sys
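String(decoding:as:) accepts any collection of code units, so the array and Data variants of this test differ only in the input type. A minimal sketch:

```swift
import Foundation

// Decode UTF-8 bytes into a String from either container.
let bytes: [UInt8] = Array("hello".utf8)
let fromArray = String(decoding: bytes, as: UTF8.self)
let fromData = String(decoding: Data(bytes), as: UTF8.self)
assert(fromArray == "hello" && fromData == "hello")
```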

Finally, the worst issue I found was with slicing Data, which for some reason causes a memory allocation. The test code loops over the collection by repeatedly shrinking a slice and reading from its beginning, a pattern I've found useful for algorithms that take variably-sized pieces off the front of a collection.

> time ./DataSpeedTest slice array
        0.10 real         0.06 user         0.03 sys
> time ./DataSpeedTest slice data
       10.62 real        10.56 user         0.04 sys
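The shrinking-slice pattern in question looks roughly like this (illustrative, not the benchmark's exact code); on [UInt8] each re-slice is allocation-free, while on Data it was paying for an allocation:

```swift
import Foundation

// Consume a collection from the front by repeatedly shrinking a slice.
func drain<C: Collection>(_ bytes: C) -> UInt64 where C.Element == UInt8 {
    var slice = bytes[...]
    var sum: UInt64 = 0
    while let byte = slice.first {
        sum &+= UInt64(byte)
        slice = slice.dropFirst() // peel one byte off the front
    }
    return sum
}

assert(drain([1, 2, 3] as [UInt8]) == 6)
assert(drain(Data([1, 2, 3])) == 6)
```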

Am I using Data wrong or misunderstanding what it's meant for? Is it only supposed to be used for large allocations that you access using withUnsafeBytes?
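For reference, the withUnsafeBytes route, which I suspect is what Data is actually optimized for, looks like this:

```swift
import Foundation

// Sum a Data's contents through the raw buffer, bypassing the
// per-element Collection machinery.
let data = Data(repeating: UInt8(ascii: "a"), count: 1 << 10)
let sum = data.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) in
    buffer.reduce(0 as UInt64) { $0 &+ UInt64($1) }
}
assert(sum == UInt64(1 << 10) * 97)
```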


More of a meta question for @Ben_Cohen/@Michael_Ilseman/@lorentey. Are there any missing benchmarks here that we need for the benchmark suite?

cc @millenomi for the original questions (or to find someone for the original questions)

November 2016:

The benchmark suite does have a number of benchmarks for Data, but there is always room for more. (In particular, I don't see any benchmarks for slicing!)


The benchmarks we have are great for catching performance regressions and verifying improvements, but they don't really tell us how Data compares to similar collections. It's nice to see these results; they point out potential performance improvements.


The IterateData benchmark shows that the Iterator is not being inlined and always goes through dynamic dispatch (at least that’s how I’m interpreting the protocol witness table calls visible in Instruments). Reading the 2016 write-up linked above, it’s unclear to me whether this is working as intended or is an optimization issue…

Trace from ./Benchmark_O IterateData --num-iters=1 --num-samples=2000


For folks invested in the subject, you might be interested in the work @Philippe_Hausler and I are doing on


That's really nice! It's nice to also see DispatchData getting some love :blush:

Although I would say: if you're thinking about making large changes to Data's API, we should consider what it would take to bring it into the standard library (as we do for other bridged types, like String/Array/Dictionary/Set).

There have been topics about this before, but one thing I've noticed recently is that a few members of the community have been building Swift for 16-bit MCUs, and there has been some discussion about porting to WebAssembly. The distinction between the stdlib and Foundation is significant in those cases. Many of these targets might at least be able to support Data without the rest of Foundation.


We don't plan on lowering Data itself into the standard library, but adding the new DataProtocol to the stdlib is under consideration. This would allow us to provide a good layer of abstraction over various types at a level lower than Foundation, which might be beneficial for those efforts.

This is still undergoing some discussion internally, but if we decide to do this, we will put DataProtocol through the Swift Evolution process as well.


Thanks for sharing this kind of internal discussion and these considerations.

Does this mean the direction is rather toward something like

extension Array: DataProtocol where Element == UInt8 {}



Why not? The standard library could surely do with a safe (in the Swift sense of the word) primitive data type, and it seems silly to invent another one. Presumably we would also want such a type to be bridged to NSData just like Foundation's Data is, etc...

I like the DataProtocol design, but there is still a need for something in the standard library to pass a reference-counted (i.e. safe) chunk of memory around.


Indeed, as part of the PR, you can see proposed extensions on both Array and ContiguousArray (as well as some pointer types) to offer conformance to DataProtocol.


At least two reasons off the top of my head:

  1. There's a significant amount of work that the standard library needs to do to build in knowledge of Foundation types in bridging cases — there's a lot of complexity to "up-linking" in a sense to Foundation (involving subclassing Foundation types without importing Foundation and re-parenting at runtime, and similar shenanigans). This comes at a significant code-complexity cost, and at times, a performance cost. @Michael_Ilseman can speak to the back-and-forth that has to happen for String, as well as some of the subtle bugs that arise there (e.g. differences between String and NSString that are not readily apparent)
  2. Data has knowledge of memory-allocation strategies beyond system-preferred ones, exposed via Data.Deallocator. The standard library does not currently deal all that much with direct memory allocation beyond UnsafeMutable*Pointer.allocate(capacity:), and I don't think we promise any specific allocation method (e.g. malloc vs. other allocators), nor direct interop with various allocation strategies like mmap or virtual memory on Darwin platforms. That's not to say that the standard library shouldn't deal with this stuff, but it is a non-trivial area of exploration that the standard library would need to take on for various platforms (and not all platforms have these various allocation strategies — especially embedded ones)

Of course, these aren't showstoppers — with sufficient demonstrated need, it might be possible, but given the constraints for Swift 5 at the moment, this seems highly unlikely. If we manage to lower DataProtocol, at the very least you would be able to write functions generic on DataProtocol that give a Data-like interface, but could be backed by, say, ContiguousArray<UInt8> in cases where you really cannot import Foundation for some reason.
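A sketch of that generic indirection, assuming the DataProtocol shape from the PR (roughly, a RandomAccessCollection of UInt8 with conformances for Data, Array, and ContiguousArray): the same checksum runs unchanged over any conforming container.

```swift
import Foundation

// Fletcher-16 checksum written once against DataProtocol.
func fletcher16<D: DataProtocol>(_ bytes: D) -> UInt16 {
    var a: UInt16 = 0, b: UInt16 = 0
    for byte in bytes {
        a = (a &+ UInt16(byte)) % 255
        b = (b &+ a) % 255
    }
    return (b << 8) | a
}

assert(fletcher16([0x01, 0x02] as [UInt8]) == 0x0403)
assert(fletcher16(Data([0x01, 0x02])) == 0x0403)
```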


Sure - it's not the most urgent thing given the Swift 5 deadline, and I still think this is a significant improvement to the current situation.

The major use-cases would be compression/encryption, and readers/writers for structured binary file formats. Those are the kinds of things you might expect to see on a stripped-down IoT toolchain (reading some sensor values, storing or transmitting a compressed/encrypted package) or a WebAssembly toolchain (where you want better performance than JS). WebAssembly also gives us another reason to keep code size down, since libraries will need to be downloaded.

Regardless of whether you're using a native Swift library or a wrapper for something written in C, ideally there would be a common concrete type for trafficking in binary data without dipping into "unsafe" types. "Generic indirection" is an interesting workaround, but I'm not sure it's such a great situation to have people decompressing binary streams into an Array<UInt8> rather than a Data on certain platforms. It works, but it's less than ideal.


Hear, hear. It’s really weird that, in what’s meant to be a safe systems language, you can’t write a simple byte-by-byte reader/writer without either dropping down to unsafe constructs or importing a sort-of-cross-platform-but-really-not-and-also-closed-source library.


SwiftNIO comes with ByteBuffer which is designed to be a versatile (and fast) data type for encoding/decoding binary formats.
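A minimal ByteBuffer round-trip, for the curious (note the module layout has shifted between NIO releases — ByteBuffer lives in NIOCore in NIO 2.x, so treat the import as an assumption about your NIO version):

```swift
import NIOCore

// Write an integer and a string, then read them back in order.
var buffer = ByteBufferAllocator().buffer(capacity: 16)
buffer.writeInteger(UInt32(0xDEAD_BEEF))
buffer.writeString("hi")
let value = buffer.readInteger(as: UInt32.self)
let text = buffer.readString(length: 2)
assert(value == 0xDEAD_BEEF && text == "hi")
```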


This isn’t the version used on Apple platforms.

I think what he means is that there’s effectively a closed-source “real Foundation” that works on Apple platforms and an incomplete open-source one for Linux. It’s like graphics drivers.


The Data type on Apple platforms is also open source.
The two versions are slightly out of sync, but they were once identical and are meant to be the same.