Does an unnecessary FixedWidthInteger(big/littleEndian:) get optimized away?

I make heavy use of FixedWidthInteger(bigEndian:) and FixedWidthInteger(littleEndian:) in my TIFF file parser. Are the library and compiler clever enough to optimize away the copy if byte swapping isn't needed?
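For context, the pattern in question looks something like this minimal sketch (the byte values and the `loadUnaligned` call are illustrative, not from the actual parser; `loadUnaligned` requires Swift 5.7+):

```swift
// Four bytes encoding 0x0000002A (42) in big-endian order.
let bytes: [UInt8] = [0x00, 0x00, 0x00, 0x2A]

let value = bytes.withUnsafeBytes {
    // loadUnaligned avoids alignment traps when reading
    // a UInt32 out of an arbitrary byte buffer.
    UInt32(bigEndian: $0.loadUnaligned(as: UInt32.self))
}
print(value) // 42
```

On a little-endian host the `UInt32(bigEndian:)` init swaps bytes; on a big-endian host it is the identity, which is the case the question asks about.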

In a word: yes.


Can it do this for a `map` call too? Or should I make an effort to avoid that call if I know the bytes don’t need to be swapped?

What’s the best way to determine the current execution environment’s endianness?

Endianness is a property of the compile target; you're never unsure at runtime what the platform's endianness is.

We have a condition for it, though:

```swift
#if _endian(big)
    // big-endian-only code
#else
    // little-endian-only code
#endif
```
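If you'd rather derive this at runtime from the standard library alone (a sketch, not from the thread), you can exploit the fact that `littleEndian` is the identity on a little-endian host and a byte swap on a big-endian one:

```swift
// On a little-endian host, UInt16(0x00FF).littleEndian == 0x00FF.
// On a big-endian host it would be the swapped value 0xFF00.
let hostIsLittleEndian = UInt16(0x00FF).littleEndian == 0x00FF
print(hostIsLittleEndian) // true on x86_64 and arm64
```

The compiler condition is still preferable, since it resolves at compile time rather than producing a (trivially foldable) runtime expression.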

FWIW, you're almost never going to be running on a big-endian platform. Looking at Debian's list of supported ports, the big-endian ports are all dead or discontinued, and the only official ports for bi-endian machines are the little-endian varieties.

The lone holdout seems to be IBM's s390x. If you're not planning on running on IBM mainframes, you can basically ignore that big-endian machines exist.


In normal use, you should never need to know whether you're running on big endian or little endian. Simply use the appropriate init to load data that is big or little endian.
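As a concrete sketch of that advice (the header bytes follow the TIFF 6.0 layout; the parsing code itself is illustrative and assumes Swift 5.7+ for `loadUnaligned`):

```swift
// "MM" followed by 42 as a 16-bit word: a big-endian TIFF header.
let header: [UInt8] = [0x4D, 0x4D, 0x00, 0x2A]

let magic = header.withUnsafeBytes {
    $0.loadUnaligned(fromByteOffset: 2, as: UInt16.self)
}

// The file declared itself big-endian ("MM"), so use init(bigEndian:).
// No check of the host's endianness is required anywhere.
print(UInt16(bigEndian: magic)) // 42
```

The branch you may need is on the *file's* byte order (the "II"/"MM" mark), never on the host's.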

A call to `map` will not be optimized away, though.


But you should never be mapping a naked littleEndian: or bigEndian: init; you would roll it into whatever you're actually doing with those values. And in that context, it will be optimized away.
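For instance (a sketch with made-up values), a sum over big-endian pixel data can apply the init at the point of use, which lets the optimizer fuse the swap into the loop, or elide it entirely on a big-endian host:

```swift
// A big-endian payload, constructed portably for the example:
let raw: [UInt16] = [UInt16(1).bigEndian, UInt16(2).bigEndian]

// Fold the byte-order fix into the actual work instead of
// materializing a swapped copy of the whole array first:
let total = raw.reduce(UInt32(0)) { $0 + UInt32(UInt16(bigEndian: $1)) }
print(total) // 3
```

No intermediate array is allocated, unlike the `map`-based version.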


Here’s an example of where I’m mapping the array to fix the byte ordering (it’s an array of UInt16 pixel values). I don’t want to do this if I don't have to, as I am reading gigabytes of this data.

```swift
func getArray<T: FixedWidthInteger>(_ ioArray: inout [T]) throws {
    let bytesRead = try ioArray.withUnsafeMutableBytes { (inBuf) -> Int in
        // The read call was garbled in the original post; this assumes
        // some reader method that fills the buffer and returns a byte count.
        return try self.read(into: inBuf)
    }
    assert(bytesRead == MemoryLayout<T>.size * ioArray.count)
    if self.bigEndian {
        ioArray = ioArray.map { T(bigEndian: $0) }
    } else {
        ioArray = ioArray.map { T(littleEndian: $0) }
    }
}
```

Ah yes, of course! Even the processors that can switch endianness don't, I think, switch mid-process.

In any case, this compiler condition is what I wanted.

I'd rather not make assumptions about what processor architectures will or won’t exist in the future, and just do my best to write non-presumptive code today.

That's fair enough, but it's also important to consider the costs today of trying to support both. It's your judgement to make, of course, but I was considering this recently for a library I'm working on, so I'd like to share some of the things I considered:

  • Continuous integration. If the code isn't being tested, it isn't worth having it sit around, growing stale. AFAIK only Travis CI offers builds for IBM Z-series (for FOSS projects only).
  • What if something goes wrong? I've had this problem recently with Windows - if a CI run fails for some reason on this platform, how easily can you reproduce the environment to debug it locally? Trying to guess at fixes through a CI log is super-unproductive.
  • Availability of Swift. There was a port for z/OS, but the latest version I could find is 5.0.2 (from all the way back in July 2019!). Even small language additions, like the ability to omit a "return" in single-expression functions, won't be available - not to mention concurrency and any future extensions. Why box yourself in with those constraints?
  • Reliability of the Swift port. I remember that users of the z/OS port would post here quite frequently with issues caused by bugs that only appeared on that platform. In particular, enums were quite fragile and could result in strange crashes (remember that Optional is also an enum, so it comes up a lot), and there were issues with heavily optimised types like String, mostly resulting from endianness, as it happens. Even if somebody did port Swift to a BE machine, making it reliable is going to be some work.

Ultimately, I decided it wasn't worth it. Either you have a bunch of untested code lying around, or you limit yourself to a buggy Swift 5.0.2. And for what? The ability to say my project supports some incredibly niche system that the compiler itself doesn't even support any more?

If there ever is such a system worth porting to, it would be much easier to just port the project at that time.


I agree that the lack of ability to test the code is significant. Eh, maybe you've persuaded me to drop it; it certainly simplifies the coding (although it wasn't so bad in the end, since most of it bottlenecks through one method).
