In a playground I tried:
var g: UInt16 = 0 // 0
g <<= 8 // 0
g |= UInt16(0x12) // 18
g <<= 8 // 4608
g |= UInt16(0xAB) // 4779
String(g, radix: 16, uppercase: true) // "12AB"
String(UInt16(bigEndian: g), radix: 16, uppercase: true) // "AB12"
String(UInt16(littleEndian: g), radix: 16, uppercase: true) // "12AB"
String(g.bigEndian, radix: 16, uppercase: true) // "AB12"
String(g.littleEndian, radix: 16, uppercase: true) // "12AB"
I loaded the octets into g with the more significant one first, so I thought I should use the big-endian setting. But that's wrong; I needed the little-endian one. BTW, I'm using an (Intel) Mac, so my system is a little-endian one. Would my assumption of converting a manually big-endian construction with the big-endian initializer have worked if I were on a big-endian system? (Maybe someone with a big-endian Linux system could check?)
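Here is what I expect on each kind of host; I can only verify the little-endian results myself, so the big-endian ones are an assumption I'd like confirmed:

let octets: [UInt8] = [0x12, 0xAB]
var h: UInt16 = 0
for octet in octets {
    h = (h << 8) | UInt16(octet)                    // accumulate, most significant octet first
}
String(h, radix: 16, uppercase: true)               // "12AB" regardless of host byte order
String(h.bigEndian, radix: 16, uppercase: true)     // little-endian host: "AB12"; big-endian host: "12AB"
String(h.littleEndian, radix: 16, uppercase: true)  // little-endian host: "12AB"; big-endian host: "AB12"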
In my main code, I ended up sticking with my manual big-endian accumulation and byte-swapping when the little-endian interpretation is needed:
public mutating func next() -> Element? {
    var result: Element = 0
    // Accumulate the element's octets, most significant one first.
    for _ in 0 ..< MemoryLayout<Element>.size {
        guard let byte = base.next() else { return nil }
        result <<= 8
        result |= Element(byte)
    }
    switch endian {
    case .big:
        // The accumulated value already matches a big-endian octet stream.
        return result
    case .little:
        // For a little-endian octet stream, reverse the byte order.
        return result.byteSwapped
    }
}
I hope this works on all architectures.
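For reference, here is a minimal, self-contained sketch of the kind of wrapper I have in mind, together with a quick check. Apart from next(), all the names (FixedWidthElementIterator, Endianness, base, endian) are placeholders of my own, not necessarily how the real code is structured:

enum Endianness { case big, little }

struct FixedWidthElementIterator<Element: FixedWidthInteger, Base: IteratorProtocol>: IteratorProtocol
where Base.Element == UInt8 {
    var base: Base
    let endian: Endianness

    mutating func next() -> Element? {
        var result: Element = 0
        // Accumulate the element's octets, most significant one first.
        for _ in 0 ..< MemoryLayout<Element>.size {
            guard let byte = base.next() else { return nil }
            result <<= 8
            result |= Element(byte)
        }
        switch endian {
        case .big:    return result             // already matches a big-endian stream
        case .little: return result.byteSwapped // reverse for a little-endian stream
        }
    }
}

// Reading [0x12, 0xAB] as one UInt16 under each byte order:
var bigIterator = FixedWidthElementIterator<UInt16, IndexingIterator<[UInt8]>>(
    base: [0x12, 0xAB].makeIterator(), endian: .big)
var littleIterator = FixedWidthElementIterator<UInt16, IndexingIterator<[UInt8]>>(
    base: [0x12, 0xAB].makeIterator(), endian: .little)
String(bigIterator.next()!, radix: 16, uppercase: true)    // "12AB"
String(littleIterator.next()!, radix: 16, uppercase: true) // "AB12"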