If the bytes are [0xAB, 0xCD, 0xEF, 0x11], you're logically constructing the following values on each pass through the loop:
0x000000AB
0x0000ABCD
0x00ABCDEF
0xABCDEF11
Note that this result is the same regardless of the endianness of your CPU. Internally, on a big-endian system, 0xABCDEF11 is represented by the consecutive bytes [0xAB, 0xCD, 0xEF, 0x11]. On a little-endian system, it's represented by the consecutive bytes [0x11, 0xEF, 0xCD, 0xAB]. But as a logical UInt32 value, it's the number 0xABCDEF11 either way.
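Concretely, that pass-by-pass construction can be sketched like this (the byte array and the exact loop shape here are my assumptions, not necessarily your code):

```swift
let bytes: [UInt8] = [0xAB, 0xCD, 0xEF, 0x11]

// Shift the accumulator left one byte, then OR in the next byte.
var result: UInt32 = 0
for byte in bytes {
    result = (result << 8) | UInt32(byte)
}
// result is 0xABCDEF11 on every CPU, big- or little-endian alike.
```

The shifts and ORs operate on semantic values, not on stored bytes, which is why the host's byte order never enters into it.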
If you want the number 0xABCDEF11 for that sequence of bytes, then you should always return result directly.
If you want the number 0x11EFCDAB for that sequence of bytes, then you should always return result.byteSwapped.
If you want the sequence of bytes to be interpreted differently based on the endianness of the system you're running on, then your implementation of next() is acceptable. But it seems unlikely to me that you actually want that.
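To make the two fixed-order options concrete, here is a minimal sketch (the literal value stands in for the accumulated result from the loop):

```swift
let result: UInt32 = 0xABCDEF11  // what the shift-accumulate loop produces

// Interpreting the bytes big-endian-first: same answer on every platform.
let bigEndianReading = result              // 0xABCDEF11

// Interpreting the bytes little-endian-first: also the same on every platform.
let littleEndianReading = result.byteSwapped  // 0x11EFCDAB
```

byteSwapped reverses the byte order of the value itself, so it gives you a platform-independent way to flip interpretations, with no conditionals on host endianness.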
In this case, you are constructing a number byte-by-byte, so it's fair to be wondering about endianness concerns. But it's the endianness of the bytes you're receiving that matters, not the endianness of the host CPU. Whoever is providing those bytes to you probably intends them to represent a particular number, and serialized those bytes in a particular order to represent that number.
My favorite example for explaining endianness is a hypothetical Fraction struct. Imagine the following definitions:
struct Fraction1 {
    var numerator: Int
    var denominator: Int
}

struct Fraction2 {
    var denominator: Int
    var numerator: Int
}
The two structs are semantically equivalent; they both represent a fraction by storing a numerator and a denominator. The only difference is the order of the internal fields, but that order is irrelevant to anyone who is not inspecting the raw bytes underlying the struct.
In a similar way, little/big-endianness only affects the order of the raw bytes underlying various Int types. It does not affect their semantic value at all. 0xAB << 8 is semantically 0xAB00 regardless of the order of the underlying bytes.
// Two representations of 0xABCDEF11
struct MyBigEndianInt {
    var highOrderByte: UInt8       // 0xAB
    var mediumHighOrderByte: UInt8 // 0xCD
    var mediumLowOrderByte: UInt8  // 0xEF
    var lowOrderByte: UInt8        // 0x11
}

struct MyLittleEndianInt {
    var lowOrderByte: UInt8        // 0x11
    var mediumLowOrderByte: UInt8  // 0xEF
    var mediumHighOrderByte: UInt8 // 0xCD
    var highOrderByte: UInt8       // 0xAB
}
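You can observe which of those two layouts your host actually uses by inspecting the raw bytes of a UInt32 (a small sketch; the variable names are mine):

```swift
let value: UInt32 = 0xABCDEF11

// Copy out the bytes exactly as they sit in memory.
let rawBytes = withUnsafeBytes(of: value) { Array($0) }
// On a little-endian host (x86_64, arm64): [0x11, 0xEF, 0xCD, 0xAB]
// On a big-endian host:                    [0xAB, 0xCD, 0xEF, 0x11]

// The semantic value is identical either way.
// And you can always pin down a fixed byte order explicitly:
let bigEndianBytes = withUnsafeBytes(of: value.bigEndian) { Array($0) }
// [0xAB, 0xCD, 0xEF, 0x11] on any host.
```

The bigEndian (and littleEndian) properties are the standard-library way to say "give me this value as it would be stored in that byte order," which is usually what you want when serializing.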