Why does 1_000 return an Int, but 1e3 a Double?

Hi :wave:

I'm curious whether there's a specific reason why using an exponent (e) always produces a Double.
At the moment it feels a bit inconsistent, given that the thousands separator works for both integer and floating-point types:

let a: Int = 1_000 // Fine (default)
let b: Double = 1_000 // Also fine
let c: UInt = 1_000 // Still fine

But with exponent it's always a Double:

let d: Int = 1e3 // Error

Just curious about the reasoning behind this design. Thanks! :grinning_face_with_smiling_eyes:

Swift actually has two kinds of numeric literals.

Those with only digits, separators, and optionally a base prefix (0x, 0b, 0o) are integer literals, representing whole numbers. Any type conforming to ExpressibleByIntegerLiteral can be initialized from them.

Those that also have a radix point (a decimal point, in base 10), an e, or a p are floating-point literals. They can represent non-integer numbers as well, so they instead require ExpressibleByFloatLiteral.

Since Double conforms to both protocols, you can use either kind of literal with it. But you can only use integer literals with Int, since it doesn't conform to ExpressibleByFloatLiteral.
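
To make that concrete, here's a minimal sketch of a type that opts into both kinds of literal, the way Double does. The `Meters` type is hypothetical, just for illustration:

```swift
// Hypothetical wrapper type that accepts both literal kinds,
// just as Double does.
struct Meters: ExpressibleByIntegerLiteral, ExpressibleByFloatLiteral {
    var value: Double
    init(integerLiteral value: Int) { self.value = Double(value) }
    init(floatLiteral value: Double) { self.value = value }
}

let a: Meters = 1_000 // goes through the integer-literal initializer
let b: Meters = 1e3   // goes through the float-literal initializer
```

Drop the ExpressibleByFloatLiteral conformance and the `1e3` line stops compiling, exactly as with Int.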


One minor quirk here is that if you don't specify a type, floating-point literals will default to Double, and integer literals will default to Int.
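
You can see those defaults with `type(of:)`:

```swift
// Both spellings denote the value one thousand, but the literal
// kind determines the default inferred type.
let i = 1_000
let f = 1e3
print(type(of: i)) // Int
print(type(of: f)) // Double
```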


Can the compiler be smart enough to check for integers and apply a more "intuitive" rule? Perhaps. But the real question is, can we?

// Question
let x = 0x1.23p8, y = 0x1.23p7
x + y // Would this succeed or fail?

// Answer
let x = 0x1.23p8 // 291, so Int
let y = 0x1.23p7 // 145.5, so Double

x + y // error: binary operator '+' cannot be applied to operands of type 'Int' and 'Double'

Cheers for the detailed explanation! I also just learned about the p exponent for hex literals :grinning_face_with_smiling_eyes:

Thinking about it further, you made me realize I overlooked something very obvious: negative exponents. A value like 1e-3 could only ever be a floating-point value, of course.

Thanks again!

p is nonsensical, though. It shifts by powers of 2, but it can't be used with binary literals. :pensive:

I say if your number is an integer, and whatever comes after the e is positive, then you should get an integer out of it. Same behavior as this:

public extension Numeric {
  /// Raise this base to a `power`.
  func toThe<Power: UnsignedInteger>(_ power: Power) -> Self
  where Power.Stride: SignedInteger {
    power == 0
    ? 1
    : (1..<power).reduce(self) { result, _ in result * self }
  }
}

But you can't even put . in binary/octal/hex literals in Swift. They're assumed to be integers.

Does it make any difference in terms of compile time whether you use an "integer literal" or a "floating-point literal" in an expression whose type is floating point? For example, in SwiftUI:

.frame(width: 123)  // integer literal, but still ends up a CGFloat

vs.

.frame(width: 123.0)

Is there any difference between the two? And does the CGFloat/Double distinction matter here?

Is there any benefit to using a floating-point literal in this case?

I doubt it matters. Most definitely not in terms of correctness since they’re most likely made to behave the same way. Perf-wise, you’d of course have to benchmark compile time and/or runtime to be sure. Still, premature optimization is the root of all evil.
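
For correctness at least, both spellings go through a literal-conformance initializer on the destination type, and on Apple platforms CGFloat (like Double) conforms to both protocols. A sketch with Double, to keep it SwiftUI-free:

```swift
// Double (like CGFloat) conforms to both ExpressibleByIntegerLiteral
// and ExpressibleByFloatLiteral, so both spellings yield the same value.
let w1: Double = 123    // integer literal, converted at compile time
let w2: Double = 123.0  // floating-point literal
print(w1 == w2) // true
```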

It seems to make type checking take longer: https://reddit.com/r/SwiftUI/comments/r119lk/_/hm159lv/?context=1

*shrug* A long type-check error is about as reliable an indicator as a printer low on cyan for your purposes. Still, assuming it is true, I wouldn't lose sleep over it.

That example is more likely about the - operator than the actual literal checking.

Hexfloat with a base-2 exponent is an established format, standardized in IEEE 754 and the C and C++ standards.

The format you’re describing is not. Which doesn’t mean that Swift can’t define it, but does mean that it wouldn’t be interoperable with most existing tooling, that it wouldn’t really be familiar to anyone, and that the burden to justify it is proportionally higher.

Why? This stuff is from before my time. I don't even understand what a p is. Power? Because Exponent wasn't available in hex?

What I do understand is that e magnifies order (e.g. 10e1 == 100)…

…and there's no equivalent for hex that left-shifts by 4 bits for each unit of whatever's on the right of some arcane letter, giving the same literal with more zeros on the right.

Yes, because E is a hex digit. The separator had to be outside of 0-9a-fA-F. X would have been confusing next to the 0x hex prefix. exPonent was chosen (this format dates to before my involvement in any standard, so I don't know the full story). :man_shrugging:t2:

I think your objection is just that the significand is written in radix-16 but the exponent is radix-2? It’s really all radix-2, except binary significands are too wide to be convenient; 16 is a convenient power of two that makes them much less unwieldy to write and read and especially less error-prone.
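
Worked through, a hexfloat reads as a significand written in hex, multiplied by two to a decimal-written exponent, which is where the earlier values came from:

```swift
// 0x1.23 == 1 + 2/16 + 3/256 == 1.13671875 (significand in hex);
// pN multiplies by 2^N (exponent written in decimal).
let x = 0x1.23p8 // 1.13671875 * 256 == 291.0
let y = 0x1.23p7 // 1.13671875 * 128 == 145.5
print(x, y) // 291.0 145.5
```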
