Issue when trying to get localised string for float

I am trying to get a localised string for a Float value using:

let floatValue: Float = 492612.6
String.localizedStringWithFormat("%g", floatValue) -> 492613

The rounding is done automatically, which is what I don't want.
Although the immediate solution is to use %.1f, which will keep the decimal, I don't want to fix my decimal precision at 1. I have different precision values ranging from 1 to 5. What will be the best way to achieve this?

Also, using NumberFormatter doesn't seem to be a better option, as it changes the precision of the value, e.g.

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
print(formatter.string(for: NSNumber(value: floatValue)) ?? "NaN") -> 492,612.594

Can you give us some examples of numbers and how you want them to render? Right now I’m not sure what you’re looking for.

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

I am looking for the localised format of a float.
In this case:
let floatValue: Float = 492612.6
should be displayed as "4,92,612.6" in the US region and "4 92 612,6" for the Sweden region.

That should be “492,612.6” and “492 612,6”.

Maybe you’re thinking of the lakh and crore system they use in India? In most other places, numbers are grouped in groups of three, and certainly in the US and Sweden. Thousands, millions, billions, etc.
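As an aside, if the Indian grouping really is what's wanted, NumberFormatter will produce it for an Indian English locale. A quick hedged sketch (the "en_IN" identifier is an assumption here, and the exact output depends on the CLDR data on your system):

```swift
import Foundation

let inFormatter = NumberFormatter()
inFormatter.numberStyle = .decimal
inFormatter.maximumFractionDigits = 1
inFormatter.locale = Locale(identifier: "en_IN")  // assumed locale identifier

// CLDR groups Indian English digits as three, then pairs (lakh/crore style),
// so this typically renders as "4,92,612.6".
let lakhGrouped = inFormatter.string(from: 492612.6 as NSNumber)
print(lakhGrouped ?? "nil")
```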

Yes, you are correct.
But what I am expecting here is not advice about the position of the grouping separator; rather, how can I get the exact number in localised format?
So, as you mentioned, I need the string "492 612,6" as a localised string.

You can handle this specific case using minimumFractionDigits and maximumFractionDigits. For example:

let nf = NumberFormatter()
nf.numberStyle = .decimal
nf.minimumFractionDigits = 1
nf.maximumFractionDigits = 1

let floatValue: Float = 492612.6

nf.locale = Locale(identifier: "en_US")
print(nf.string(from: floatValue as NSNumber)!) // 492,612.6

nf.locale = Locale(identifier: "sv_SE")
print(nf.string(from: floatValue as NSNumber)!) // 492 612,6

However, I was hoping you’d post more examples of what you’re looking for. For example, the above will render 492612.0 as 492,612.0 (in US English) and it’s not clear to me that this is what you want. If you can expand your examples, I can expand my explanation.

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

Well, I am not looking for this example in particular. As I can't fix the number of precision digits (it could be anything between 1 and 5), fixing the values of minimumFractionDigits and maximumFractionDigits is not a possible solution.

For example

492612.6 -> 492,612.6
49261.647 -> 49,261.647
1.23456 -> 1.23456

What I need is a localised representation of a Float value with the same precision.

If you're on Apple's platforms, you should look at NumberFormatter. It has usesSignificantDigits which seems like it suits your needs.
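A hedged sketch of that approach; the maximumSignificantDigits value here is an arbitrary pick, chosen only to cover the 1-to-5-fraction-digit examples above:

```swift
import Foundation

let sigFormatter = NumberFormatter()
sigFormatter.numberStyle = .decimal
sigFormatter.usesSignificantDigits = true
sigFormatter.maximumSignificantDigits = 7  // arbitrary cap; tune to your data
sigFormatter.locale = Locale(identifier: "en_US")

// Trailing zeros beyond the significant digits are dropped automatically.
print(sigFormatter.string(from: 492612.6 as NSNumber) ?? "nil")  // 492,612.6
print(sigFormatter.string(from: 1.23456 as NSNumber) ?? "nil")   // 1.23456
```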

What I need is a localised representation of a Float value with the same precision.

Unfortunately this problem is still not well specified. The issue is that the numbers you’re printing don’t have a precise representation in Float, so there always has to be some rounding.

Consider this:

for f in [1, 2, 0.5, 0.647] as [Float] {
    NSLog("%.10g", f)
}
// … 1
// … 2
// … 0.5
// … 0.6470000148

Float is a binary floating point value, so 1, 2, and 0.5 all have precise representations. However, 0.647 does not. That means there’s no way to get back the same precision because information about that precision is lost when you convert the literal to a Float.

To solve this problem you have to take a step back and look at the big picture. Where are these numbers coming from? Can you convert them to a type that does a better job of representing precise decimal numbers (like Decimal)? Can you store the expected precision along with the number? And so on.
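For instance, if the values can be carried as Decimal (built from strings below, which is an assumption about where your numbers come from), the original decimal digits survive formatting because no binary rounding step is involved:

```swift
import Foundation

let decFormatter = NumberFormatter()
decFormatter.numberStyle = .decimal
decFormatter.maximumFractionDigits = 5  // covers the 1...5 precision range mentioned earlier
decFormatter.locale = Locale(identifier: "en_US")

for s in ["492612.6", "49261.647", "1.23456"] {
    // Decimal(string:) keeps the exact decimal digits, unlike Float/Double
    let d = Decimal(string: s)!
    print(decFormatter.string(from: d as NSDecimalNumber) ?? "nil")
}
// 492,612.6
// 49,261.647
// 1.23456
```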

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

Could you provide some more context to the problem you're trying to solve? Where are the significant digit constraints coming from? Are you storing and handling the precision yourself?

Barring that, the SwiftCurrency library has to handle varying precision of decimal values and localizing them.

Particularly these two sections might be useful:

Let us know if we can be more help.

I understand the float representation issue. What I don't get is:

let floatValue: Float = 492612.6
print("\(floatValue)") -> produces 492612.6, but
String.localizedStringWithFormat("%g", floatValue) -> produces 492613

If the print statement produces the same value, can I expect that the Float type was able to represent it, hence the result? What changes in the localised-string scenario that alters the presentation?

When you write:

print("\(floatValue)")

it will end up using floatValue.description, which is the shortest textual decimal representation of the value that will convert back to the exact same floating point value (see LosslessStringConvertible).
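A small illustration of that round-trip guarantee, using 0.647 from the earlier example:

```swift
let f: Float = 0.647
let s = f.description   // shortest decimal string that round-trips
print(s)                // 0.647
assert(Float(s) == f)   // the LosslessStringConvertible guarantee
```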

So let's say we have these:

let v1: Float = 492612.6
let v0 = v1.nextDown // The closest representable value less than v1
let v2 = v1.nextUp // The closest representable value greater than v1

Note that the value of v1 is not exactly 492612.6; it's just the closest representable floating point value to 492612.6. Here's what their bit patterns (sign bit, exponent bits and significand bits) look like:

v0.bitPattern == 0b0_10010001_11100001000100010010010
v1.bitPattern == 0b0_10010001_11100001000100010010011
v2.bitPattern == 0b0_10010001_11100001000100010010100

When we print v0, v1, v2 (which ends up using their .description property), we get the shortest textual decimal representation that can be used to convert back to the exact same underlying floating point values:

print(v0) // 492612.56
print(v1) // 492612.6
print(v2) // 492612.62

print(v0 == Float("492612.56")) // true
print(v1 == Float("492612.6"))  // true
print(v2 == Float("492612.62")) // true

Again, note that the decimal representations we see here are just the shortest ones that will map back to the same underlying floating point values, and they are not necessarily exact:

print(v1 == Float("492612.59")) // true
print(492612.6 as Float == 492612.59 as Float) // true

Since every (finite) single precision IEEE 754 floating point value can be represented exactly with a finite number of decimal digits, we can see what they are:

print(String(format: "%.10f", v0)) // 492612.5625000000
print(String(format: "%.10f", v1)) // 492612.5937500000
print(String(format: "%.10f", v2)) // 492612.6250000000

So, 492612.6 is not representable exactly as a Float value, and the closest representable Float value is exactly 492612.59375.

Note that there would be little point in writing this Float value in decimal as 492612.59, 492612.594, or 492612.5938 instead of just 492612.6, since they all map to the same closest representable Float value, because the distance between representable values at this magnitude is 0.03125 == Float(492612.6).ulp.

And here's what v1 = Float(492612.6) looks like when printed with 11 to 1 significant decimal digits using .localizedStringWithFormat on my system:

for p in (1 ... 11).reversed() {
    print(String.localizedStringWithFormat("%.\(p)g", v1))
}
// 492 612,59375
// 492 612,5938
// 492 612,594
// 492 612,59
// 492 612,6
// 492 613
// …

(Format specifiers (like "%f", "%g" and "%.3g") are described in the IEEE printf specification.)


I think what @vpadmnabh is asking is how to get the shortest decimal representation, but localized, without pre-specifying the number of decimal places.
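One hedged way to get that, sketched as a hypothetical helper (not something from the thread): take the shortest representation from description, count its fraction digits, and feed that count into NumberFormatter. This sketch deliberately ignores the scientific-notation form (like 1e+09) that description produces for large values:

```swift
import Foundation

// Hypothetical helper: localize a Float while keeping exactly the fraction
// digits of its shortest round-trippable decimal representation.
func localizedShortest(_ value: Float, locale: Locale) -> String? {
    let shortest = value.description          // e.g. "492612.6"
    let parts = shortest.split(separator: ".")
    let fractionDigits = parts.count == 2 ? parts[1].count : 0

    let nf = NumberFormatter()
    nf.numberStyle = .decimal
    nf.locale = locale
    nf.minimumFractionDigits = fractionDigits
    nf.maximumFractionDigits = fractionDigits
    return nf.string(from: value as NSNumber)
}

// Note: the sv_SE group separator may be a non-breaking space, not U+0020.
print(localizedShortest(492612.6, locale: Locale(identifier: "sv_SE")) ?? "nil")
```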


If that is the case, it is important to note that the shortest (lossless-string-convertible) decimal representation only makes sense in relation to a specific floating point type (like Float), and that it is not something that is particularly standard or commonly used everywhere. Even in Swift's own REPL (at least on my MBP), you can see a different method being used for rendering Float values in decimal:

1> let v: Float = 492612.6
v: Float = 492612.594   <- Here! (Exact value is 492612.59375 though)
2> v.description
$R1: String = "492612.6"
3> print(v)
492612.6

Also, the shortest lossless-string-convertible decimal representation will use the following format for large values:

let oneBillion: Float = 1_000_000_000
print(oneBillion) // 1e+09

And, just to further point out the things to keep in mind when using floating point types, note that the distance between each representable value at this magnitude is 64, so:

let a: Float = 1_000_000_000
let b: Float = a + 32
let c: Float = a / 100 + 0.5
print(a == b) // true
print(a) // 1e+09
print(b) // 1e+09
print(c) // 10000000.0

When it prints c, it chooses to use one decimal even though Float(10_000_000).ulp is 1.0, so the three closest representable values are:

9999999.0
10000000.0
10000001.0

Which means that the decimal digit to the right of the decimal point is unnecessary, so it's not the shortest possible decimal representation (I actually don't know what the method used for this is called). Perhaps the redundant decimal digit is used to differentiate it from an integer value.