# Pre-proposal: Safer Decimal Calculations

First draft towards a tentative pre-proposal:


------

Pre-proposal: Safer Decimal Calculations
Proposal: TBD
Author(s): Rainer Brockerhoff
Status: TBD
Review manager: TBD

Quoting the book “The Swift Programming Language”: “Swift adopts safe
programming patterns…”; “Swift is friendly to new programmers”. The
words “safe” and “safety” are found many times in the book and in online
documentation. The usual rationale for safe features is, to quote a
typical sentence, “…enables you to catch and fix errors as early as
possible in the development process”.

One frequent stumbling point for both new and experienced programmers
stems from the vagaries of binary floating-point arithmetic. This
tentative pre-proposal suggests one possible way to make the dangers
clearer.

My intention here is to start a discussion on this to inform the ongoing
(and future) reasoning on extending and regularising arithmetic in Swift.

Motivation

Floating-point hardware on most platforms that run Swift — that is,
Intel and ARM CPUs — uses the binary representation forms of the IEEE
754-2008 standard. Although a few mainframes and software libraries
implement the decimal representations, these are not currently leveraged
by Swift. Apple's NSDecimal and NSDecimalNumber implementation is
awkward to use in Swift, especially as the standard arithmetic operators
cannot be used directly.
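
For illustration, here is what decimal arithmetic looks like through
NSDecimalNumber today (a sketch using the current Foundation method
names):

```swift
import Foundation

// Standard operators are unavailable on NSDecimalNumber;
// every operation goes through a method call:
let price = NSDecimalNumber(string: "0.10")
let tax = NSDecimalNumber(string: "0.02")
let total = price.adding(tax)   // rather than simply price + tax
print(total)                    // 0.12, exactly
```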

Although it is possible to express floating-point constants in
hexadecimal (0x123.AB) with an optional binary exponent (0x123A.Bp-4),
decimal-form floating-point constants (123.45 or 1.2345e2) are extremely
common in practice.

Unfortunately it is tempting to use floating-point arithmetic for
financial calculations or other purposes such as labelling graphical or
statistical data. Constants such as 0.1, 0.01, 0.001 and variations or
multiples thereof will certainly be used in such applications — and
almost none of these constants can be precisely represented in binary
floating-point format.

Rounding errors will therefore be introduced at the outset, causing
unexpected or outright buggy behaviour down the line which will be
surprising to the user and/or the programmer. This will often happen at
some point when the results of a calculation are compared to a constant
or to another result.
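
A minimal sketch of this failure mode, accumulating the inexact
constant 0.1:

```swift
// Ten additions of 0.1 do not sum to exactly 1.0 in binary
// floating point, so a comparison against the constant fails:
var total = 0.0
for _ in 0..<10 {
    total += 0.1
}
print(total == 1.0)   // false
print(total)          // 0.9999999999999999
```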

Current Solution

As things stand, Swift's default print() function, Xcode playgrounds,
etc. do some discreet rounding or truncation to make the problem less
apparent: a Double initialized with the literal 0.1 prints as 0.1
instead of the exact value of the internal representation, something
like 0.100000000000000005551115123125782702118158340454101562.
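
The masking is easy to demonstrate; asking for more digits reveals the
stored value:

```swift
import Foundation

let d = 0.1
print(d)   // prints "0.1"; the default description rounds

// Formatting with enough fractional digits shows the value
// actually stored in the Double:
print(String(format: "%.55f", d))
// 0.1000000000000000055511151231257827021181583404541015625
```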

This, unfortunately, masks this underlying problem in settings such as
“toy” programs or educational playgrounds, leading programmers to be
surprised later when things won't work. A cursory search on
StackOverflow reveals tens of thousands of questions with headings like
“Is floating point math broken?”.

Warning on imprecise literals

To make decimal-format floating-point literals safe, I suggest that the
compiler should emit a warning whenever a literal is used that cannot be
safely represented as an exact value of the type expected. (Note that
0.1 cannot be represented exactly as any binary floating-point type.)

The experienced programmer will, however, be willing to accept some
imprecision under circumstances that cannot be reliably determined by
the compiler. I suggest, therefore, that this acceptance be indicated by
an annotation to the literal; a form such as ~0.1 might be easiest to
read and implement, as the prefix ~ operator currently has no meaning
for a floating-point value. A “fixit” would be easily implemented to
insert the missing notation.
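
The claim that prefix ~ is free for floating-point values can be checked
today: the standard library declares the operator (bitwise NOT on
integers) but gives it no floating-point overload, so one can be added.
This sketch only shows that the syntax is available; the proposed
compiler warning itself obviously cannot be emulated in a library:

```swift
// Prefix ~ has no overload for Double, so this compiles without
// conflict. It is a no-op marker meaning "I accept the imprecision".
prefix func ~ (value: Double) -> Double {
    return value
}

let interval = ~0.1   // reads as proposed in the text
print(interval)       // 0.1
```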

Conversely, to keep inexperienced or hurried programmers from strewing
~s everywhere, it would be useful to warn, and offer to fix, when the ~
is present but the literal does have an exact representation.

Tolerances

A parallel idea is that of tolerances: introducing an ‘epsilon’ value to
be used in comparisons. Unfortunately, an effective epsilon value
depends on the magnitude of the operands, and there are many edge cases.

Introducing a special type along the lines of “floating point with
tolerances” — using some accepted engineering notation for literals like
100.5±0.1 — might be useful for specialised applications but will not
solve this specific problem. Expanding existing constructs to accept an
optional tolerance value, as has been proposed elsewhere, may be useful
in those specific instances but would do little to raise programmer
awareness of unsafe literals.
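
To make the magnitude problem concrete, here is one common form of
tolerant comparison (a sketch; the scaling by the larger operand is
exactly what a fixed epsilon gets wrong):

```swift
// A relative comparison: the effective epsilon scales with the
// magnitude of the operands.
func approxEqual(_ a: Double, _ b: Double,
                 tolerance: Double = 1e-12) -> Bool {
    return abs(a - b) <= tolerance * max(abs(a), abs(b), 1.0)
}

print(0.1 + 0.2 == 0.3)               // false
print(approxEqual(0.1 + 0.2, 0.3))    // true
print(approxEqual(1e20 + 1.0, 1e20))  // true: at this magnitude,
                                      // 1.0 is below the tolerance
```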

Full Decimal type proposal

There are cogent arguments that prior art/habits and the already complex
interactions between Double, Float, Float80 and CGFloat are best left alone.

However, there remains a need for a precise implementation of a workable
Decimal value type for financial calculations. IMHO repurposing the
existing NSDecimalNumber from Objective-C is not the best solution.

As most experienced developers know, the standard solution for financial
calculations is to internally store fixed-point values — usually but not
always in cents — and then print the “virtual” point (or decimal comma,
for the rest of us) on output.
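
A minimal sketch of that classic fixed-point approach, storing cents in
an integer and placing the point only on output (the formatting details
are illustrative):

```swift
import Foundation

// Amounts are stored in cents; all arithmetic is exact integer
// arithmetic, so no binary rounding can creep in.
let unitPriceCents = 1999                    // $19.99
let quantity = 3
let totalCents = unitPriceCents * quantity   // 5997

// The "virtual" decimal point appears only when printing:
print(String(format: "$%d.%02d", totalCents / 100, totalCents % 100))
// $59.97
```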

I propose, therefore, an internal data layout like this:

- UInt16: position of the “virtual” point, starting at 0
- UInt16: data array size - 1
- [Int32]: contiguous data array, little-endian order, grown as needed

Note that both UInt16 fields being zero implies that the number is
reduced to a 32-bit integer. Number literals in Swift can be up to 2048
bits in size, so the maximum data array size would be 64, although it
could conceivably grow beyond that. The usual cases of the virtual point
position being 0 or 2 could be aggressively optimized for the normal
arithmetic operators.
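
A sketch of the proposed layout as a Swift struct; the field names are
mine, chosen for illustration only:

```swift
// The two UInt16 header fields plus the little-endian limb array.
struct ProposedDecimal {
    var pointPosition: UInt16      // digits right of the "virtual" point
    var limbCountMinusOne: UInt16  // data array size - 1
    var limbs: [Int32]             // least-significant limb first

    // Both header fields zero: the value is a plain 32-bit integer.
    var isPlainInt32: Bool {
        return pointPosition == 0 && limbCountMinusOne == 0
    }
}

let two = ProposedDecimal(pointPosition: 0, limbCountMinusOne: 0,
                          limbs: [2])
print(two.isPlainInt32)   // true
```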

Needless to say such a Decimal number would accept and represent
literals such as 0.01 with no problems. It would also serve as a BigNum
implementation for most purposes.

No doubt implementing this type in the standard library would allow for
highly optimized implementations for all major CPU platforms. In
particular, the data array should probably be [Int64] for 64-bit platforms.

Acknowledgement

Thanks to Erica Sadun for their help with an early version of this
pre-proposal.

Some references

http://code.jsoftware.com/wiki/Essays/Tolerant_Comparison

--
Rainer Brockerhoff <rainer@brockerhoff.net>
Belo Horizonte, Brazil
"In the affairs of others even fools are wise
In their own business even sages err."
http://brockerhoff.net/blog/

By using an array of signed Int32, I think you'll lose one bit every
time the number is extended. Should the sign of the number instead be
hidden in the data-array-size field, either as its sign or as its lsb?
- second field = (data array size - 1) << 1 | (negative ? 1 : 0)
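
The packing just described, sketched (names are illustrative):

```swift
// Hide the sign in the least significant bit of the size field:
func packSizeField(arraySize: UInt16, negative: Bool) -> UInt16 {
    return (arraySize - 1) << 1 | (negative ? 1 : 0)
}

let field = packSizeField(arraySize: 3, negative: true)  // 5
let arraySize = (field >> 1) + 1   // recover the size: 3
let isNegative = field & 1 == 1    // recover the sign: true
print(arraySize, isNegative)       // 3 true
```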

Dany


Le 18 mars 2016 à 18:42, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> a écrit :

First draft towards a tentative pre-proposal:
Pre-proposal: Safer Decimal Calculations · GitHub
------

I propose, therefore, an internal data layout like this:

UInt16 - position of the “virtual” point, starting at 0
UInt16 - data array size - 1
[Int32] - contiguous data array, little-endian order, grown as needed.
Note that both UInt16 fields being zero implies that the number is
reduced to a 32-bit Integer. Number literals in Swift can be up to 2048
bits in size, so the maximum data array size would be 64, although it
could conceivably grow beyond that. The usual cases of the virtual point
position being 0 or 2 could be aggressively optimized for normal
arithmetic operators.

I agree with all of this; I don’t really know enough to comment on the specific implementation of a decimal type, but we definitely need something other than NSDecimal.

I don’t know if it’s possible, but I think that tolerances should be able to take a percentage. For example, I could write 0.0000001±1%, which would be much clearer (and less error-prone) than 0.0000001±0.000000001.

This is also useful because we could benefit from adding tolerances to types, allowing us to declare something like var foo: Float±0.01 = 0, which specifies a floating-point value to/from which the Swift compiler will not allow values to be added, subtracted, etc. if they have a tolerance higher than my requirement. While cumulative error could still cause issues, setting my tolerance reasonably low for my use case would limit how much error can accumulate over the lifetime of my data. Here, too, being able to specify a percentage is useful when the variable has a clear right-hand side, such as var foo: Float±5% = 0.1 (effectively var foo: Float±0.005 = 0.1).


On 18 Mar 2016, at 22:42, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> wrote:

_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution

I have no stake in this proposal, except for:

I suggest, therefore, that this acceptance be indicated by
an annotation to the literal; a form such as ~0.1 might be easiest to
read and implement, as the prefix ~ operator currently has no meaning
for a floating-point value.

Whatever you do, don't touch the literals! I specify NSTimeIntervals of 0.1, 0.2, 0.25, etc. all over the place, and I couldn't care less if my animations are one femtosecond off.

Don't pollute everyone's apps with tildes just because there's a niche that needs to care about precision loss.

A.

Rainer: I quickly skimmed this. Just to make sure I am understanding 100%: are you proposing fixed-point decimal calculation or floating-point decimal calculation? The former, no?


On Mar 18, 2016, at 3:42 PM, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> wrote:


Hm. I didn't really mean that the sign should be repeated in every
element, so technically it should be [UInt32] with a note saying
negative numbers are stored in 2's complement.


On 3/18/16 21:00, Dany St-Amant via swift-evolution wrote:


By using a table of signed Int32, I think you'll lose one bit every time the number is extended.
Should the sign of the number be hidden in the data array size, either as the sign or as the lsb?
- second field = (data array size - 1) << 1 | (negative ? 1 : 0)


Just wanted to expand on the type tolerances idea with an example:

let a:Float±0.1 = 1234.56
let b:Float±0.5 = 123.456

let result:Float±0.25 = a + b // Error as b’s tolerance > 0.25

Of course this still requires developers to actually specify the tolerances at some point, but Swift’s type inference could allow them to be passed down. For example, if I omitted the type for result, Swift could infer it as Float±0.5, the higher tolerance of the two operands. A plain Float would be equivalent to Float±infinity.

This would also make the proposed tilde operator even more useful as it could be used to ignore exceptions to the tolerance, so I could change my last line to:

let result:Float±0.25 = a + ~b

This allows me to use the value of b (albeit with potential error higher than I’d like), without changing the tolerance of result for later operations.

Also, once we have a full-accuracy decimal type, I wonder whether we should make it the default on the grounds of safety? While it might be overkill for many programs, I don't think the programs that would see a performance impact would be unduly burdened by having to opt into “unsafe” floating point explicitly, via the Double or Float type (with or without a tolerance).

···

On 19 Mar 2016, at 00:15, Haravikk via swift-evolution <swift-evolution@swift.org> wrote:

I agree with all of this; I don’t really know enough to comment on the specific implementation of a decimal type, but we definitely need something other than NSDecimal.

I don’t know if it’s possible, but I think that tolerances should be able to take a percentage. For example, I could write 0.0000001±1%, which would be much clearer (and less error prone) than 0.0000001± 0.000000001

This is also useful because I think we could also benefit from the addition of tolerances to types, allowing us to declare something like: var foo:Float±0.01 = 0, which specifies a floating point value that the Swift compiler will not allow values to be added/substracted etc. to/from if they have a tolerance higher than my requirement. While cumulative error could still result in issues, if my tolerance is set reasonably low for my use-case then it would allow me to limit how much error I can accumulate in the lifetime of my data. In this being able to specify a percentage is useful when the variable has a clear right hand side such as var foo:Float±5% = 0.1 (effectively var foo:Float±0.005 = 0.1).

On 18 Mar 2016, at 22:42, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

First draft towards a tentative pre-proposal:
Pre-proposal: Safer Decimal Calculations · GitHub
------

Pre-proposal: Safer Decimal Calculations
Proposal: TBD
Author(s): Rainer Brockerhoff
Status: TBD
Review manager: TBD

Quoting the “The Swift Programming Language” book: “Swift adopts safe
programming patterns…”; “Swift is friendly to new programmers”. The
words “safe” and “safety” are found many times in the book and in online
documentation. The usual rationale for safe features is, to quote a
typical sentence, “…enables you to catch and fix errors as early as
possible in the development process”.

One frequent stumbling point for both new and experienced programmers
stems from the vagaries of binary floating-point arithmetic. This
tentative pre-proposal suggests one possible way to make the dangers
somewhat more clear.

My intention here is to start a discussion on this to inform the ongoing
(and future) reasoning on extending and regularising arithmetic in Swift.

Motivation

Floating-point hardware on most platforms that run Swift — that is,
Intel and ARM CPUs — uses the binary representation forms of the IEEE
754-2008 standard. Although some few mainframes and software libraries
implement the decimal representations this is not currently leveraged by
Swift. Apple's NSDecimal and NSDecimalNumber implementation is awkward
to use in Swift, especially as standard arithmetic operators cannot be
used directly.

Although it is possible to express floating-point constants in
hexadecimal (0x123.AB) with an optional binary exponent (0x123A.Bp-4),
decimal-form floating-point constants (123.45 or 1.2345e2) are extremely
common in practice.

Unfortunately it is tempting to use floating-point arithmetic for
financial calculations or other purposes such as labelling graphical or
statistical data. Constants such as 0.1, 0.01, 0.001 and variations or
multiples thereof will certainly be used in such applications — and
almost none of these constant can be precisely represented in binary
floating-point format.

Rounding errors will therefore be introduced at the outset, causing
unexpected or outright buggy behaviour down the line which will be
surprising to the user and/or the programmer. This will often happen at
some point when the results of a calculation are compared to a constant
or to another result.

Current Solution

As things stand, Swift's default print() function, Xcode playgrounds
etc. do some discreet rounding or truncation to make the problem less
apparent - a Double initialized with the literal 0.1 prints out as 0.1
instead of the exact value of the internal representation, something
like 0.100000000000000005551115123125782702118158340454101562.

This, unfortunately, masks this underlying problem in settings such as
“toy” programs or educational playgrounds, leading programmers to be
surprised later when things won't work. A cursory search on
StackOverflow reveals tens of thousands of questions with headings like
“Is floating point math broken?".

Warning on imprecise literals

To make decimal-format floating-point literals safe, I suggest that the
compiler should emit a warning whenever a literal is used that cannot be
safely represented as an exact value of the type expected. (Note that
0.1 cannot be represented exactly as any binary floating-point type.)

The experienced programmer will, however, be willing to accept some
imprecision under circumstances that cannot be reliably determined by
the compiler. I suggest, therefore, that this acceptance be indicated by
an annotation to the literal; a form such as ~0.1 might be easiest to
read and implement, as the prefix ~ operator currently has no meaning
for a floating-point value. A “fixit” would be easily implemented to
insert the missing notation.

Conversely, to avoid inexperienced or hurried programmers to strew ~s
everywhere, it would be useful to warn, and offer to fix, if the ~ is
present but the literal does have an exact representation.

Tolerances

A parallel idea is that of tolerances, introducing an ‘epsilon’ value to
be used in comparisons. Unfortunately an effective value of the epsilon
depends on the magnitude of the operands and there are many edge cases.

Introducing a special type along the lines of “floating point with
tolerances” — using some accepted engineering notation for literals like
100.5±0.1 — might be useful for specialised applications but will not
solve this specific problem. Expanding existing constructs to accept an
optional tolerance value, as has been proposed elsewhere, may be useful
in those specific instances but not contribute to raise programmer
awareness of unsafe literals.

Full Decimal type proposal

There are cogent arguments that prior art/habits and the already complex
interactions between Double, Float, Float80 and CGFloat are best left alone.

However, there remains a need for a precise implementation of a workable
Decimal value type for financial calculations. IMHO repurposing the
existing NSDecimalNumber from Objective-C is not the best solution.

As most experienced developers know, the standard solution for financial
calculations is to internally store fixed-point values — usually but not
always in cents — and then print the “virtual” point (or decimal comma,
for the rest of us) on output.

I propose, therefore, an internal data layout like this:

UInt16  - position of the “virtual” point, starting at 0
UInt16  - data array size - 1
[Int32] - contiguous data array, little-endian order, grown as needed

Note that both UInt16 fields being zero implies that the number is
reduced to a 32-bit integer. Number literals in Swift can be up to 2048
bits in size, so the maximum data array size would be 64, although it
could conceivably grow beyond that. The usual cases of the virtual point
position being 0 or 2 could be aggressively optimized for normal
arithmetic operators.
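In Swift terms, that layout might be sketched as follows (type and field names are illustrative only, not proposed API):

```swift
// Hypothetical sketch of the proposed storage layout.
struct ProposedDecimal {
    var pointPosition: UInt16  // digits to the right of the “virtual” point, from 0
    var sizeMinusOne: UInt16   // data array size - 1
    var data: [Int32]          // contiguous little-endian limbs, grown as needed
}

// 123.45 stored as the integer 12345 with the virtual point at position 2:
let price = ProposedDecimal(pointPosition: 2, sizeMinusOne: 0, data: [12345])
```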

Needless to say such a Decimal number would accept and represent
literals such as 0.01 with no problems. It would also serve as a BigNum
implementation for most purposes.

No doubt implementing this type in the standard library would allow for
highly optimized implementations for all major CPU platforms. In
particular, the data array should probably be [Int64] for 64-bit platforms.

Acknowledgement

Thanks to Erica Sadun for their help with an early version of this
pre-proposal.

Some references

--
Rainer Brockerhoff <rainer@brockerhoff.net>
Belo Horizonte, Brazil
"In the affairs of others even fools are wise
In their own business even sages err."
http://brockerhoff.net/blog/
_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Yes, I'm now agreeing that the ideal solution is to leave the whole
binary floating-point mess alone and write a reasonable `Decimal` type.
I'll update my text accordingly when I find time.

···

On 3/20/16 14:43, Andrey Tarantsov via swift-evolution wrote:

I have no stake in this proposal, except for:

I suggest, therefore, that this acceptance be indicated by an
annotation to the literal; a form such as ~0.1 might be easiest to
read and implement, as the prefix ~ operator currently has no
meaning for a floating-point value.

Whatever you do, don't touch the literals! I specify NSTimeIntervals
of 0.1, 0.2, 0.25 etc all over the place, and I couldn't care less if
my animations are one femtosecond off.

Don't pollute everyone's apps with tildes just because there's a
niche that needs to care about precision loss.


The alternative would be to make the Decimal type the default anywhere that Double or Float isn’t used; while this may be overkill, it shouldn’t impact performance of most applications. At this point the less-safe but higher performance types would be opt-in by either specifying the type or using the tilde operator.

This could actually expand the definition of the ± on constants to instead imply “pick the type that can represent this”, so 0.1±0.01 would pick Double, Float or Decimal as appropriate, favouring the higher performance types only if they can represent that value without exceeding the allowable error.

It is a tricky area though; Swift does have a goal of safety, so it may be worth pushing a change that promotes that, even if it means a few changes to get maximum performance back. This could be partly avoided by the way the change is handled, though: providing warnings where the intent is ambiguous, or assuming performance over safety for existing code?

···

On 20 Mar 2016, at 17:54, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> wrote:

...

Right, fixed-point. (NSDecimalNumber is decimal floating-point, of course).

I'll be updating my pre-proposal in a few days with the received feedback.

···

On 3/22/16 23:20, Michael Gottesman via swift-evolution wrote:

On Mar 18, 2016, at 3:42 PM, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> wrote:

First draft towards a tentative pre-proposal:
Pre-proposal: Safer Decimal Calculations · GitHub
...

Rainer: I quickly skimmed this. Just to make sure I am understanding 100%: you are proposing a fixed point decimal calculation or a floating point decimal calculation. The former, no?


Just wanted to expand on the type tolerances idea with an example:

let a: Float±0.1 = 1234.56
let b: Float±0.5 = 123.456

let result: Float±0.25 = a + b // Error as b’s tolerance > 0.25

That example is a good motivation for two things I would like to see in Swift:
- Generic value parameters (has been proposed, but lacks traction)
- Inheritance for structs / "newtype"-feature (should have been proposed weeks ago…)

With both things available, it would be possible to implement your example with a slightly different syntax:
let a: FloatNumber<tolerance: 0.1> = 1234.56

Better support for plain simple calculations would be really cool — I still remember how impressed I've been the first time I saw the Scheme-Interpreter printing a number that filled the whole screen ;-)

Tino

I still absolutely think the best proposal for this is to add a Fraction type to the standard library with easy conversions to and from all the numeric types.

Your precision is only limited by the size of IntMax, and you can do whatever operations you want without losing precision. There’s a great package for it by Jaden Geller on GitHub here:

— Harlan
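The Fraction idea above can be sketched minimally (this is an illustrative sketch, not the package mentioned; it assumes a positive denominator and does not handle zero denominators or overflow):

```swift
// Minimal exact-rational sketch: values are kept reduced, so
// 1/10 + 2/10 is exactly 3/10 with no binary rounding.
struct Fraction {
    var num: Int
    var den: Int
    init(_ num: Int, _ den: Int) {
        func gcd(_ a: Int, _ b: Int) -> Int { b == 0 ? abs(a) : gcd(b, a % b) }
        let g = gcd(num, den)
        self.num = num / g
        self.den = den / g
    }
    static func + (l: Fraction, r: Fraction) -> Fraction {
        Fraction(l.num * r.den + r.num * l.den, l.den * r.den)
    }
}

let sum = Fraction(1, 10) + Fraction(2, 10)
print(sum.num, sum.den)  // 3 10
```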

···

On Mar 21, 2016, at 7:04 AM, Haravikk via swift-evolution <swift-evolution@swift.org> wrote:

On 20 Mar 2016, at 17:54, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> wrote:

...

...

Rainer: I quickly skimmed this. Just to make sure I am understanding 100%: you are proposing a fixed point decimal calculation or a floating point decimal calculation. The former, no?

Right, fixed-point. (NSDecimalNumber is decimal floating-point, of course).

Ok. (That was the source of my confusion).

I'll be updating my pre-proposal in a few days with the received feedback.

When you feel that this is ready, we should have Steve Canon +CC look at this.

···

On Mar 23, 2016, at 5:26 AM, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> wrote:
On 3/22/16 23:20, Michael Gottesman via swift-evolution wrote:

On Mar 18, 2016, at 3:42 PM, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> wrote:


...

Rainer: I quickly skimmed this. Just to make sure I am understanding 100%: you are proposing a fixed point decimal calculation or a floating point decimal calculation. The former, no?

Right, fixed-point. (NSDecimalNumber is decimal floating-point, of course).

What you’re describing is actually not a fixed-point format at all, but rather a variant of what IEEE 754 calls an “extendable precision [floating-point] format”.

A fixed-point format has a *fixed* radix point (or scale) determined by the type. A couple examples of common fixed point formats are:

- 8 bit pixel formats in imaging, which use a fixed scale of 1/255 so that 0xff encodes 1.0 and 0x00 encodes 0.0.

- “Q15” or “Q1.15”, a fairly ubiquitous format in signal processing, which uses 16-bit signed integers with a fixed scale of 2**-15, so that 0x8000 encodes -1.0 and 0x7fff encodes 0.999969482421875.
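The fixed scale can be made concrete with a few lines of Swift (values computed here for illustration, not taken from the original mail):

```swift
// Q1.15: raw Int16 values interpreted with a constant scale of 2^-15.
func q15Value(_ raw: Int16) -> Double {
    return Double(raw) / Double(1 << 15)
}

print(q15Value(0x7fff))                     // 0.999969482421875
print(q15Value(Int16(bitPattern: 0x8000)))  // -1.0
```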

Your proposed format, by contrast, encodes the radix point / scale as part of the number; instead of being constant for all values of the type it “floats”, making it a floating-point format. Now that terminology is squared away, let’s look at what IEEE 754 says about such formats. We don’t necessarily need to follow the guidelines, but we need a good reason if we’re going to do something different. I’ve included some explanatory commentary inline with the standard text:

These formats are characterized by the parameters b, p, and emax, which may match those of an interchange format and shall:

b here is the radix or base; in your format b is 10 (decimal).
p is the precision, or the number of (base-10) digits that are stored.
emax is the largest allowed (finite) encoded exponent (the exponent bias and hence minimum exponent then fall out via formulas; the exponent range is approximately symmetric about zero).

• ― provide all the representations of floating-point data defined in terms of those parameters in 3.2 and 3.3

This just says that you should be able to represent +/-0, +/-infinity, quiet and signaling NaNs as well as finite values.

• ― provide all the operations of this standard, as defined in Clause 5, for that format.

This says that you should provide all the "basic operations”: round to integral in the various modes, nextup, nextdown, remainder, min, max, quantize, scale and log, addition, subtraction, multiplication, division, square root, fused multiply add, conversions to and from other formats and strings, comparisons, abs, copysign, and a few other things. The exact list isn’t terribly important. It’s worth noting that I’ll be trying to drive the FloatingPoint protocol to match these requirements, so we can really just say “your type should conform to FloatingPoint”.

This standard does not require an implementation to provide any extended or extendable precision format. Any encodings for these formats are implementation-defined, but should be fixed width and may match those of an interchange format.

This just says that such types are optional; languages don’t need to have them to claim IEEE 754 support.

Language standards should define mechanisms supporting extendable precision for each supported radix. Language standards supporting extendable precision shall permit users to specify p and emax. Language standards shall also allow the specification of an extendable precision by specifying p alone; in this case emax shall be defined by the language standard to be at least 1000×p when p is ≥ 237 bits in a binary format or p is ≥ 51 digits in a decimal format.

This says that users should be able to define a number in this format just by the precision, leaving the implementation to choose the exponent range. In practice, this means that you’ll want to have a 32- or 64-bit exponent field; 16 bits isn’t sufficient. I would suggest that the swifty thing is to use Int for both the exponent field and the size.

The usual thing would be to use a sign-magnitude representation (where the significand is unsigned and the signbit is tracked separately), rather than a twos-complement significand. It just works out more nicely if you can treat all the words in the significand the same way.

To the IEEE 754 recommendations, it sounds like you would want to add a policy of either growing the precision to keep results exact when possible, or indicating an error when the result is not exact. How do you propose to handle division / square root, where the results are essentially never finitely representable?

– Steve

···

On Mar 23, 2016, at 5:26 AM, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> wrote:

On 3/22/16 23:20, Michael Gottesman via swift-evolution wrote:

On Mar 18, 2016, at 3:42 PM, Rainer Brockerhoff via swift-evolution <swift-evolution@swift.org> wrote:

Just wanted to expand on the type tolerances idea with an example:

let a:Float±0.1 = 1234.56
let b:Float±0.5 = 123.456

let result:Float±0.25 = a + b // Error as b’s tolerance > 0.25

That example is a good motivation for two things I would like to see in
Swift:
- Generic value parameters (has been proposed, but lacks traction)
- Inheritance for structs / "newtype"-feature (should have been proposed
weeks ago…)

With both things available, it would be possible to implement your
example with a slightly different syntax:
let a: FloatNumber<tolerance: 0.1> = 1234.56

Or, for my proposal
let cost: Decimal<2> = 2.34

Can't quite see where "Inheritance for structs / "newtype"-feature"
enables this, however. Care to explain?

···

On 3/19/16 09:03, Tino Heth via swift-evolution wrote:

Better support for plain simple calculations would be really cool — I
still remember how impressed I've been the first time I saw the
Scheme-Interpreter printing a number that filled the whole screen ;-)


Can't quite see where "Inheritance for structs / "newtype"-feature"
enables this, however. Care to explain?

It's not required, but unless you want to implement a completely new numeric type (like fractions), it is tedious to declare all operations and conversions that are useful or required:

struct CustomDouble {
    let value: Double
}

func == (a: CustomDouble, b: CustomDouble) -> Bool {
    return abs(a.value - b.value) < 0.01
}

This type can handle the comparison, but actually, you don't want a struct that contains a double, but that is a double (and inherits all abilities of this type).
Greg Titus already did some research and made the observation that such simple container-types have no memory or performance penalty, but as a developer, you have to write many stupid functions that do nothing but forward operations on x to x.value…

Tino

Pre-proposal: Safer Decimal Calculations Proposal: TBD
Author(s): Rainer Brockerhoff Status: TBD Review manager: TBD
... Full Decimal type proposal ...

Rainer: I quickly skimmed this. Just to make sure I am
understanding 100%: you are proposing a fixed point decimal
calculation or a floating point decimal calculation. The former,
no?

Right, fixed-point. (NSDecimalNumber is decimal floating-point, of
course).

What you’re describing is actually not a fixed-point format at all,
but rather a variant of what IEEE 754 calls an “extendable precision
[floating-point] format”.

Stephen, thanks for replying in so much detail and setting me straight
on nomenclature. Nothing better than talking to a specialist ;-)

I'm working on a rewrite of my text (between medical time-outs) but

A fixed-point format has a *fixed* radix point (or scale) determined
by the type. A couple examples of common fixed point formats are:
... Your proposed format, by contrast encodes the radix point / scale
as part of the number; instead of being constant for all values of
the type it “floats”, making it a floating-point format.

See your point, I missed the "determined by the type" part.

In fact, one alternative I'm considering of proposing is to do a real
fixed-point type, with enough digits (on both sides of the decimal
point) to be useful for most real-world problems. Referring, of
course, to
How Many Decimals of Pi Do We Really Need? - Edu News | NASA/JPL Edu,
but see below...

The usual thing would be to use a sign-magnitude representation
(where the significand is unsigned and the signbit is tracked
separately), rather than a twos-complement significand. It just
works out more nicely if you can treat all the words in the
significand the same way.

Yep, I jumped the gun here, prematurely thinking of optimized
assembly-language implementations.

To the IEEE 754 recommendations, it sounds like you would want to
add a policy of either growing the precision to keep results exact
when possible, or indicating an error when the result is not exact.

Initially my idea was growing the precision to keep results exact, up to
some maximum, and then rounding. And, in practice, having round up/down
functions to N digits.

I see that conforming to IEEE 754 with, as you said, "±0, ±infinity,
quiet and signaling NaNs" etc. goes beyond my aims here, since:

It’s worth noting that I’ll be trying to drive the FloatingPoint
protocol to match these requirements, so we can really just say “your
type should conform to FloatingPoint”.
...
How do you propose to handle division / square root, where the
results are essentially never finitely representable?

So, I see several, maybe partially conflicting, aims here.

There's the mismatch between decimal representation of binary formats,
causing confusion for very common cases like 0.01. There's your work in
upgrading the FloatingPoint protocol. There's the question of
modernizing NSDecimalNumber or writing a new decimal type. The
scientific community needs IEEE 754, the mathematical community needs
exact-precision bignums, the financial community needs predictable but
small decimal precision, the educators need simple decimal numbers for
teaching and graphing.

IMHO the existing Double, Float and CGFloat types don't cover all those
use cases.

Maybe we need a DecimalLiteralConvertible not as generic as
FloatLiteralConvertible, so that we can have a built-in type - call it
Decimal or SimpleDecimal - that would be inferred in a statement like
`let x = 0.01`
such that, thereafter, calculations with x are sufficient for 99% of
real-world graphing and financial calculations, with exact comparisons
and so forth, but with none of the IEEE 754 complications. (Of course if
you need trigonometry and square root etc. just convert to Double.)

I'm certainly not qualified to discuss the implementation details, so
I'm content to get the discussion rolling here.

Thanks again,

···

On 3/24/16 09:54, Stephen Canon wrote:

On Mar 23, 2016, at 5:26 AM, Rainer Brockerhoff via swift-evolution > <swift-evolution@swift.org> wrote: ...

On 3/22/16 23:20, Michael Gottesman via swift-evolution wrote:


Do you have a link to that research? I'd be very interested.

···

On Mar 19, 2016, at 11:46 AM, Tino Heth via swift-evolution <swift-evolution@swift.org> wrote:

...

I believe that providing the IEEE 754 “Decimal128” type (plus support for decimal literals) would satisfy 99% of the requirements listed here (while also conforming to FloatingPoint). It provides 34 significant digits (sufficient to represent the US national debt in Zimbabwean dollars).

Arbitrary-precision integer arithmetic is likely better served by a dedicated big integer type. Does that seem reasonable?

– Steve

···

On Mar 24, 2016, at 10:01 AM, Rainer Brockerhoff <rainer@brockerhoff.net> wrote:

...


I believe that providing the IEEE 754 “Decimal128” type (plus support
for decimal literals) would satisfy 99% of the requirements listed
here (while also conforming to FloatingPoint). It provides 34
significant digits (sufficient to represent the US national debt in
Zimbabwean dollars).

I now read up on Decimal128 and it looks excellent.

(Anecdote re: Zimbabwean dollars; during the hyperinflation period here
in Brazil we were unable to use any “foreign” home accounting apps, as
they, of course, didn't support millions and billions of currency
units.)

Can I conclude that you intend to introduce this type into Swift, then?
And would it be the default type for decimal literals, as I suggested?

If so, no need for me to update my (pre-)proposal at all.

Arbitrary-precision integer arithmetic is likely better served by a dedicated big integer type. Does that seem reasonable?

Indeed, I just mentioned that as a possible side-effect, but I don't
need it myself.

···

On 3/24/16 11:31, Stephen Canon via swift-evolution wrote:

On Mar 24, 2016, at 10:01 AM, Rainer Brockerhoff <rainer@brockerhoff.net> wrote:
