Provide native Decimal data type

Please forgive me if this has already been hashed out. Google and DuckDuckGo have not been helpful in my research, and the archives do not have search functionality.

  I am new to Swift, having started with the 3.0 beta. I am aware that the language is not yet stabilized, so I feel this is an opportunity to introduce a native data type.

  Over history, data types have typically followed what the CPU supports, hence the int and float data types. A char or String is technically an integer or an array of integers under the hood (and Unicode-aware Strings are a bit more complicated than that). Among the customers I work with, data accuracy is very important, so it does not sit well that the only option for a value with a fractional portion cannot guarantee that accuracy. For example:

    return 3.0 * sideLength    // evaluates to 9.300000000001 rather than the expected 9.3

  Results like this give us pause. Floating-point numbers are understood to be inexact, and the whole song and dance required to truncate or round them into accurate results makes our financial customers very nervous. They want accuracy from the start, and whenever they audit the results in the middle of the process, they expect to see 9.3 on the dot.
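  To make the concern concrete, here is a small, self-contained illustration of the kind of drift I mean (nothing here is specific to any customer scenario):

    // Adding 0.1 ten times with Double does not land exactly on 1.0,
    // because 0.1 has no exact binary representation.
    var total: Double = 0.0
    for _ in 1...10 {
        total += 0.1
    }
    print(total == 1.0)    // false
    print(total - 1.0)     // a tiny but nonzero error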

  Having a native Decimal data type would give us a consistent way to pull data from, and push data to, databases. Having worked with APT_Decimal within Orchestrate/PX Engine, I know we could do a better job if we leveraged the speed of float while still preserving the accuracy of the actual data, in spite of the CPU’s quirks in handling floating-point numbers in a maddeningly inaccurate manner.

  Is there any particular reason why we do not have a native Decimal data type for Swift?

-T.J.

  …in spite of the CPU’s quirks in handling floating-point numbers in a maddeningly inaccurate manner.

Well, in the CPU’s defense, it’s only inaccurate because the puny humans insist on dividing their currency into fractions of 1/10, which has no exact representation in binary. (Apparently this is an ancient tradition commemorating the number of bony outgrowths on certain extremities of their grotesque meat-bodies.) I could — I mean, the computers could — point out that if we divided our currency units into 7 pieces, our precious decimal numbers would quickly become inaccurate too. :)

  Is there any particular reason why we do not have a native Decimal data type for Swift?

Cocoa’s Foundation framework has an NSDecimalNumber class that provides decimal numbers and arithmetic. The class docs for that include a note that "The Swift overlay to the Foundation framework provides the Decimal structure, which bridges to the NSDecimalNumber class. The Decimal value type offers the same functionality as the NSDecimalNumber reference type, and the two can be used interchangeably in Swift code that interacts with Objective-C APIs."

The question is whether this has been ported to the in-progress native Swift Foundation library yet. I haven’t checked.
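For anyone who has not tried it, here is a minimal sketch of what the overlay’s Decimal value type looks like in practice (the figures are made up, and Decimal(string:) is used so the literals never pass through a binary Double on the way in):

    import Foundation

    let price = Decimal(string: "9.3")!
    let total = price * Decimal(string: "3")!

    print(total)                              // 27.9, with no binary rounding residue
    print(price + price + price == total)     // true

    // Bridging to the reference type for Objective-C APIs:
    let legacy = NSDecimalNumber(decimal: total)
    print(legacy.doubleValue)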

—Jens


I agree with your point: base-10 notation is not compatible with a base-2 infrastructure. However, that is just exposing the reality of the floating-point data type. It still does not answer my question on why we can’t just provide a decimal data type.

I am aware of the NSDecimalNumber class, but that is a layer on top of the core language and, I would presume, not very performant relatively speaking. Please correct me if I am wrong.
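If it helps, this is roughly how I would eyeball the relative cost myself. It is a rough, unscientific timing sketch, not a claim about actual numbers:

    import Foundation

    // Crude wall-clock timing of a block of work.
    func time(_ label: String, _ body: () -> Void) {
        let start = Date()
        body()
        print(label, Date().timeIntervalSince(start), "seconds")
    }

    let doubles = (1...100_000).map { Double($0) / 7 }
    let decimals = doubles.map { Decimal($0) }

    time("Double sum ") { _ = doubles.reduce(0, +) }
    time("Decimal sum") { _ = decimals.reduce(Decimal(0), +) }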

-T.J.


  …in spite of the CPU’s quirks in handling floating-point numbers in a maddeningly inaccurate manner.

  Well, in the CPU’s defense, it’s only inaccurate because the puny humans insist on dividing their currency into fractions of 1/10, which has no exact representation in binary. (Apparently this is an ancient tradition commemorating the number of bony outgrowths on certain extremities of their grotesque meat-bodies.) I could — I mean, the computers could — point out that if we divided our currency units into 7 pieces, our precious decimal numbers would quickly become inaccurate too. :)

Expanding a bit on what Jens wrote here: decimal (unlike friendship) is not magic. All computer models of real-number arithmetic are necessarily inexact, because (almost all) real numbers are not computable. There are a bunch of reasonable choices that one can make, however (this is not an exhaustive list, just a sampling of the design space):

Binary floating point
Pro: represents modest integers exactly, extremely fast hardware implementations, fixed memory size, and rounding errors are extremely uniform—they don’t vary much with the number being represented.
Con: almost no decimal fractions have exact representations.

Decimal floating point
Pro: represents modest integers and decimal fractions exactly, slower than binary but still faster than almost anything else, fixed memory size.
Con: at least an order of magnitude slower than binary floating-point, and rounding error is significantly less scale-invariant.

Fixed-size rationals
Pro: represents all modestly-sized integers and fractions exactly, fixed memory size, four basic operations are exact until you hit the limits of representation.
Con: denominators grow too quickly to be usable for non-trivial computations (this is usually a deal-breaker; the toy sketch after this list shows how fast they blow up).

Arbitrary-precision rationals
Pro: closed under four basic operations, represents most numbers most people will use exactly.
Con: representations get extremely large extremely quickly, large memory footprint if you have more than a few numbers.

Computable real numbers
Pro: any number you can describe, you can work with.
Con: your numbers are now computer programs, and your arithmetic system is Turing-complete. Testing for equality is equivalent to solving the halting problem.
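To make the fixed-size rational issue concrete, here is a toy sketch. The Rational type below is hypothetical (not a real library), but it shows how fast reduced denominators outgrow a machine word:

    // A toy fixed-size rational, just to illustrate denominator growth.
    struct Rational: CustomStringConvertible {
        var num: Int
        var den: Int

        init(_ num: Int, _ den: Int) {
            precondition(den != 0, "zero denominator")
            let g = Rational.gcd(abs(num), abs(den))
            self.num = num / g
            self.den = den / g
        }

        static func gcd(_ x: Int, _ y: Int) -> Int {
            var (a, b) = (x, y)
            while b != 0 { (a, b) = (b, a % b) }
            return a == 0 ? 1 : a
        }

        static func + (lhs: Rational, rhs: Rational) -> Rational {
            // Exact, but these products overflow (and trap) long before the values
            // involved are interesting, because the denominators keep growing.
            return Rational(lhs.num * rhs.den + rhs.num * lhs.den,
                            lhs.den * rhs.den)
        }

        var description: String { return "\(num)/\(den)" }
    }

    // Summing 1/p over distinct primes: the reduced denominator is the product of
    // the primes, so it shoots past Int.max after roughly fifteen terms.
    var sum = Rational(1, 2)
    for p in [3, 5, 7, 11, 13, 17, 19, 23] {
        sum = sum + Rational(1, p)
        print(sum)    // 5/6, 31/30, 247/210, 2927/2310, ...
    }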

It still does not answer my question on why we can’t just provide a decimal data type.

The only problem here is the “just”. We can and will provide a decimal data type. Like many other language changes that we can and will make, it requires engineering time, and there are finite engineering resources available to work on it.

Keep in mind that decimal will not magically solve all accuracy problems, however. 1/3 is just as inaccurate in decimal as it is in binary floating-point (actually, it’s less accurate in decimal than in a comparable binary format due to the aforementioned non-uniformity of decimal rounding error).
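You can see the same thing directly with the Foundation overlay’s Decimal (a quick sketch, assuming the overlay’s arithmetic operators):

    import Foundation

    // 1/3 has to be rounded in decimal too, so multiplying the rounded quotient
    // back by 3 does not return exactly 1.
    let oneThird = Decimal(1) / Decimal(3)
    print(oneThird * Decimal(3) == Decimal(1))    // false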

– Steve

Regarding Decimal - it's not yet implemented on Linux, but it's a work in progress.

Alex
