Hello everyone. My name is Song Zhi.
I am a second-year Software Engineering student from Xi’an University of Post & Telecommunications, China.
I have also worked on the `ls` command and a my_shell program in a UNIX environment.
After briefly reviewing the documentation and implementations of decimal floating-point in Java (BigDecimal), C#, and Python, I have some thoughts on it.
- Swift has its Decimal structure, which bridges to the NSDecimalNumber class (Apple Developer Documentation). An NSDecimalNumber instance can be expressed as mantissa × 10^exponent, where the mantissa is a decimal integer up to 38 digits long and the exponent is an integer from -128 through 127. Representing 99999999999999999999999999999999999999 (the maximum 38-digit mantissa) in binary takes 127 bits, so once the sign and exponent are added, an NSDecimalNumber instance takes more than 128 bits.
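To double-check the size estimate, here is a quick calculation (in Python purely for the arithmetic; the types under discussion are Swift's):

```python
# NSDecimalNumber stores mantissa * 10^exponent, with a mantissa
# of up to 38 decimal digits and an exponent in [-128, 127].
max_mantissa = 10**38 - 1      # the largest 38-digit mantissa (38 nines)
print(max_mantissa.bit_length())   # -> 127

# 127 bits for the mantissa, plus a sign bit and an 8-bit exponent,
# already exceeds 128 bits, before any per-instance bookkeeping.
```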
Decimal in Swift is similar to the decimal type in C#.
I don’t know Swift very well, so I am curious about the difference between Decimal64 and NSDecimalNumber. NSDecimalNumber seems to provide all of Decimal's functionality, so why should we implement Decimal64 and Decimal128 in addition?
Some languages, like Python and Java, can adjust precision freely. Why don’t we implement the same kind of feature? Is it for performance or other reasons?
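For reference, this is what adjustable precision looks like in Python's decimal module, where the working precision is a property of the arithmetic context rather than of the type:

```python
from decimal import Decimal, getcontext

# Precision can be changed at run time through the context.
getcontext().prec = 6
print(Decimal(1) / Decimal(7))   # 0.142857 (6 significant digits)

getcontext().prec = 28           # the module's default precision
print(Decimal(1) / Decimal(7))   # 0.1428571428571428571428571429
```

A fixed-width type such as Decimal64, by contrast, bakes the precision (16 digits) into the format itself, which is what makes a compact, fast, hardware-friendly representation possible.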
To implement Decimal64, could you give me a rough outline of the whole process and the basic theory behind it?
Which functions should I implement?
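From my reading so far, my understanding is that Decimal64 follows the IEEE 754-2008 decimal64 format: a sign, a coefficient of at most 16 decimal digits, and a bounded exponent. To make sure I have the basic theory right, here is a toy sketch of that model and of addition with rounding (in Python for readability; it ignores the real bit-level BID/DPD encodings, exponent-range checks, and uses round-half-up instead of IEEE's default round-half-even; `Dec64`, `add`, and `round_to_precision` are my own illustrative names):

```python
from dataclasses import dataclass

PRECISION = 16   # decimal64 carries at most 16 significant digits

@dataclass
class Dec64:
    sign: int    # 0 positive, 1 negative
    coeff: int   # 0 <= coeff < 10**PRECISION
    exp: int     # value = (-1)**sign * coeff * 10**exp

def round_to_precision(coeff, exp):
    """Drop excess digits, rounding half up (real IEEE 754 defaults to
    round-half-even); each dropped digit bumps the exponent."""
    while coeff >= 10**PRECISION:
        coeff, rem = divmod(coeff, 10)
        if rem >= 5:
            coeff += 1
        exp += 1
    return coeff, exp

def add(a: Dec64, b: Dec64) -> Dec64:
    # Align exponents, add signed coefficients, round back to 16 digits.
    exp = min(a.exp, b.exp)
    ca = a.coeff * 10 ** (a.exp - exp) * (-1) ** a.sign
    cb = b.coeff * 10 ** (b.exp - exp) * (-1) ** b.sign
    total = ca + cb
    sign = 1 if total < 0 else 0
    coeff, exp = round_to_precision(abs(total), exp)
    return Dec64(sign, coeff, exp)

# Unlike binary floating point, 0.1 + 0.2 is exactly 0.3 here:
x = add(Dec64(0, 1, -1), Dec64(0, 2, -1))
print(x)   # Dec64(sign=0, coeff=3, exp=-1)
```

Is this roughly the right mental model, i.e. the real work is in the bit-level encoding, the full operation set, and correct rounding/exception behavior?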
I believe in the power of open source and of cooperation among people from different backgrounds: it can split a problem into small pieces, just as the decimal module in Python was accomplished through open-source collaboration.
Looking forward to your reply :)