Allow FloatLiteralType in FloatLiteralConvertible to be aliased to String


(Morten Bek Ditlevsen) #1

Currently, in order to conform to FloatLiteralConvertible you need to implement
an initializer accepting a floatLiteral of the typealias FloatLiteralType.
However, this typealias can only be Double, Float, Float80 and other built-in
floating-point types (to be honest, I do not know the exact limitation, since I
have not been able to find this in the documentation).
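
For reference, the Swift 2 standard library declares the protocol roughly as
follows; the associated type is constrained to the compiler-internal
_BuiltinFloatLiteralConvertible protocol, which is why only the built-in
floating-point types qualify (the exact declaration may differ slightly between
compiler versions):

protocol FloatLiteralConvertible {
  typealias FloatLiteralType : _BuiltinFloatLiteralConvertible
  init(floatLiteral value: FloatLiteralType)
}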

These floating point types have precision limitations that are not necessarily
present in the type that you are making FloatLiteralConvertible.

Let’s imagine a CurrencyAmount type that uses an NSDecimalNumber as the
representation of the value:

import Foundation

public struct CurrencyAmount {
  public let value: NSDecimalNumber
  // .. other important currency-related stuff ..
}

extension CurrencyAmount: FloatLiteralConvertible {
  public typealias FloatLiteralType = Double
    
  public init(floatLiteral amount: FloatLiteralType) {
    print(amount.debugDescription)
    value = NSDecimalNumber(double: amount)
  }
}

let a: CurrencyAmount = 99.99

The printed value inside the initializer is 99.989999999999995 - so the value
has already lost precision in the intermediate Double representation.

I know that there is also an issue with the NSDecimalNumber double initializer,
but this is not the issue that we are seeing here.
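
To illustrate, a minimal sketch (continuing the example above, with Foundation
imported): the precision is already gone in the Double value itself, before
NSDecimalNumber is ever involved:

let d: Double = 99.99
print(d.debugDescription)                // 99.989999999999995
print(NSDecimalNumber(string: "99.99"))  // 99.99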

One suggestion for a solution to this issue would be to allow
FloatLiteralType to be aliased to String. In this case the compiler would
parse the float literal token 99.99 into a String and pass that String to the
FloatLiteralConvertible initializer.

This would allow arbitrary literal precision for FloatLiteralConvertible types
that implement their own parsing of the String value.

For instance, if the CurrencyAmount used a FloatLiteralType aliased to String we
would have:

extension CurrencyAmount: FloatLiteralConvertible {
  public typealias FloatLiteralType = String
    
  public init(floatLiteral amount: FloatLiteralType) {
    value = NSDecimalNumber(string: amount)
  }
}

and the precision would be the same as creating an NSDecimalNumber from a
String:

let a: CurrencyAmount = 1.00000000000000000000000000000000001

print(a.value.debugDescription)

Would give: 1.00000000000000000000000000000000001

How does that sound? Is it completely irrational to allow the use of Strings as
the intermediate representation of float literals?
I think it makes good sense, since it allows for arbitrary precision.

Please let me know what you think.


(Dmitri Gribenko) #2

Hi,

I think you are raising an important problem, but using String as the
intermediate type does not strike me as an efficient and clean
solution. We already have DictionaryLiteral, and extending that
family to also include FloatLiteral seems like the right direction to
me.
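
For context, DictionaryLiteral is the standard-library type that captures a
dictionary literal's key-value pairs verbatim, in source order, without
requiring hashable keys and without collapsing duplicate keys. A rough usage
sketch, not part of any proposal here:

let pairs: DictionaryLiteral = ["a": 1, "a": 2, "b": 3]  // duplicate keys are kept
for (key, value) in pairs {
  print(key, value)
}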

Dmitri


--
main(i,j){for(i=2;;i++){for(j=2;j<i;j++){if(!(i%j)){j=0;break;}}if
(j){printf("%d\n",i);}}} /*Dmitri Gribenko <gribozavr@gmail.com>*/


(Joe Groff) #3

Like Dmitri said, a String is not a particularly efficient intermediate representation. For common machine numeric types, we want it to be straightforward for the compiler to constant-fold literals down to constants in the resulting binary. For floating-point literals, I think we could achieve this by changing the protocol to "deconstruct" the literal value into integer significand and exponent, something like this:

// A type that can be initialized from a decimal literal such as
// `1.1` or `2.3e5`.
protocol DecimalLiteralConvertible {
  // The integer type used to represent the significand and exponent of the value.
  typealias Component: IntegerLiteralConvertible

  // Construct a value equal to `decimalSignificand * 10**decimalExponent`.
  init(decimalSignificand: Component, decimalExponent: Component)
}

// A type that can be initialized from a hexadecimal floating point
// literal, such as `0x1.8p-2`.
protocol HexFloatLiteralConvertible {
  // The integer type used to represent the significand and exponent of the value.
  typealias Component: IntegerLiteralConvertible

  // Construct a value equal to `hexadecimalSignificand * 2**binaryExponent`.
  init(hexadecimalSignificand: Component, binaryExponent: Component)
}

Literals would desugar to constructor calls as follows:

1.0 // T(decimalSignificand: 1, decimalExponent: 0)
0.123 // T(decimalSignificand: 123, decimalExponent: -3)
1.23e-2 // T(decimalSignificand: 123, decimalExponent: -4)

0x1.8p-2 // T(hexadecimalSignificand: 0x18, binaryExponent: -6)
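
For illustration, the CurrencyAmount type from the original post could
presumably adopt such a protocol along these lines (the protocol itself is
hypothetical, as sketched above; NSDecimalNumber(mantissa:exponent:isNegative:)
is the existing Foundation initializer):

extension CurrencyAmount: DecimalLiteralConvertible {
  public typealias Component = Int

  // 99.99 would arrive here as (decimalSignificand: 9999, decimalExponent: -2).
  public init(decimalSignificand: Int, decimalExponent: Int) {
    value = NSDecimalNumber(mantissa: UInt64(abs(decimalSignificand)),
                            exponent: Int16(decimalExponent),
                            isNegative: decimalSignificand < 0)
  }
}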

-Joe



(Stephen Canon) #4

This seems like a very good approach to me.

– Steve



(Joe Groff) #5

It occurs to me that "sign" probably needs to be an independent parameter, to be able to accurately capture literal -0 and 0:

// A type that can be initialized from a decimal literal such as
// `1.1` or `-2.3e5`.
protocol DecimalLiteralConvertible {
  // The integer type used to represent the significand and exponent of the value.
  typealias Component: IntegerLiteralConvertible

  // Construct a value equal to `decimalSignificand * 10**decimalExponent * (isNegative ? -1 : 1)`.
  init(decimalSignificand: Component, decimalExponent: Component, isNegative: Bool)
}

// A type that can be initialized from a hexadecimal floating point
// literal, such as `0x1.8p-2`.
protocol HexFloatLiteralConvertible {
  // The integer type used to represent the significand and exponent of the value.
  typealias Component: IntegerLiteralConvertible

  // Construct a value equal to `hexadecimalSignificand * 2**binaryExponent * (isNegative ? -1 : 1)`.
  init(hexadecimalSignificand: Component, binaryExponent: Component, isNegative: Bool)
}
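
Presumably the earlier desugaring examples would then carry the sign through
explicitly, keeping 0.0 and -0.0 distinguishable, e.g.:

 0.0   // T(decimalSignificand: 0, decimalExponent: 0, isNegative: false)
-0.0   // T(decimalSignificand: 0, decimalExponent: 0, isNegative: true)
-99.99 // T(decimalSignificand: 9999, decimalExponent: -2, isNegative: true)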

-Joe



(Morten Bek Ditlevsen) #6

This would be an excellent solution to the issue.
Do you know if there are any existing plans for something like the
DecimalLiteralConvertible?

Another thought:
Would it make sense to have the compiler warn about float literal precision
issues?
Initialization of two different variables with the exact same literal value
could yield different precision results if one had a FloatLiteralType
aliased to Float80 and the other aliased to Float.
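
For example (a minimal sketch of the divergence described above; the rounded
values shown in the comments are approximate):

let f: Float   = 0.1  // nearest Float   ≈ 0.10000000149011612
let d: Double  = 0.1  // nearest Double  ≈ 0.10000000000000000555
let e: Float80 = 0.1  // closer still, but not exactly 0.1 either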



(Joe Groff) #7

> This would be an excellent solution to the issue.
> Do you know if there are any existing plans for something like the DecimalLiteralConvertible?

Not that I know of. Someone would have to submit a proposal.

> Another thought:
> Would it make sense to have the compiler warn about float literal precision issues?
> Initialization of two different variables with the exact same literal value could yield different precision results if one had a FloatLiteralType aliased to Float80 and the other aliased to Float.

That's definitely a possibility. We already have machinery in place to raise errors when integer literals overflow Int* types, and we could do something similar for float literals that have excessive precision.
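
For reference, the existing integer-literal check produces a compile-time error
along these lines (the exact diagnostic wording may differ between compiler
versions):

let tiny: Int8 = 300
// error: integer literal '300' overflows when stored into 'Int8'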

-Joe



(Rainer Brockerhoff) #8

>> This would be an excellent solution to the issue.
>> Do you know if there are any existing plans for something like the DecimalLiteralConvertible?
>
> Not that I know of. Someone would have to submit a proposal.
>
>> Another thought:
>> Would it make sense to have the compiler warn about float literal precision issues?
>> Initialization of two different variables with the exact same literal value could yield different precision results if one had a FloatLiteralType aliased to Float80 and the other aliased to Float.
>
> That's definitely a possibility. We already have machinery in place to raise errors when integer literals overflow Int* types, and we could do something similar for float literals that have excessive precision.

For compilation, it would probably be overkill to show even a warning
for non-representable numbers like 0.1 assigned to a binary
floating-point type, but perhaps such a warning might be acceptable in a
playground?


--
Rainer Brockerhoff <rainer@brockerhoff.net>
Belo Horizonte, Brazil
"In the affairs of others even fools are wise
In their own business even sages err."
http://brockerhoff.net/blog/