FloatingPoint/BinaryFloatingPoint protocol and concrete FloatingPoint types

Suppose I have a generic function that performs a complex floating-point calculation
with a generic floating-point type.
Suppose that calculation requires the maximum floating-point machine precision, so:
1) I need to convert the generic floating-point type to Float80
2) perform the complex calculation with Float80
3) convert the Float80 result back to the generic floating-point type

func maxPrecisionCalculation( input:Float80 ) -> Float80 {
  // ....
}

Let's try with FloatingPoint:

func someComplexCalculation<T:FloatingPoint>( input:T ) -> T {
  let input80 = Float80( input ) // Error: Cannot invoke initializer for type 'Float80' with an argument list of type '(T)'
  let input80 = input as Float80 // Error 'T' is not convertible to 'Float80'; did you mean to use 'as!' to force downcast?
  
  let output80 = maxPrecisionCalculation( input:input80 )
  
  return output80 as T // Error: 'Float80' is not convertible to 'T'; did you mean to use 'as!' to force downcast?
  return T(output80) // Error: Non-nominal type 'T' does not support explicit initialization
}

How do I convert a generic FloatingPoint to a concrete floating-point type and then convert it back?

And now let's try with BinaryFloatingPoint:

func someComplexCalculation <T:BinaryFloatingPoint>( input:T ) -> T {
  let input80 = Float80( input ) // Error: Cannot invoke initializer for type 'Float80' with an argument list of type '(T)'
  let input80 = input as Float80 // Error 'T' is not convertible to 'Float80'; did you mean to use 'as!' to force downcast?
  
  let output80 = maxPrecisionCalculation( input:Float80(0) )
  
  return T(output80) // OK, now this works
  return output80 as T // Error: 'Float80' is not convertible to 'T'; did you mean to use 'as!' to force downcast?
}

How do I convert a generic BinaryFloatingPoint to a concrete floating-point type?
In the opposite direction the conversion works.

NOTE: Using Double instead of Float80 doesn't make any difference at all.

ADDENDUM: The Float80 type has existed since the first Swift version. Now we have Swift 4,
and math functions over Float80 are still unsupported, making this type substantially useless.
I still can’t call sin(x) and log(x) with a Float80 value.
Is there a plan to resolve this issue?
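
A possible partial workaround, assuming that on Intel macOS C's long double imports into Swift as Float80 and that the Darwin module exposes the l-suffixed libm functions (neither of which may hold on every platform or toolchain):

import Darwin

// Sketch under the assumptions above: sinl/logl are the C99 long-double
// math functions from math.h; if the importer does not map long double
// to Float80 on this platform, these calls will not resolve.
let x: Float80 = 0.5
let s = sinl(x)   // sine, computed in extended precision
let l = logl(x)   // natural logarithm, computed in extended precision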

protocol Float80Convertible : BinaryFloatingPoint {
    init(_ value: Float80)
    var float80: Float80 { get }
}
extension Double : Float80Convertible {
    var float80: Float80 { return Float80(self) }
}
extension Float : Float80Convertible {
    var float80: Float80 { return Float80(self) }
}

func maxPrecisionCalculation(input:Float80) -> Float80 {
    return input // but something actually requiring high precision ...
}

func someComplexCalculation<T:Float80Convertible>(input: T) -> T {
    let input80 = input.float80
    let output80 = maxPrecisionCalculation(input: input80)
    return T(output80)
}
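
For what it's worth, on a toolchain where BinaryFloatingPoint's generic conversion initializer init<Source: BinaryFloatingPoint>(_:) is available, the round trip should also compile without any helper protocol; a minimal sketch under that assumption:

func someComplexCalculation<T: BinaryFloatingPoint>(input: T) -> T {
    let input80 = Float80(input)                      // generic -> Float80
    let output80 = maxPrecisionCalculation(input: input80)
    return T(output80)                                // Float80 -> generic
}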


AFAIK, Float80 is the high precision format on macOS (well, Intel macs, anyway... can’t recall if Swift can target OSs old enough to run on PPC macs). I’d avoid using it, though. AFAIK it’s an x86-only format (it might even be Intel-only... 5-10 minutes of googling didn’t give me a clear answer on whether AMD’s CPUs support it).

I don’t know what we do with it on ARM targets, and I’m not at my computer to try to figure it out.

Unless maybe the x86 or ARM vector extensions support 128 or 256 bit floats? I don’t think they do, but I’m not 100% on that.
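
If the goal is just the widest hardware format available on the current platform, rather than Float80 specifically, one way to keep the code portable would be a conditional typealias; a minimal sketch (MaxPrecision is a made-up name, not standard library API):

#if arch(i386) || arch(x86_64)
typealias MaxPrecision = Float80   // x86 extended precision, where Float80 exists
#else
typealias MaxPrecision = Double    // fall back to Double on other architectures
#endif

The high-precision helper and the generic wrapper could then be declared in terms of MaxPrecision instead of Float80.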

- Dave Sweeris


I'm not sure what you mean, David. That function was just part of my attempt at presenting a solution to Antonino's question (that particular function is from Antonino's code).
Below is my solution to Antonino's problem again, including a perhaps clearer comment in that function:

protocol Float80Convertible : BinaryFloatingPoint {
    init(_ value: Float80)
    var float80: Float80 { get }
}
extension Double : Float80Convertible {
    var float80: Float80 { return Float80(self) }
}
extension Float : Float80Convertible {
    var float80: Float80 { return Float80(self) }
}

func maxPrecisionCalculation(input:Float80) -> Float80 {
    return input
    // In the actual use case, this would of course not just
    // return input. Instead it would perform some computation
    // that (in contrast to just returning input) actually needs
    // the high precision of Float80.
}

func someComplexCalculation<T:Float80Convertible>(input: T) -> T {
    let input80 = input.float80
    let output80 = maxPrecisionCalculation(input: input80)
    return T(output80)
}
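
A quick usage sketch (the input values are arbitrary), plus a trivial extra conformance in case Float80 values themselves should be accepted by the generic function; neither is required by the solution above:

extension Float80 : Float80Convertible {
    var float80: Float80 { return self }
}

let d: Double = 0.1
let f: Float = 0.1
let resultD = someComplexCalculation(input: d)   // computed via Float80, returned as Double
let resultF = someComplexCalculation(input: f)   // computed via Float80, returned as Float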

/Jens


Sorry, mostly I was just commenting on what I now see was an incorrect interpretation of the “// but something actually requiring high precision ...” comment in your example code. I’d read it as... you know, I’m not sure what I’d thought it said... I think something that implied the code would convert the `Float80` data to another format with higher precision.

My mistake for not reading more carefully before replying.

- Dave Sweeris
