Hi there!

Is there a way to use mathematical functions like `lgamma()`, `tgamma()` or `pow()` with Float80 numbers and get Float80 numbers as a result?

Thanks in advance!

strike
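On x86_64 platforms the C library's `long double` variants are imported into Swift as taking and returning `Float80`. A minimal sketch (this assumes an x86_64 toolchain; `Float80` is unavailable on arm64):

```
#if arch(x86_64)
import Foundation

// The l-suffixed C functions operate on long double, which Swift
// imports as Float80 on x86_64:
let a: Float80 = lgammal(6.0)      // ln(Γ(6)) = ln(120) ≈ 4.7875
let b: Float80 = tgammal(6.0)      // Γ(6) = 120
let c: Float80 = powl(2.0, 10.0)   // 1024
print(a, b, c)
#endif
```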

Wow. That's great!

Thanks!

Hi again!

It seems that there are two different versions of `lgamma()` with different return types:

```
lgamma(_ x: Double/Float/Float80) -> (Double/Float/Float80, Int)
```

and

```
lgamma(_ x: Double/Float/Float80) -> Double/Float/Float80
```

How can I determine which version will be used?

I suppose the Int will be used to store the sign of the Gamma function.
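That supposition can be checked directly. A sketch, assuming the tuple-returning overload from the platform overlay is in scope via `Foundation`:

```
import Foundation

// The tuple-returning lgamma reports the sign of Γ(x) separately,
// since ln|Γ(x)| alone loses it. Γ(-2.5) is negative on (-3, -2):
let (mag, sign) = lgamma(-2.5)
print(mag, sign)   // sign == -1; mag = ln|Γ(-2.5)| is slightly below 0
```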

I want to have a generic lgamma-version as follows:

```
internal func lgamma<T: FloatingPoint>(_ x: T) -> T {
    if x.self is Double {
        return lgamma(x as! Double) as! T /* <- Ambiguous use of 'lgamma' */
    }
    if x.self is Float {
        return lgamma(x as! Float) as! T
    }
    #if arch(x86_64)
    if x.self is Float80 {
        return lgammal(x as! Float80) as! T
    }
    #endif
    return T.nan
}
```

Best regards!

strike

You are getting an ambiguity error not for the reason you think, but because you have created the ambiguity yourself by defining a function with exactly the same name and return type. You can disambiguate by spelling the library function `Darwin.lgamma` (or `Glibc.lgamma` on Linux).

Hi!

No. Same error:

```
internal func lgamma1<T: FloatingPoint>(_ x: T) -> T {
    if x.self is Double {
        return Darwin.lgamma(x as! Double) as! T /* Ambiguous use of 'lgamma' */
    }
    if x.self is Float {
        return lgamma(x as! Float) as! T /* <- no error */
    }
    #if arch(x86_64)
    if x.self is Float80 {
        return lgammal(x as! Float80) as! T
    }
    #endif
    return T.nan
}
```

Regards!

Ah, right. You would use the `as` operator to disambiguate return types. This operator looks like `as!` but they are actually quite different: `as` is a type coercion operator and `as!` is a dynamic casting operator. In other words:

```
return Darwin.lgamma(x as! Double) as Double as! T
```

By the way, you want to write `if x is Double`; there is no reason, ever, to use `.self` on a value.
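To illustrate the difference, a small sketch: `as` fixes the expression's type at compile time, which steers overload resolution toward the overload whose return type matches.

```
import Foundation

// `as` is a compile-time coercion, so it selects the overload
// whose return type matches the coerced type:
let scalar = lgamma(3.0) as Double                  // C library overload
let (value, sign) = lgamma(3.0) as (Double, Int)    // overlay overload
print(scalar, value, sign)   // ln(Γ(3)) = ln(2); sign of Γ(3) is +1
```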

Hi!

Ok. That works. Partially:

```
internal func lgamma1<T: FloatingPoint>(_ x: T) -> T {
    if x is Double {
        return Darwin.lgamma(x as! Double) as Double as! T
    }
    if x is Float {
        return Darwin.lgamma(x as! Float) as Float as! T /* Cannot convert value of type '(Float, Int)' to type 'Float' in coercion */
    }
    #if arch(x86_64)
    if x is Float80 {
        return Darwin.lgammal(x as! Float80) as Float80 as! T
    }
    #endif
    return T.nan
}
```

:-|

I'm wondering how the compiler decides which version will be called: `lgamma(_:) -> (Double, Int)` or `lgamma(_:) -> Double`.

Sorry. That's confusing...

Using Swift 4.1 this error never showed up...

Regards!

strike

You want to use `lgammaf` for the float version. This works for me (also using `switch` ... I find it's better style for this kind of thing):

```
internal func lgamma1<T: FloatingPoint>(_ x: T) -> T {
switch x {
case let d as Double:
return lgamma(d) as Double as! T
case let f as Float:
return lgammaf(f) as! T
#if arch(x86_64)
case let lf as Float80:
return lgammal(lf) as! T
#endif
default:
return T.nan
}
}
```
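Restating that version so it can be exercised standalone (the `Float80` case is omitted here so the sketch runs on any architecture):

```
import Foundation

func lgamma1<T: FloatingPoint>(_ x: T) -> T {
    switch x {
    case let d as Double:
        // `as Double` steers overload resolution away from the
        // tuple-returning (Double, Int) overlay version:
        return lgamma(d) as Double as! T
    case let f as Float:
        return lgammaf(f) as! T
    default:
        return T.nan
    }
}

// Γ(6) = 5! = 120, so lgamma of 6 is ln(120):
print(lgamma1(6.0))        // ≈ 4.7875
print(lgamma1(Float(6)))   // ≈ 4.7875
```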

Ah!

That was it! Missing 'f'...

Thanks!!

strike

No, that's very much not intended, and also I don't recall any specific change that would have caused this.

@strike, can you file a bug report at bugs.swift.org? A two-sentence description of the problem and a link to this thread is fine.


Thanks very much!