TrigonometricFloatingPoint/MathFloatingPoint protocol?

Ouch! No, that is not what I meant. I meant this (note the leading dot):

import Foundation

infix operator ** : BitwiseShiftPrecedence

protocol FloatingPointMath: FloatingPoint {
    static func sin(_ value: Self) -> Self
    static func cos(_ value: Self) -> Self
    static func pow(_ value: Self, _ exponent: Self) -> Self
    static func ** (value: Self, exponent: Self) -> Self
}

extension FloatingPointMath {
    // Default implementation: ** simply forwards to the conforming type's pow.
    static func ** (value: Self, exponent: Self) -> Self { return .pow(value, exponent) }
}

extension Double: FloatingPointMath {
    // Forward to the C math library re-exported by Foundation.
    static func sin(_ value: Double) -> Double { return Foundation.sin(value) }
    static func cos(_ value: Double) -> Double { return Foundation.cos(value) }
    static func pow(_ value: Double, _ exponent: Double) -> Double { return Foundation.pow(value, exponent) }
}

func testPythagoreanIdentity(_ x: Double) {
    // Exact equality is too strict in floating point; allow a small tolerance.
    assert(abs(1 - (.sin(x)**2 + .cos(x)**2)) < 1e-12)
}
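
(For reference: BitwiseShiftPrecedence binds tighter than both multiplication and addition, so .sin(x)**2 + .cos(x)**2 parses as (.sin(x)**2) + (.cos(x)**2) without extra parentheses.)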

Sometimes you have to prefix the function with the concrete floating-point type, as in Double.sin(x), to disambiguate.
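
For example, the leading dot only resolves where the base type can be inferred (the bindings below are just illustrations):

// let a = .sin(0.5)         // error: cannot infer the base type
let b = Double.sin(0.5)      // the explicit type disambiguates
let c: Double = .sin(0.5)    // a contextual type works too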

Once you have the protocol, you can write a top-level free function as a trampoline:

func sin<T: FloatingPointMath>(_ x: T) -> T {
    return T.sin(x)
}
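
With that trampoline in scope, generic code gets the familiar spelling back; chord here is just an invented example on top of the FloatingPointMath sketch above:

func chord<T: FloatingPointMath>(radius: T, angle: T) -> T {
    return 2 * radius * sin(angle / 2)   // calls the generic trampoline
}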

Yes, sure; but I usually don't create global functions for math. My global functions are usually overloaded factory functions for a family of related types, such as a group of weak(...) functions that create different kinds of weak-reference-holding containers.

I agree with Alexander Momchilov's first comment/response; the math operation on the type (i.e. Double) looks better in my opinion.

You mean this?

I don't like it when functions appear as properties like that. In extreme cases we can end up with something like x.log(10).sin.pow(3), which is just not very readable as a math expression to me. I am used to old-school FORTRAN-style math expressions.

I guess I would prefer pow(sin(log(x)), 3), but all the type inference involved in using static members with a leading dot (requiring the base type to be inferred) adds a lot of type-checker complexity for large expressions.
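
Concretely, here are the two spellings side by side (the free-function line uses Foundation's sin and pow; the leading-dot line uses the protocol sketch from earlier):

import Foundation

let x = 2.0
let a = pow(sin(x), 3)            // free-function spelling
let b: Double = .pow(.sin(x), 3)  // leading-dot spelling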

It would be nice to have something like C++'s scoped using declarations, so that these functions could temporarily be made members of the global namespace (there are far fewer members of the global namespace than there are static members on all types).

Yes, that would be nice, but as things stand now, it seems that static member functions cause less type-checking overhead than global functions.

By the way, I was responding to this:

I haven't found that to be exorbitant; since all of these are "homogeneous" (i.e., whatever type is the argument, that will be the type of the result), the type checker doesn't seem to have trouble at all. In the standard library, we already have .pi, .infinity, .nan, and other static members that are idiomatically written with the leading dot.
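
For instance, these existing standard-library members already read naturally with nothing but a contextual type:

let tau: Double = 2 * .pi       // .pi inferred as Double.pi
let bound: Float = .infinity    // Float.infinity
let missing: Double = .nan      // Double.nan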

At the risk of some self-promotion (not that it really benefits me in any material way), I explored this design option in my protocol-based library NumericAnnex. The purpose was to see whether various design options were practical and ergonomic to use. You'll see that I settled on the .sin(x) notation; it's pretty readable in the end, and it really does play well with the type checker.


eww no pls

i think the spelling arguments right now are mostly bikeshedding until we have an actual implementation of generic, platform-independent sin and cos.

FYI, the standard library implementations of generic numeric functions sometimes include fast paths for known types. Perhaps something like that is also an option here?

For example (ignore the FIXME, we're talking about the "else" branch):
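
(A minimal sketch in the spirit of those fast paths, assuming the FloatingPointMath protocol from earlier in the thread; genericSin and its body are invented for illustration, not actual standard-library source.)

import Foundation

func genericSin<T: FloatingPointMath>(_ x: T) -> T {
    if T.self != Double.self {
        // FIXME: a real platform-independent generic implementation goes here
        return T.sin(x)
    } else {
        // fast path: the concrete type is known, so call the C function directly
        return Foundation.sin(x as! Double) as! T
    }
}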

EDIT: Also, isn't that preview so awesome? I'm really loving these forums...

the fast-path switch actually seems to incur some significant (~30%ish) overhead. i think @Slava_Pestov knows more about that than i do