TrigonometricFloatingPoint/MathFloatingPoint protocol?

The weird part is that generic specialization actually *hurt* performance.
I understand why inlining can sometimes be harmful, but I assumed
specialization was always helpful. Weird. Can a core team member weigh in
here?

···

On Thu, Aug 3, 2017 at 7:17 PM, Karl Wagner <razielim@gmail.com> wrote:

On 3. Aug 2017, at 20:52, Taylor Swift via swift-evolution <swift-evolution@swift.org> wrote:

In an effort to get this thread back on track, I tried implementing
cos(_:) in pure generic Swift code, with the BinaryFloatingPoint protocol.
It deviates from the _cos(_:) intrinsic by no more than
5.26362703423544e-11. Adding more terms to the approximation incurs only a
small performance penalty, for some reason.

To make the benchmarks fair, and explore the idea of distributing a Math
module without killing people on the cross-module optimization boundary, I
enabled some of the unsafe compiler attributes. All of these benchmarks are
cross-module calls, as if the math module were downloaded as a dependency
in the SPM.

== Relative execution time (lower is better) ==

llvm intrinsic : 3.133
glibc cos() : 3.124

no attributes : 43.675
with specialization : 4.162
with inlining : 3.108
with inlining and specialization : 3.264

As you can see, the pure Swift generic implementation actually beats the
compiler intrinsic (and the glibc cos() but I guess they’re the same thing)
when inlining is used, but for some reason generic specialization and
inlining don’t get along very well.

Here’s the source implementation. It uses a Taylor series (!) which
probably isn’t optimal but it does prove that cos() and sin() can be
implemented as generics in pure Swift, be distributed as a module outside
the stdlib, and still achieve competitive performance with the llvm
intrinsics.

@_inlineable
//@_specialize(where F == Float)
//@_specialize(where F == Double)
public
func cos<F>(_ x:F) -> F where F:BinaryFloatingPoint
{
    let x:F = abs(x.remainder(dividingBy: 2 * F.pi)),
        quadrant:Int = Int(x * (2 / F.pi))

    switch quadrant
    {
    case 0:
        return cos(on_first_quadrant: x)
    case 1:
        return -cos(on_first_quadrant: F.pi - x)
    case 2:
        return -cos(on_first_quadrant: x - F.pi)
    case 3:
        return -cos(on_first_quadrant: 2 * F.pi - x)
    default:
        fatalError("unreachable")
    }
}

@_versioned
@_inlineable
//@_specialize(where F == Float)
//@_specialize(where F == Double)
func cos<F>(on_first_quadrant x:F) -> F where F:BinaryFloatingPoint
{
    let x2:F = x * x
    var y:F = -0.0000000000114707451267755432394
    for c:F in [0.000000002087675698165412591559,
               -0.000000275573192239332256421489,
                0.00002480158730158702330045157,
               -0.00138888888888888880310186415,
                0.04166666666666666665319411988,
               -0.4999999999999999999991637437,
                0.9999999999999999999999914771
                ]
    {
        y = x2 * y + c
    }
    return y
}

On Thu, Aug 3, 2017 at 7:04 AM, Stephen Canon via swift-evolution <swift-evolution@swift.org> wrote:

On Aug 2, 2017, at 7:03 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org> wrote:

It’s important to remember that computers are mathematical machines, and
some functions which are implemented in hardware on essentially every
platform (like sin/cos/etc) are definitely best implemented as compiler
intrinsics.

sin/cos/etc are implemented in software, not hardware. x86 does have the
FSIN/FCOS instructions, but (almost) no one actually uses them to implement
the sin( ) and cos( ) functions; they are a legacy curiosity, both too slow
and too inaccurate for serious use today. There are no analogous
instructions on ARM or PPC.

– Steve

_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Just a guess, but I’d expect inlining implies specialisation. It would be
weird if the compiler inlined a chunk of unoptimised generic code into
your function.

Pretty cool figures, though.

- Karl

A math library should include vectorized operations as part of its vector
type support; i currently use this snippet
<https://gist.github.com/kelvin13/03d1fd5da024f058b6fd38fdbce665a4> to
cover that. Even though they are evaluated as scalars right now, if
simd/sse support ever comes to Linux it’ll be easy to switch to it since
all the vectorizable operations are already routed through the “vector”
functions.
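That routing pattern can be sketched like this; `Vector2` and its `cos` are illustrative stand-ins, not the gist’s actual types:

```swift
import Foundation

// capture the scalar libm cos before the member below reuses the name
private let scalarCos: (Double) -> Double = cos

// Illustrative stand-in for the gist's pattern: every vectorizable
// operation is routed through one "vector" entry point, evaluated
// lane-by-lane today, so a SIMD backend only has to replace this
// one function.
struct Vector2 {
    var x: Double
    var y: Double

    static func cos(_ v: Vector2) -> Vector2 {
        return Vector2(x: scalarCos(v.x), y: scalarCos(v.y))
    }
}
```

A SIMD port would only need to rewrite `Vector2.cos`; call sites stay the same.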

···

On Thu, Aug 3, 2017 at 5:03 PM, Nicolas Fezans <nicolas.fezans@gmail.com> wrote:

Interesting figures. I will not try to discuss the generics, inlineable,
etc. there are certainly good observations and comments to make here, but
most people in this list know certainly more about it than I do.

I just want to point out that IMO a core math library for swift should
comply with the IEEE 754 standard in terms of precision, domain, and
special values. *On the long term*, it should ideally be able to use SIMD
instructions when applied to arrays/matrices or when the compiler can
autovectorize some loops.
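As a quick illustration, the special-value behavior IEEE 754 mandates can already be checked against the libm functions Swift currently bridges (a sanity check, not part of any proposed API):

```swift
import Foundation

// IEEE 754 special-value behavior a core math library would need to
// preserve, checked against the bridged libm cos(_:)
assert(cos(Double.nan).isNaN)       // NaN operands propagate
assert(cos(Double.infinity).isNaN)  // cos(±∞) is a domain error, yields NaN
assert(cos(0.0) == 1.0)             // exact result at an exact point
```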


update:

I’ve managed to improve the algorithm to the point where it’s arguably more
accurate than Glibc.cos(_:), and runs just as fast. Removing one term makes
the Swift implementation faster than _cos(_:), but worsens the divergence by
about 23% (137 ULPs from 0° ..< 90°, as opposed to 111 ULPs).

Relative time (lower is better)

_cos(_:) intrinsic : 3.096
pure Swift implementation : 3.165

Almost everywhere the pure Swift implementation is within ±1 ULP of the
Glibc/llvm implementation. Adding more terms to the approximation actually
worsens the divergence, so I guess we are in the range where we have to
start talking about error in the Glibc implementation as well. Here’s an output
dump
<https://github.com/kelvin13/swift-math/blob/d82e8b1df848879ba6ac6071883fde7f9a15c967/tests/output.txt>
with input from −360° to +360°.

The _cos(_:) intrinsic seems to be asymmetric across the positive and
negative halves of the function, which causes the divergence to rise to
about 3–5 ULPs on the far side of the unit circle. This could be due to
rounding differences in the arguments, since π/2 and 3π/2 are impossible to
represent in floating point. However I don’t know which implementation is
“wrong” here. The Swift one gives the “right” output for all special
angles; i.e. cos(90°) == 0, cos(60°) == 0.5, etc , whereas _cos(_:) gives
slightly fuzzy values.

If anyone wants to try it, I put the cosine implementation in an actual
module on github; and the given benchmark numbers are for cross-module
calls. <https://github.com/kelvin13/swift-math>
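For reference, ULP figures like the ones quoted here can be measured with something like the following sketch (not the repo’s actual test harness):

```swift
// Sketch of a ULP-distance measurement (not the repo's harness):
// for finite doubles of the same sign, the number of representable
// values between them is the difference of their bit patterns.
func ulpDistance(_ a: Double, _ b: Double) -> UInt64 {
    precondition(a.isFinite && b.isFinite && a.sign == b.sign)
    let (ba, bb) = (a.bitPattern, b.bitPattern)
    return ba > bb ? ba - bb : bb - ba
}
```

Adjacent values are exactly 1 ULP apart: `ulpDistance(1.0, 1.0.nextUp) == 1`.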

···

On Thu, Aug 3, 2017 at 7:32 PM, Taylor Swift <kelvin13ma@gmail.com> wrote:

On Thu, Aug 3, 2017 at 7:12 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org> wrote:

On 3. Aug 2017, at 13:04, Stephen Canon via swift-evolution <swift-evolution@swift.org> wrote:

On Aug 2, 2017, at 7:03 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org> wrote:

It’s important to remember that computers are mathematical machines, and
some functions which are implemented in hardware on essentially every
platform (like sin/cos/etc) are definitely best implemented as compiler
intrinsics.

sin/cos/etc are implemented in software, not hardware. x86 does have the
FSIN/FCOS instructions, but (almost) no one actually uses them to implement
the sin( ) and cos( ) functions; they are a legacy curiosity, both too slow
and too inaccurate for serious use today. There are no analogous
instructions on ARM or PPC.

– Steve

Hah that’s pretty cool; I think I learned in EE years ago that it was
implemented with a lookup table inside the CPU and never bothered to
question it.

The pure-Swift cosine implementation looks cool.

I’m pretty sure it can be improved greatly, at least for Double.
Unfortunately performance falls off a cliff for Float for some reason, i
don’t know why.

As for the larger discussion about a Swift maths library: in general,
it’s hard for any new Swift-only package to get off the ground without a
more comprehensive package manager. The current version doesn’t support
most of the Swift projects being worked on every day. Swift is also still a
relatively young language - the new integer protocols have never even
shipped in a stable release. Considering where we are, it’s not really
surprising that most of the Swift maths libraries are still a bit
rudimentary; I expect they will naturally evolve and develop in time, the
way open-source code does.

Most of the SPM’s limitations have workarounds, the problem is it’s just
not very convenient, e.g. local and non-git dependencies. As for other
features like gyb, I’m not sure it’s a good idea to bring them to the SPM;
gyb is a band-aid over deeper limitations of the language.

It’s also worth considering that our excellent bridging with C removes
some of the impetus to rewrite all your battle-tested maths code in Swift.
The benefits are not obvious; the stage is set for pioneers to experiment
and show the world why they should be writing their maths code in Swift.

The glibc/llvm functions are not generic. You cannot use _cos(_:) on a
protocol type like BinaryFloatingPoint. A pure Swift implementation would
allow generic programming with trig and other math functions; right now
anything beyond sqrt() requires manual specialization.
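Concretely, the manual specialization being described looks something like this (`genericCos` is an illustrative name, not an existing API):

```swift
import Foundation

// What "manual specialization" means in practice: libm only exports
// concrete overloads, so generic code has to round-trip through one
// of them by hand (losing precision for types wider than Double).
func genericCos<F: BinaryFloatingPoint>(_ x: F) -> F {
    return F(cos(Double(x)))
}
```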

- Karl


Swift Breeze *was* on Github <https://github.com/swift-breeze>, i don’t
know whose argument that strengthens here :)

No, no, I mean, doesn't GitHub itself fit the roles you defined earlier?
And by implication, why would a project on GitHub do any better than GitHub
itself at being a collection of repositories and at facilitating
collaboration?

Here was the original thread

···

On Wed, Aug 2, 2017 at 21:55 Taylor Swift <kelvin13ma@gmail.com> wrote:

<https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160125/008134.html>
it was introduced in. It got a lot of attention and +1’s but never
attracted much involvement, possibly because the Swift FOSS community was
much smaller back then, possibly because people on mailing lists are by
nature all-talk, no-action. I’m also a library author so I understand the
reluctance to give up your children to a collective repo lol.

On Wed, Aug 2, 2017 at 10:47 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

Hmm, I'd never heard of Swift Breeze. Doesn't seem like it'd be a
successful model to follow. Is there a reason why GitHub itself doesn't
meet these criteria?

On Wed, Aug 2, 2017 at 21:42 Taylor Swift <kelvin13ma@gmail.com> wrote:

Trying to gather together a bunch of unpaid people won’t automatically
solve the problem. (We *can* agree that there *is* a problem, yes?) I
think Swift Breeze demonstrated that. (Incidentally, throwing a bunch of
money at the problem won’t automatically solve it either — look at the US
government.) But then again, it can still *help*, and while it sounds
cheesy, *how much* it can help depends entirely on the attitude of the
contributors; whether they see themselves as solo authors listed on a
package index, or as part of a bigger effort. I’m not really a fan of
waiting for Apple to save the day. One of the things I’ve argued for that
*can* be done without Apple’s help is setting up another Swift library
incubator like Breeze. Obviously it won’t magically lead to a Swift math
library but it does remove some of the obstacles I mentioned earlier:

- it links together disparate solo projects and provides discoverability
to users
- it provides a package index and serves as a dashboard to check up on
the “state of Swift library support”
- it gives a venue for interested people to discuss the general topic of
library support
- it helps network people who are working on similar things (a “soft
factor” but important!)

tldr; self-organization isn’t a panacea, but it helps.

On Wed, Aug 2, 2017 at 10:14 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

That's not what I'm saying at all. I'm responding to your contention
that no library without "backing" will see wide adoption: if, as you say,
you would like to have a math library with sufficient "backing," then
realize that you're arguing for someone to devote financial resources to
the problem. Your proposed solution of getting together a bunch of unpaid
people does not address your identified problem.

On Wed, Aug 2, 2017 at 21:07 Taylor Swift <kelvin13ma@gmail.com> wrote:

On Wed, Aug 2, 2017 at 9:18 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Wed, Aug 2, 2017 at 7:29 PM, Taylor Swift <kelvin13ma@gmail.com> wrote:

On Wed, Aug 2, 2017 at 7:54 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Wed, Aug 2, 2017 at 6:29 PM, Taylor Swift <kelvin13ma@gmail.com> wrote:

See, my problem with statements like this one, is that the answer
“should be supported as a third-party library” can also be interpreted as
“not my problem, go figure it out yourselves”. The idea that central entity
can only pay attention to what they want to, and the Community™ will
magically take care of the rest is one of the most pervasive, and untrue,
myths about open source. What’s worse, is that Swift has the benefit of
hindsight, in the form of many, many examples of languages that came before
and fell victim to this fallacy, and now have 15 competing “private”
classes for basic mathematical objects like *vectors*.

I agree that a core math library, for example, *could* in theory
be supported as a third-party library.

The core team has said that they're open to a core math library
being part of the Swift open source project; they just outlined that the
_process_ for doing so is best initiated with a third-party library as a
starting point.

But this will never happen on its own, for reasons that I will
reiterate here:

- no one influential enough has bothered to jump start any such
project

Karoly Lorentey has a wonderful, and quite mature, BigInt project:
<https://github.com/lorentey/BigInt>. Also, as I mentioned, I just
started a project for protocol-based additions to Swift's basic numeric
types. These are just two examples.

- there are no avenues to encourage members of the community to
come together and organize a project (look how this thread got derailed!)

You're welcome to join me in my endeavor to create a math library.
I'd bet Karoly feels the same way about his project.

You don’t know how happy reading that sentence just made me, i’d
assumed no one was willing to team up to build such a thing. In which case,
it’s a good idea to start an incubator organization on Github. I think
David Turnbull tried doing that 2 years ago, I’ll reach out to him if he
wants to be a part of something like this.

We should also maintain an index of promising pure swift libraries
so they are discoverable (like docs.rs does for Rust).

I believe there has been mention on this list that the core team
would like to revisit this idea at some point.

- there is no “soft” infrastructure in place to support such
collaboration (look at the fuss over discourse and mailing list spam!)

The GitHub environment has excellent tools to support such
collaboration, IMO. For example:

Based on my experience implementing a library, I wrote a Gist to
outline some lessons learned and suggestions for improvement. Not only did
the document find an audience, these suggestions were in turn used to
inform core team-driven revisions to the integer protocols. As a result of
these revisions, it became possible to implement some initializers that
could be useful for people writing generic numeric algorithms. Recently, I
submitted a PR to the Swift project on GitHub to implement these
initializers. Now, everyone will be able to use them. Collaboration,
positive feedback loop, win-win for all involved.

Likewise, Karoly used his experience updating BigInt for Swift 4 to
inform certain improvements to the integer protocols. He implemented these
improvements in a series of PRs. Now, as a result of these developments,
Karoly's library will be better designed *and* everyone else will benefit
from a better implementation of the integer protocols. Again,
collaboration, positive feedback loop, win-win for all involved.

Great!! can you link me to the gist?

Notes on the user experience of new integer protocols · GitHub

- there are no positive feedback loops whereby a promising project
can gain market share and mature
- because there is no organization backing these projects,
potential users are reluctant to depend on these libraries, since they will
logically bet that the library is more likely to fall out of maintenance
than reach maturity.

Addressing this point is clearly impossible. When Apple wishes to
commit its own resources to the maintenance of a Swift math library,
swift-corelibs-math will appear on GitHub. Suggestions such as opening an
empty repo and letting people contribute to it would either give the
illusion of organizational backing that doesn't exist or would in fact
commit Apple to support a repo that it doesn't wish to support. I fail to
see why the former is good for anybody; in fact, it's strictly inferior to
the same repo honestly representing itself as a third-party effort. And
asking for the latter is essentially asking Apple to create a Swift math
library--which, again, is not in the cards.

My point wasn’t really to exhort Apple to create a Swift math
library, just that people are more willing to depend on a library if the
library’s bus factor is greater than 1. A lot of great Swift packages sit
in the github repository of one guy or girl who later disappeared. Turnbull’s
SGLOpenGL library is a good example of this; his library no longer compiles,
which motivated me to write swift-opengl
<https://github.com/kelvin13/swift-opengl>. Then again, I’m sure
people feel the same way about depending on swift-opengl today as I felt
about depending on SGLOpenGL.

There just has to be some semblance of organization. That
organization doesn’t have to come from Apple or the swift core team. A
community initiative with sufficient momentum would be just as good. (The
problem of course is that it is rare for a community initiative to arise.)

Well, hang on now. There are plenty of products put out by even major
organizations that are unceremoniously and abruptly cut. There are plenty
of projects worked on by one or a few major people that are long-lived.
Projects that have longevity have some sort of financially sensible model
for their continued existence. Three, thirty, or even 300 unpaid people
working on an open-source project won't make it much more reliable (in the
eyes of others) than one unpaid person, and again I disagree that the
veneer of an organization is superior to presenting the status of the
project honestly. (Example--what is commonly thought to be a bigger threat
to Firefox's continued health: the possibility that there will be a
shortfall in unpaid contributors, or the possibility that there will be a
shortfall in funding?)

Rounding up all the goodwill on this list will not do you any good if
your goal is to convince users that a certain project will be maintained
into the future--because it won't rustle up a single dime. Whether or not
you explicitly equate these in your mind, "backing" == money, and if you
want this point addressed, you're claiming that someone somewhere should be
spending money on a Swift math library. I'm personally committed to making
sure that my code will work for the foreseeable future, but I fully accept
that there's simply no way for me to convince a sufficient number of people
of this fact without a credible showing of funding. In that sense, a
community initiative with "momentum" is decidedly not going to be a
just-as-good alternative to a core library.

Well that there is a rather defeatist attitude. If you are correct
that Apple-funded development is the only way to get core libraries built
(and maintained), and Apple has expressed they have no intention of doing
so, then we are all pretty much f****d.

There was a sign error due to switching to an underlying approximation of
sin() to evaluate cos(). Here’s the actual output
<https://github.com/kelvin13/swift-math/blob/5867fc8e151ab0dcc2460a24123fededc6d81f12/tests/output.txt>

···


Github is a big ocean, sis. Swift libraries would benefit from having a
smaller grouping that’s just Swift libraries. That’s not to say
cross-community collaboration isn’t beneficial — my library Noise
<https://github.com/kelvin13/noise> benefited greatly from working with its
counterpart in the Rust community, but Swift libraries do need their own
space.

···

On Wed, Aug 2, 2017 at 10:58 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Wed, Aug 2, 2017 at 21:55 Taylor Swift <kelvin13ma@gmail.com> wrote:

Swift Breeze *was* on Github <https://github.com/swift-breeze>, i don’t
know whose argument that strengthens here :)

No, no, I mean, doesn't GitHub itself fit the roles you defined earlier?
And by implication, why would a project on GitHub do any better than GitHub
itself at being a collection of repositories and at facilitating
collaboration?

As long as this module was guaranteed to be there, I can accept this. I'm tired of writing clusterfuck code on linux/macOS just for generic algorithms to use pow, sin and friends.

func testPythagoreanIdentity(_ x: Double) {
    assert(1 == (FloatingPointMath.pow(FloatingPointMath.sin(x), 2)
               + FloatingPointMath.pow(FloatingPointMath.cos(x), 2)))
}

... gross. Compare to:

func testPythagoreanIdentity(_ x: Double) {
    assert(1 == x.sin.pow(2) + x.cos.pow(2))
}

Ouch! No, that is not what I meant. I meant this: (Note the starting dot)

infix operator ** : BitwiseShiftPrecedence

protocol FloatingPointMath: FloatingPoint {
    static func sin(_ value: Self) -> Self
    static func cos(_ value: Self) -> Self
    static func pow(_ value: Self, _ exponent:Self) -> Self
    static func **(value: Self, exponent: Self) -> Self
}

extension FloatingPointMath {
    static func **(value: Self, exponent: Self) -> Self { return .pow(value, exponent) /*FIXME*/ }
}

extension Double: FloatingPointMath {
    static func sin(_ value: Double) -> Double { return value /*FIXME*/ }
    static func cos(_ value: Double) -> Double { return value /*FIXME*/ }
    static func pow(_ value: Double, _ exponent: Double) -> Double { return value /*FIXME*/ }
}

func testPythagorianIdentity(_ x: Double) {
    assert(1 == .sin(x)**2 + .cos(x)**2)
}

Sometimes you have to prefix the first function with the float type (Double) to disambiguate. As in Double.sin(x) for example.

Once you have the protocol, you can write a top-level free function as a trampoline:

func sin <T: FloatingPointMath> (_ x: T) -> T {
    return T.sin(x)
}
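Putting the pieces together, a self-contained version of that trampoline might look like this; Foundation’s libm supplies the `Double` conformance, and the names mirror the sketch above rather than any shipped library:

```swift
import Foundation

// capture the scalar libm sin before the protocol requirement reuses the name
private let libmSin: (Double) -> Double = sin

// minimal protocol with one requirement, per the sketch above
protocol FloatingPointMath: FloatingPoint {
    static func sin(_ x: Self) -> Self
}

extension Double: FloatingPointMath {
    static func sin(_ x: Double) -> Double { return libmSin(x) }
}

// the trampoline: generic code can now call sin(_:) on any conformer
func sin<T: FloatingPointMath>(_ x: T) -> T {
    return T.sin(x)
}

// a generic caller that exercises the trampoline
func sinSquared<T: FloatingPointMath>(_ x: T) -> T {
    let s = sin(x)
    return s * s
}
```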

Yes, sure; but I usually don't create global functions for math. My global functions are usually overloaded factory functions that create a group of related types, such as a group of weak(...) functions to create different types of weak-reference-holding containers.

I agree with Alexander Momchilov's first comment/response, the math operation on the type (i.e. Double) looks better in my opinion.

You mean this?

I don't like when functions appear as properties like that. In extreme cases we can end up with something like x.log(10).sin.pow(3) which is just not very readable for me as a math expression. I am used to old school FORTRAN-style math expressions.

I guess I would prefer pow(sin(log(x)), 3), but all the type inference of using static members with prefixed . (requiring the base type to be inferred) adds a lot of type checker complexity for large expressions.

Would be nice to have something like C++'s scoped using, so that these functions can temporarily be made members of the global name space (there are far fewer members of the global name space than there are static members on all types)

Yes, that would be nice, but given how things stand now, it seems that static member functions cause less overhead compared to global functions.

By the way, I was responding to this:

I haven't found that to be exorbitant; since all of these are "homogeneous" (i.e., whatever type is the argument, that will be the type of the result), the type checker doesn't seem to have trouble at all. In the standard library, we already have .pi, .infinity, .nan, and other static members that are idiomatically written with the leading dot.

At the risk of some self-promotion (not that it really benefits me in any material way), I explored this design option in my protocol-based library NumericAnnex. The purpose was to see if various design options were practical and ergonomic to use. You'll see that I settled on the .sin(x) notation--it's pretty readable in the end, and it really does play well with the type checker.


eww no pls

i think the spelling arguments right now are mostly bikeshedding until we have an actual implementation of generic, platform-independent sin and cos.

FYI, the standard library implementations of generic numeric functions sometimes implement fast-paths for known types. Perhaps something like that is also an option here?

For example (ignore the FIXME, we're talking about the "else" branch):
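A sketch of that fast-path shape (illustrative only; `fastCos` is not the standard library's actual code):

```swift
import Foundation

// capture the scalar libm cos for use inside the generic function
private let libmCos: (Double) -> Double = cos

// fast-path pattern: dispatch on known concrete types first, then
// fall back to a generic (slower) path for everything else
func fastCos<F: BinaryFloatingPoint>(_ x: F) -> F {
    if let d = x as? Double {
        // fast path: concrete type known at runtime, call libm directly
        return libmCos(d) as! F
    } else {
        // generic fallback: round-trip through Double
        return F(libmCos(Double(x)))
    }
}
```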

EDIT: Also, isn't that preview so awesome? I'm really loving these forums...

the fast-paths switch actually seems to incur some significant (~30%ish) overhead. i think @Slava_Pestov knows more about that than i do