Swift for numeric programming

If you read the original referenced article / blog post, one of the shortcomings it presents is the lack of C++ interop in Swift. If you are asking where to find a good C++ library for advanced arithmetic, I would recommend here

http://crd-legacy.lbl.gov/~dhbailey/mpdist/

and here

http://speleotrove.com/decimal/

Boost adds some nice math support and in general adds immense value to C++, but by its nature it isn't easily back-ported to pure C.

https://www.boost.org/doc/libs/?view=category_math

What C++ has that makes math easier is some of its core data structures, such as the STL 'map' and 'multimap'. These are ordered associative containers, essentially a native ordered dictionary, a data structure that is nigh impossible to find in most languages (usually implemented internally as a red-black balanced binary tree). I have also been advocating for a native ordered dictionary implementation to be added directly to Swift. It isn't in the Objective-C or C spec. Third-party Swift libraries for ordered dictionaries do exist, but this needs to be part of the Swift core or the Swift standard library (first party).
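To make the semantics concrete, here is a rough sketch of the kind of ordered (sorted-by-key) dictionary I mean. For illustration only: it is backed by a sorted array with binary search rather than a red-black tree, so insertion is O(n) instead of O(log n).

```swift
struct SortedDictionary<Key: Comparable, Value> {
    private var keys: [Key] = []
    private var values: [Value] = []

    // Index of the first stored key that is >= `key` (lower bound).
    private func lowerBound(_ key: Key) -> Int {
        var lo = 0, hi = keys.count
        while lo < hi {
            let mid = (lo + hi) / 2
            if keys[mid] < key { lo = mid + 1 } else { hi = mid }
        }
        return lo
    }

    subscript(key: Key) -> Value? {
        get {
            let i = lowerBound(key)
            return (i < keys.count && keys[i] == key) ? values[i] : nil
        }
        set {
            let i = lowerBound(key)
            if i < keys.count, keys[i] == key {
                if let newValue = newValue { values[i] = newValue }
                else { keys.remove(at: i); values.remove(at: i) }
            } else if let newValue = newValue {
                keys.insert(key, at: i)
                values.insert(newValue, at: i)
            }
        }
    }

    // Like std::map, iteration visits keys in ascending order.
    var elements: [(Key, Value)] { Array(zip(keys, values)) }
}

var d = SortedDictionary<String, Int>()
d["pi"] = 3; d["e"] = 2; d["phi"] = 1
print(d.elements.map { $0.0 })  // ["e", "phi", "pi"] — always sorted
```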

For native Swift, even as a third-party library, I don't know of anything like ~dhbailey's ARPREC or QD. If I am wrong, let me know. For advanced math there is a lot of stuff out there in C++, not as much in pure C. We are expected to backport these libraries to pure C on our own, or, more arduously, port them to native Swift. I need a little more help from the language itself. I've thought several times of attempting a manual port of QD to Swift, but the language keeps changing enough every year that it discourages me. Put more of this stuff in the language directly, or add interop support for native C++ to Swift, or I'll have to wait a few more years for Swift to be less churny and then attempt the port myself.

I previously ported a red-black balanced binary tree to Swift (back in 2015), but then the next version of Swift came out, it broke everything, and I gave up at that point. If the implementation is a core part of the language or part of the Swift standard library, then Apple has to deal with the churn of their own language breaking their own library, instead of it being maintained by a solitary person outside the company.

QD has (IIRC) a complete extern "C" interface provided by the library (defined in c_dd.cc and c_qd.cc). You could build it and use it out of the box with Swift. You could certainly make it nicer by wrapping those interfaces as operators on a Swift struct, but it's usable right away, and one can do this wrapping process piecemeal. I don't see any real reason to port it to Swift (though I also don't think that would be very difficult, since the whole library is only a few kLOC).
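Sketching what that wrapping could look like (the signatures below are my reading of c_dd.h, where each double-double is a two-element double array; double-check them against the header before relying on this):

```swift
import CQD  // hypothetical Clang module exposing QD's c_dd.h

// Thin value-type wrapper over QD's extern "C" double-double interface.
struct DoubleDouble {
    var hi: Double
    var lo: Double

    static func + (a: DoubleDouble, b: DoubleDouble) -> DoubleDouble {
        let x = [a.hi, a.lo], y = [b.hi, b.lo]
        var z = [0.0, 0.0]
        c_dd_add(x, y, &z)  // assumed: void c_dd_add(const double *a, const double *b, double *c)
        return DoubleDouble(hi: z[0], lo: z[1])
    }

    static func * (a: DoubleDouble, b: DoubleDouble) -> DoubleDouble {
        let x = [a.hi, a.lo], y = [b.hi, b.lo]
        var z = [0.0, 0.0]
        c_dd_mul(x, y, &z)  // assumed: void c_dd_mul(const double *a, const double *b, double *c)
        return DoubleDouble(hi: z[0], lo: z[1])
    }
}
```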

I don't mean to be dismissive of the concern, and I also would like to have C++ interop, but things are not nearly as dire as they might be, and C++ interop is pretty far down my priority list of things I want for better numerics.

For QD, if you look closer at the code, it is written in C++ not C, even though the core data structures are not wrapped in classes. You aren't going to be able to build that code as part of a Swift project. To pull C code into Swift, ALL of the code needs to be pure C, not just C++ with pure C externs. I have previously ported the QD code to native classes in C++ and to a native C# implementation; I would be loath to do another port to either pure C or native Swift.

Sure, the overwhelming consensus for Swift is that interop with native C++ and advanced data structures like an ordered dictionary have low priority. I might be waiting another 12 years for them. I understand that. I'll keep bringing it up every so often to test the water. Maybe in another 6 to 12 months.

What makes you say that? This certainly isn't true in general. Swift is only looking at the header files; the implementation could be anything with a C ABI. Tools such as Xcode and SwiftPM are happy to build C++ code and link it with Swift code.

3 Likes

So you are telling me I can call into native C++ library functions in a Swift project, and the C++ standard library runtime will link in properly during execution, as long as the call into that C++ from Swift goes through a middle-man pure C extern wrapper function? No, this wasn't true back in Swift 2.0. Has this changed? Can someone else confirm or show me a Swift project that supports this?

Actually, you can go even a bit further; there are some C++-isms that are available as clang extensions in C and that Swift will happily import. (__attribute__((overloadable)) is one that I've personally used quite a bit).

AFAIK, this has always been the case. There may very well have been specific circumstances in which it didn't work, but they would have been bugs (which have hopefully been fixed).

2 Likes

I don't want to derail the thread. Does this forum support direct messages? If I can use real C++ in a Swift project, then please contact me directly. Or, if the forum doesn't support DMs, then permit me to derail a bit longer until someone posts a link to an Xcode project showing real C++ classes being used in a Swift project, as long as the Swift code uses a C extern wrapper to call into them.

I agree that supporting C++ interop directly can wait if native C++ already works in a Swift project as long as there is a pure C wrapper between the Swift and the C++. But I haven't seen it. Please, someone show me. I am happy to be wrong in this case.

I sent DMs to scanon and blangmuir, thanks! If there is an issue, I can start a new thread and add a link to it here in case anyone is interested.

1 Like

As a concrete example, this is a Swift wrapper around the ImGui library, using cimgui as a C intermediary between the C++ library and the Swift code.
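Schematically, the pattern looks like this (the names and module below are made up for illustration; the point is that Swift only ever sees a plain C header, while the implementation behind it is C++):

```swift
// shim.h — pure C interface, implemented in shim.cpp inside extern "C" blocks;
// the .cpp file is free to use std::vector, classes, templates, etc.
//
//     double shim_dot(const double *xs, const double *ys, long n);
//
// Swift imports that header through a module map (or bridging header) and
// calls the function like any other C function; the C++ object code and
// runtime get linked in by the build system.

import CShim  // hypothetical module wrapping shim.h

let xs: [Double] = [1, 2, 3]
let ys: [Double] = [4, 5, 6]
let dot = shim_dot(xs, ys, 3)   // == 32.0
```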

To me, enabling more things in native Swift is definitely the way to go. If I have a major gripe with the blog posts, it would be that they embrace the dense, non-descriptive style often associated with C and C++ instead of the legibility offered by Swift. As someone who finds LAPACK-style libraries intimidating, I'm very much in favour of implementing these tools in a more widely accessible way – something that Swift enables. If the way to get there is by wrapping C/C++/Fortran libraries, so be it, but that should be no more than an implementation detail and should not inform the design.

10 Likes

Does Swift support use of the FPU by multiple cores simultaneously if one of the cores switches the global mode of the FPU to 'round-to-double'?

I was looking at porting the arprec (arbitrary precision) or qd (quad-double) library from C++ to native Swift, or even writing a Swift wrapper, but I already hit a snag. arprec and qd both require exact 64-bit IEEE floating-point representation, and on Intel or similar architectures that internally represent floating-point values in 80-bit registers, that requires flipping one of the configuration bits on the FPU, namely the 'round-to-double' flag. Numerical programming in general probably requires the number of bits of precision to be exactly known. Mixing multi-core, multi-threaded C++ with Swift, when one or more threads is switching the global FPU mode, is probably bad. A single-core, single-threaded application is not a viable solution.

For this to work without issue, the function doing the global FPU mode switch needs to either hold a lock on the entire FPU, or the kernel needs to automatically know how to handle multi-core, multi-threaded access to the FPU when one of the cores or threads requires a different FPU mode than the others. That sounds like a lot of work. I am not even sure it is possible with the existing kernel implementation. The macOS kernel's thread context-switching mechanism may already handle saving and restoring the FPU mode between cores and threads, but that still wouldn't work for multiple cores/threads accessing the FPU simultaneously. This is processor-dependent and kernel-dependent, since Swift runs on x86_64 and ARM64, on macOS, iOS, and Linux.

It is possible I am woefully ignorant on two issues in a row: whether there is a separate FPU per core, and whether the FPU can only be accessed by one core/thread at a time.

The idea here is that the community has to be responsible for taking the world's best libraries for 'numerical programming', porting them to Swift, and maintaining them against the latest version of Swift, whether or not those efforts are eventually rewarded by direct integration into the Swift language (or as an official first-party library external to the spec). The argument that Swift would be better for numerical programming, if only the native Swift libraries for numerical programming existed, falls flat. Even the wrappers don't exist. Issues like the above are probably why adoption of numerical programming in Swift has been slow, or at least one of the reasons.

1 Like

There's a lot going on in this question. First, some quick corrections:

  1. On all targets supported by Swift, floating-point is handled by the core itself. It is not a thing shared between multiple cores (and if it were shared, the OS would be responsible for making it appear as though it were not).
  2. FPU state is not currently modeled by Swift, but on all systems supported by Swift it is not global. It is thread-local.
  3. "round-to-double" is an artifact of legacy x87 instructions, which are not used by Swift (except for Float80 on platforms that support it). All Float and Double arithmetic is done using SSE and follow-on extensions, which always round to the precision of their arguments. So there is no notion of "round-to-double" in Swift.

Swift guarantees this. You don't need to do anything to get it.

As mentioned above, this hasn't been an issue for well over a decade. "Everyone"[1] uses SSE (and follow-on extensions) now.

As mentioned above, in legacy programs that do use x87 instructions, the FPU control word is thread-local, so this is still not an issue.

If you care about these issues (and you seem to), it would be well worth spending some time reading about the current state of affairs. I realize that this is somewhat difficult, because there's a lot of folklore about floating-point on the web, much of which is 10-20 years out of date, if it was ever right at all. The actual architecture reference manuals from ARM and Intel are probably the best source of accurate information on these points, but there are lots of distractions to wade through there.

If you want to make a port of, say, qd, I'd be happy to review your work and answer these questions for you as you go.

[1] There are some domains where people need to support x86 HW that predates the mid-2000s, but those are domains where Swift is not likely to be used.

7 Likes

That feedback is priceless. If IEEE floating-point representation is strictly adhered to in Swift, including conversions from 80-bit to 64-bit always rounding instead of truncating, and all FPU state is per-core and per-thread, then I shouldn't have any problem with a fully native Swift port! Sure, @scanon, I will run it by you after making some progress.
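To illustrate why that guarantee matters for the port, here is a minimal sketch of the error-free "two-sum" building block that qd-style double-double arithmetic is built from; it only works because every Double operation is rounded to exactly 64 bits, with no hidden extended precision.

```swift
// Knuth's two-sum: returns the rounded sum plus the exact rounding error.
func twoSum(_ a: Double, _ b: Double) -> (sum: Double, err: Double) {
    let s = a + b
    let bv = s - a
    let err = (a - (s - bv)) + (b - bv)
    return (s, err)
}

let (s, e) = twoSum(1.0, 0x1p-60)
// s == 1.0 and e == 0x1p-60: the bits lost to rounding are recovered exactly.
```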

I've also implemented a realtime Kalman filter in fully native Swift. The intellectual property is owned by my day job, so I can't share it at the moment. But that may be a possibility at some point in the future.

There have also been a lot of mentions of random numbers. I would recommend that anyone do a better job of seeding their generator. If you don't have your own hardware-based quantum random number generator, then you can pull quantum data from various servers, such as

or
https://qrng.physik.hu-berlin.de/download

Where you get your random seed data from depends on the throughput you need. It should be fine to use a pseudorandom generator re-seeded every so often with scarcer, higher-quality random data such as quantum data. If you want to re-seed faster, then get a hardware-based quantum random number generator. I got mine from here; in addition to the usual Windows support, the SDK offers at least basic Mac and Linux support:
http://www.micro-photon-devices.com/Products/Instrumentation/Quantum-Random-Number

If throughput requirements are near the threshold of the entire bandwidth of your computer (like 4 bytes of random data per pixel per frame for an 8K display at 120 FPS), then quantum sources are probably too slow, so just re-seed with whatever built-in function for random data exists on your platform.
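As a rough sketch of the re-seeding pattern (the generator and seed source below are just placeholders; substitute whatever quantum or hardware source you actually pull from):

```swift
// SplitMix64: a small, fast PRNG used here purely for illustration.
struct SplitMix64: RandomNumberGenerator {
    private var state: UInt64
    init(seed: UInt64) { state = seed }

    mutating func next() -> UInt64 {
        state &+= 0x9E3779B97F4A7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58476D1CE4E5B9
        z = (z ^ (z >> 27)) &* 0x94D049BB133111EB
        return z ^ (z >> 31)
    }

    // Re-seed every so often from a scarcer, higher-quality entropy source.
    mutating func reseed(with entropy: UInt64) { state = entropy }
}

// Stand-in for the high-quality source (quantum server pull, hardware QRNG, ...).
var entropySource = SystemRandomNumberGenerator()

var prng = SplitMix64(seed: entropySource.next())
let samples = (0..<4).map { _ in Double(prng.next() >> 11) * 0x1p-53 }  // uniform [0, 1)
prng.reseed(with: entropySource.next())  // periodic re-seed
```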

1 Like

@scanon, this is pertinent to my port and related to some general misconceptions. This fellow here, with his BaseMath library, is saying that passing structures around by value is slower than passing by pointer.

https://github.com/jph00/BaseMath

"faster than standard swift loops or maps, since they use pointers, which avoids the overhead of Swift's copy-on-write checking"

I have seen how ugly the assembly for pointer dereferencing is, and I have trouble believing that passing an instance of a small structure (maybe 128 bytes) around by value would be slower than passing a pointer, especially on a native 64-bit architecture. Am I missing something here?

If the 'check' for copy-on-write is the slowest part, even slower than the cost of the copy (for small data structures), then why not simply do the copy every time and avoid the check altogether? Does Swift account for this when optimizations are enabled for a non-debug build?

Also, for the life of me, I have never been able to derive the FFT from scratch. This is more important for arprec, for multiplying huge floating-point or huge integer numbers together. Especially for multiplying large integers together, each integer can be represented as a certain type of degenerate matrix, and the operation boils down to an integer modulo FFT rather than a traditional FFT whose terms are floating point. Even in the most degenerate case, where I am multiplying a huge number by itself, I have never seen a reasonable full derivation of the symbolic equations and math required to perform the computation optimally. If you could point me to the appropriate references, or if you have your own derivation from scratch handy, then send me a copy of your notes.
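To pin down the part I do follow (my own summary, not the ARPREC derivation): writing the two integers in base $B$, the product is a convolution of the digit vectors, which is what the FFT or number-theoretic transform evaluates quickly. In LaTeX notation:

```latex
a = \sum_i a_i B^i, \qquad b = \sum_j b_j B^j, \qquad
ab = \sum_k c_k B^k \quad \text{where} \quad c_k = \sum_{i+j=k} a_i b_j .
```

The $c_k$ are the linear convolution of the digit sequences, so by the convolution theorem one can transform both digit vectors (zero-padded to length at least $n_a + n_b - 1$), multiply pointwise, and inverse-transform to recover every $c_k$ at once; a final carry-propagation pass brings each digit back into $[0, B)$. Doing the transform over $\mathbb{Z}/p\mathbb{Z}$ instead of over the complex numbers gives the "integer modulo FFT" with no floating-point rounding in the coefficients, and squaring is just the special case $a = b$. What I have never seen worked through end to end is the optimal symbolic treatment of that squaring case.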

On a side note, the historical significance of IEEE 754-2008 for numerics was huge. In reference to that, this was a popular link for a while (though with no Swift). It should give ideas for more stuff that needs to be ported to Swift.
http://speleotrove.com/decimal/

First, I'll just say that I agree with the post author's complaints about the package manager. I find it tries to be too strict about enforcing best practices when I want to be fast and experiment, and lacks important features.

COW is only a factor when mutating (the clue is in the name). I'd guess he means reference counting rather than COW. Also, that pointer stuff is totally not safe: if the Array doesn't have a native buffer (i.e., it was bridged from ObjC), it will deallocate the memory after the closure is finished.
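A small sketch of the distinction (my own example, not from BaseMath): the pointer is only valid for the duration of the withUnsafeBufferPointer closure, so using it inside the closure is fine, and letting it escape is not.

```swift
let xs: [Double] = [1, 2, 3, 4]

// Fine: the buffer is guaranteed valid only for the duration of the closure.
let total = xs.withUnsafeBufferPointer { buf -> Double in
    var acc = 0.0
    for x in buf { acc += x }
    return acc
}
print(total)  // 10.0

// Not fine: the pointer may dangle once the closure returns, e.g. if the
// array's storage was a temporary buffer bridged from an NSArray.
var escaped: UnsafeBufferPointer<Double>?
xs.withUnsafeBufferPointer { escaped = $0 }
```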

Since this thread is about numerics in Swift generally, what kinds of things would you say are closer to the top of that list?

1 Like

Reasonable people can have different priorities, and I may get redirected at any point, but here's a rough shortlist of math library stuff I personally am planning to work on, in no particular order:

  • Float16
  • Complex
  • Decimal64 and 128
  • Generic math functions
  • n-Dimensional arrays
  • BLAS and LAPACK bindings
  • Further SIMD enhancements
  • Multiprecision integer arithmetic

21 Likes

The native Swift port is underway. I am starting with qd; arprec will come later. One of the reasons for choosing these specific numeric libraries is the permissiveness of the license. By comparison, GIMPS / Prime95 is great but has stricter licensing terms (though I partially blame the unclaimed cash prizes for the stricter terms). I will link to the repository after the baseline port is ready.

The website heavily advertises GIMPS / Prime95:

https://www.mersenne.org/

Claimed and unclaimed cash prizes related to discovering prime numbers (as in 'doing numeric programming stuff'):

EFF Cooperative Computing Awards | Electronic Frontier Foundation

Another reason for choosing qd / arprec is the minimalist implementation. A baseline port would only take a few weeks of full-time-equivalent work (more for me, obviously, because this is part-time work), compared to possibly years to port GIMPS / Prime95 to native Swift.

I have verified some of the claims about floating-point behavior with respect to modern CPU architecture. Continuing that discussion, the biggest confusion around what may be the only remaining issue is that I said the qd / arprec libraries require the internal floating-point representation to be 64-bit and not 80-bit. The specific confusion there is that the FPU control word specifies the number of bits of the significand, not the total number of bits of the floating-point representation. So if I wanted 80-bit internal representation I would specify a 64-bit significand, and for 64-bit representation I would specify a 53-bit significand. The default value for that field of the FPU control word is still zero, which selects a 64-bit significand and an 80-bit total size, and that is not going to work for qd / arprec. I want to set the bit at 0x10000 in the 0x30000 mask of the FPU control word to change the significand to 53 bits. I don't quite believe the claim that Apple sets that bit by default (if it is in fact true, please provide some form of proof). I was internally confusing the size of the significand with the size of the entire floating-point value (64-bit significand vs 64-bit total size).

So yeah I may still have to worry about the FPU control word in the context of Swift code. That is going to be annoying.

"6.2 ARCHITECTURE OF THE FLOATING-POINT UNIT"

https://www.phatcode.net/res/254/files/241430_4.pdf

Extended precision - Wikipedia

_controlfp_s | Microsoft Learn

Swift does not codegen floating-point to use legacy x87 instructions, except when the Float80 type is explicitly used. We use SSE (and follow-on extensions) for all arithmetic on Float and Double on x86_64. The FPU control word does not affect SSE. SSE has its own control register, MXCSR, but it does not have a precision-control field, because all SSE arithmetic is evaluated in the same precision as the inputs. There is no such thing as extended precision on SSE.

On ARM, all floating-point arithmetic is evaluated in the precision of the inputs. There is no such thing as extended precision on ARM hardware.

This is a non-issue.

1 Like

The few times I evaluated SSE for usability, it didn't seem to have native support for 64-bit floating point, or I was reading the wrong documentation. I did a quick check and I do see that SSE originally only supported 32-bit floating point and has since expanded to native 64-bit floating point. I am excited about the improvement in SSE since I last evaluated it. It is hard to keep up with all the technologies that were previously unusable and may have become usable since I gave up on them.

SSE2 added support for 64-bit floats a very long time ago.

2 Likes