Determinants seem to be wrong with simd_double4x4.determinant

I've been trying to implement some 3D geometry from this paper in Swift. Long story short, it requires calculating the signs of determinants of 4x4 double-precision matrices. I've tried using simd_double4x4.determinant for this, to no avail: it returns wrong values.

For example, the following expression (the numbers may look intimidating, but they all fit into a Double exactly):

simd_double4x4(
    simd_double4(10732176, 0, 0, -17579304288),
    simd_double4(0, 0, 112710780, -184620257640),
    simd_double4(-107344692, -112710780, -112710780, 175830605496),
    simd_double4(0, 0, 10732176, -17579304288)
).determinant

returns roughly 3.687e+17, but it should be 0.

This happens on both Swift 5.7.1 and the latest main trunk (DEVELOPMENT-SNAPSHOT-2023-02-23), on both macOS 13.2.1 and iOS 16.2 on iPhone. The same thing happens with simd_determinant, and it seems to affect other geometric predicates that could be implemented internally through a determinant (like here).

  1. Is it really wrong, or am I missing something?
  2. Does simd_double4x4.determinant guarantee precision to the ULP?
  3. Are there any good alternatives for calculating the determinant of a 4x4 double matrix with sign-exact precision in Swift? Ideally this would also be adaptive, as I'd imagine calculating the 4x4 determinant exactly would be quite expensive, and I'm only interested in its sign.

Sorry if this is outside of the scope of this forum.
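The "fit into Doubles exactly" claim can be checked independently: every entry is an integer well inside the 2^53 range that a Double represents exactly, so arbitrary-precision integer arithmetic gives the true determinant. A quick check (Python here, purely for illustration):

```python
# Columns of the matrix from the post; all entries are integers,
# so Python's arbitrary-precision ints give the exact determinant.
cols = [
    [10732176, 0, 0, -17579304288],
    [0, 0, 112710780, -184620257640],
    [-107344692, -112710780, -112710780, 175830605496],
    [0, 0, 10732176, -17579304288],
]

def det(m):
    # Laplace expansion along the first row; exact over the ints.
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

# zip(*cols) turns the column list into matrix rows; the result is
# the same either way, since det(A) == det(A^T).
rows = [list(r) for r in zip(*cols)]
print(det(rows))  # 0
```

So the exact determinant really is zero, and whatever simd_double4x4.determinant returns beyond that is rounding error.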

The mantissas of the intermediate calculations overflow, hence the result. Try using smaller numbers: e.g. premultiply all matrix components by 1e-3, calculate the determinant, and then postmultiply the result by 1e+12 (a 4x4 determinant scales as the fourth power of the component scale). Good question: could / should determinant do such an adjustment itself?


Thank you for the answer, but unfortunately I can't do that. I can change the expected range of my values with no problem, but I cannot make them less precise, and multiplying by 1e-3, which is not a power of 2, would do exactly that. I need to preserve every bit of information in the mantissas, so that I can reliably say whether a given matrix's determinant is greater than, less than, or equal to 0.

The paper you quoted suggests storing the plane coefficients in single precision and using double precision for intermediate results.


@tera's suggestion to scale by 1000 doesn't work either, but scaling by 1000000 does:

import simd

let a = simd_double4(10.732176, 0, 0, -17579.304288)
let b = simd_double4(0, 0, 112.710780, -184620.257640)
let c = simd_double4(-107.344692, -112.710780, -112.710780, 175830.605496)
let d = simd_double4(0, 0, 10.732176, -17579.304288)

let m = simd_double4x4(a, b, c, d)

print(m.determinant) // prints 0.0

Then just limit them to be less than, say, a million.


Is there a guarantee though? float.max * float.max * 2 would still overflow. AFAIR, the determinant calculation for a 4x4 matrix involves products like a*b*c*d.

Interestingly it works just fine in this form:

let n = 1e-3
let a = simd_double4(10732176.0 * n, 0 * n, 0 * n, -17579304288.0 * n)
let b = simd_double4(0 * n, 0 * n, 112710780.0 * n, -184620257640.0 * n)
let c = simd_double4(-107344692.0 * n, -112710780.0 * n, -112710780.0 * n, 175830605496.0 * n)
let d = simd_double4(0 * n, 0 * n, 10732176.0 * n, -17579304288.0 * n)

Even with n = 0.2

This question is out of scope for this forum, as it is about Apple's SDKs rather than Swift. For follow-up, please either use Feedback Assistant or post on the developer forums under the [simd] tag.


  1. It's not "wrong"; simd's determinant operation uses the naive formulas, which are backwards-stable (i.e. suitable for most numerical use), but are not suitable for exact geometry primitives, as you have observed, because the sign of the determinant is numerically ill-conditioned.
  2. No; that's prohibitively expensive and not appropriate for most uses of the determinant.
  3. For boolean orientation tests, you don't actually need or want determinants; you need only to know the signum of the determinant. The simd headers provide exact implementations of these derived from Jonathan Shewchuk's work, which you should use instead. There is more extensive documentation available in the simd/geometry.h header (see simd_orient, simd_incircle, and simd_insphere, which are defined for 2- and 3-dimensional vectors of floats and doubles). E.g.:
/*! @abstract Test the orientation of three 3d vectors.
 *  @param __x The first vector.
 *  @param __y The second vector.
 *  @param __z The third vector.
 *  @result Positive if (x, y, z) are positively oriented, zero if they
 *  are coplanar, and negative if they are negatively oriented.
 *  @discussion For three-dimensional vectors, "positively oriented" is
 *  equivalent to the ordering (x, y, z) following the "right hand rule",
 *  or to the dot product of z with the cross product of x and y being
 *  positive.                                                                 */
static float SIMD_CFUNC simd_orient(simd_float3 __x, simd_float3 __y, simd_float3 __z);

Note that rescaling does not change this; the sign of the determinant is ill-conditioned, and these "errors" are not the result of overflow or underflow. Only using exact arithmetic or other numerical techniques (as used by the orientation primitives) can resolve the issue.
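To make the "exact arithmetic" point concrete: any finite Double converts exactly to a rational, so the sign of an orientation determinant can be computed exactly, if slowly. A non-adaptive sketch of the idea (Python's Fraction standing in for exact arithmetic; this illustrates the guarantee, not simd's actual adaptive implementation):

```python
from fractions import Fraction

def orient3d_exact_sign(x, y, z):
    """Exact sign of dot(z, cross(x, y)) for 3-vectors of floats.

    Fraction(f) is exact for any finite float, so every operation
    below is exact rational arithmetic; only the sign is returned.
    """
    a = [Fraction(v) for v in x]
    b = [Fraction(v) for v in y]
    c = [Fraction(v) for v in z]
    d = (c[0] * (a[1] * b[2] - a[2] * b[1])
         - c[1] * (a[0] * b[2] - a[2] * b[0])
         + c[2] * (a[0] * b[1] - a[1] * b[0]))
    return (d > 0) - (d < 0)

print(orient3d_exact_sign((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1
```

An adaptive predicate like Shewchuk's returns the same answer but only falls back to the expensive exact computation when a cheap floating-point filter cannot certify the sign.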


It seems that the OP is aware of the simd_orient function, but that has its own problems:

Copied into a Feedback Assistant report (@kunitsyn I hope you don't mind?)


Thanks for the report!

Interestingly, if you calculate the determinant manually, there's no issue whatsoever with this method:

import simd

typealias Num = Double
typealias V2 = simd_double2
typealias V3 = simd_double3
typealias V4 = simd_double4

typealias M22 = simd_double2x2
typealias M33 = simd_double3x3
typealias M44 = simd_double4x4

extension M22 {
    var det: Num {
        let c = columns
        return c.0.x * c.1.y - c.0.y * c.1.x
    }
}

extension M33 {
    var det: Num {
        let c = columns
        return c.0.x * M22(V2(c.1.y, c.1.z), V2(c.2.y, c.2.z)).det -
        c.1.x * M22(V2(c.0.y, c.0.z), V2(c.2.y, c.2.z)).det +
        c.2.x * M22(V2(c.0.y, c.0.z), V2(c.1.y, c.1.z)).det
    }
}

extension M44 {
    var det: Num {
        let c = columns
        return c.0.x * M33(V3(c.1.y, c.1.z, c.1.w), V3(c.2.y, c.2.z, c.2.w), V3(c.3.y, c.3.z, c.3.w)).det -
        c.1.x * M33(V3(c.0.y, c.0.z, c.0.w), V3(c.2.y, c.2.z, c.2.w), V3(c.3.y, c.3.z, c.3.w)).det +
        c.2.x * M33(V3(c.0.y, c.0.z, c.0.w), V3(c.1.y, c.1.z, c.1.w), V3(c.3.y, c.3.z, c.3.w)).det -
        c.3.x * M33(V3(c.0.y, c.0.z, c.0.w), V3(c.1.y, c.1.z, c.1.w), V3(c.2.y, c.2.z, c.2.w)).det
    }
}

let x = M44(
    V4(+10732176.0,      +0.0,              0.0,               -17579304288.0),
    V4(+0.0,             +0.0,              +112710780.0,      -184620257640.0),
    V4(-107344692.0,     -112710780.0,      -112710780.0,      +175830605496.0),
    V4(+0.0,             +0.0,              +10732176.0,       -17579304288.0)
)

print(x.det) // 0.0

Perhaps the built-in determinant uses a different method (e.g. one that uses division and suffers when the numerator and denominator are both close to zero).

No, that's not what's happening here. Because the computation is numerically unstable, the specific order of rounding errors introduced in the computation matters quite a bit; reordering it slightly will make it "correct" or not by dumb luck.

For that matter, determinants never involve divisions, and dividing values close to zero cannot cause a loss of accuracy. Computing values very close to zero can, but the values would have to be quite a bit smaller than anything encountered in this computation.
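Both halves of that statement are easy to demonstrate concretely (a quick Python illustration, outside Swift):

```python
# Division of two tiny doubles is correctly rounded; here it is
# even exact, because 3 * 2**-900, 2**-900, and their true ratio
# 3.0 are all representable doubles.
tiny = 2.0 ** -900
print((3 * tiny) / tiny)  # 3.0

# Accuracy is lost when a small value is *computed* from large
# ones: 2**53 + 1 is not representable, so the 1 is absorbed
# (rounded away) before the subtraction ever happens.
print((2.0 ** 53 + 1) - 2.0 ** 53)  # 0.0, not 1.0
```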

This turns out to be a very simple bug in the <simd/geometry.h> header, which can be worked around as follows:

import simd

func simd_orient_workaround12015022(
  _ a: simd_double3,
  _ b: simd_double3,
  _ c: simd_double3,
  _ d: simd_double3
) -> Double {
  // Pack the four vectors contiguously and hand them to the
  // library's exact implementation directly, bypassing the buggy
  // header shim.
  let buf = simd_double4x3(a,b,c,d)
  return withUnsafePointer(to: buf) {
    $0.withMemoryRebound(to: Double.self, capacity: 16) {
      _simd_orient_vd3($0)
    }
  }
}
Note that the above manual calculation is equivalent to:

a = 10732176.0
b = 112710780.0
c = -17579304288.0
d = 175830605496.0
e = -184620257640.0
f = -107344692.0

det = (((a*(0*(-b*c-d*a)+(b*(b*c-e*a)))-(0*((0*(-b*c-d*a))+b*(0*c-c*a))))+(f*((0*(b*c+e*a))-0*(0*c-c*a))))-(0*(0*(b*d-e*b)-0*(0*d+c*b))))

(Note that only +, -, and * are used; no division.)

Here's the tree in a hierarchical form with no simplifications:

			a*(b*(b*c - e*a) - 0*(b*c + d*a))
			0*(b*(0*c - c*a) - 0*(b*c + d*a))
		f*(0*(b*c + e*a) - 0*(0*c - c*a))
			0*(b*d - e*b) - 0*(0*d + c*b)

Dropping the subexpressions that are multiplied by zero leaves:

		(a*(b*(b*c - e*a) - 0) - 0)
		f*(0 - 0)
	0*(0 - 0)

which collapses to just:

a*(b*(b*c - e*a))

The internal factor in parens is equivalent to:

184620257640.0 * 10732176.0 - 112710780.0 * 17579304288.0 → 0

and while it is "unstable" with regard to whether it would give +0, -0, or possibly some very small number, the overall result can never become anything like the 3.687e+17 seen in the original code.
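That inner cancellation can be confirmed exactly: the two products are literally the same integer (checked here in Python, purely for illustration):

```python
p1 = 184620257640 * 10732176
p2 = 112710780 * 17579304288
# Both products factor as 112710780 * 10732176 * 1638, which is
# why the inner factor b*c - e*a is exactly zero.
print(p1 - p2)  # 0
assert p1 == 112710780 * 10732176 * 1638 == p2
```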

The relevant part of the actual computation done by .determinant here ends up being:

x[0][0] (x[1][2] (x[2][3] x[3][1] - x[2][1] x[3][3]) +
         x[1][3] (x[2][1] x[3][2] - x[2][2] x[3][1]))

Substituting values and partially evaluating gives:

10732176 * (112710780 * (112710780 * -17579304288) - 184620257640 * (-112710780 * 10732176))

If evaluated exactly, this cancels to zero. But the scaled terms involved in the cancellation have magnitude of about 2.34E33, so we should expect rounding errors to be of the scale ulpOfOne * 2.34E33, which is 5.32E17, which is exactly in line with the observed error; this is a "very small number" relative to the scale of the problem, as expected.
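That estimate can be checked directly by evaluating the same expression once over exact integers and once in floating point (Python doubles standing in for Swift's Double, for illustration):

```python
# Exact: Python ints. The two ~2.34e33-magnitude terms cancel.
exact = 10732176 * (112710780 * (112710780 * -17579304288)
                    - 184620257640 * (-112710780 * 10732176))
print(exact)  # 0

# Double precision: each intermediate product is rounded, so the
# cancellation can leave a residual bounded by a few ulps at the
# ~2.34e33 scale, i.e. something on the order of 1e17-1e18.
approx = 10732176.0 * (112710780.0 * (112710780.0 * -17579304288.0)
                       - 184620257640.0 * (-112710780.0 * 10732176.0))
print(abs(approx) < 1e19)  # True
```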


Not directly. One way to calculate the determinant is to transform the matrix into diagonal form, which involves division, and then multiply the diagonal elements.

You are right.

Ok, true. (You would generally reduce to triangular form, not diagonal, as it's much, much cheaper, but no one does either one for 4x4 determinants.)
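Tangentially, for matrices with exactly-representable integer entries like the one in this thread, triangular reduction can be carried out exactly over the integers using Bareiss' fraction-free elimination, where every division performed is exact. A Python sketch, purely as an illustration of the technique (not anything simd does):

```python
def bareiss_det(m):
    """Exact determinant of an integer matrix via Bareiss'
    fraction-free elimination: reduce to triangular form while
    keeping every intermediate an exact integer."""
    m = [row[:] for row in m]
    n = len(m)
    sign, prev = 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:
            # Pivot on a nonzero entry below; a fully zero
            # column means the determinant is zero.
            for i in range(k + 1, n):
                if m[i][k] != 0:
                    m[k], m[i] = m[i], m[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss guarantees this division is exact.
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[n - 1][n - 1]

# Rows of the matrix from this thread (transpose of the
# column-wise simd_double4x4 initializer).
rows = [
    [10732176, 0, -107344692, 0],
    [0, 0, -112710780, 0],
    [0, 112710780, -112710780, 10732176],
    [-17579304288, -184620257640, 175830605496, -17579304288],
]
print(bareiss_det(rows))  # 0
```

This is exact but not adaptive; the exact predicates in simd/geometry.h avoid this cost in the common case by only escalating precision when needed.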

And the Float variant of simd_orient does not have the same problem, right?


Following up here: the bug that I gave a workaround for above is fixed in macOS 14 / iOS 17 and the rest of the newly released SDK updates. Note that the change needed was in the header rather than the library, so simply recompiling with the new SDK will result in code that works correctly on older OSes as well.