Generic code that gives the absolute value for integers and reals, and the Euclidean norm for `Complex`

Hello, I'm new here. I just finished my first project using Swift Numerics: a tutorial about plotting the Mandelbrot set and using it to compute π very inefficiently, as reported by Dave Boll in this 1991 post to sci.math. There's also a detour into IEEE 754 floating-point formats, but I won't give spoilers here. :grin:

I realized quite late in writing the second part that using the `magnitude` property from `Numeric` wasn't the right way to check the boundedness of `f` for complex `z`. As the documentation (which I read, but evidently didn't absorb :flushed:) clearly points out in multiple places, for `Complex`, `magnitude` is the ∞-norm. Of course, what I needed for M set boundedness checking is the Euclidean norm, which is available in `length`.
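To make the difference concrete, here's a tiny sketch (assuming swift-numerics is available via `import ComplexModule`; the point `3 + 4i` is just an illustrative choice, not from the tutorial):

```swift
import ComplexModule  // from the swift-numerics package

// For z = 3 + 4i:
//   magnitude is the ∞-norm:       max(|re|, |im|) = 4
//   length is the Euclidean norm:  sqrt(re² + im²) = 5
let z = Complex(3.0, 4.0)
print(z.magnitude)  // 4.0
print(z.length)     // 5.0
```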

I didn't notice this for a while because the ∞-norm gives nearly the same plots of the M set and approximations of the digits of π; it just takes a few more iterations for `f` to blow up when `z` has non-zero real and imaginary components. But exact iteration counts are important for reproducing Boll's results.

I struggled to figure out how to write code that is generic over integers, reals, and complex numbers, using `magnitude` for integers and reals and `length` for complex. What I ended up doing was writing a protocol `Lengthy` (sorry) that requires a `length` property, with a default implementation that returns `magnitude`. Then the standard integer and floating-point types pick up the default implementation, `Complex` satisfies the requirement with its own `length` property, and `length` is the property I use in the generic code. In other words,

```swift
protocol Lengthy: Numeric {
    var length: Magnitude { get }
}

extension Lengthy {
    var length: Magnitude {
        magnitude
    }
}

extension Int: Lengthy {}
extension Double: Lengthy {}
extension Float80: Lengthy {}
extension ComplexModule.Complex: Lengthy {}
```
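As a self-contained illustration of how the generic code then reads (repeating the protocol, dropping the `Float80` and `Complex` conformances for portability, and inventing an `escapeCount` helper that isn't from the tutorial):

```swift
// The Lengthy protocol from above, restated so this sketch compiles on its own.
protocol Lengthy: Numeric {
    var length: Magnitude { get }
}

extension Lengthy {
    var length: Magnitude { magnitude }  // default: |x| for integers and reals
}

extension Int: Lengthy {}
extension Double: Lengthy {}

// Hypothetical helper, not from the tutorial: iterate z = z*z + c and
// return the iteration index at which z.length first exceeds the
// threshold, or nil if z stays bounded for the whole budget.
func escapeCount<T: Lengthy>(c: T, maxIterations: Int, threshold: T.Magnitude) -> Int? {
    var z = T.zero
    for i in 0..<maxIterations {
        z = z * z + c
        if z.length > threshold { return i }
    }
    return nil
}
```

With `ComplexModule` imported and `extension Complex: Lengthy {}` declared, the very same function uses the Euclidean norm for complex `c`, which is the point of the exercise.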

I haven't decided whether this is kludgy or not, and I'm interested to hear what more experienced Swift Numerics / Standard Library programmers think.

I'd also appreciate hearing any other suggestions for improvement on my tutorial.

Thanks!


You don't need the Euclidean norm to generate the Mandelbrot set: all norms on a finite-dimensional vector space are equivalent, so a sequence blows up in the Euclidean norm if and only if it blows up in the max norm as well. (That is, it doesn't generate "nearly the same plots". Unless you have a bug in your implementation, it generates exactly the same plots.)
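The equivalence can be made concrete in the plane (a small check I've added, not from the post): for any z = (re, im), max(|re|, |im|) ≤ √(re² + im²) ≤ √2 · max(|re|, |im|), so the two norms are squeezed within a constant factor of each other and either both stay bounded or both blow up.

```swift
import Foundation

// Max (∞-) norm and Euclidean norm of a point in the plane.
func maxNorm(_ re: Double, _ im: Double) -> Double { max(abs(re), abs(im)) }
func euclideanNorm(_ re: Double, _ im: Double) -> Double { (re * re + im * im).squareRoot() }

// Spot-check the sandwich inequality on a few sample points,
// including Boll's c = (-0.75, 1e-7).
for (re, im) in [(3.0, 4.0), (-0.75, 1e-7), (1.0, 1.0)] {
    let m = maxNorm(re, im)
    let e = euclideanNorm(re, im)
    assert(m <= e && e <= 2.0.squareRoot() * m)
}
```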

This may not actually matter for computing digits of π, either, because requiring a few more or fewer iterations to blow up only changes the low-order digits of the iteration count, while the approximation to π is in the high-order digits (it would matter if the values required, say, 10% more iterations). I haven't really studied the method in question, however, so I can't tell you off the top of my head precisely what will happen here.

edit
After thinking about it for a minute: the iteration counts are necessarily within O(1) of each other, so it also has no effect on the computation of the digits of π. You'll see slightly different values, but the approximations must converge at the same rate, and for any reasonable implementation should only vary in the final digit.


All that said, if you really want a Euclidean length defined on anything conforming to Numeric, what you have is as reasonable a definition as anything, so long as you're clear that the default implementation will be incorrect for some third-party types that may conform.


Thank you for the correction and the education on the max norm. I'm interested in mathematics, but not trained much past undergraduate-level courses, and I hadn't learned about the max norm before reading the Swift Numerics docs.

I did see a handful of plot points change when I switched implementations. Checking the diffs, it looks like they were all points that belonged to the set when using the max norm, which then disappeared when I used the Euclidean norm. I saw these differences only on the set border, and only when zoomed in to some degree. Also, I performed only 30 iterations. That all seems consistent with it taking a few more iterations for `f` to blow up using the max norm.

As you say, the differences I saw in the iteration counts approximating π were all ±1 in the least significant digit. But I wanted to see if I could duplicate Boll's results exactly, and with the Euclidean norm I did, with the exception of getting 31415927 instead of 31415928 in the case of c = (-0.75, 1e-07).


This is yet another reminder that I should always stop and think twice about what I'm saying whenever I write "of course". :man_facepalming: