As part of Swift for TensorFlow, we have been working on a set of protocols that more closely resemble mathematical concepts and can, for example, be used to conveniently implement numerical optimizers. While trying to integrate our changes with the standard library, we ran into an issue with Numeric. We have already designed a few protocols (e.g., VectorProtocol and PointwiseMultiplicative) for which we support automatic conformance derivation for aggregate types (e.g., structs containing tensors). We would also like to support deriving Numeric conformances for such aggregates, but the Comparable constraint on Magnitude is causing problems: it is not straightforward what its semantics should be for aggregate types. To that end, we have two main questions:

Would removing the Comparable constraint on Magnitude be acceptable if we can make a good case for it? It seems to be used in very few places across the standard library, so removing it should not require big changes. However, we are not sure what implications it would have for ABI stability.

Similarly, would it be acceptable to remove Magnitude and the related magnitude property from Numeric entirely? It also seems to be used in very few places across the standard library, but we have the same ABI stability concerns here.

Any feedback/discussion around this would be greatly appreciated!

Swift clearly wasn't built for (or by) mathematicians, and people have even requested ignoring established terms of art ([Amendment] SE-0240: Ordered Collection Diffing - #13 by GetSwifty) - but as you are saying you want to closely resemble mathematical concepts, I can hardly understand your request:
Being comparable is the key concept of magnitude. If you remove that, I'd really have to sit down and think for a while why you should have it at all…

That's a good point and that's the reason behind my second question. It's just that this specific constraint is making it hard to define Magnitude for aggregate structures. Ideally we would prefer if we could remove Magnitude altogether from Numeric.

It would help a lot if you shared the details of your use case:
I would expect that there is either a way to make your Magnitude Comparable, or that the base type shouldn't be Numeric in the first place.

A simple example would actually be just a single tensor. Say you have a 2-dimensional tensor (i.e., a matrix) to make this concrete. How do you define magnitude in this case? Should it be the Frobenius norm of the matrix, the max norm, or something else? Namely, what are the semantics of magnitude?
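For concreteness, here is a small sketch (using a hypothetical flat-array representation of a 2x2 matrix; nothing here is an existing API) showing that two perfectly reasonable norms disagree about the same matrix's magnitude:

```swift
// A 2x2 matrix stored in row-major order: [a, b, c, d] = [[a, b], [c, d]].
// (Hypothetical representation, just for illustration.)
let m: [Double] = [3.0, 0.0, 4.0, 0.0]

// Frobenius norm: square root of the sum of squared entries.
let frobenius = m.map { $0 * $0 }.reduce(0, +).squareRoot()  // 5.0

// Max (entrywise infinity) norm: largest absolute entry.
let maxNorm = m.map(abs).max()!  // 4.0

// Both are legitimate norms, but they assign different "magnitudes" to m.
print(frobenius, maxNorm)  // 5.0 4.0
```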

As described in the standard library documentation, "the Numeric protocol provides a suitable basis for arithmetic on scalar values." The semantics of Numeric were not designed for the types you describe to conform to it; the protocol would have been named Number if Foundation didn't have NSNumber.

I suspect magnitude is just the tip of the iceberg here. Going forward, you will find other constraints that do not fit: not just those expressible in code, but also those that the compiler cannot enforce and that are instead documented and relied upon by default implementations, etc.

You're really after an entirely different protocol than what Numeric is. I would urge you to try a different approach where protocols for vectors, etc., are rooted in an entirely separate hierarchy with appropriate associated type constraints for scalars.

A square matrix type with specified dimension (say 4x4 matrices) could conform to Numeric, but matrices, in general, are not a thing that can conform to Numeric--you can't even define a total multiplication on matrices of unspecified dimension.
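To illustrate why fixing the dimension makes multiplication total, here is a minimal sketch of a hypothetical Matrix2x2 type. A full Numeric conformance would additionally require AdditiveArithmetic, ExpressibleByIntegerLiteral, init?(exactly:), and a documented magnitude, all omitted here:

```swift
// A hypothetical fixed-dimension 2x2 matrix type. Because the dimension is
// part of the type, * is total: any two values can be multiplied.
struct Matrix2x2 {
    var a, b, c, d: Double  // [[a, b], [c, d]]

    static func * (l: Matrix2x2, r: Matrix2x2) -> Matrix2x2 {
        Matrix2x2(a: l.a * r.a + l.b * r.c,
                  b: l.a * r.b + l.b * r.d,
                  c: l.c * r.a + l.d * r.c,
                  d: l.c * r.b + l.d * r.d)
    }

    static var identity: Matrix2x2 { Matrix2x2(a: 1, b: 0, c: 0, d: 1) }
}

let m = Matrix2x2(a: 1, b: 2, c: 3, d: 4)
let p = m * Matrix2x2.identity  // equals m
```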

For a square matrix type, any matrix norm would be allowed under the semantics of magnitude. From one view, this makes it underspecified, but from a more abstract view it's completely fine, because all matrix norms are equivalent--they induce the same topology. I would definitely not choose the 2-norm, since it's a pain to compute. Basically any of the other norms would be an appropriate choice, so long as it is documented clearly.
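As a sketch of what "equivalent" means here (hypothetical helpers over a flat array of matrix entries), the cheap entrywise max norm bounds the Frobenius norm on both sides:

```swift
// Hypothetical norm helpers on a flat array of matrix entries.
func maxNorm(_ m: [Double]) -> Double { m.map(abs).max() ?? 0 }
func frobeniusNorm(_ m: [Double]) -> Double {
    m.map { $0 * $0 }.reduce(0, +).squareRoot()
}

// Norm equivalence: for a matrix with n entries,
//   maxNorm(m) <= frobeniusNorm(m) <= sqrt(n) * maxNorm(m),
// so either choice induces the same topology.
let m: [Double] = [1, -2, 3, -4]
let n = Double(m.count)
let equivalent = maxNorm(m) <= frobeniusNorm(m)
    && frobeniusNorm(m) <= n.squareRoot() * maxNorm(m)
print(equivalent)  // true
```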

I don't think that change can be made in an ABI-compatible manner at present. (I would very much like to have the ability to make changes like this, but I don't think we have it now).

Imposing a ring structure on vector spaces is weird. You can always do it pointwise (at least in the finite dimensional case), but that's frequently not the multiplication that you actually want. I don't think that would be appropriate for the Numeric protocol (e.g. someone can make a complex number type that would conform to Numeric, and they definitely do not want pointwise multiplication). It feels like you're trying to force Numeric to be something that it isn't here.
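To see why pointwise multiplication would be the wrong * for such a type, here is a sketch with a hypothetical Complex struct comparing the ring product with the component-wise one:

```swift
// A hypothetical complex number type: the ring multiplication implied by
// Numeric's semantics is NOT the pointwise (component-wise) product.
struct Complex {
    var re, im: Double
}

// True complex multiplication: (a+bi)(c+di) = (ac - bd) + (ad + bc)i.
func complexMul(_ l: Complex, _ r: Complex) -> Complex {
    Complex(re: l.re * r.re - l.im * r.im,
            im: l.re * r.im + l.im * r.re)
}

// Pointwise product of the components: an entirely different operation.
func pointwiseMul(_ l: Complex, _ r: Complex) -> Complex {
    Complex(re: l.re * r.re, im: l.im * r.im)
}

let i = Complex(re: 0, im: 1)
let ring = complexMul(i, i)        // -1 + 0i, since i * i == -1
let pointwise = pointwiseMul(i, i) //  0 + 1i, not what anyone wants for *
```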

I brought this up because SIMD chose * as its elementwise multiplication operator, which indicates that we've defaulted to * for memberwise multiplication.

Would it be acceptable to introduce a separate protocol for pointwise multiplication (PointwiseMultiplicative or something) that requires * for pointwise multiplication? Or should we move to .* instead to avoid additional operator type checking overhead on *?

I think using .* to always mean pointwise multiplication is definitely an option, and that would let you define a DirectProductRing or PointwiseMultiplication or whatever protocol. SIMD uses * because there's no ambiguity for SIMD types; the only multiplication that exists for SIMD vectors is the pointwise one. So for SIMD, I would expect .* to just be another name for *.

For matrices or complex numbers or more general algebras, .* would be a distinct operation from the "special" ring structure *.
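A sketch of what this could look like (all names are illustrative, not an existing standard library API): a custom .* operator plus a protocol requiring it, with a SIMD conformance where .* simply forwards to *:

```swift
infix operator .*: MultiplicationPrecedence

// Illustrative protocol: .* always means pointwise multiplication, leaving
// * free for a type's "special" ring multiplication where one exists.
protocol PointwiseMultiplicative {
    static func .* (lhs: Self, rhs: Self) -> Self
    /// The pointwise multiplicative identity (all components equal to one).
    static var one: Self { get }
}

// For SIMD vectors the only multiplication is the pointwise one,
// so .* is just another name for *.
extension SIMD4: PointwiseMultiplicative where Scalar: FloatingPoint {
    static func .* (lhs: Self, rhs: Self) -> Self { lhs * rhs }
    static var one: Self { Self(repeating: 1) }
}

let v = SIMD4<Double>(1, 2, 3, 4)
let w = v .* SIMD4<Double>.one  // equals v
```

For a matrix or complex type, the same protocol could be adopted with .* as the elementwise product while * keeps the ring multiplication.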