 # Check if a vector is normalized

Hello,

I am trying to add a precondition that a `SIMD3<Float>` be normalized.

What would be the best approach here?

I could try:

`(1 - length(vector)).isZero`

But I think this will be problematic if there is any amount of floating-point error. I could also check `isAlmostZero` with some amount of tolerance, which would probably be acceptable. Should I just pull in Swift Numerics and check approximate equality of the length with one?
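For concreteness, a minimal sketch of a tolerance-based check, using only the standard library's `SIMD3` (the helper name and the default tolerance here are assumptions, not an established API):

```swift
// Hypothetical helper: checks whether a vector's length is within
// `tolerance` of 1. (v * v).sum() is the squared length.
func isApproximatelyNormalized(_ v: SIMD3<Float>, tolerance: Float = 1e-5) -> Bool {
    abs((v * v).sum().squareRoot() - 1) <= tolerance
}

// Usage in a precondition:
// precondition(isApproximatelyNormalized(direction), "direction must be normalized")
```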

How are the vectors produced? What are you going to do with them (i.e., what is the precondition really checking for)?

It's the direction of a ray, but I don't necessarily want to normalize in the initializer, since conceivably one could provide an already-normalized direction. So ideally I could use a precondition to check whether the provided direction is normalized.

It has to be normalized because there is functionality that assumes the direction is normalized (the parametric ray equation, for example).

Edit: Usually the directions are normalized when they are created, but it's possible that I could accidentally use a non-normalized direction and then be confused by the bugs I would be seeing. So really the precondition is just to help my future self.

As @scanon said, to compute the value of the tolerance you need to understand how error affects the rest of your system. For example, suppose you use the parametric ray equation to compute points in a world with coordinates in the range of ±10000 and a required precision of 0.001. Then the maximum value of the ray-equation parameter is 20000√3 (the length of the diagonal of the 3D cube), and if the length of the ray direction vector is in the range 1 ± 0.001/20000√3, you can be sure that the coordinates of the points have the required precision.
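Plugging in the numbers from that example (the values are illustrative, not a recommendation):

```swift
// World coordinates span ±10000 with a required precision of 0.001.
let worldHalfExtent: Float = 10_000
let precision: Float = 0.001

// Maximum ray parameter: the diagonal of the 20000-per-side cube.
let maxParameter = 2 * worldHalfExtent * Float(3).squareRoot() // ≈ 34641

// The direction's length must lie in 1 ± lengthTolerance.
let lengthTolerance = precision / maxParameter // ≈ 2.9e-8
```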

Another approach would be to use a separate type for a normalized vector, such that an instance of that type can be obtained only by normalizing a vector. Then you can verify at compile time whether a vector is normalized or not, somewhat similar to optional types.
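A sketch of that wrapper-type idea (the type and member names are made up for illustration):

```swift
// A vector that is unit length by construction: the only way to obtain
// a value is through the normalizing initializer, so any code receiving
// a UnitVector3 can assume it is normalized.
struct UnitVector3 {
    let components: SIMD3<Float>

    init?(normalizing v: SIMD3<Float>) {
        let length = (v * v).sum().squareRoot()
        guard length > 0 else { return nil } // reject the zero vector
        components = v / length
    }
}
```

A `Ray` whose stored direction is a `UnitVector3` then carries the "already normalized" guarantee in its type rather than in a runtime precondition.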


@Nickolas_Pohilets

Yes, thanks for the input. I can tolerate the floating-point error in the system; the real purpose of this check would simply be to avoid doing the work to normalize something that has already been normalized.

A normalized vector type seems like a better solution than the precondition, thanks!

You can also go one step short of a full type and just provide some overloads:

```swift
import simd

func doSomething(withVector v: SIMD3<Float>) {
    doSomething(withNormalizedVector: normalize(v))
}

func doSomething(withNormalizedVector v: SIMD3<Float>) {
    // Implementation assumes v is unit length; perhaps add a debug
    // assertion with a tolerance here.
}
```

I find this solution especially nice when I can keep the "assumes normalized" version an internal implementation detail.

You might find normalising them all is faster, unless the overhead of writing them all back is an issue. By the time you've calculated the length, you've done most of the work of normalising, and the additional divide-by-the-length is probably quicker than testing and branching for each vector.

Note, in particular, that division is pipelined on recent Intel and Apple Silicon CPUs; it's still fairly high latency (10+ cycles), but if you're normalizing a bunch of vectors, throughput becomes more important than latency and just doing the division is pretty cheap in that context.
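The "just normalize everything" approach might look like this in-place batch pass (the function name is made up; the loop is one divide per vector with no branch):

```swift
// Sketch: normalize every direction up front instead of checking each one.
// Because division is pipelined, throughput dominates when processing many
// vectors, so the unconditional divide is cheap.
func normalizeAll(_ vectors: inout [SIMD3<Float>]) {
    for i in vectors.indices {
        let length = (vectors[i] * vectors[i]).sum().squareRoot()
        vectors[i] /= length // one divide per vector; no test-and-branch
    }
}
```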

Good idea, I'll probably mock up both solutions to see which I like better. Any obvious pitfalls to introducing a type?

It's just a lot of duplication, unless you can keep its use very focused.
