Integer divide could return a tuple

Integer divide uses the same symbol as floating point divide, even though it bears only a passing resemblance to the mathematical divide operation. In particular...

a/2 + b/2 == (a+b)/2

...is approximately true for floating point operands but often not true for integer operands. Also...

(a * b)/c == a * (b/c)

...is approximately true for floating point operands but often not true for integer operands.
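
For instance (my own illustrative numbers; any operands where the divisions truncate will do):

let a = 3, b = 5, c = 2

print(a/2 + b/2, (a + b)/2)    // 3 and 4 -- the halves truncate before they are added
print((a * b)/c, a * (b/c))    // 7 and 6 -- b/c truncates to 2 before the multiply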

Since division is an operation we have all been using since early childhood, we have an instinctive sense of how equations involving division can be manipulated and simplified. It is very unfortunate that the visual similarity between integer division and mathematical division tricks us into incorrectly "simplifying" equations that use integer division.

Furthermore, integer division and the remainder operator are often used with the same operands, which an optimiser may turn into a single division instruction, with the quotient read from one register and the remainder from another. But this assumes that the division and remainder operations are expressed similarly enough for the optimiser to spot the match. If a single operator computed both, coders could ensure that only a single divide instruction was used for the combined divide-and-remainder operation.

I propose that ...

  • % should be the remainder operator, as now.
  • %% should be the integer divide operator, and use of / for integer division should generate a warning.
  • %%% should be a new operator which returns a tuple whose first member (.q) is the integer quotient and whose second member (.r) is the remainder. This would be guaranteed to compile to a single machine-code divide operation. In practice the names of the members would rarely be used, as the typical use would be ...

(myIntQuotient, myIntRemainder) = myIntDividend %%% myIntDivisor
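
For clarity, the intended semantics can be sketched in today's Swift; this is only an illustration of behaviour (hypothetical spelling, made-up values) and says nothing about which instructions get emitted:

infix operator %%% : MultiplicationPrecedence

func %%% (dividend: Int, divisor: Int) -> (q: Int, r: Int) {
    (dividend / divisor, dividend % divisor)
}

let (quotient, remainder) = 17 %%% 5    // quotient == 3, remainder == 2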

1 Like

Are you familiar with the quotientAndRemainder method for integers?

You are free to declare your own operator which calls that method if you like. I might suggest /% as the spelling.
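
For example, something along these lines (just a sketch, with names of my own choosing):

infix operator /% : MultiplicationPrecedence

func /% (dividend: Int, divisor: Int) -> (quotient: Int, remainder: Int) {
    dividend.quotientAndRemainder(dividingBy: divisor)
}

let (q, r) = 17 /% 5    // q == 3, r == 2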

There is no chance that Swift would remove or change the behavior of its existing / and % operators.

11 Likes

Leaving aside the thrust of your proposal, this is not possible. The vast majority of architectures do not have a single machine-code operation that produces quotient and remainder. Many do not even have a division instruction. Even on x86, which is presumably what you are familiar with, this is only possible for word-sized integers. Getting the quotient and remainder of an Int128 division, for example, requires multiple operations.

That said, we already have this operation in the form of quotientAndRemainder(dividingBy:). And even if you don't use it, when you are working with basic types like Int, you can write separate / and % operators and the compiler will fuse them into a single instruction for you in the cases where doing so is possible and advantageous.

6 Likes

I am curious about the fusing bit. :slight_smile:

How does it work? Could you elaborate?

This optimization happens at the target back-end level in LLVM; if the target has a divide that produces both quotient and remainder, it will opportunistically generate that if it sees a [u,s]rem [u,s]div pair with the same operands (roughly: if you see one, look for the other and combine them if you find it). This happens very late in the pipeline, after all the middle-end optimizations have taken place, so it generally doesn't matter too much exactly how the operands were originally spelled; they'll likely be equivalent after optimization if they're really the same.

There is some logic in the middle-end to move div/rem pairs into the same basic block so that the later optimization can fire. See the comments in LLVM's lib/Transforms/Scalar/DivRemPairs.cpp for some details.

(It's worth noting that quotientAndRemainder(dividingBy:) depends on this optimization as well--there's no divrem IR node in LLVM, so the quotientAndRemainder function gets split into separate divide and remainder nodes in IR, and then the optimizer recombines them in the back-end if it judges that to be worthwhile. At the IR level for machine-native integer types, a.quotientAndRemainder(dividingBy: b) is exactly the same as just writing (a/b, a%b).)
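
In other words, for a machine-native Int these two should end up identical after optimization; a sketch you can compare yourself, e.g. by inspecting the generated assembly:

func viaOperators(_ a: Int, _ b: Int) -> (quotient: Int, remainder: Int) {
    (quotient: a / b, remainder: a % b)
}

func viaMethod(_ a: Int, _ b: Int) -> (quotient: Int, remainder: Int) {
    a.quotientAndRemainder(dividingBy: b)
}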

17 Likes

Thanks to everyone who replied. I was not familiar with quotientAndRemainder, thank you very much. I have also realised that I can add my own method to Int...

extension Int
{
    func div(_ divisor: Int) -> Int
    {
        self / divisor
    }
}

I think this will compile to the same code as just using / .

However, any language that allows...

let PI_APPROX = 22/7

and gives it the value 3 with no warning is just WRONG.

22 / 7 results in 3 in almost every programming language. This is just how integers work on computers.

If you want the fractional result, you need to use a floating point literal.

let pi = 355.0 / 113 // best rational approximation of pi
print(pi)
5 Likes

4.0 * atan (1.0) looks better. :slight_smile:

import Foundation

let pi_best = 355.0 / 113 // best rational approximation of pi
let pi_atan = 4.0 * atan (1.0)

print (pi_best)
print (pi_atan)
3.1415929203539825
3.141592653589793

2 Likes

If you're using trig anyways, might as well spell it Double.pi.

6 Likes

Thank you.

import Foundation

let pi_best = 355.0 / 113 // best rational approximation of pi
let pi_atan = 4.0 * atan (1.0)

print (pi_best)
print (pi_atan)
print (Double.pi)
3.1415929203539825
3.141592653589793
3.141592653589793

I've since figured out that you can actually stop "integer /" from compiling and replace it with a two parameter function. I don't understand how this can work... if the overload disables "/" from working in the rest of my program, why is the "/" in the iDiv able to work?

@inline(__always) func iDiv(_ left: Int, _ right: Int) -> Int
{
    left / right
}

extension Int
{
    static func / (left: Int, right: Int) {
        // doesn't get called, just stops compiler from using inbuilt operation
    }
}

Please enclose the code fragments in triple back-ticks for readability.

In your static func / you are not returning Int:

static func / (left: Int, right: Int) {
    // doesn't get called, just stops compiler from using inbuilt operation
}

The following "works" best for me (to disable built-in division op by making it ambiguous):

// global, not static
func / (left: Int, right: Int) -> Int {
    fatalError()
}
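
With that in scope, plain integer division no longer compiles. A sketch (the failing line is commented out so the snippet itself still builds, and the exact diagnostic wording may differ between compiler versions):

// let x = 6 / 3    // expected error: ambiguous use of operator '/'
let y = 6.quotientAndRemainder(dividingBy: 3)    // still compiles: (quotient: 2, remainder: 0)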

I do remember some old languages rigorously required "div/rem" for integer division.


Edit: updated the quoted example above.

Swift now allows single-expression functions to omit the return keyword, just like single-expression closure bodies and getters.
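
For example (a trivial illustration):

func halfExplicit(_ x: Int) -> Int { return x / 2 }
func halfImplicit(_ x: Int) -> Int { x / 2 }    // return keyword omitted, same behaviour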

The return arrow (-> Int) was missing.

1 Like

Quick note about why this comparison works, because it is actually not that obvious.

Converting a float to a string is a very complicated topic. Years of work went into this (example: github.com/ulfjack/ryu). The goal is to find the representation that converts back to the initial value. From IEEE 754 §5.12 (this is the standard that defines float and double):

Implementations shall provide conversions between each supported binary format and external decimal character sequences such that, under roundTiesToEven, conversion from the supported format to external decimal character sequence and back recovers the original floating-point representation, except that a signaling NaN might be converted to a quiet NaN.

The actual goal is to find the shortest representation that obeys this rule. For example:

  • we are converting "1.1" and our floating-point system has 1.1 -> exact conversion, everything is fine
  • we are converting "1.1" and our floating-point system has 1.0 and 1.2, but no 1.1. Ideally the user should provide more digits. In this case the floating-point library can:
    • abort
    • return the closest value under the current rounding mode

In some numerical contexts you will see people using hexadecimal strings to specify float literals. This side-steps the problem entirely.
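
Swift supports this directly with hexadecimal floating-point literals. A quick sketch (the literal below is intended to be the exact bits of Double.pi):

let hexPi = 0x1.921fb54442d18p1    // hex significand, power-of-two exponent
print(hexPi == Double.pi)          // true -- no decimal-to-binary rounding involved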

Anyway, comparing floats using their string representation is correct in this case, as we are looking for equality. If the values were different, then their string representations would have to be different. We all knew this, but it is nice to know that this is guaranteed.

print(Double.pi.nextDown) // 3.1415926535897927
print(Double.pi)          // 3.141592653589793
print(Double.pi.nextUp)   // 3.1415926535897936
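
The round-trip guarantee itself is easy to check (a small sketch):

let s = String(Double.pi)          // "3.141592653589793", the shortest round-tripping form
print(Double(s)! == Double.pi)     // true -- parsing the printed digits recovers the same bits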

You can also compare them directly (== operator), or use bitPattern:

print(pi_best == Double.pi) // false
print(pi_atan == Double.pi) // true
print(pi_best.bitPattern == Double.pi.bitPattern) // false
print(pi_atan.bitPattern == Double.pi.bitPattern) // true

Note that the bitPattern method depends on the floating-point library. For example IEEE 754, which specifies double and float, also specifies a wide range of decimal floating-point formats. You are not allowed to use decimal.bitPattern to test for equality, as the same value may be specified in a few different ways: for example 10 = 10 * 10^0, but also 10 = 1 * 10^1.

How far off is 355.0 / 113?

People have tried to approximate :pie: for ages. I guess this is where 355.0 / 113 comes from. How bad is it?

assert(pi_atan < pi_best)

var d = pi_atan
var counter = 0

while d != pi_best {
  d = d.nextUp
  counter += 1
}

print(counter)                                 // 600699552
print(pi_best.bitPattern - pi_atan.bitPattern) // 600699552, you can do this!

355.0 / 113 was invented/discovered by Chinese mathematician and astronomer Zu Chongzhi in the 5th century. Wikipedia states that:

355/113 is the best rational approximation of π with a denominator of four digits or fewer, being accurate to six decimal places.

@Nobody1707 used it because it is cute and short. But it is not that precise.

Note that the subtraction trick is not guaranteed to work in every floating point library, but it will work on IEEE 754 binary floating points.

Is Double.pi even correct?

Kind of. Maybe. I guess it is "ok".

:pie: is an irrational number, meaning that it cannot be expressed as a ratio of two integers, for example 22/7. This makes it difficult to represent in a floating point system.

In reality :pie: sits between 2 representable numbers and the library authors choose which one to take. Double.pi documentation says (emphasis mine):

This value is rounded toward zero to keep user computations with angles from inadvertently ending up in the wrong quadrant. A type that conforms to the FloatingPoint protocol provides the value for pi at its best possible precision.

print("pi", Double.pi)               // 3.141592653589793
//                    The actual value: 3.1415926535897932384…
print("pi.nextUp", Double.pi.nextUp) // 3.1415926535897936

Is this correct? IDK.

Step count

When talking about 355.0 / 113 we calculated the number of "steps" needed to get from the calculated result to the "correct" value. This is actually done in real life: any reputable math library will state how accurate its results are.

For example, IEEE 754 requires all of the basic operations (+ - * /, remainder, fma) to calculate the infinitely precise result and then round it using the current rounding mode. This way the result requires 0 steps to get to the correct value, which makes it as close as it can be.

For some more complicated functions it may be computationally infeasible to calculate the correctly rounded result. In such cases lookup tables are used, or the library widens the margin of error, for example by stating that all of the results are correct within 2 steps (as in 2 calls to next[Up/Down]). The lingo may be a bit different, but it boils down to this.

The reason "steps" are used is the distribution of the exactly representable numbers; I don't really want to go into that here. Please never, ever use a hard-coded epsilon to compare floating-point values.
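
A quick sketch of why: the gap between neighbouring doubles grows with their magnitude, so a fixed epsilon is simultaneously too lax for small numbers and unsatisfiable for large ones.

print((1e-10).ulp)    // ≈ 1.3e-26 -- a hard-coded 1e-9 would treat wildly different values as "equal"
print(1.0.ulp)        // ≈ 2.2e-16
print((1e20).ulp)     // 16384.0  -- a hard-coded 1e-9 can never be met up here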

3 Likes

…gotta be careful if the signs are different though:

let b = 0.0
let a = b.nextDown
let d1 = b.bitPattern - a.bitPattern    // crash
let d2 = b.bitPattern &- a.bitPattern   // (1 << 63) - 1

We can find the error in .pi, to double-precision, like so:

import Foundation

let πMinusDotPi: Double = sin(.pi)     // π - .pi
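
Printing it shows how small the gap is (a sketch; the exact digits depend on the platform's sin implementation):

print(πMinusDotPi)    // ≈ 1.2246467991473532e-16, a fraction of .pi's ulp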