Swift: check increments of 10 in 100 and apply formula

I would like to automatically check a value against a default in order to change another. At the moment I can check for anything less than 10 and set a value, but after that I would like to update the previous value by multiplying it by 2 for each increment of ten, up to 100:

func calculate(for balm: balm, in inventory: Dictionary<balm, Double>, fromOriginal: Double) -> Double {
    var returnValue: Double = 0
    let interval = 10

    for (checkKey, iterValue) in inventory {
        if checkKey == balm {
            if iterValue < 10 {
                // default case, handled elsewhere
            } else {
                for decaLeap in stride(from: 11, to: 100, by: interval) {
                    if iterValue == Double(decaLeap) {
                        returnValue = fromOriginal * 2
                    }
                }
            }
        }
    }
    return returnValue
}

Presently the first loop check is easy enough, but the stride function (which I have never used before) outputs 22, 32, 42, …, and that is where I have the problem, as I would like anything between 11 and 20 to multiply the previous value by two.

However, if the number is 30, I don't want the loop to stop at 20 just because it found a match there, but I'm unsure how that works.

e.g.

<10 = default
11-20 = default * 2 (aka "part2")
21-30 = part2 * 2 (aka "part3")
31-40 = part3 * 2 ..

I am not sure if stride is the best thing to use, but I didn't want a whole load of if statements :slight_smile:

Many thanks!

I don’t know what you mean by “of 100 batches”.

Anyhow, if I understand correctly, you’re trying to do

iterValue < 10        → return default
10 <= iterValue < 20  → return default * 2
20 <= iterValue < 30  → return default * 2 * 2
30 <= iterValue < 40  → return default * 2 * 2 * 2
40 <= iterValue < 50  → return default * 2 * 2 * 2 * 2
...

If that’s the case:

import Foundation // pow(_:_:) on Double lives in Foundation

func calculate<Balm>(for balm: Balm, in inventory: Dictionary<Balm, Double>, fromOriginal: Double) -> Double where Balm: Hashable {
    guard let value = inventory[balm] else {
        // `inventory` doesn't contain `balm`, what do we do? :(
        fatalError("`inventory` must contain `balm`")
    }
    precondition(0...100 ~= value) // valid value is between 0 and 100?
    let factor = pow(2, (value / 10).rounded(.down))
    return fromOriginal * factor
}
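To see the doubling per interval in action, here is a self-contained usage sketch (the balm names and numbers are invented for illustration, not from the original post):

```swift
import Foundation

func calculate<Balm: Hashable>(for balm: Balm, in inventory: [Balm: Double], fromOriginal: Double) -> Double {
    guard let value = inventory[balm] else {
        fatalError("`inventory` must contain `balm`")
    }
    precondition(0...100 ~= value)
    // floor(value / 10) picks the interval; 2^interval is the factor
    return fromOriginal * pow(2, (value / 10).rounded(.down))
}

let inventory = ["aloe": 35.0, "mint": 7.0]
print(calculate(for: "aloe", in: inventory, fromOriginal: 5)) // 40.0 (default * 2 * 2 * 2)
print(calculate(for: "mint", in: inventory, fromOriginal: 5)) // 5.0 (below 10, so default)
```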

Note:

  • inventory is a dictionary, so you can simply look up the corresponding iterValue for the balm; there is no need to iterate through it.
  • You're comparing == on the floating-point type Double. That raises a red flag in a lot of ways; unless you know exactly what it means, try not to do it. This applies to most, if not all, programming languages, not just Swift.

Lantua: Thank you very much for the code; that was just what I needed, and it works as designed. Interestingly, after the range limit of 100, the calculation keeps doubling. Until now I had never come across the precondition function, a useful thing to know.

I think you're right about using Integer over Double floating-point values; I think it's because anything after the decimal point could throw off the condition?

The code should keep doubling at every interval; you can try writing down the math equation to see what's going on. factor should be accurate up to around 128 doublings (that is, up to value == 1280).
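Written out, the factor is 2^⌊value / 10⌋, so each step of 10 in value multiplies the factor by 2. A quick sketch to confirm (sample values are my own, for illustration):

```swift
import Foundation

// Each step of 10 in value doubles the factor 2^floor(value / 10)
for value in stride(from: 5.0, through: 45.0, by: 10.0) {
    let factor = pow(2, (value / 10).rounded(.down))
    print(value, factor)
}
// 5.0 → 1.0, 15.0 → 2.0, 25.0 → 4.0, 35.0 → 8.0, 45.0 → 16.0
```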

precondition/assert/fatalError are pretty useful for internal checks of your program logic; if the condition inside is false, the program crashes.
The difference between them is how early they get optimized out when you tell the compiler to aim for speed. See this link for more detail.
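A minimal sketch of the difference (the function and messages are invented for illustration): assert is checked only in debug builds, precondition is also checked in release (-O) builds, and fatalError always stops the program.

```swift
func firstElement(of array: [Int]) -> Int {
    // assert: active in debug builds; stripped when compiling with -O
    assert(!array.isEmpty, "array should not be empty")
    // precondition: active in debug and -O builds; stripped only under -Ounchecked
    precondition(!array.isEmpty, "array must not be empty")
    return array[0]
}

print(firstElement(of: [3, 1, 2])) // 3
```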

That is more or less true, and it's why floating point can be hard to reason about. Take this famous example:

let a = 0.1, b = 0.2, c = 0.3
(a + b) == c // false

let difference = ((a + b) - c)
difference.sign // plus
difference.significand // 1.0
difference.exponent // -54
type(of: difference).radix // 2
// So difference is +1.0 * 2^(-54)

This rounding error is due to the fact that computers can only compute to a certain precision. In fact, the difference is so small that most print commands will blurt out 0.0, and I had to resort to inspecting each component separately and putting the number back together myself. Nevertheless, difference is not 0.0, and so a + b and c are not equal.

And if you’re not using arithmatic operation, usually you’re better off using other types altogether.

P.S. The original code and what I proposed are very different, and most of the results don't match, but I used what I thought you wanted.

Converting all the values in use from Double to Integer, I ran into an issue because of the pow function, but found the following conversion:

let powtoInt = pow(2, (iterValue / 10))

returnValue = (powtoInt as NSDecimalNumber).intValue

I tried the following, but it didn’t work:

returnValue = Int(pow(2, (iterValue / 10)))

returnValue = Int(ilogb(iterValue / 10))

reference: https://stackoverflow.com/questions/39731265/swift-3-decimal-to-int

Is there another more concise way to do this please?

pow(_:_:) only takes and returns Double, so convert the arguments to Double, then convert the return value back to Int:

let exponent: Double = Double(iterValue / 10)
let base: Double = 2
let power: Double = pow(base, exponent)
let returnValue: Int = Int(power)

Disregard the following if it is too complicated for you, but since you are using Int and the base is 2, bit shifting would be better:

// Provided base == 2 and exponent is an Int:
let power = 1 << exponent
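For example, a self-contained sketch of the shift approach, assuming iterValue is an Int (the sample value is invented):

```swift
let iterValue = 35
let exponent = iterValue / 10   // integer division gives 3
let power = 1 << exponent       // 1 << 3 == 8, i.e. 2 to the power 3
print(power) // 8
```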

He shouldn't need to tediously specify all the types like that; he can simply write let returnValue = Int(pow(2, Double(iterValue) / 10)).

He's only hitting the Decimal overload of pow because it's the only one that takes an Int as the second argument. Left shifting is almost certainly better for this, though.

The extra explicit variable names and types were for the sake of the poster’s understanding, not for the compiler. His confusion was entirely because he was unaware which types he was using compared to which ones he needed. Once understood, parts of it can certainly be left to the compiler to infer.

(If “Ben” is not actually a “he”, I apologize.)