I would like to automatically check a value against a default in order to change another; here I can check for anything less than 10 and set a value. After that, however, I would like to update the previous value by multiplying it by 2 at each ten-unit increment, out of 100 batches:
func calculate(for balm: balm, in inventory: Dictionary<balm, Double>, fromOriginal: Double) -> Double {
    var returnValue: Double = 0
    let interval = 10
    for (checkKey, iterValue) in inventory {
        if checkKey == balm {
            if (iterValue < 10) {
            } else {
                for decaLeap in stride(from: 11, to: 100, by: interval) {
                    if (iterValue == Double(decaLeap)) {
                        returnValue = Double(fromOriginal * 2)
                    }
                }
            }
        }
    }
    return (returnValue)
}
At present the first loop check is easy enough, but the `stride` function (which I have never used before) outputs 11, 21, 31… and that is where I have the problem, since I would like anything between 11 and 20 to multiply the previous value by two.
However, if the number is 30, I don't want the loop to stop at 20 just because it found a match earlier, but I'm unsure how it works.
import Foundation // for pow

func calculate<Balm>(for balm: Balm, in inventory: Dictionary<Balm, Double>, fromOriginal: Double) -> Double where Balm: Hashable {
    guard let value = inventory[balm] else {
        // inventory doesn't contain `balm`, what do we do? :(
        fatalError("`inventory` must contain `balm`")
    }
    precondition(0...100 ~= value) // valid value is between 0 and 100?
    let factor = pow(2, (value / 10).rounded(.down))
    return fromOriginal * factor
}
Note:
inventory is a dictionary, so you can just look up the corresponding iterValue for the balm; no need to iterate through it.
You’re comparing with == on the floating-point type Double. That raises a red flag in a lot of ways. Unless you know exactly what that means, try not to do it. This applies to most, if not all, programming languages, not just Swift.
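To make the behaviour concrete, here is the proposed function exercised against a hypothetical inventory keyed by String (the keys and values below are made up purely for illustration):

```swift
import Foundation

func calculate<Balm: Hashable>(for balm: Balm, in inventory: [Balm: Double], fromOriginal: Double) -> Double {
    guard let value = inventory[balm] else {
        fatalError("`inventory` must contain `balm`")
    }
    precondition(0...100 ~= value)
    // 2 raised to the number of completed ten-unit intervals.
    let factor = pow(2, (value / 10).rounded(.down))
    return fromOriginal * factor
}

let inventory = ["lavender": 5.0, "mint": 15.0, "rose": 35.0]

calculate(for: "lavender", in: inventory, fromOriginal: 100) // floor(5/10)  = 0 -> 100 * 2^0 = 100
calculate(for: "mint",     in: inventory, fromOriginal: 100) // floor(15/10) = 1 -> 100 * 2^1 = 200
calculate(for: "rose",     in: inventory, fromOriginal: 100) // floor(35/10) = 3 -> 100 * 2^3 = 800
```

Note that 11 and 19 both land in the same interval (floor = 1), which is exactly the "anything between 11 and 20" behaviour asked for, without comparing Doubles with ==.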
Lantua: Thank you very much for the code, that was just what I needed; it works as designed. Interestingly, after the range limit of 100 the calculation keeps doubling. Until now I had never come across the `precondition` function; a useful thing to know.
I think you're right about using integers over double-precision floating-point values; I think it's because anything after the decimal point could throw off the condition?
The code should keep doubling at every interval; you can try writing down the math equation to see what’s going on. factor should be accurate for up to around 128 doublings (that is, up to value == 1280).
precondition/assert/fatalError are pretty useful for internal checking of your program logic; if the condition inside is false, the program crashes.
The difference between them is how early they get optimized out when you tell the compiler to aim for speed. See this link for more detail.
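As a rough sketch of that difference (the withdraw function here is made up for illustration):

```swift
// assert:       checked only in debug (-Onone) builds; removed under -O.
// precondition: checked in debug and release (-O); removed only under -Ounchecked.
// fatalError:   always executes; never optimized out.

func withdraw(_ amount: Double, from balance: Double) -> Double {
    assert(amount >= 0, "a negative amount is a programmer error (debug-only check)")
    precondition(amount <= balance, "cannot withdraw more than the balance")
    return balance - amount
}

withdraw(30, from: 100) // 70.0
```

A rule of thumb: use assert for cheap sanity checks you are happy to drop in release builds, and precondition for invariants that must hold even in shipped code.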
That is more or less true, that’s why it can be hard to reason with. Take this famous example
let a = 0.1, b = 0.2, c = 0.3
(a + b) == c // false
let difference = ((a + b) - c)
difference.sign // plus
difference.significand // 1.0
difference.exponent // -54
type(of: difference).radix // 2
// So difference is +1.0 * 2^(-54)
This rounding error is due to the fact that computers can only compute to a certain accuracy. In fact, the difference is so small that most print commands will blurt out 0.0, and I had to resort to inspecting each component separately and putting the number back together myself. Nevertheless, difference is not 0.0, and so a + b and c are not equal.
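One common workaround, sketched below, is to compare against a tolerance instead of using ==. Choosing a good tolerance depends on the magnitudes involved, so treat the 1e-12 default here as an illustrative placeholder, not a universal constant:

```swift
import Foundation

let a = 0.1, b = 0.2, c = 0.3

// Direct equality fails because of the tiny 2^(-54) rounding error described above:
(a + b) == c // false

// Comparing against a small tolerance succeeds:
func nearlyEqual(_ x: Double, _ y: Double, tolerance: Double = 1e-12) -> Bool {
    abs(x - y) <= tolerance
}
nearlyEqual(a + b, c) // true
```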
And if you’re not doing arithmetic operations, you’re usually better off using other types altogether.
P.S. The original code and what I proposed are very different; most of the results don’t seem to match, but I used what I thought you wanted.
He shouldn't need to tediously specify all the types like that. He can simply write let returnValue = Int(pow(2, Double(iterValue) / 10)).
He's only hitting the Decimal overload of pow because it's the only one that takes an Int as its second argument. Left shifting is almost certainly better for this, though.
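For what it's worth, the left-shift version could look something like this (variable names are mine): since the factor is always an integer power of two, an integer shift avoids floating-point pow entirely and is exact.

```swift
let value = 35.0

// Int(_:) truncates toward zero, which matches floor for non-negative values.
let completedIntervals = Int(value / 10) // 3
let factor = 1 << completedIntervals     // 1 << 3 == 8, same as pow(2, 3)
```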
The extra explicit variable names and types were for the sake of the poster’s understanding, not for the compiler’s. His confusion stemmed entirely from not knowing which types he was using compared to which ones he needed. Once that is understood, parts of it can certainly be left to the compiler to infer.