I don't think the argument was that it couldn't be done but that it would make for more readable code or less boilerplate.
@dennisvennink's example is rather concise and easy to read. Yours is a bit harder to grasp (but of course just as correct), don't you think? Highly subjective, of course.
I rather think the original version is the most concise and easy to read; since Swift.max is inlinable, I see no reason why the value has to be computed twice if the compiler is smart enough. Keep in mind as well that lazy desugars to something with its own performance cost.
Initially, this and other algorithms from the first versions of Zip2Collection made heavy use of switch, until I checked if they would scale to Zip3Collection. It turns out they didn't.
In your example, for instance, you'll end up with 2^(n + 1) cases, where n is the arity of the zip. So Zip3Collection would end up with 16 cases. I don't find this particularly readable, or maintainable.
I am at a loss to see the advantage of a lazy var here over a straightforward let:
if start < end
{
    let max = Swift.max(distance1, distance2)
    return Swift.min(defaultElement1 == nil ? distance1 : max,
                     defaultElement2 == nil ? distance2 : max)
}
let max = Swift.max(distance1, distance2) is executed precisely once, and the result is used twice without calling it a second time within its scope, which is the if block.
Even if it were lazily evaluated, max falls out of scope at the end of the block and would be released anyway, just as with a let.
I'm sorry but, unless I'm missing something profound here, I certainly couldn't use this example as justification for the pitch.
I think the point is that there is a case where neither will need the value max and thus the lazy version avoids even executing it a single time. Not really a big deal with max, but it might matter for a very expensive computation.
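To make that concrete, here is a small sketch (the values and names are made up) where the expensive computation is wrapped in a closure so it only runs on demand:

```swift
let distance1 = 3, distance2 = 7
let defaultElement1: Int? = nil, defaultElement2: Int? = nil

var maxWasComputed = false

// Wrap the (notionally expensive) computation so it runs only when asked for.
let computeMax: () -> Int = {
    maxWasComputed = true
    return Swift.max(distance1, distance2)
}

// With both defaults nil, neither branch ever calls computeMax().
let result = Swift.min(defaultElement1 == nil ? distance1 : computeMax(),
                       defaultElement2 == nil ? distance2 : computeMax())

print(result, maxWasComputed)   // 3 false
```

Unlike a true lazy var, this closure would recompute on every call if both defaults were non-nil; memoizing that result is exactly what a lazy local would add.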
I'm very much +1 on support for "lazy" on local variables. Local variables should have the same capabilities as properties in structs and classes, global variables, etc. I consider this specific missing feature an engineering limitation based on how the compiler (at least used to) work, not something that designed to be this way.
You can nest computed properties inside other functions, and the getter and setter bodies are proper closures which can capture values from the outer scope, for example:
func f(x: Int) {
    var p: Int { return x * x }
    print(p)
}
Chris, pardon me for being a bit thick here but, so far, with the specific example given, I really can't see the need for a lazy local variable in that context.
Could you give us some idea of where it is genuinely useful?
@Joanna_Carter your example here is more verbose than the example in this comment. It also violates DRY because, rather than declaring or computing max twice, you're now checking defaultElement1 == nil and defaultElement2 == nil twice. You could imagine if those checks were expensive, we'd be in exactly the same position we wanted to avoid.
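For what it's worth, if the nil checks themselves were expensive, they could be hoisted into locals so each is evaluated exactly once (a sketch with made-up values); note that the max is still computed eagerly, which is the part a lazy local would fix:

```swift
let distance1 = 3, distance2 = 7
let defaultElement1: Int? = nil, defaultElement2: Int? = 10

// Evaluate each (potentially expensive) check exactly once.
let useDistance1 = defaultElement1 == nil
let useDistance2 = defaultElement2 == nil

// maxDistance is computed whether or not a branch needs it:
// the duplication is gone, but the eager work remains.
let maxDistance = Swift.max(distance1, distance2)
let result = Swift.min(useDistance1 ? distance1 : maxDistance,
                       useDistance2 ? distance2 : maxDistance)

print(result)   // min(3, 7) == 3
```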
I wasn't making a utility based argument, I was arguing that the language is simpler and more consistent with support for these. There are reasons why this is useful (as others have described above) but I will not claim that the use cases are "important enough" to justify the feature. I'm merely observing that the reason Swift does not support this now is due to internal implementation issues, not intentional omission.
I would agree with you on the consistency argument. I guess my beef was really with the lack of justification in the examples given. From a framework designer's point of view, there are a great many "non-sugar" additions to the language that are far more desperately needed.
I'm trying to come up with a nice example that clearly outlines the requirement. I, too, have been stung by the lack of lazy in methods and functions.
I think the core example is the convoluted optional handling required if you want to delay, and possibly avoid entirely, heavy work.
func myFunc() {
    var heavyWeightObject: MyHeavyObject?
    if *insert case here* {
        heavyWeightObject = MyHeavyObject()
        heavyWeightObject!.property = // configure
        myOtherFunc(heavyWeightObject!)
    } else if *other case here* {
        heavyWeightObject = MyHeavyObject()
        heavyWeightObject!.property = // configure
        heavyWeightObject!.additionalProperty = // configure also
        myOtherFunc(heavyWeightObject!)
    }
    // additional further actions here
}
The core issues I see here are:
Optionality gets in the way. Each use case that causes initialisation of the heavy object can assume after it does so that the value is no longer optional. It's only optional to delay initialisation, and to handle cases where it will never be initialised.
Cases where the object should be initialised the same way have to be repeated. Any configuration work you need to do that is shared either needs to be repeated, or abstracted out into a local function where you further have to handle the optional cases.
It can tend to push devs to always do the heavy work to avoid the workarounds. It's easier in code to just initialise it upfront; why not? Because performance suffers if you don't actually need that work, even though the developer experience improves and the code becomes easier to reason about.
I think the language should support the user in making performant code.
In this case, the fact that the optional is required locally makes the developer experience poor if you know you'll always initialise and then access it, and you want a common initialisation routine. It pushes the programmer to just do the work always, which we avoid throughout the standard library with paradigms like lazy collections, etc.
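To illustrate the shape of the workaround being discussed, here is a minimal memoizing wrapper (the Lazy type, its API, and MyHeavyObject are all illustrative, not standard library): the shared configuration is written exactly once, there are no optionals at the use sites, and nothing is constructed unless a branch actually uses it.

```swift
// A minimal memoizing wrapper; a sketch, not a standard-library type.
final class Lazy<T> {
    private let compute: () -> T
    private var stored: T?
    init(_ compute: @escaping () -> T) { self.compute = compute }
    var value: T {
        if stored == nil { stored = compute() }
        return stored!
    }
}

var constructionCount = 0

struct MyHeavyObject {
    var property = 0
    init() { constructionCount += 1 }   // stand-in for expensive work
}

func myFunc(caseA: Bool, caseB: Bool) {
    // Shared initialisation/configuration is written exactly once.
    let heavyWeightObject = Lazy { () -> MyHeavyObject in
        var object = MyHeavyObject()
        object.property = 42            // common configuration
        return object
    }
    if caseA {
        _ = heavyWeightObject.value     // constructed here, no force-unwraps
    } else if caseB {
        _ = heavyWeightObject.value.property
    }
    // If neither case fires, MyHeavyObject is never constructed.
}

myFunc(caseA: false, caseB: false)
print(constructionCount)   // 0 — the heavy work was skipped
myFunc(caseA: true, caseB: false)
print(constructionCount)   // 1
```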
The Lazy struct you describe does the job. And it is available right away for whoever needs such a pattern.
But...
It could also replace the lazy properties we have today (you'd just use a nil check in its implementation, instead of a lazy var). So I'm not sure it is a tremendous argument against lazy local vars.
Besides, lazy vars are currently not thread-safe. But like all compiler-provided constructs, they are subject to the evolution process, and may get better with time. On top of that, all the current work on ARC, moveable types, borrow checking, etc. (which is far, far above my skills) needs extremely precise compiler semantics: a compiler-provided "lazy" variable is a much more self-contained and future-proof concept than a general struct which happens to implement a lazy call to a closure.
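For completeness, here is one way such a wrapper could be made thread-safe by hand, using an NSLock (a sketch only; a compiler-provided lazy could in principle do better):

```swift
import Foundation

// A sketch of a thread-safe lazy wrapper: the closure runs at most once,
// even under concurrent first access, because the lock serialises readers.
final class ThreadSafeLazy<T> {
    private let lock = NSLock()
    private let compute: () -> T
    private var stored: T?
    init(_ compute: @escaping () -> T) { self.compute = compute }
    var value: T {
        lock.lock()
        defer { lock.unlock() }
        if stored == nil { stored = compute() }
        return stored!
    }
}

var runCount = 0
let answer = ThreadSafeLazy { () -> Int in
    runCount += 1
    return 42
}
print(answer.value, answer.value, runCount)   // 42 42 1
```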
I'm certainly not arguing it's not possible.
Nor am I arguing that it is a high priority. But the solution you are suggesting seems rather convoluted to me - even more so than the workaround I stated might be the impetus for this call.
As @gwendal.roue commented, you could argue this as a reason against lazy properties too. I don't think your examples showing it's possible support an argument that it is optimal.
I'd say that, in my opinion, the optimal design all around would be not having to write that boilerplate or understand a new wrapper workaround type, and being consistent with lazy properties.
In any case, I agree that this change doesn't really seem very high priority.
I totally agree that lazy vars could/should be available anywhere a normal var is found.
My example was to demonstrate that it is not the end of the world were that not to happen; after all, it's less than a dozen lines of one-off code, plus the need to route all calls through lazyVar.value.doSomething() instead of simply lazyVar.doSomething().
What's more, changing the declaration and calling code when we do get true lazy vars would not exactly be onerous, mainly involving punctuation.
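Concretely, the call-site difference being described might look like this (the Lazy wrapper and Widget are illustrative; the lazy var syntax shown in comments is the pitched feature, not valid in local scope today):

```swift
// Minimal memoizing wrapper, included so the snippet is self-contained;
// a sketch, not a standard-library type.
final class Lazy<T> {
    private let compute: () -> T
    private var stored: T?
    init(_ compute: @escaping () -> T) { self.compute = compute }
    var value: T {
        if stored == nil { stored = compute() }
        return stored!
    }
}

struct Widget {
    func doSomething() -> String { return "done" }
}

// Today, with the wrapper: note the extra .value hop at every call site.
let lazyVar = Lazy { Widget() }
print(lazyVar.value.doSomething())   // done

// With true lazy local vars (hypothetical, does not compile today):
// lazy var widget = Widget()
// widget.doSomething()
```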