To be clear, the equivalence only applies between constructing a new value with a memberwise initializer and modifying stored properties directly. name is no longer a simple stored property in that case with the didSet.
Let's compare some sample code (mostly from earlier, but all adapted to use the same style/solution) without the use of non-native operators (it will still use things like + or key paths).
let prependWorkersHomeStreet =
\User.Lens.workplace.workers.traverse.home.street.modify { "home " + $0 }
extension User {
    mutating func prependWorkersHomeStreet() {
        for index in workplace.workers.indices {
            workplace.workers[index].home.street = "home \(workplace.workers[index].home.street)"
        }
    }
}
\X.Lens.some.nested.field.modify(change)
vs
{ x in
    var x = x
    x.some.nested.field = change(x.some.nested.field)
    return x
}
\User.Lens.name.set(newName)
vs
{ user in
    var user = user
    user.name = newName
    return user
}
I'd say describing these types of functions, (T) -> T, is much clearer with optics, even without operators, and it's universally shorter. This can be helpful even when using functions from the Swift standard library:
users.map {
    var user = $0
    user.name = "new name"
    return user
}
vs
users.map(\User.Lens.name.set("new name"))
I don't think the comparison is fair. You're using unnamed closure parameters in the lens code, while spelling out longer variable names in the code that you claim is idiomatic in Swift. This snippet is not idiomatic either.
You'd just write this instead:
user.name = newName
and not create needless copies of User in the first place.
For the rest of the examples, counterarguments were posted in this thread multiple times, and I don't see a point in rehashing them.
Overall, measuring character for character and line for line is not highly relevant in my opinion, unless you'd claim there's an order of magnitude difference. Clarity is preferred over brevity in Swift, with longer variable names instead of single-character names that Haskell devs seem to find idiomatic.
I think the more idiomatic approach is @ExFalsoQuodlibet's modifyEach operator above, which was pitched many years ago by the author of the Functional Swift book: In-place map for MutableCollection
Your above example would be written in even fewer characters than the lens approach:
users.modifyEach { $0.name = "new name" }
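For reference, modifyEach isn't in the standard library; a minimal sketch of what such a method could look like on MutableCollection (following the shape of the in-place map pitch linked above, names assumed) is:

extension MutableCollection {
    // Apply an in-place transformation to every element, mutating the
    // collection directly instead of allocating a new one.
    mutating func modifyEach(_ body: (inout Element) throws -> Void) rethrows {
        var index = startIndex
        while index != endIndex {
            try body(&self[index])
            formIndex(after: &index)
        }
    }
}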
And using the key path subscript traversal mentioned above, you could technically even do something like:
users[forEach: \.name] = "new name"
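That subscript isn't part of the standard library either; a rough sketch of how a write-only forEach: key path subscript could be declared (purely illustrative, the forEach label is hypothetical) might be:

extension MutableCollection {
    subscript<Value>(forEach keyPath: WritableKeyPath<Element, Value>) -> Value {
        // Swift requires a getter, but this subscript only makes sense for writing.
        get { fatalError("The forEach subscript is write-only in this sketch") }
        set {
            var index = startIndex
            while index != endIndex {
                self[index][keyPath: keyPath] = newValue
                formIndex(after: &index)
            }
        }
    }
}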
No, the lenses don't construct inline closures; an operation on a lens like .~ (set) or %~ (modify) actually returns a function.
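To make that concrete, here's a minimal sketch of a lens type whose set and modify return (Whole) -> Whole functions; the names are illustrative and not taken from any particular optics library:

struct Lens<Whole, Part> {
    let get: (Whole) -> Part
    let update: (Whole, Part) -> Whole

    // Returns a function that sets the focused part to a constant value.
    func set(_ newValue: Part) -> (Whole) -> Whole {
        return { whole in self.update(whole, newValue) }
    }

    // Returns a function that transforms the focused part.
    func modify(_ transform: @escaping (Part) -> Part) -> (Whole) -> Whole {
        return { whole in self.update(whole, transform(self.get(whole))) }
    }
}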
The comparison is exactly fair because I'm literally constructing identical values. I'll explicitly annotate the type for clarity. I also assume that by "spell out longer variable names" you meant that I name the closure parameter in some of my native Swift examples (not all of them though, look at the last one). Anyway, I'll omit that if you prefer:
let _: (User) -> User = {
    var user = $0
    user.name = newName
    return user
}
vs
let _: (User) -> User =
    \User.Lens.name.set(newName)
Your version is not identical to these snippets, as in this doesn't compile:
let _: (User) -> User =
    user.name = newName
because that's not an expression of that type.
That's not the point though. You've specified that (User) -> User is what the function would look like in both cases, and that's what I don't find fair. In idiomatic Swift you wouldn't create a separate closure with this type in the first place; you'd just modify the value with direct property assignment and be done with it.
If you claim that you lose purity that way, my counterargument is that value types allow preserving overall purity, keeping mutation side effects localized to code explicitly marked with var, mutating, and inout. This is what has been shown in multiple posts in this thread and I feel like we're going in circles. I don't find that rehashing the same arguments over and over is productive.
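To illustrate that point with a simplified User (just a sketch of the argument, not code from the thread): a function can use var and local mutation internally while remaining externally pure, because the mutation never escapes the scope of a value-typed copy.

struct User {
    var name: String
}

func renamed(_ user: User, to newName: String) -> User {
    var copy = user     // explicit, local mutation
    copy.name = newName // confined to this scope
    return copy         // the caller's `user` is untouched
}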
To be fair, it's 1 fewer character, not several. It doesn't look like it in your snippets; it seems code blocks aren't monospaced in these forums, which is weird.
But the difference between 41 characters and 42 characters isn't a huge deal IMO, while the standard Swift would be 64 characters. I've also outlined before why I prefer the ergonomics of (T) -> T over (inout T) -> () (the latter doesn't compose well with function application, amongst other things).
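For example (a small sketch of the composition point, with made-up helper names): (T) -> T functions plug directly into map and into each other, while (inout T) -> Void values need a wrapper first.

func compose<T>(_ f: @escaping (T) -> T, _ g: @escaping (T) -> T) -> (T) -> T {
    return { g(f($0)) }
}

let shout: (String) -> String = { $0.uppercased() }
let exclaim: (String) -> String = { $0 + "!" }
let shoutThenExclaim = compose(shout, exclaim)
// ["hi", "hello"].map(shoutThenExclaim) == ["HI!", "HELLO!"]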
I did give an exact use case of where you'd do this exact kind of code construction in normal Swift code. I use map all the time on collections. Do you always abstain from mapping over arrays and such? Or do you find that you're never doing a transformation of some type to itself in those maps? I find myself doing it often enough in real code I write for apps.
users.map {
    var user = $0
    user.name = "new name"
    return user
}

users.map(\User.Lens.name.set("new name"))

users.modifyEach { $0.name = "new name" }
(also I can tell that these are monospaced now, idk what kind of optical illusion it was before where it seemed like the third was like 3 or 4 characters shorter than the second)
All 3 of these do roughly the same thing. The first two technically do the exact same thing (but have different requirements for the construction of User), and the third one would require a different style of coding around its usage to accommodate it, but if you're absolutely set on using vars and mutating non-pure imperative functions in your code, then that seems like a good HOF to use.
I prefer the use of pure functions, so I like the lens more than modifyEach, but I can see how someone would prefer modifyEach over the vanilla Swift.
I can only echo the sentiment of others that you will be going against the grain of the language and fighting it. I understand the appeal of T -> T generally, but Swift has specific language-level features that should steer you towards (inout T) -> Void instead.
Yes you can construct pure functions with impure functions, this is something I already agreed with, so I don't think there's a disagreement here. As I've said before, it seems most people aren't as convinced on the merits of FP as I am, and that's fine. It's a separate discussion outside the topic of this post, so I'm fine ending the post here as well.
Yeah, it's something I'll keep in mind. I didn't expect to get this much pushback on using pure functions from the Swift community; I do most of my programming in either the Scala Typelevel ecosystem or Haskell, and I expected Swift developers to have similar views on purity. Of course, people here still value purity in some contexts (you can often find developers in the Python or Java ecosystems who don't see its merit at all), but the passionate argument against the use of pure functional optics in favor of impure, mutating functions was a surprise to me.
We're definitely not pushing back on purity. On the contrary, we consider (inout T) -> Void to be pure. The issues other languages have with mutation and spooky action at a distance do not apply here.
For what it's worth, many of the folks in this thread are very convinced on the merits of FP.
Someone else also made the claim that (inout T) -> Void is pure, and I already explained why that's technically incorrect (that was about mutating, but I just verified that the same is true for inout, which was my understanding), but I get the general sentiment you are echoing.
Yeah, I was careful to say "as I am" in the original quote. I always reach for pure functions if I can (and in some languages I can always use pure functions), while there has been advocacy here to drop pure functions in favor of impure functions in some contexts, even when pure functions are technically an option.
The general definitions you link to do not consider the specifics of inout and value semantics in Swift. This definition:
if it modifies some state variable value(s) outside its local environment, which is to say if it has any observable effect other than its primary effect of returning a value to the invoker of the operation
…does not apply to inout in Swift. There is no spooky action at a distance: the local environment is the synchronous scope of the inout parameter. And there is no observable effect other than returning an updated binding to the invoker of the operation. You may lose access to the old value at that point of the scope, but that would be true in a chain of state monad operations in Haskell as well.
Again, there's no such advocacy. We consider (T) -> T and (inout T) -> Void to be isomorphic, and if the first is pure, the translation to the second is pure.
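A sketch of the isomorphism being claimed (the helper names are made up): each form can be mechanically converted to the other.

func toInout<T>(_ f: @escaping (T) -> T) -> (inout T) -> Void {
    return { (value: inout T) in value = f(value) }
}

func toTransform<T>(_ f: @escaping (inout T) -> Void) -> (T) -> T {
    return { value in
        var copy = value
        f(&copy)  // mutate a private copy; the original argument is untouched
        return copy
    }
}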
Exactly, inout functions (operating on true value types) are as impure as a function with a monadic type would be in Haskell. Showing that reordering or removing calls to inout functions changes the result in Swift is as fair as reordering or removing monadic function calls in a Haskell do chain.
Side note, as I figure people already know this, but when something like this is said I just want to be explicit: no function is verifiably pure in Swift, afaik, through types alone. You have to inspect what the function is actually doing, as any random function could fire some network request or hit a db if it wanted to. This is not true of Haskell (the language spec, that is; GHC's implementation of Haskell does have escape hatches not built into the spec, like unsafePerformIO, but they're essentially never used and technically not part of the core language, so another compiler could implement Haskell without them and it would still be valid Haskell2010).
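A tiny illustration of that side note (a hypothetical function, assuming Foundation is available): nothing in the signature reveals the side effect, so the type alone can't certify purity.

import Foundation

func normalized(_ name: String) -> String {
    // Looks pure from its type, but nothing stops it from doing I/O.
    _ = try? name.write(toFile: "/tmp/last-name-seen.txt", atomically: true, encoding: .utf8)
    return name.lowercased()
}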
When you say the local environment, do you mean the entire scope? I'll admit I don't have a ton of experience with inout and mutating, but afaik you could have a variable at a higher scope (say at the struct field level, or even global scope) and pass that into an (inout T) -> Void, right? And then you could have some other function in some far away place in the app that has access to that same variable (say because it's at global scope), and it would then see the modifications you made originally?
Obviously it goes without saying that global mutable variables are a bad idea, and I'm sure you all agree; I'm just trying to understand the scope of this "implicit State monad", if that's how we're going to view mutating/inout/etc.
If my understanding here is correct, then that's an important difference between the State monad and inout/var/mutating/etc. The State monad is scoped exactly to the function that is operating in the State; it's not going to change some random value in a higher scope that would impact other isolated State monads.
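For comparison, a minimal State sketch in Swift (illustrative only): the only state such a computation can touch is the value threaded through run, so it can't reach a variable in an outer scope.

struct State<S, A> {
    let run: (S) -> (A, S)

    func flatMap<B>(_ next: @escaping (A) -> State<S, B>) -> State<S, B> {
        return State<S, B> { s in
            let (a, s1) = self.run(s)   // the state is threaded explicitly
            return next(a).run(s1)
        }
    }
}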
If I'm wrong, and you can't pull any var in scope into an inout, then you could look at each function definition as implicitly operating in its own State monad. I wouldn't want literally 100% of my functions to operate in this monad, but I could at least pretend that they're not by refusing to use the corresponding operations for the State monad.
Once the compiler bugs with consuming func have been ironed out, I imagine T -> T will be a lot easier to implement performantly, and I would start writing (inout T) -> Void in terms of T -> T instead of the other way around.
Swift's inout argument is essentially syntax sugar. The following two examples have the same meaning:

func foo(_ v: inout Int) { ...; v = value }
var x = 42
foo(&x)

and:

func foo(_ v: Int) -> Int { ...; return value }
var x = foo(42)
Note that there could be other potentially important differences between "do it in a functional style" and "modify in place" approaches, for example with this setup:

struct User {
    var name: String
}

var users = (1...10_000_000).map { _ in
    User(name: UUID().uuidString)
}
the following two fragments:
users = users.map { user in
    if user.name.last! == "0" {
        usleep(20)
        return User(name: user.name + "!")
    }
    return user
}
and:
for i in users.indices {
    var user = users[i]
    if user.name.last! == "0" {
        usleep(20)
        user.name = user.name + "!"
        users[i] = user
    }
}
result in quite different memory curves. With other examples, it could be not (just) memory but runtime differences as well.
Yes, that's why mutating a (global) variable via inout in two distinct scopes at once will trigger an exclusivity violation error.
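A small sketch of that rule (with hypothetical names): overlapping access to the same global through an inout parameter and through its original name violates exclusivity; the simplest cases are rejected at compile time, others trap at runtime.

var counter = 0

func increment(_ value: inout Int, then body: () -> Void) {
    value += 1
    body()
}

// increment(&counter) { counter += 1 }
// ^ overlapping accesses to `counter` while it is passed inout:
//   an exclusivity violation.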
A scope is introduced by a var binding to which a value is assigned. The scope ends on returning from the function in which this var was bound. The scope can be extended to callees by passing this var binding via inout to them.
And then
mutating func f()
is syntax sugar for
func f(inout self)
That's right. I think we're all on the same page here.
It would see the new value, yes, just as if the function had instead returned it. The scope is encoded at every boundary, so every step in the chain needs to be marked inout and called with & for the code to compile. This is essentially the same as if you had encoded the operation as (T) -> T all the way down.
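In other words (a small sketch with made-up names), the mutation is visible in the types at every boundary of the chain:

struct Settings { var fontSize = 12 }

func bumpFontSize(_ settings: inout Settings) {
    settings.fontSize += 1
}

func applyDefaults(_ settings: inout Settings) {
    bumpFontSize(&settings)   // the inner call must also use `&` and `inout`
}

var settings = Settings()
applyDefaults(&settings)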
I believe it's the same, since inout is scoped in the exact same way.
I am surprised that the first one uses much more memory than the second one; I had always assumed map was fully inlinable, and that Array could optimize Array.map to perform the transformation in-place.
I don't remember if there is a formal guarantee that the array element deinits can't run until after the array itself is ready to be deinitialized.