What about garbage collection?

Please keep discussion of iPhone design on the iphone-evolution list. (j/k) No matter how much RAM your device has, though, it's still advantageous at the macro level to optimize for memory usage—you get better cache utilization, can run more processes, swap less, etc., and your hot paths that deserve to be greedy with memory will be more likely to have the space they need to be fast.

-Joe

···

On Feb 8, 2016, at 12:26 PM, Goffredo Marocchi <panajev@gmail.com> wrote:

How much RAM a device can house is a device-vendor concern, not a Swift one, although the former kind of owns the latter; that is me being a bit cheeky, I'll admit ;).

Still, the iPhone 6 Plus only having 1 GB of RAM... Grrrr... :P. Seriously, multitasking suffers greatly on that device, as does having multiple tabs open in Safari...

+1 for removing mark-sweep GC from other languages. Oh, wait, was that not
the topic?

-david (GitHub: AE9RB/SwiftGL)

···

On Mon, Feb 8, 2016 at 12:28 PM, Vanderlei Martinelli via swift-evolution < swift-evolution@swift.org> wrote:

-1 for any kind of GC in Swift and any other language/platform.

A key difference between Swift/ObjC ARC and Microsoft's experiments is that Apple has been introducing these improvements incrementally on top of an established platform. You can successfully develop and ship an app using ARC or Swift today.

-Joe

···

On Feb 8, 2016, at 1:16 PM, Goffredo Marocchi via swift-evolution <swift-evolution@swift.org> wrote:

On a similar note, though: if we are taking the view that simplifying the memory model and making it safer is worth a lot of complexity trade-offs, I would go all the way in that direction and look very hard at the outcome of MS's project Midori rather than stopping at a GC. But this is getting very off topic, so I will cut it short.

-1 for any kind of GC in Swift and any other language/platform.

···

On Mon, Feb 8, 2016 at 6:26 PM, Goffredo Marocchi via swift-evolution < swift-evolution@swift.org> wrote:

How much RAM a device can house is a device-vendor concern, not a Swift one, although the former kind of owns the latter; that is me being a bit cheeky, I'll admit ;).

Still, the iPhone 6 Plus only having 1 GB of RAM... Grrrr... :P. Seriously, multitasking suffers greatly on that device, as does having multiple tabs open in Safari...

Sent from my iPhone

On 8 Feb 2016, at 20:11, Joe Groff via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 8, 2016, at 11:56 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

Has there been a garbage collection thread so far? I understand that
reference counting vs. garbage collection can be a heated debate, but it
might be relevant to have it.

It seems to me that the two principal upsides of reference counting are
that destruction is (essentially) deterministic and performance is more
easily predicted. However, it comes with many downsides:

   - object references are expensive to update
   - object references cannot be atomically updated
   - heap fragmentation
   - the closure capture syntax uses up an unreasonable amount of
   mindshare just because of [weak self]

Since Swift doesn't expose memory management operations outside of
`autoreleasepool`, it seems to me that you could just drop in a garbage
collector instead of reference counting and it would work (for most
purposes).

While true in theory, code that relies on destructors to clean up
unmanaged resources does not port cleanly to GC as-is in practice. GC of
course has its own drawbacks. Heap scanning is expensive, thrashing cache
and burning battery. GCs also require a higher memory ceiling proportional to
the amount of heap actively being used, and GCs suitable for interactive
use tend to increase responsiveness at the cost of higher memory use, which
has its own second-order energy costs—devices need more RAM, and spend more
time swapping or killing, and thus need bigger batteries to refresh all
that RAM. ARC interoperates better with unmanaged resources, both
non-memory resources like sockets and files and also C-level memory
resources. The ARC optimizer and Swift's calling convention also optimize
toward reclaiming resources closer to their last use, keeping resource
usage low.

-Joe

_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Then the support for debugging and sorting out reference cycles has got to improve a lot in ease and effectiveness. If you are not making a game, GC pauses with modern garbage collectors do not really impede developers much, do they?

···

Sent from my iPhone

On 8 Feb 2016, at 20:28, Vanderlei Martinelli <vmartinelli@alecrim.com> wrote:

-1 for any kind of GC in Swift and any other language/platform.

On Mon, Feb 8, 2016 at 6:26 PM, Goffredo Marocchi via swift-evolution <swift-evolution@swift.org> wrote:
How much RAM a device can house is a device-vendor concern, not a Swift one, although the former kind of owns the latter; that is me being a bit cheeky, I'll admit ;).

Still, the iPhone 6 Plus only having 1 GB of RAM... Grrrr... :P. Seriously, multitasking suffers greatly on that device, as does having multiple tabs open in Safari...

Sent from my iPhone

On 8 Feb 2016, at 20:11, Joe Groff via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 8, 2016, at 11:56 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

Has there been a garbage collection thread so far? I understand that reference counting vs. garbage collection can be a heated debate, but it might be relevant to have it.

It seems to me that the two principal upsides of reference counting are that destruction is (essentially) deterministic and performance is more easily predicted. However, it comes with many downsides:

- object references are expensive to update
- object references cannot be atomically updated
- heap fragmentation
- the closure capture syntax uses up an unreasonable amount of mindshare just because of [weak self]

Since Swift doesn't expose memory management operations outside of `autoreleasepool`, it seems to me that you could just drop in a garbage collector instead of reference counting and it would work (for most purposes).

While true in theory, code that relies on destructors to clean up unmanaged resources does not port cleanly to GC as-is in practice. GC of course has its own drawbacks. Heap scanning is expensive, thrashing cache and burning battery. GCs also require a higher memory ceiling proportional to the amount of heap actively being used, and GCs suitable for interactive use tend to increase responsiveness at the cost of higher memory use, which has its own second-order energy costs—devices need more RAM, and spend more time swapping or killing, and thus need bigger batteries to refresh all that RAM. ARC interoperates better with unmanaged resources, both non-memory resources like sockets and files and also C-level memory resources. The ARC optimizer and Swift's calling convention also optimize toward reclaiming resources closer to their last use, keeping resource usage low.

-Joe



Sorry Joe, I did not mean to cause a stir or to inflame the discussion. I found it an experiment with exciting potential, not because it had a particular vendor attached. I do applaud Apple for pushing innovation to the market very often :).

···

Sent from my iPhone

On 8 Feb 2016, at 21:49, Joe Groff <jgroff@apple.com> wrote:

On Feb 8, 2016, at 1:16 PM, Goffredo Marocchi via swift-evolution <swift-evolution@swift.org> wrote:

On a similar note, though: if we are taking the view that simplifying the memory model and making it safer is worth a lot of complexity trade-offs, I would go all the way in that direction and look very hard at the outcome of MS's project Midori rather than stopping at a GC. But this is getting very off topic, so I will cut it short.

A key difference between Swift/ObjC ARC and Microsoft's experiments is that Apple has been introducing these improvements incrementally on top of an established platform. You can successfully develop and ship an app using ARC or Swift today.

-Joe

Isn’t this essentially what the ‘leaks’ tool is?

Charles

···

On Feb 8, 2016, at 10:20 PM, Michel Fortin via swift-evolution <swift-evolution@swift.org> wrote:

On 8 Feb 2016, at 16:00, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

I’m personally not interested in requiring a model that requires us to throw away a ton of perfectly good RAM to get a “simpler” programming model - particularly one that adds so many tradeoffs.

Me neither. I certainly prefer ARC. Hence one reason I like Swift more than D.

I'd just like to point out that a "fake" GC that stops short of actually freeing objects to instead print a warning on the console listing objects to be collected would be a great help to detect and fix cycles early in development.
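That idea can be sketched in a few lines. This is a hypothetical debug-only helper (the `LeakTracker` name and the opt-in registration are my own invention for illustration, not an existing tool):

```swift
// Hypothetical debug helper: types register on init and deregister on deinit.
// Anything still registered at a checkpoint is a candidate retain cycle.
final class LeakTracker {
    static var live: [String: Int] = [:]
    static func track(_ name: String) { live[name, default: 0] += 1 }
    static func untrack(_ name: String) { live[name]? -= 1 }
    static func report() {
        for (name, count) in live where count > 0 {
            print("possible cycle: \(count) live instance(s) of \(name)")
        }
    }
}

final class Node {
    var next: Node?          // a strong back-reference here creates a cycle
    init()  { LeakTracker.track("Node") }
    deinit  { LeakTracker.untrack("Node") }
}

do {
    let a = Node(), b = Node()
    a.next = b
    b.next = a               // cycle: neither deinit will ever run
}
LeakTracker.report()         // prints: possible cycle: 2 live instance(s) of Node
```

A real implementation would hook allocation rather than require opt-in, but even this crude version surfaces the cycle the moment the scope that should have freed the objects ends.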

In particular, I think the strong/weak dance, once understood and properly applied, is a benefit and not a bad thing.

We can write things like:

doSomeLongRunningTaskThatCannotBeCancelled { [weak self] someText in
    self?.someLabel.text = someText
}

This is a very common case in Cocoa [Touch], I think. Using GC, how could we say: “hey, if the result comes in and I'm no longer here, just ignore it, okay?”

I really like that RC and ARC are alive in Swift. Perhaps in the future we will have something better, yet I would still like to be able to write code like the above. As for GC: I do not know of any good reason to bring it to Swift.

-Van

···

On Tue, Feb 9, 2016 at 2:54 AM, Paul Ossenbruggen via swift-evolution < swift-evolution@swift.org> wrote:

On Feb 8, 2016, at 1:00 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

- the closure capture syntax uses up an unreasonable amount of mindshare just because of [weak self]

I think that this specific point is solvable in others ways, but I’ll
interpret this bullet as saying that you don’t want to worry about
weak/unowned pointers. I completely agree that we strive to provide a
simple programming model, and I can see how "not having to think about
memory management" seems appealing.

I actually like RC, and don’t find it particularly problematic except in this one case… the closure case. If we could solve that one problem then I think the rest of it is fine. I don’t mind remembering to put a weak reference in for back pointers etc. The benefits of RC are great, and the introduction of ARC lifted a great mental burden off developers’ minds. Explaining and remembering all the tricky rules around closure capture is the one thing I would love to see solved, and I would much rather discuss that than garbage collection.

- Paul


+1. GC is bad.

···

On Mon, Feb 8, 2016 at 4:07 PM David Turnbull via swift-evolution < swift-evolution@swift.org> wrote:

On Mon, Feb 8, 2016 at 12:28 PM, Vanderlei Martinelli via swift-evolution <swift-evolution@swift.org> wrote:

-1 for any kind of GC in Swift and any other language/platform.

+1 for removing mark-sweep GC from other languages. Oh, wait, was that not
the topic?

-david (GitHub: AE9RB/SwiftGL)

I think so. Except you rarely use the `leaks` tool early in development. If this were baked into debug builds (with a way to opt out, of course), retain cycles would be caught much earlier in the development process, and you'd see your leaks almost immediately after writing the offending code.

In fact, I'm pretty sure most people here don't check for leaks even in the final build of their app unless there's an obvious leakage problem. I know I don't, generally.

···

On 8 Feb 2016, at 23:34, Charles Srstka <cocoadev@charlessoft.com> wrote:

On Feb 8, 2016, at 10:20 PM, Michel Fortin via swift-evolution <swift-evolution@swift.org> wrote:

On 8 Feb 2016, at 16:00, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

I’m personally not interested in requiring a model that requires us to throw away a ton of perfectly good RAM to get a “simpler” programming model - particularly one that adds so many tradeoffs.

Me neither. I certainly prefer ARC. Hence one reason I like Swift more than D.

I'd just like to point out that a "fake" GC that stops short of actually freeing objects to instead print a warning on the console listing objects to be collected would be a great help to detect and fix cycles early in development.

Isn’t this essentially what the ‘leaks’ tool is?

--
Michel Fortin
https://michelf.ca

Just to add my thoughts: I definitely don’t want garbage collection back. Firstly, when Apple announced ARC for Objective-C, I couldn’t believe we hadn’t been using it all along; it seemed like such an obvious solution, and one that makes garbage collection largely obsolete.

But more importantly, any time spent on supporting garbage collection is time that could be spent on making ARC even better. Really the only troublesome area right now is detecting cycles, and that is a problem best solved by making the debugging and profiling tools as good as possible, so that issues can be found and corrected with the right combination of weak references etc.

So a -1 from me. While some of the stated disadvantages may be valid, I think ARC is the better mechanism on the whole, and there may be other ways to address any issues it has, which will be harder to do if the team has to dedicate time to a garbage collector.

···

On 9 Feb 2016, at 06:01, Colin Cornaby via swift-evolution <swift-evolution@swift.org> wrote:

I thought I’d add my opinion to this thread even though it’s been well covered in different posts, just to put more weight on the -1 argument…

I spend most of my time dealing with performance-sensitive code, and Swift’s predictability is a very strong pro for it against languages like Java. Java’s garbage collector introduces too much uncertainty and performance instability.

There certainly are tradeoffs. ARC won’t catch things like circular retain loops. But as mentioned, tagged pointers on modern architectures take care of much of the retain/release overhead.

I’ll put it a different way that sums up why I felt the need to reply: ARC has its inconveniences, but moving to garbage collection would likely lead us to abandon any plans to adopt Swift for many of our performance-sensitive projects. If someone solves the “pausing” problem in a garbage-collected language, I’d reconsider. Until then it would take Swift out of consideration for a lot of projects.

On Feb 8, 2016, at 11:56 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

Has there been a garbage collection thread so far? I understand that reference counting vs. garbage collection can be a heated debate, but it might be relevant to have it.

It seems to me that the two principal upsides of reference counting are that destruction is (essentially) deterministic and performance is more easily predicted. However, it comes with many downsides:

- object references are expensive to update
- object references cannot be atomically updated
- heap fragmentation
- the closure capture syntax uses up an unreasonable amount of mindshare just because of [weak self]

Since Swift doesn't expose memory management operations outside of `autoreleasepool`, it seems to me that you could just drop in a garbage collector instead of reference counting and it would work (for most purposes).

Has a GC been considered at all?

Félix



Yes, deterministic destruction is a major feature. Not having to explain what finalizers are (and why they generally shouldn’t be used) is a pretty huge win. Keep in mind that Swift interops with C, so deinit is unavoidable for certain types.

More pointedly, not relying on GC enables Swift to be used in domains that don’t want it - think boot loaders, kernels, real time systems like audio processing, etc.

I agree that you don't want a GC in these, but on the other hand, from what I know of the current RC implementation, I find it hard to believe that it would be much more acceptable.

Why not? ARC provides determinism with a “light” runtime, which is all these really need.

Beyond any cycle-efficiency concerns, interrupting a retain/release operation could cause real carnage, and disabling interrupts while it happens won't necessarily be acceptable either. (Which raises the question: how well does Swift react to signals right now?)

I don’t understand what you mean here. If you’re talking about unix signals, then it is exactly the same as C: you can only do "async signal safe” operations in a signal handler. This means you can’t do much of anything. I don’t see how ARC or GC are related at all to signal handling, since neither would allow you to allocate memory in a signal.

object references cannot be atomically updated

This is true, but Swift currently has no memory model and no concurrency model, so it isn’t clear that this is actually important (e.g. if you have no shared mutable state).

The fact that references can't be updated atomically will necessarily influence any decision that is taken regarding the concurrency model. The concurrency model pre-proposal <https://github.com/apple/swift/blob/master/docs/proposals/Concurrency.rst> already uses it.

That isn’t a pre-proposal. That is a random unendorsed idea, which will not necessarily lead to a specific Swift design.

I will be shocked if the same argument doesn't come up again when these talks actually start.

Sure, but in any case, GC doesn’t solve race conditions or any of the other problems that come up with that form of concurrency, so claiming that it really moves the needle on the concurrency model doesn’t seem particularly useful.

Also, if you take into account the memory model of the hardware, providing atomic pointer updates in a GC setting requires use of synchronizing store instructions that are more expensive than normal stores (at least on most architectures, not including X86).

the closure capture syntax uses up an unreasonable amount of mindshare just because of [weak self]

I think that this specific point is solvable in others ways, but I’ll interpret this bullet as saying that you don’t want to worry about weak/unowned pointers. I completely agree that we strive to provide a simple programming model, and I can see how "not having to think about memory management" seems appealing.

What I actually meant is that a lot of energy is being spent right now on how to address these issues, which wouldn't be issues at all if Swift relied on a garbage collector.

It is hard to read emotion through email so I can’t tell if you intend this to be snarky or not, but I assure you that the effort we are expending on this is a *tiny* fraction of the effort that would be required to successfully switch to GC and make it as good as what we have today.

My opinion is yes: while I think it is silly to micromanage memory, I do think thinking about it some is useful. I think that expressing that intention directly in the code adds value in terms of maintenance of the code over time and communication to other people who work on it.

I don't agree with this. People will very often use `weak` or `unowned` only to break a cycle, not because it really doesn't matter if the object suddenly disappears. The whole weak self capture is a good example of that. The unpopularity of weak references in garbage-collected languages is generally a demonstration that most applications don't need to bother with that.

Sure, I can respect that opinion. As I said, it was just MHO.

I’m personally not interested in requiring a model that requires us to throw away a ton of perfectly good RAM to get a “simpler” programming model - particularly one that adds so many tradeoffs.

The way I have to reason about cycles isn't my favorite part of Swift, but I'm not necessarily in favor of having a GC either. I was very surprised that it hadn't come up at all since RC causes actual issues that are currently discussed. (It's also nice that you can ask and have informed people reply to you.)

Makes sense, thank you for providing a list of perceived advantages of GC (allowing a detailed response), instead of just saying “someone else does this, we should too!” :-)

-Chris

···

On Feb 8, 2016, at 3:13 PM, Félix Cloutier <felixcca@yahoo.ca> wrote:


I should check because it's been a while, but in Objective-C loops that quickly modified references, I've measured objc_retain making up to 15% of the loop's cost.

In my experience it is definitely possible to run into such situations with Swift; however, while a GC might solve that particular issue, you'll just get a different crop of issues from the GC itself.

Also, the compiler has already made huge strides in this area in my experience, doubly so if you use whole module optimization. I don't think this is a big issue for most people and applications.

And ARC has the advantage that this can be identified (by profiling) and worked around pretty well using tools such as:

* inout
* unsafe(unowned)
* UnsafePointer
* Unmanaged/UnsafeReference

depending on the circumstances. It's not trivial and there might be a bit of a learning curve but ultimately it's very predictable and abstractable. Working around GC issues seems a lot trickier to me.
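For instance, the `Unmanaged` route might look like this in a profiled hot loop. This is a sketch only; whether it actually beats plain access depends on what the ARC optimizer already elides, so measure first:

```swift
// Sketch: sidestepping retain/release traffic in a hot loop with `Unmanaged`.
// This is only sound while `counter` is known to be kept alive elsewhere.
final class Counter {
    var value = 0
}

let counter = Counter()                       // strong reference keeps it alive
let unmanaged = Unmanaged.passUnretained(counter)

for _ in 0..<1_000_000 {
    // takeUnretainedValue() hands back the object without an extra
    // unbalanced retain, avoiding per-iteration reference-count churn
    unmanaged.takeUnretainedValue().value += 1
}
print(counter.value)    // 1000000
```

The usual caveat applies: `passUnretained`/`takeUnretainedValue` trades safety for speed, so it belongs behind a profiler finding, not in code written that way by default.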

I also think that, for example, Rust's 'borrowed references' would be a big help in this area – and those have been discussed on this list previously, with the Swift team mentioning that they would be interested in exploring this direction.

- Janosch

···

On 08 Feb 2016, at 21:39, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

My understanding is that most of that cost comes from tracking references in a global hash map, which I can't imagine being particularly kind to memory use or cache performance either.

On 8 Feb 2016, at 15:11, Joe Groff <jgroff@apple.com> wrote:

On Feb 8, 2016, at 11:56 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

Has there been a garbage collection thread so far? I understand that reference counting vs. garbage collection can be a heated debate, but it might be relevant to have it.

It seems to me that the two principal upsides of reference counting are that destruction is (essentially) deterministic and performance is more easily predicted. However, it comes with many downsides:

- object references are expensive to update
- object references cannot be atomically updated
- heap fragmentation
- the closure capture syntax uses up an unreasonable amount of mindshare just because of [weak self]

Since Swift doesn't expose memory management operations outside of `autoreleasepool`, it seems to me that you could just drop in a garbage collector instead of reference counting and it would work (for most purposes).

While true in theory, code that relies on destructors to clean up unmanaged resources does not port cleanly to GC as-is in practice. GC of course has its own drawbacks. Heap scanning is expensive, thrashing cache and burning battery. GCs also require a higher memory ceiling proportional to the amount of heap actively being used, and GCs suitable for interactive use tend to increase responsiveness at the cost of higher memory use, which has its own second-order energy costs—devices need more RAM, and spend more time swapping or killing, and thus need bigger batteries to refresh all that RAM. ARC interoperates better with unmanaged resources, both non-memory resources like sockets and files and also C-level memory resources. The ARC optimizer and Swift's calling convention also optimize toward reclaiming resources closer to their last use, keeping resource usage low.

-Joe


My understanding (from what I’ve seen in the literature, but I am in no way an expert) is that RC has worse worst-case behaviour than GC regarding pauses.

Also, arguments based on RAM use (and perhaps even battery use), like all hardware-resource-based arguments, have always been proven wrong in the past, as hardware has evolved to offer ever more.

The usual argument is that RAM is cheap, while a programmer’s time, especially debugging time, is expensive.

I find it interesting that the commonly accepted wisdom is that GC is the right thing to do. To quote but one blog post I’ve read:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

So I’d be interested in understanding why we are just about alone on our Apple island with our opinion that RC is better than GC. Are they all collectively wrong in the rest of the universe? (Not that I condone argument from majority.)

I can only state that my experience with GC has been mostly with Macintosh Common Lisp, a rather long time ago, and I really did love it.

So for me, GC would be a +1, but not a very strong one, as I find RC adequate.

Jean-Denis

···

On 09 Feb 2016, at 07:01, Colin Cornaby via swift-evolution <swift-evolution@swift.org> wrote:

I thought I’d add my opinion to this thread even though it’s been well covered in different posts, just to put more weight on the -1 argument…

I spend most of my time dealing with performance-sensitive code, and Swift’s predictability is a very strong pro for it against languages like Java. Java’s garbage collector introduces too much uncertainty and performance instability.

There certainly are tradeoffs. ARC won’t catch things like circular retain loops. But as mentioned, tagged pointers on modern architectures take care of much of the retain/release overhead.

I’ll put it a different way that sums up why I felt the need to reply: ARC has its inconveniences, but moving to garbage collection would likely lead us to abandon any plans to adopt Swift for many of our performance-sensitive projects. If someone solves the “pausing” problem in a garbage-collected language, I’d reconsider. Until then it would take Swift out of consideration for a lot of projects.

On Feb 8, 2016, at 11:56 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

Has there been a garbage collection thread so far? I understand that reference counting vs. garbage collection can be a heated debate, but it might be relevant to have it.

It seems to me that the two principal upsides of reference counting are that destruction is (essentially) deterministic and performance is more easily predicted. However, it comes with many downsides:

- object references are expensive to update
- object references cannot be atomically updated
- heap fragmentation
- the closure capture syntax uses up an unreasonable amount of mindshare just because of [weak self]

Since Swift doesn't expose memory management operations outside of `autoreleasepool`, it seems to me that you could just drop in a garbage collector instead of reference counting and it would work (for most purposes).

Has a GC been considered at all?

Félix



It does break the make-it-easy-to-debug rule, though. In any non-trivial system worked on by a moderate-to-large team, manually hunting cyclic references and breaking them appropriately without introducing bugs can still be very painful. GC is one of those technologies where you trade the efficiency you mention for a safer, easier-to-use memory model in which you spend a lot less time debugging for correctness.

There are some systems where memory safety, type safety, and thread safety rule over pure performance concerns; that is part of why C# and Java still have so much traction. At the same time, the small tax of reference counting is still greater than zero, even with all the accumulated knowledge of how to exploit the many ways it lets you work around it; see why C++ still has an enormous presence on all platforms where performance really matters, games on iOS included.

Still, even if GC never comes to Swift, not even as a cycle detector restricted to Debug builds on the iOS Simulator (Address Sanitizer does set a precedent), it would already be a much, much better overall win if spotting cycles and hunting them down in Xcode became easier.

···

Sent from my iPhone

On 10 Feb 2016, at 06:14, Thorsten Seitz via swift-evolution <swift-evolution@swift.org> wrote:

Strong -1 for GC.

I think the advantages of ARC over GC as given by Chris, Joe and Dave with regards to energy efficiency, memory efficiency, caching behavior, predictability, value types with copy on write and declarative memory management on the object graph level are essential and indispensable.

And yes, I see it as an advantage having to think about memory management dependencies on the object graph level and deciding how to break cycles. I think this leads to much cleaner object models.
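A minimal example of that kind of object-graph-level decision, with illustrative `Parent`/`Child` names: the owner holds strong references, and the back-reference is declared `weak` so no cycle can form:

```swift
// Sketch: expressing ownership in the object graph. The parent owns its
// children; each child's back-reference is weak, so no cycle forms.
final class Parent {
    var children: [Child] = []
    func adopt(_ child: Child) {
        child.parent = self
        children.append(child)
    }
}

final class Child {
    weak var parent: Parent?     // weak: the child does not own its parent
}

var parent: Parent? = Parent()
let child = Child()
parent?.adopt(child)
parent = nil                     // Parent deallocates; child.parent becomes nil
assert(child.parent == nil)
```

Had `parent` been a strong property, dropping the last external reference to `Parent` would have leaked both objects; the `weak` annotation records the ownership decision right in the type.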

-Thorsten

Am 08.02.2016 um 22:00 schrieb Chris Lattner via swift-evolution <swift-evolution@swift.org>:

On Feb 8, 2016, at 11:56 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:
Has there been a garbage collection thread so far? I understand that reference counting vs. garbage collection can be a heated debate, but it might be relevant to have it.

Technically speaking, reference counting is a form of garbage collection, but I get what you mean. Since there are multiple forms of GC, I'll assume that you mean a generational mark and sweep algorithm like you’d see in a Java implementation.

It seems to me that the two principal upsides of reference counting are that destruction is (essentially) deterministic and performance is more easily predicted.

Yes, deterministic destruction is a major feature. Not having to explain what finalizers are (and why they generally shouldn’t be used) is a pretty huge win. Keep in mind that Swift interops with C, so deinit is unavoidable for certain types.
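A small sketch of what that determinism buys with a C-level resource (the `TempFile` wrapper is illustrative, not an existing API):

```swift
import Foundation

// Sketch: wrapping a C-level resource so ARC's deterministic `deinit`
// releases it at a predictable point (no finalizer timing to reason about).
final class TempFile {
    let path: String
    private let handle: UnsafeMutablePointer<FILE>

    init?(path: String) {
        guard let h = fopen(path, "w") else { return nil }
        self.path = path
        self.handle = h
    }

    func write(_ text: String) {
        fputs(text, handle)
    }

    deinit {
        fclose(handle)   // runs exactly when the last strong reference goes away
        unlink(path)
    }
}

do {
    let f = TempFile(path: "/tmp/demo.txt")
    f?.write("hello")
}   // deinit has run here; the file is closed and removed
```

With a finalizer-based GC there is no point in the program where you can say "the file is closed now"; with ARC it is the closing brace.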

More pointedly, not relying on GC enables Swift to be used in domains that don’t want it - think boot loaders, kernels, real time systems like audio processing, etc.

We have discussed in the past using hybrid approaches like introducing a cycle collector, which runs less frequently than a GC would. The problem with this is that if you introduce a cycle collector, code will start depending on it. In time you end up with some libraries/packages that work without GC, and others that leak without it (the D community has relevant experience here). As such, we have come to think that adding a cycle collector would be bad for the Swift community in the large.

However, it comes with many downsides:

object references are expensive to update

Most garbage collectors have write barriers, which execute extra code when references are updated. Most garbage collectors also have safe points, which means that extra instructions get inserted into loops.
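The write-barrier cost Chris mentions can be sketched conceptually in Swift (all names here are illustrative; a real collector implements this inside the runtime, not in user code):

```swift
// Conceptual sketch of a GC write barrier: every reference store also
// records the mutated object in a "remembered set" so the collector can
// rescan it on its next pass. This bookkeeping is the extra code that
// runs on *every* reference update.

final class Node {
    var next: Node?
}

var rememberedSet = Set<ObjectIdentifier>()

func storeReference(into object: Node, newValue: Node?) {
    // The barrier: extra work accompanying the actual store.
    rememberedSet.insert(ObjectIdentifier(object))
    object.next = newValue
}

let head = Node()
storeReference(into: head, newValue: Node())
// rememberedSet now contains `head`, ready for the collector to rescan.
```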

object references cannot be atomically updated

This is true, but Swift currently has no memory model and no concurrency model, so it isn’t clear that this is actually important (e.g. if you have no shared mutable state).

heap fragmentation

This is at best a tradeoff depending on what problem you’re trying to solve (e.g. better cache locality or smaller max RSS of the process). One thing that I don’t think is debatable is that the heap compaction behavior of a GC (which is what provides the heap fragmentation win) is incredibly hostile for cache (because it cycles the entire memory space of the process) and performance predictability.

Given that GCs use a lot more memory than ARC systems do, it isn’t clear what you mean by GCs winning on heap fragmentation.

the closure capture syntax uses up an unreasonable amount of mindshare just because of [weak self]

I think that this specific point is solvable in others ways, but I’ll interpret this bullet as saying that you don’t want to worry about weak/unowned pointers. I completely agree that we strive to provide a simple programming model, and I can see how "not having to think about memory management" seems appealing.

On the other hand, there are major advantages to the Swift model. Unlike MRR, Swift doesn’t require you to micromanage memory: you think about it at the object graph level when you’re building out your types. Compared to MRR, ARC has moved memory management from being imperative to being declarative. Swift also puts an emphasis on value types, so certain problems that you’d see in languages like Java are reduced.

That said, it is clear that it takes time and thought to use weak/unowned pointers correctly, so the question really becomes: does reasoning about your memory at the object graph level and expressing things in a declarative way contribute positively to your code?

My opinion is yes: while I think it is silly to micromanage memory, I do think thinking about it some is useful. I think that expressing that intention directly in the code adds value in terms of maintenance of the code over time and communication to other people who work on it.
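Reasoning at the object-graph level typically looks like this in Swift (a minimal sketch; `Parent` and `Child` are illustrative names):

```swift
// Sketch: declaring ownership in the object graph. The parent strongly
// owns its children; each child holds a weak back-reference, so there is
// no retain cycle and nothing leaks.

final class Parent {
    var children: [Child] = []
}

final class Child {
    weak var parent: Parent?   // weak: does not keep the parent alive
    init(parent: Parent) { self.parent = parent }
}

var parent: Parent? = Parent()
parent!.children.append(Child(parent: parent!))

weak var observer = parent     // watch the parent without retaining it
parent = nil
// Because the back-reference is weak, parent and child are both freed
// immediately here; `observer` is now nil.
```

Marking the back-reference `weak` is the declarative statement of intent: the child does not own its parent. With a strong back-reference instead, the pair would form a cycle and leak under ARC.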

Since Swift doesn't expose memory management operations outside of `autoreleasepool`, it seems to me that you could just drop in a garbage collector instead of reference counting and it would work (for most purposes).

Has a GC been considered at all?

GC also has several *huge* disadvantages that are usually glossed over: while it is true that modern GC's can provide high performance, they can only do that when they are granted *much* more memory than the process is actually using. Generally, unless you give the GC 3-4x more memory than is needed, you’ll get thrashing and incredibly poor performance. Additionally, since the sweep pass touches almost all RAM in the process, they tend to be very power inefficient (leading to reduced battery life).

I’m personally not interested in requiring a model that forces us to throw away a ton of perfectly good RAM to get a “simpler” programming model, particularly one that adds so many tradeoffs.

-Chris

_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution

Based on this as the summary of arguments I read so far on this thread, I tend to increase my +1. Here is what I read:

- arguments regarding energy efficiency, memory efficiency, or more generally hardware limitations seem to me rather short-sighted, as in “640 KB of RAM should be enough for all times to come” (IBM's original design decision for the PC). Is Swift a language designed for today's hardware? Or is it supposed to outlast it a wee bit?

- arguments that say that managing [some aspect of] memory is good because it will lead to better programs could equally be brought forward against any optimisation. With or without GC, with or without RC, you can expect many programmers to make bad design decisions for lack of understanding of the technology, hardware or software. I really question whether making the programmer responsible for managing the object graph is “essential”. This thread has claimed that, but not supported it.

- arguments based on claims (better efficiency and predictability come to mind) that have elsewhere been debunked as myths in comparisons with state-of-the-art GC systems.

- arguments that seem to me a bit disingenuous, such as blaming GC for Java’s failure on the desktop

- arguments which I am not competent enough to understand, e.g. the write and read barriers topic, on which I would like to get better educated

Of course, the ideal would be a language where GC is an option. I am not sure whether that is possible, though the one argument I saw against it, namely that libraries would be compatible with only one of the options, could be solved with the fat-binary idea. We used to have libraries compatible with both PowerPC and Intel processors, and that sounds a lot more complex.

Overall, my opinion at this time is that precluding GC is too short-sighted, and any decision made today should not be incompatible with the possible introduction of GC tomorrow. While RC is adequate, GC is better in the long run, simply because higher-level constructs make [good] programmers more productive, so it would be a good longer-term goal. I have yet to meet a more productive environment than what the Lisp machines offered in the 90’s. While I do not claim that GC was responsible for that, it was certainly part of the deal. Or perhaps my memory is fading, making them better than they actually were; that’s possible too.

Jean-Denis

···

On 10 Feb 2016, at 07:14, Thorsten Seitz via swift-evolution <swift-evolution@swift.org> wrote:

Strong -1 for GC.

I think the advantages of ARC over GC as given by Chris, Joe and Dave with regards to energy efficiency, memory efficiency, caching behavior, predictability, value types with copy on write and declarative memory management on the object graph level are essential and indispensable.

And yes, I see it as an advantage having to think about memory management dependencies on the object graph level and deciding how to break cycles. I think this leads to much cleaner object models.

-Thorsten

To clarify: in the example I gave we do not have to declare self as weak
(completion handlers do not require this), but it was declared this way so
as not to retain the view controller (or view) while the user is already
seeing something else on the screen.

-Van

···

On Tue, Feb 9, 2016 at 3:04 AM, Vanderlei Martinelli <vmartinelli@alecrim.com> wrote:

Particularly I think the strong/weak dance, after understood and properly
applied, is a benefit and not a bad thing.

We can write things like:

doSomeLongRunningTaskThatCannotBeCancelled { [weak self] someText in
    self?.someLabel.text = someText
}

This is a very common case in Cocoa [Touch], I think. Using GC, how could
we say: “hey, if the result comes in and I'm no longer here, just ignore
it, okay?”

I really like that RC and ARC are alive in Swift. Perhaps in the future we
have something better, yet I would still like to be able to write code like
the above. About GC: I do not know any good reasons to bring it to Swift.

-Van

On Tue, Feb 9, 2016 at 2:54 AM, Paul Ossenbruggen via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 8, 2016, at 1:00 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

   - the closure capture syntax uses up an unreasonable amount of
   mindshare just because of [weak self]

I think that this specific point is solvable in others ways, but I’ll
interpret this bullet as saying that you don’t want to worry about
weak/unowned pointers. I completely agree that we strive to provide a
simple programming model, and I can see how "not having to think about
memory management" seems appealing.

I actually like RC, and don’t find it particularly problematic except in
this one case…the closure case. If we could solve this one problem then I
think the rest of it is fine. I don’t mind remembering to put a weak
reference in for back pointers etc. The benefits of RC are great, and the
introduction of ARC was a great mental burden lifted of developer’s minds.
Just trying to explain and remember all the tricky rules around the closure
capture would be nice to solve and would love to discuss that rather than
garbage collection.

- Paul

I don't think it's just the Apple island—mark-and-sweep GC languages have consistently been tried and failed on the client side. Java applets and client-side apps died quickly; Microsoft tried and failed several times to reinvent their stack on top of .NET, and has since retreated to a refcounting-based foundation for WinRT; Apple too has tried to bring GC to the client several times, with the Java bridge, MacRuby, and ObjC GC. Javascript in the browser is an exception, though even there people are trying really hard to supplant it with a more primitive foundation like WebAssembly or asm.js.

-Joe

···

On Feb 9, 2016, at 6:35 AM, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org> wrote:

My understanding (from what I’ve seen in the literature, but I am in no way an expert) is that RC has worse worst-case behaviour than GC regarding pauses.

Also, arguments regarding RAM use (and perhaps even battery use), like all hardware-resource-based arguments, have always been proven wrong over time, as hardware has evolved to offer ever more.

The usual argument is RAM is cheap, programmer’s time, especially debugging time, is expensive.

I find it interesting that the commonly accepted wisdom is that GC is the right thing to do. To quote but one blog post I’ve read:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

So I’d be interested in understanding why we are just about alone on our Apple island with our opinion that RC is better than GC. Are they all collectively wrong in the rest of the universe? (Not that I condone argument from majority.)

I can only state that my experience with GC has been mostly with Macintosh Common Lisp a rather long time ago, and I did really love it.

So for me, GC would be a +1, but not a very strong one, as I find RC adequate.

When programming C and C++, I had a strong requirement that my testing tools were clean, so that I could run tools similar to leaks over my unit/integration test suite.

-DW

···

On Feb 9, 2016, at 5:16 AM, Michel Fortin via swift-evolution <swift-evolution@swift.org> wrote:

Le 8 févr. 2016 à 23:34, Charles Srstka <cocoadev@charlessoft.com> a écrit :

On Feb 8, 2016, at 10:20 PM, Michel Fortin via swift-evolution <swift-evolution@swift.org> wrote:

Le 8 févr. 2016 à 16:00, Chris Lattner via swift-evolution <swift-evolution@swift.org> a écrit :

I’m personally not interested in requiring a model that forces us to throw away a ton of perfectly good RAM to get a “simpler” programming model, particularly one that adds so many tradeoffs.

Me neither. I certainly prefer ARC. Hence one reason I like Swift more than D.

I'd just like to point out that a "fake" GC that stops short of actually freeing objects to instead print a warning on the console listing objects to be collected would be a great help to detect and fix cycles early in development.

Isn’t this essentially what the ‘leaks’ tool is?

I think so. Except you rarely use the `leaks` tool early in development. If this was baked into debug builds (with a way to opt out, of course), retain cycles would be caught much earlier in the development process and you'd see your leaks almost immediately after writing the offending code.

In fact, I'm pretty sure most people here don't check for leaks even in the final build of their app unless there's an obvious leakage problem. I know I don't, generally.
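The "fake GC" idea could be sketched as a debug-only registry of live objects, dumped on demand to surface suspects (a hypothetical illustration; `LeakTracker`, `register`, and `unregister` are made-up names, not a real API):

```swift
// Hypothetical sketch: a debug-only registry that tracks live instances
// so suspected leaks can be printed on demand, without ever freeing
// anything itself. Illustrative names throughout.

final class LeakTracker {
    static var liveObjects: [ObjectIdentifier: String] = [:]

    static func register(_ object: AnyObject) {
        liveObjects[ObjectIdentifier(object)] = String(describing: type(of: object))
    }

    static func unregister(_ object: AnyObject) {
        liveObjects.removeValue(forKey: ObjectIdentifier(object))
    }

    static func dumpSuspectedLeaks() {
        for typeName in liveObjects.values {
            print("possible leak: \(typeName)")
        }
    }
}

final class ViewModel {
    init() { LeakTracker.register(self) }
    deinit { LeakTracker.unregister(self) }
}

var vm: ViewModel? = ViewModel()
vm = nil                      // balanced init/deinit: nothing to report
let leaked = ViewModel()      // still alive when we dump
LeakTracker.dumpSuspectedLeaks()
```

A real implementation would hook allocation in the runtime rather than requiring each class to opt in, but even this much would catch the common case of a view controller that never deallocates.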

--
Michel Fortin
https://michelf.ca

<snip>

I find it interesting that the commonly accepted wisdom is that GC is the right thing to do. To quote but one blog post I’ve read:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

I looked up that article, and it has several logical fallacies, including the obvious one of treating reference counting as if it were not a form of GC!

In addition, the point that using a generational GC saves you from any cost for object allocation is false, and the claim that web browsers could not do object compaction because they are written in C/C++ is also false (JNI is a perfectly fine, if aged, example of interacting with a compacting GC from C++ code). There is in fact nothing preventing one from safely doing compaction in a reference-counted system, nor from using a reference-counted system as part of a JavaScript implementation (although you would need cycle elimination via tracing as well).

While I didn’t look further, the article is probably correct in terms of the benchmark being complained about but for more meta reasons. It is very difficult to compare GC performance semantics, fundamentally because their designs prioritize different aspects of performance. For instance, mark and sweep GCs generally can fall back to having long pauses as all of memory is checked - which is why modern implementations have generational GCs and concurrent passes to attempt to reduce the frequency of “stop the world” GC pauses (anecdotally, I’ve actually seen a 12 minute stop-the-world GC pause in a production Java 7 application without proper tuning - and some of that tuning was simply setting a lower ceiling of memory usage for the Java process such that when it eventually hit a stop-the-world GC event, it would have less memory to check).

For reference counting, the cost of GC is amortized across usage, meaning you cannot get ‘stop the world’ pauses. Compacting GC systems will require objects to go through write or even read barriers, so such systems don’t operate for ‘free’ at execution time, and likewise still have an effect on things like loop performance. There have been clever attempts to solve the needs of in-code barrier checks using the MMU and page faulting, but also push-back on the trade-offs these have as well. Nobody has ‘won’ the title of ideal garbage collector implementation yet, and I suggest that such concepts as ideal are simply not possible.

So I’d be interested in understanding why we are about alone on our Apple island with our opinion that RC is better than GC? Are they all collectively wrong in the rest of the universe? (not that I condone argument from majority)

The promise of garbage collection systems in general is that they eliminate the need to think about object lifetime and ownership to produce safe code. With reference counted GC, this is obviously not true in the cases where you can have cycles. With tracing GCs like compacting or mark and sweep, you get closer to achieving this benefit, but with the side effect that a significant source of memory and performance impacts becomes outside of your control.

There is apparently a difference in opinion of whether such a trade-off is worth it ;-)

Objective C had a tracing (but I believe non-compacting) garbage collector for several years; I don’t know if the reasoning behind replacing it with ARC has been published. I can only guess that one of the reasons was the complexity of supporting two GC mechanisms - which, since Swift has to maintain compatibility with reference-counted Objective C code, would be one of the concerns for adding a tracing GC to the language.

-DW

···

On Feb 9, 2016, at 7:35 AM, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org> wrote: