What about garbage collection?

I’d disagree with that, for one rather important reason; the computers that GC won on had virtual memory. On mobile, when your RAM runs out, it’s out. So RAM constraints are somewhat more serious on mobile.

Charles

···

On Feb 9, 2016, at 8:35 AM, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org> wrote:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

Android lagged behind iOS for years in smoothness and responsiveness on comparable hardware — my understanding is that this is largely because of GC pauses.

I love GC. I’d love GC in Swift, too. But it really is a real performance problem in _interactive_ applications (much less of an issue for server side stuff), on memory-constrained devices.

— Radek

···

On 09 Feb 2016, at 15:35, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org> wrote:

My understanding (from what I’ve seen in the literature, but I am in no way an expert) is that RC has worse worst-case behaviour than GC regarding pauses.

Also, arguments based on RAM use (and perhaps even battery use), like all hardware-resource arguments, have always been proven wrong in the past, as hardware has evolved toward more and better.

The usual argument is that RAM is cheap, while programmer time, especially debugging time, is expensive.

I find it interesting that the commonly accepted wisdom is that GC is the right thing to do. To quote but one blog post I’ve read:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

So I’d be interested in understanding why we are just about alone on our Apple island with our opinion that RC is better than GC. Are they all collectively wrong in the rest of the universe? (Not that I condone argument from majority.)

I can only state that my experience with GC has been mostly with Macintosh Common Lisp, a rather long time ago, and I really did love it.

So for me, GC would be a +1, but not a very strong one, as I find RC adequate.

Jean-Denis

On 09 Feb 2016, at 07:01, Colin Cornaby via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

I thought I’d add my opinion to this thread even though it’s been well covered in different posts, just to put more weight on the -1 argument…

I spend most of my time dealing with performance-sensitive code, and Swift’s predictability is a very strong pro for it against languages like Java. Java’s garbage collector introduces too much uncertainty and instability into performance.

There certainly are tradeoffs. ARC won’t catch things like circular retain loops. But as mentioned, tagged pointers on the modern architecture take care of the retain/release overhead.
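
Since the thread keeps coming back to retain cycles, here is a minimal, self-contained sketch of the problem and the usual `weak` fix (the class names are illustrative, not from any poster's code):

```swift
var deinitLog: [String] = []

final class Owner {
    var child: Child?
    deinit { deinitLog.append("Owner") }
}

final class Child {
    // A strong reference back to Owner would form a cycle that ARC never
    // reclaims; `weak` breaks the cycle.
    weak var owner: Owner?
    deinit { deinitLog.append("Child") }
}

func makePair() {
    let owner = Owner()
    let child = Child()
    owner.child = child
    child.owner = owner
}   // both locals go out of scope here; with `weak`, both objects deinit

makePair()
```

Change `weak var owner` to a strong `var owner` and neither deinit ever runs; that is exactly the leak ARC cannot catch on its own.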

I’ll put it a different way that sums up why I felt the need to reply: ARC has its inconveniences, but moving to garbage collection would likely lead us to abandon any plans to adopt Swift for many of our performance-sensitive projects. If someone solves the “pausing” problem in a garbage-collected language, I’d reconsider. But until then it would take Swift out of consideration for a lot of projects.

On Feb 8, 2016, at 11:56 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

Has there been a garbage collection thread so far? I understand that reference counting vs. garbage collection can be a heated debate, but it might be relevant to have it.

It seems to me that the two principal upsides of reference counting are that destruction is (essentially) deterministic and performance is more easily predicted. However, it comes with many downsides:

- object references are expensive to update
- object references cannot be atomically updated
- heap fragmentation
- the closure capture syntax uses up an unreasonable amount of mindshare just because of [weak self]
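
For readers outside the Apple ecosystem, the `[weak self]` complaint in the last point refers to capture lists like the one below (a toy sketch, not any particular API):

```swift
final class Downloader {
    private(set) var completedCount = 0
    private var onFinish: (() -> Void)?

    func start() {
        // Without `[weak self]`, the stored closure would retain `self`
        // while `self` retains the closure: a retain cycle. The capture
        // list avoids it, at the cost of the optional unwrapping below.
        onFinish = { [weak self] in
            guard let self = self else { return }
            self.completedCount += 1
        }
    }

    func finish() { onFinish?() }
}

let downloader = Downloader()
downloader.start()
downloader.finish()
```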

Since Swift doesn't expose memory management operations outside of `autoreleasepool`, it seems to me that you could just drop in a garbage collector instead of reference counting and it would work (for most purposes).

Has a GC been considered at all?

Félix

_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org <mailto:swift-evolution@swift.org>
https://lists.swift.org/mailman/listinfo/swift-evolution

Counter-pedantry: Reference counting *with an automatic cycle collector* is GC. ARC-style reference counting is not GC.

···

On Feb 9, 2016, at 9:45 AM, David Waite via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 9, 2016, at 7:35 AM, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

I find it interesting that the commonly accepted wisdom is that GC is the right thing to do. To quote but one blog post I’ve read:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

I looked up that article and it has several logical fallacies, including the obvious one that reference counting is not a form of GC!

--
Greg Parker gparker@apple.com <mailto:gparker@apple.com> Runtime Wrangler

I generally agree that refcounting was not chosen out of laziness by the language and tools developers at Apple. But if we are citing the failures of GC for UI-heavy interactive apps, I do disagree with one example, though that could probably only be sorted out in a huge thread with better-informed people: .NET vs. WinRT was not a battle decided purely on performance and merits; the politics between the two divisions that developed each alternative have to be considered (and lo and behold, the solution pushed by the Windows team won ;)).

Also... we are ignoring the large green electronic elephant in the room that uses Java and does not do a bad job with very similarly priced handsets... :). Sorry for the noise, though, and thanks for all these informative posts :).

···

Sent from my iPhone

On 9 Feb 2016, at 17:23, Joe Groff via swift-evolution <swift-evolution@swift.org> wrote:

Microsoft tried and failed several times to reinvent their stack on top of .NET, and has since retreated to a refcounting-based foundation for WinRT

This will then be more fun when we get page file support, as virtual memory is already there.

···

Sent from my iPhone

On 9 Feb 2016, at 18:18, Charles Srstka via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 9, 2016, at 8:35 AM, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org> wrote:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

I’d disagree with that, for one rather important reason; the computers that GC won on had virtual memory. On mobile, when your RAM runs out, it’s out. So RAM constraints are somewhat more serious on mobile.

Charles

And this is replaced with analyzing performance profiles to understand why sporadic slowdowns are happening. Then you find that too many objects are being allocated, causing the GC sweep to take longer than is acceptable. Now you need to try to preempt the GC sweep by triggering it sooner or pausing it altogether. The other option is to fix what is most likely an architectural issue with the allocation graph. One way to do that is to create pools of objects that no longer have real lifetimes as far as the GC is concerned, because they are always alive and just marked as inactive.
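
The object-pool workaround described above can be sketched in Swift roughly like this (a toy pool; the real Visual Studio code was C# and far more involved):

```swift
// Objects in the pool stay permanently alive as far as the collector (or,
// here, ARC) is concerned; "freeing" one just marks it reusable, so no
// allocation or collection work happens on the hot path.
final class Particle {
    var x = 0.0, y = 0.0
    func reset() { x = 0; y = 0 }
}

final class ParticlePool {
    private var free: [Particle] = []
    private(set) var totalAllocated = 0

    func acquire() -> Particle {
        if let recycled = free.popLast() { return recycled }
        totalAllocated += 1
        return Particle()
    }

    func release(_ p: Particle) {
        p.reset()
        free.append(p)
    }
}

let pool = ParticlePool()
let first = pool.acquire()
pool.release(first)
let second = pool.acquire()   // reuses the same instance, no new allocation
```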

That's just a sample of the real-world performance analysis I used to do on the Visual Studio team when we started using a bunch of managed code (C#) in it.

Another is simply not understanding what kinds of things hold onto objects so that they are never collected. C# has a problem (or did, at least when I was still using it) with this in its eventing model. If you forget to unregister the events, then good luck getting that memory back. Or if one thing somewhere in your code holds a reference to your giant data structure: nope, not going to get collected.
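
The C# eventing leak has a direct Swift analogue: any long-lived object that stores handler closures will pin every listener those closures capture strongly. A sketch (the names are made up):

```swift
var listenerDeinitCount = 0

final class EventSource {
    // Lives for the whole program, like a static C# event.
    private var handlers: [() -> Void] = []
    func subscribe(_ handler: @escaping () -> Void) {
        handlers.append(handler)
    }
}

final class Listener {
    func onEvent() { /* react to the event */ }
    deinit { listenerDeinitCount += 1 }
}

let source = EventSource()
do {
    let listener = Listener()
    // The closure captures `listener` strongly; until the handler is
    // removed, the listener can never be deallocated. "Unregister or
    // leak" is exactly the failure mode described above.
    source.subscribe { listener.onEvent() }
}
// `listener` is out of lexical scope, but still alive inside `source`.
```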

The point is, even with a GC, you still need to be conscious about what's going on.

It's simply not been my experience that debugging the types of issues you run into with a GC is significantly easier than in a non-GC system. In fact, in many ways it is harder, as oftentimes the solution is to architect around the limitations of the GC.

-David

···

On Feb 9, 2016, at 11:50 PM, Goffredo Marocchi via swift-evolution <swift-evolution@swift.org> wrote:

It does break the "make it easy to debug" rule, though. In any non-trivial system worked on by a moderate-to-large team, manually hunting cyclic references and appropriately breaking them without introducing bugs can still be very painful. GC is one of those technologies where you trade the efficiency you mention for a safer and easier-to-use memory model, where you spend a lot less time debugging for correctness.

Taking this as a summary of the arguments I have read so far in this thread, I tend to increase my +1. Here is what I read:

- arguments regarding energy efficiency, memory efficiency, or more generally hardware limitations seem to me rather short-sighted, as in “640 KB of RAM should be enough for all times to come” (the original design decision by IBM for the PC). Is Swift a language designed for today's hardware? Or is it supposed to outlast it a wee bit?

Planning for the future is a Good Thing, but if Swift isn’t useful today, it won’t last long enough for “the future” to get here. There are limitations in the current hardware (VM thrashing the NAND in iDevices, IIRC) that preclude other options.

Of course, an ideal would be a language where GC is an option. I am not sure whether that is possible, though the one argument I saw against it, namely that libraries would be compatible with only one of the options, could be solved with the fat-binary idea. We used to have libraries compatible with both PowerPC and Intel processors; that sounds a lot more complex.

We did it once before… Obj-C switched from manual memory management to ARC back in Xcode 4.2. Personally, I’d *much* rather stick with ARC for now, and worry about switching to some other scheme a few years down the road when supporting devices with write limitations starts becoming less of an issue.

- Dave Sweeris

···

On Feb 10, 2016, at 00:57, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org> wrote:

- arguments regarding energy efficiency, memory efficiency, or more generally hardware limitations seem to me rather short-sighted, as in “640 KB of RAM should be enough for all times to come” (the original design decision by IBM for the PC). Is Swift a language designed for today's hardware? Or is it supposed to outlast it a wee bit?

For the foreseeable future, energy efficiency is a very real and important factor. Battery technology (as compared to other technologies) is not moving quickly, so being efficient is important (and is one of the items listed as a plus for iDevices over the competition). The language (even though open-sourced) is primarily focused on programming for the desktop and iDevices. A majority of the computers Apple sells rely on batteries (laptops, iDevices, etc.).

Java-style GC has the big advantage that incompetent programmers don’t have to worry about memory management (and they make up a majority of programmers). And yes, I have seen a case where someone wrote a (Microsoft-based) server component with so many circular references, which people assumed would just get cleaned up, that to make it work they had to write another monitor application to watch for the application’s memory usage growing above a certain size due to leaks, and then force a “reboot” of it :p

In the non-trivial systems I have worked on, which require loading complex object graphs (or parts thereof) from a database, I have to think about cyclic references and ownership as well, to allow loading parts of object graphs (for performance and memory reasons, and we are talking about a server with lots of RAM here) without accidentally loading the whole graph. That's much the same problem, and an important design aspect of the domain model IMO.

-Thorsten

···

Am 10.02.2016 um 08:50 schrieb Goffredo Marocchi <panajev@gmail.com>:

It does break the make it easy to debug rule though. In any non trivial system worked on by a moderate to large team, manually hunting cyclic references and appropriately breaking them down without introducing bugs can still be very painful. GC is one of those technologies where you trade the efficiency you mention for a safer and easier to use memory model where you would spend a lot less time to debug for correctness.

There are some systems where memory safety, type safety, and thread safety rule over pure performance concerns... reasons why C# and Java still have so much traction. Just as the small tax of reference counting (even with so much accumulated knowledge about how to exploit the many ways it lets you go about things) is still greater than zero... see why C++ still has an enormous presence on all platforms where performance really matters... games on iOS included.

Still, even if GC never comes to Swift, not even a cycle detector restricted to, say, Debug builds on the iOS Simulator (Address Sanitizer sets a precedent here), if spotting cycles and hunting them down with Xcode becomes easier, then that is a much, much better overall win.

Sent from my iPhone

On 10 Feb 2016, at 06:14, Thorsten Seitz via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

Strong -1 for GC.

I think the advantages of ARC over GC as given by Chris, Joe and Dave with regards to energy efficiency, memory efficiency, caching behavior, predictability, value types with copy on write and declarative memory management on the object graph level are essential and indispensable.

And yes, I see it as an advantage having to think about memory management dependencies on the object graph level and deciding how to break cycles. I think this leads to much cleaner object models.

-Thorsten

Am 08.02.2016 um 22:00 schrieb Chris Lattner via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>>:

On Feb 8, 2016, at 11:56 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

Has there been a garbage collection thread so far? I understand that reference counting vs. garbage collection can be a heated debate, but it might be relevant to have it.

Technically speaking, reference counting is a form of garbage collection, but I get what you mean. Since there are multiple forms of GC, I'll assume that you mean a generational mark and sweep algorithm like you’d see in a Java implementation.

It seems to me that the two principal upsides of reference counting are that destruction is (essentially) deterministic and performance is more easily predicted.

Yes, deterministic destruction is a major feature. Not having to explain what finalizers are (and why they shouldn’t generally be used) is pretty huge. Keep in mind that Swift interops with C, so deinit is unavoidable for certain types.
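
A small sketch of what deterministic destruction buys you in practice: with ARC, `deinit` runs at a known point, rather than whenever a collector gets around to finalizing.

```swift
var trace: [String] = []

final class Resource {
    let name: String
    init(_ name: String) { self.name = name; trace.append("open \(name)") }
    deinit { trace.append("close \(name)") }   // runs deterministically
}

func work() {
    let r = Resource("scratch")
    _ = r
}   // `r` is released no later than here, not at some future GC sweep

work()
trace.append("after work")
```

With a finalizer-based GC, nothing guarantees "close" has happened by the time "after work" runs.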

More pointedly, not relying on GC enables Swift to be used in domains that don’t want it - think boot loaders, kernels, real time systems like audio processing, etc.

We have discussed in the past using hybrid approaches, like introducing a cycle collector, which runs less frequently than a GC would. The problem with this is that if you introduce a cycle collector, code will start depending on it. In time you end up with some libraries/packages that work without GC, and others that leak without it (the D community has relevant experience here). As such, we have come to think that adding a cycle collector would be bad for the Swift community in the large.

However, it comes with many downsides:

object references are expensive to update

Most garbage collectors have write barriers, which execute extra code when references are updated. Most garbage collectors also have safe points, which means that extra instructions get inserted into loops.

object references cannot be atomically updated

This is true, but Swift currently has no memory model and no concurrency model, so it isn’t clear that this is actually important (e.g. if you have no shared mutable state).

heap fragmentation

This is at best a tradeoff depending on what problem you’re trying to solve (e.g. better cache locality or smaller max RSS of the process). One thing that I don’t think is debatable is that the heap compaction behavior of a GC (which is what provides the heap fragmentation win) is incredibly hostile for cache (because it cycles the entire memory space of the process) and performance predictability.

Given that GC’s use a lot more memory than ARC systems do, it isn’t clear what you mean by GC’s winning on heap fragmentation.

the closure capture syntax uses up an unreasonable amount of mindshare just because of [weak self]

I think that this specific point is solvable in others ways, but I’ll interpret this bullet as saying that you don’t want to worry about weak/unowned pointers. I completely agree that we strive to provide a simple programming model, and I can see how "not having to think about memory management" seems appealing.

On the other hand, there are major advantages to the Swift model. Unlike MRR, Swift doesn’t require you to micromanage memory: you think about it at the object graph level when you’re building out your types. Compared to MRR, ARC has moved memory management from being imperative to being declarative. Swift also puts an emphasis on value types, so certain problems that you’d see in languages like Java are reduced.

That said, it is clear that it takes time and thought to use weak/unowned pointers correctly, so the question really becomes: does reasoning about your memory at the object graph level and expressing things in a declarative way contribute positively to your code?

My opinion is yes: while I think it is silly to micromanage memory, I do think thinking about it some is useful. I think that expressing that intention directly in the code adds value in terms of maintenance of the code over time and communication to other people who work on it.

Since Swift doesn't expose memory management operations outside of `autoreleasepool`, it seems to me that you could just drop in a garbage collector instead of reference counting and it would work (for most purposes).

Has a GC been considered at all?

GC also has several *huge* disadvantages that are usually glossed over: while it is true that modern GC's can provide high performance, they can only do that when they are granted *much* more memory than the process is actually using. Generally, unless you give the GC 3-4x more memory than is needed, you’ll get thrashing and incredibly poor performance. Additionally, since the sweep pass touches almost all RAM in the process, they tend to be very power inefficient (leading to reduced battery life).

I’m personally not interested in requiring a model that forces us to throw away a ton of perfectly good RAM to get a “simpler” programming model, particularly one that adds so many tradeoffs.

-Chris

-1
Does GC not have performance problems on the server side?? Please tell that
to my colleagues, who are spending 50% of their time analysing and tweaking
the GC in Java after every release of a new version. It is becoming such a
pain that sometimes we discuss rewriting some servers from Java to C++
(just to avoid GC issues).
Ondrej B.

···

On Tue, Feb 9, 2016 at 3:40 PM, Radosław Pietruszewski <swift-evolution@swift.org> wrote:

Android lagged behind iOS for years in smoothness and responsiveness on
comparable hardware — my understanding is that this is largely because of GC
pauses.

I love GC. I’d love GC in Swift, too. But it really is a real performance
problem in _interactive_ applications (much less of an issue for server side
stuff), on memory-constrained devices.

— Radek

On 09 Feb 2016, at 15:35, Jean-Denis Muys via swift-evolution > <swift-evolution@swift.org> wrote:

My understanding (from what I’ve seen in the literature, but I am in no way
an expert) is that RC has worse worst-case behaviour than GC regarding
pauses.

Also arguments regarding RAM use (and perhaps even battery use), as all
hardware resource-based arguments, have always been proven wrong in the past
as hardware has evolved to more and better.

The usual argument is RAM is cheap, programmer’s time, especially debugging
time, is expensive.

I find it interesting that the commonly accepted wisdom is that GC is the
right thing to do. To quote but one blog post I’ve read:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to
reach out from the grave and drag us back to the 70s. There’s nothing
special about mobile phones: they are more powerful than the computers that
GC won on in the first place.

So I’d be interested in understanding why we are just about alone on our Apple
island with our opinion that RC is better than GC? Are they all collectively
wrong in the rest of the universe? (not that I condone argument from
majority)

I can only state that my experience with GC has been mostly with
Macintosh Common Lisp, a rather long time ago, and I really did love it.

So for me, GC would be a +1, but not a very strong one, as I find RC
adequate.

Jean-Denis

On 09 Feb 2016, at 07:01, Colin Cornaby via swift-evolution > <swift-evolution@swift.org> wrote:

I thought I’d add my opinion to this thread even though it’s been well
covered in different posts, just to put more weight on the -1 argument…

I spend most of my time dealing with performance sensitive code, and Swift’s
predictability is a very strong pro for it against languages like Java.
Java’s garbage collector provides too much uncertainty and instability to
performance.

There certainly are tradeoffs. ARC won’t catch things like circular retain
loops. But as mentioned, tagged pointers on the modern architecture take
care of the retain/release overhead.

I’ll put it a different way that sums up why I felt the need to reply: ARC
has it’s inconveniences, but moving to garbage collection would likely lead
us to abandon any plans to adopt Swift for many of our performance sensitive
projects. If someone solves the “pausing” problem in a garbage collected
language I’d reconsider. But until then it would take Swift out of
consideration for a lot of projects.

On Feb 8, 2016, at 11:56 AM, Félix Cloutier via swift-evolution > <swift-evolution@swift.org> wrote:

Has there been a garbage collection thread so far? I understand that
reference counting vs. garbage collection can be a heated debate, but it
might be relevant to have it.

It seems to me that the two principal upsides of reference counting are that
destruction is (essentially) deterministic and performance is more easily
predicted. However, it comes with many downsides:

object references are expensive to update
object references cannot be atomically updated
heap fragmentation
the closure capture syntax uses up an unreasonable amount of mindshare just
because of [weak self]

Since Swift doesn't expose memory management operations outside of
`autoreleasepool`, it seems to me that you could just drop in a garbage
collector instead of reference counting and it would work (for most
purposes).

Has a GC been considered at all?

Félix

Unlikely with current hardware technology. NAND storage on mobile devices has limited lifetime based on the number of writes. Adding a swap file would burn through that lifetime awfully fast.

···

On Feb 9, 2016, at 2:21 PM, Goffredo Marocchi via swift-evolution <swift-evolution@swift.org> wrote:

On 9 Feb 2016, at 18:18, Charles Srstka via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Feb 9, 2016, at 8:35 AM, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

I’d disagree with that, for one rather important reason; the computers that GC won on had virtual memory. On mobile, when your RAM runs out, it’s out. So RAM constraints are somewhat more serious on mobile.

This will be then more fun when we have page file support as virtual memory is already there.

--
Greg Parker gparker@apple.com <mailto:gparker@apple.com> Runtime Wrangler

I don’t really want to get into a terminology debate, but by pretty much any well accepted definition, ARC is an algorithm for GC. As one example:

-Chris

···

On Feb 9, 2016, at 4:29 PM, Greg Parker via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 9, 2016, at 9:45 AM, David Waite via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Feb 9, 2016, at 7:35 AM, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

I find it interesting that the commonly accepted wisdom is that GC is the right thing to do. To quote but one blog post I’ve read:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

I looked up that article and it has several logical fallacies, including the obvious one that reference counting is not a form of GC!

Counter-pedantry: Reference counting *with an automatic cycle collector* is GC. ARC-style reference counting is not GC.

Your experiences may vary, but at the last large mobile company I worked for our Android engineers spent far more time wrestling with the GC than our iOS developers spent on similar memory issues. (A lot of this had to do with media handling or rendering complex UIs containing many high resolution image assets.)

That's not to say that Android apps can't be performant or smooth, nor that GC on Android hasn't greatly improved over time, but UI application development on a GC based resource constrained platform comes with its own tradeoffs. I would personally rather deal with breaking cycles (which can be worked around with the proper foresight and architecture) than placating a GC.

···

Sent from my iPhone

On Feb 9, 2016, at 2:16 PM, Goffredo Marocchi via swift-evolution <swift-evolution@swift.org> wrote:

I generally agree that refcounting was not chosen out of laziness by the language and tools developers at Apple. But if we are citing the failures of GC for UI-heavy interactive apps, I do disagree with one example, though that could probably only be sorted out in a huge thread with better-informed people: .NET vs. WinRT was not a battle decided purely on performance and merits; the politics between the two divisions that developed each alternative have to be considered (and lo and behold, the solution pushed by the Windows team won ;)).

Also... we are ignoring the large green electronic elephant in the room that uses Java and does not do a bad job with very similarly priced handsets... :). Sorry for the noise, though, and thanks for all these informative posts :).

Sent from my iPhone

On 9 Feb 2016, at 17:23, Joe Groff via swift-evolution <swift-evolution@swift.org> wrote:

Microsoft tried and failed several times to reinvent their stack on top of .NET, and has since retreated to a refcounting-based foundation for WinRT

I think those challenges can be overcome... but yes, it is not something that would happen tomorrow... more like five years from now ;). *Crossing fingers for memristors or some other new nonvolatile storage solution to overcome the current flash issue with frequent read/write cycles...*

Maybe enough redundancy and better controllers can allow on mobiles the same flexibility that we enjoy with any SSD on laptops and desktops nowadays, and similar I/O throughput too.

···

Sent from my iPhone

On 10 Feb 2016, at 00:32, Greg Parker <gparker@apple.com> wrote:

On Feb 9, 2016, at 2:21 PM, Goffredo Marocchi via swift-evolution <swift-evolution@swift.org> wrote:

On 9 Feb 2016, at 18:18, Charles Srstka via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 9, 2016, at 8:35 AM, Jean-Denis Muys via swift-evolution <swift-evolution@swift.org> wrote:

It’s a long since resolved dispute, and GC won. I don’t want Steve Jobs to reach out from the grave and drag us back to the 70s. There’s nothing special about mobile phones: they are more powerful than the computers that GC won on in the first place.

I’d disagree with that, for one rather important reason; the computers that GC won on had virtual memory. On mobile, when your RAM runs out, it’s out. So RAM constraints are somewhat more serious on mobile.

This will then be more fun when we get page file support, as virtual memory is already there.

Unlikely with current hardware technology. NAND storage on mobile devices has limited lifetime based on the number of writes. Adding a swap file would burn through that lifetime awfully fast.

--
Greg Parker gparker@apple.com Runtime Wrangler

Mark-and-sweep GC seems to be a non-starter as long as Swift's collections
remain copy-on-write value types. The way an Array (or other dynamically
sized stdlib collection) knows to write to its existing buffer, rather than
creating a whole new copy, is by looking at the buffer object's reference
count (via the `isUniquelyReferenced` API). Mike Ash has a good summary of
how this process works here:
https://www.mikeash.com/pyblog/friday-qa-2015-04-17-lets-build-swiftarray.html
.
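To make the mechanism concrete, here is a minimal sketch of the pattern. This is not the actual stdlib implementation; `Storage` and `COWArray` are illustrative names, and the uniqueness check is spelled `isKnownUniquelyReferenced` in later Swift versions:

```swift
// A minimal sketch (not the real stdlib implementation) of copy-on-write.
// `Storage` stands in for Array's internal buffer class.
final class Storage {
    var elements: [Int]
    init(_ elements: [Int]) { self.elements = elements }
}

struct COWArray {
    private var storage: Storage
    init(_ elements: [Int]) { storage = Storage(elements) }

    var count: Int { return storage.elements.count }

    mutating func append(_ x: Int) {
        // The uniqueness check: if another value shares this buffer,
        // copy it before mutating so the other value is unaffected.
        if !isKnownUniquelyReferenced(&storage) {
            storage = Storage(storage.elements)
        }
        storage.elements.append(x)
    }
}

var a = COWArray([1, 2, 3])
var b = a          // shares storage; nothing is copied yet
b.append(4)        // uniqueness check fails, so `b` copies first
```

Note that the whole scheme hinges on reading the buffer's reference count at the mutation site, which is exactly the information a tracing GC does not maintain.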

The RemObjects Silver Swift-alike compiler, which targets the JVM and CLR,
is forced to treat its collections as reference types because of how COW is
implemented:
http://docs.elementscompiler.com/Silver/DifferencesAndLimitations/.

The most pertinent questions with regard to Swift GC are: whether some
acceptably efficient copy-on-write mechanism for value-type collections
exists in a tracing-GC environment, and if not, whether the enormous
semantic change of turning collections into reference types would be
worthwhile. (This in turn would have a whole set of knock-on effects; for
example, generic stdlib collections could no longer be properly covariant.)

···

On Wed, Feb 10, 2016 at 1:08 AM, Craig Cruden via swift-evolution < swift-evolution@swift.org> wrote:

>
> - arguments regarding energy efficiency, memory efficiency, or more
generally hardware limitations seem to me rather short-sighted, as in “640
KB of RAM should be enough for all times to come” (original design decision
by IBM for the PC). Is Swift a language designed for today's hardware? Or is
it supposed to outlast it a wee bit?

For the foreseeable future, energy efficiency is a very real and
important factor. Battery technology (as compared to other technology) is
not moving quickly, so being efficient is important (and it is one of the
items listed as a plus for iDevices over the competition). The language
(even though open sourced) is primarily focused on programming for the
“desktop” and iDevices. A majority of the computers sold by Apple rely on
battery (laptops, iDevices, etc.).

A big advantage of Java-style GC is that incompetent programmers don't
have to worry about memory management (and they make up a majority of
programmers). And yes, I have seen a case where someone wrote a
(Microsoft-based) server component with so many circular references, which
people assumed would just clean themselves up, that to make it work they
had to write another monitor application to watch for the application's
memory usage growing past a certain size due to the leaks and then force a
“reboot” of it :p
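Cycles like those are exactly what reference counting cannot reclaim on its own; in Swift they have to be broken explicitly, typically with a weak back-reference. A minimal sketch (the `Node` type and its `deinitCount` are illustrative only, the counter just lets us observe deallocation):

```swift
// Reference cycles are the classic failure mode of reference counting.
// Marking the back-reference `weak` breaks the cycle so ARC can free both objects.
final class Node {
    static var deinitCount = 0
    var next: Node?        // strong forward reference
    weak var prev: Node?   // weak back-reference: no cycle is formed
    deinit { Node.deinitCount += 1 }
}

do {
    let head = Node()
    let tail = Node()
    head.next = tail
    tail.prev = head       // weak, so the pair is not self-retaining
}
// Both nodes leave scope above and are deallocated immediately.
```

Had `prev` been strong, neither node's count would ever reach zero and both would leak, which is the trade a refcounting system makes against GC pauses.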


It's within the realm of possibility to have a hybrid approach, where value type buffers have reference counts maintained by compiler-inserted retain/release operations when semantic copies occur in an otherwise GC environment.
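As a purely hypothetical sketch of that hybrid, a buffer could carry its own copy count, with `retain()`/`release()` standing in for the operations a compiler would insert at semantic copy points; nothing here reflects actual compiler behavior, and the names are made up:

```swift
// Hypothetical: a buffer on a tracing-GC heap that still carries a copy
// count, so the copy-on-write uniqueness check keeps working. The GC would
// reclaim the object itself; the count only tracks semantic value copies.
final class HybridBuffer {
    private(set) var copyCount = 1   // wrappers currently sharing this buffer
    var elements: [Int] = []

    func retain()  { copyCount += 1 }   // inserted where a semantic copy occurs
    func release() { copyCount -= 1 }   // inserted where a copy is destroyed
    var isUniquelyReferenced: Bool { return copyCount == 1 }
}

let buffer = HybridBuffer()
buffer.retain()                            // a second value now shares the buffer
let mustCopy = !buffer.isUniquelyReferenced // a mutation here would copy first
buffer.release()                           // the second value goes away
```

The catch, of course, is that Swift structs have no deinit, so tracking when a copy is destroyed is precisely what needs compiler support.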

-Joe

···

On Feb 10, 2016, at 4:31 AM, Austin Zheng via swift-evolution <swift-evolution@swift.org> wrote:

Mark-and-sweep GC seems to be a non-starter as long as Swift's collections remain copy-on-write value types. The way an Array (or other dynamically sized stdlib collection) knows to write to its existing buffer, rather than creating a whole new copy, is by looking at the buffer object's reference count (via the `isUniquelyReferenced` API). Mike Ash has a good summary of how this process works here: https://www.mikeash.com/pyblog/friday-qa-2015-04-17-lets-build-swiftarray.html.

The RemObjects Silver Swift-alike compiler, which targets the JVM and CLR, is forced to treat its collections as reference types because of how COW is implemented: http://docs.elementscompiler.com/Silver/DifferencesAndLimitations/.

The most pertinent questions with regard to Swift GC are: whether some acceptably efficient copy-on-write mechanism for value-type collections exists in a tracing-GC environment, and if not, whether the enormous semantic change of turning collections into reference types would be worthwhile. (This in turn would have a whole set of knock-on effects; for example, generic stdlib collections could no longer be properly covariant.)

Efficiency is always important, and if you haven’t noticed, computers aren’t getting significantly faster anymore.

-Chris

···

On Feb 10, 2016, at 1:08 AM, Craig Cruden via swift-evolution <swift-evolution@swift.org> wrote:

- arguments regarding energy efficiency, memory efficiency, or more generally hardware limitations seem to me rather short-sighted, as in “640 KB of RAM should be enough for all times to come” (original design decision by IBM for the PC). Is Swift a language designed for today's hardware? Or is it supposed to outlast it a wee bit?

For the foreseeable future - energy efficiency is a very real and important factor. Battery technology (as compared to other technology) is not moving quickly, so being efficient is important.

Haven’t noticed…. but then I am still using my Mac Pro 2008 as my primary computer (upgraded with 10 GB of RAM, 2 x 5770 ATI cards, and a boot SSD on one of the internal SATA connections on the motherboard) :p

···

On 2016-02-11, at 4:42:30, Chris Lattner <clattner@apple.com> wrote:

On Feb 10, 2016, at 1:08 AM, Craig Cruden via swift-evolution <swift-evolution@swift.org> wrote:

- arguments regarding energy efficiency, memory efficiency, or more generally hardware limitations seem to me rather short-sighted, as in “640 KB of RAM should be enough for all times to come” (original design decision by IBM for the PC). Is Swift a language designed for today's hardware? Or is it supposed to outlast it a wee bit?

For the foreseeable future - energy efficiency is a very real and important factor. Battery technology (as compared to other technology) is not moving quickly, so being efficient is important.

Efficiency is always important, and if you haven’t noticed, computers aren’t getting significantly faster anymore.

-Chris

I also believe efficiency is always important.

Computers tend to get smaller rather than faster: smartphones, smartwatches, the Internet of Things…

···

--
Pierre

Le 10 févr. 2016 à 22:42, Chris Lattner via swift-evolution <swift-evolution@swift.org> a écrit :

On Feb 10, 2016, at 1:08 AM, Craig Cruden via swift-evolution <swift-evolution@swift.org> wrote:

- arguments regarding energy efficiency, memory efficiency, or more generally hardware limitations seem to me rather short-sighted, as in “640 KB of RAM should be enough for all times to come” (original design decision by IBM for the PC). Is Swift a language designed for today's hardware? Or is it supposed to outlast it a wee bit?

For the foreseeable future - energy efficiency is a very real and important factor. Battery technology (as compared to other technology) is not moving quickly, so being efficient is important.

Efficiency is always important, and if you haven’t noticed, computers aren’t getting significantly faster anymore.

-Chris


Audio systems too. 60 fps video has a hard limit of 16.66 ms per frame. Real-time
audio needs 3 ms buffers, imo, but can be less strict depending on the
application.
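Worked out explicitly (the 128-sample buffer at 44.1 kHz below is an assumed illustration of where a roughly 3 ms audio figure comes from):

```swift
// The timing budgets implied by common refresh and sample rates.
let videoFrameMs = 1000.0 / 60.0                 // ≈ 16.67 ms per frame at 60 fps
let audioBufferMs = 128.0 / 44_100.0 * 1000.0    // 128 samples at 44.1 kHz ≈ 2.9 ms
// A GC pause longer than these budgets drops a frame or glitches audio.
```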

I think that Swift can become the first serious alternative to C++ for game
development and creative coding. For all platforms, not just Avalon (the
Isle of Apples, wokka wokka). If Swift becomes a mark-sweep GC language I'm
going to get very angry and abandon my work from the last two months:

-david

···

On Tue, Feb 9, 2016 at 9:04 AM, Paul Cantrell via swift-evolution < swift-evolution@swift.org> wrote:

That island has other inhabitants: video game developers.