What're the Swift team's thoughts on Go's concurrency?


(Dan Stenmark) #1

I'd like to ask what the Swift team's thoughts on Go's concurrency model are. I'm not referring to the convenience of the 'go' keyword, nor to how the language handles channels, both of which are what most folks associate with it. Rather, I'd like to ask about the language's use of green threads and how the runtime handles the heavy lifting of multiplexing and scheduling them. What are some of the strengths and weaknesses the Swift team sees in Go's approach?

Dan

(DISCLAIMER: I'm posting this for academic reasons, not as a pitch. While the Swift team's responses may inform opinions on the matter, I do not want this to turn into a 'this is how I think Swift should do concurrency' debate. That discussion will come when it comes.)


(Lily Ballard) #2

The Rust language used to use a green thread model like Go's (in fact, it exposed a configurable threading interface, so you could choose green threads or OS threads). It also used segmented stacks, as Go did. Over time, Rust dropped segmented stacks because they significantly complicated FFI without providing much, if any, benefit (and IIRC Go followed suit and dropped segmented stacks somewhere around version 1.5); a little while later, Rust dropped green threads entirely. If you can find them, there are lots of discussions of the pros and cons documented during this process (on mailing lists, in IRC, possibly on Discourse, probably at least one post in the Rust subreddit, etc).

Ultimately, it was determined that keeping this ability significantly complicated the Rust runtime while providing almost no benefit. The OS is already really good at scheduling threads, and without segmented stacks there's no memory savings: the OS maps virtual pages for the stack and only allocates the backing physical pages as the memory is touched, so even if you have a 2 MB stack, a new thread will only actually allocate something like 8 KB.

And there are some pretty big downsides to green threads. They significantly complicate the runtime, since all I/O everywhere has to be nonblocking and that has to be transparent to the code. FFI also ends up as a major problem (even without segmented stacks), because you have no idea whether an FFI call will block. Green threading libraries end up having to allocate extra OS threads just to keep servicing the green threads while the existing threads are potentially blocked in FFI.

So ultimately, green threads really only make sense when you control the entire ecosystem and can ensure the whole stack is compatible with them and won't ever issue blocking calls. Even then, there's not much benefit, and there's a lot of complexity involved.

-Kevin Ballard

···

On Tue, Aug 9, 2016, at 12:04 PM, Dan Stenmark via swift-evolution wrote:

_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


(Slava Pestov) #3

Hi Dan,

As I understand it, a big weakness of Go's model is that it does not actually prevent data races. There's nothing preventing you from sharing pointers to mutable values between tasks, but I could be wrong about this.

Slava



(Chris Lattner) #4

Hi Dan,

There are many folks interested in concurrency topics related to this, but we need to stay focused on finishing Swift 3 and then moving on to Swift 4 stage 1 goals. As that work is cresting, we'll start discussions of concurrency, and may even be so bold as to start a new mailing list dedicated to it, since it is such a wide-reaching topic.

Until we get to that point, please resist the urge to jump ahead :-)

-Chris



(Joe Groff) #5

In addition to FFI, there's also no way for memory-mapped I/O to be non-blocking (a page fault can only be handled by the kernel, after all).

-Joe

···

On Aug 9, 2016, at 1:28 PM, Kevin Ballard via swift-evolution <swift-evolution@swift.org> wrote:



(David Sweeris) #6

Is that bad? Sharing pointers seems like a cheap way to share data, and as long as you know what you’re doing, why should the language get in the way? Now, if said code really does have performance advantages over the “safer” methods, and it really is safe because for whatever reason the race condition can’t actually happen, the language (or library) ought to have a way to express that without having to write “unsafe” code. In the meantime, though, you’ve gotta ship something that runs and meets performance requirements.

- Dave Sweeris

···

On Aug 10, 2016, at 4:48 PM, Slava Pestov via swift-evolution <swift-evolution@swift.org> wrote:



(Dan Stenmark) #7


Chris, many apologies if this came across the wrong way! As I attempted to explain in the opening email, I'm inquiring for purely academic reasons and to better my understanding of concurrency as part of language design in general, not to pitch anything for Swift. In retrospect, perhaps -users would've been a better fit for this question rather than -evolution.

···

On Aug 10, 2016, at 10:24 PM, Chris Lattner <clattner@apple.com> wrote:

On Aug 9, 2016, at 1:59 PM, Joe Groff via swift-evolution <swift-evolution@swift.org> wrote:


In addition to FFI, there's also no way for memory-mapped IO to be non-blocking (a page fault can only be handled by the kernel, after all).

This right here is what I was looking for. Thanks, guys!

Dan


(Lily Ballard) #8

For anyone interested in reading more about Rust's decisions, here are two links:

The email about abandoning segmented stacks: https://mail.mozilla.org/pipermail/rust-dev/2013-November/006314.html

The RFC to remove green threading, with motivation: https://github.com/aturon/rfcs/blob/remove-runtime/active/0000-remove-runtime.md

-Kevin Ballard



(Goffredo Marocchi) #9

Speaking of green threads, are they similar to fibers? http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine



(Slava Pestov) #10


Even buffered file I/O via read(2) and write(2) is blocking on most *nix platforms. AFAIK there's some work being done on non-blocking buffered reads on Linux, but it appears to be a completely new API distinct from the existing epoll for sockets or aio_* for direct file I/O, and of course Darwin doesn't have an equivalent.

Slava



(Slava Pestov) #11


Well, ideally, the type system would be able to enforce that values passed across thread boundaries are immutable. Rust's model allows this, I believe.

The possibility of mutating shared state in an unprincipled manner is "bad" in the same sense that being able to call free() in C is "bad" -- it's an abstraction violation if you get it wrong. Compared to languages with automatic memory management, there are advantages (control over memory management) and disadvantages (fewer static guarantees).

···

On Aug 10, 2016, at 3:22 PM, David Sweeris <davesweeris@mac.com> wrote:



(Slava Pestov) #12

Would this prevent an object from being “known” to multiple threads? …multiple queues? If so, it would be overly restrictive for a general-purpose language. I assume that the plan includes a way to allow “unsafe” behavior to support other concurrency models.

To be clear I'm not presenting any ideas for Swift here, just critiquing Go's model.

Yes, I'm just talking about 'safe' language features for passing immutable data between threads. This would not preclude other forms of concurrency from existing in the language, such as locks, atomics, etc. But I think if a user writes code with only message passing, the language should ensure that the result is free from data races. Go does not do that, which is unfortunate.

Slava

···

On Aug 10, 2016, at 3:34 PM, Christopher Kornher <ckornher@me.com> wrote:



(Christopher Kornher) #13

Would this prevent an object from being “known” to multiple threads? …multiple queues? If so, it would be overly restrictive for a general-purpose language. I assume that the plan includes a way to allow “unsafe” behavior to support other concurrency models.



(David Sweeris) #14

Oh, *that* I’ll agree with… I was just talking about situations where there is no “safe” way to do it (for whatever your language’s/compiler’s idea of “safe” is). For example, maybe you really do need to interpret a 64-bit chunk of data as an Int64, even though the compiler is convinced it’s a Double. We can do that in Swift through the various “unsafe” functions, which is where they belong because 99.99% of the time that’s a bad idea. That 0.01% though…

- Dave Sweeris

···

On Aug 10, 2016, at 5:36 PM, Slava Pestov <spestov@apple.com> wrote:



(Goffredo Marocchi) #15

Thanks Kevin. I think they've accepted that they don't need to enter every segment of computing, so the extra performance they could get on some devices isn't worth the risk and complexity it brings. Not everyone is trying to cram complex 3D experiences at 60-90+ FPS onto console-like constrained devices, and I guess Rust isn't targeting that right now :).

···

On Thu, Aug 11, 2016 at 6:12 PM, Kevin Ballard via swift-evolution < swift-evolution@swift.org> wrote:



(Charlie Monroe) #16

According to http://c9x.me/art/gthreads/intro.html I would guess so - they are essentially userland threads, swapping context without entering the kernel.

···

On Aug 9, 2016, at 11:07 PM, Goffredo Marocchi via swift-evolution <swift-evolution@swift.org> wrote:

Talking about green threads, are they similar to fibers? http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine


On 9 Aug 2016, at 21:59, Joe Groff via swift-evolution <swift-evolution@swift.org> wrote:

On Aug 9, 2016, at 1:28 PM, Kevin Ballard via swift-evolution <swift-evolution@swift.org> wrote:


So ultimately, green threads really only make sense when you control the entire ecosystem, so you can ensure the whole stack is compatible with green threads and won't ever issue blocking calls, and even then there's not much benefit and there's a lot of complexity involved.

In addition to FFI, there's also no way for memory-mapped IO to be non-blocking (a page fault can only be handled by the kernel, after all).

-Joe


(Dmitri Gribenko) #17

"42.0.bitPattern". Why do you think this conversion is unsafe?

Dmitri

···

On Wed, Aug 10, 2016 at 3:50 PM, David Sweeris via swift-evolution <swift-evolution@swift.org> wrote:

For example, maybe you really do need to interpret a 64-bit chunk of data as an Int64, even though the compiler is convinced it’s a Double. We can do that in Swift through the various “unsafe” functions, which is where they belong because 99.99% of the time that’s a bad idea.

--
main(i,j){for(i=2;;i++){for(j=2;j<i;j++){if(!(i%j)){j=0;break;}}if
(j){printf("%d\n",i);}}} /*Dmitri Gribenko <gribozavr@gmail.com>*/


(Lily Ballard) #18

I'm confused by your email. Rust is all about performance, and embedded
devices are one of the targets for Rust. And I can't think of any
language that uses green threading that is appropriate for constrained
devices (e.g. Go definitely isn't appropriate for that). One of the
arguments for getting rid of green threading in Rust is that the extra
runtime complexity imposed a performance cost.

-Kevin

···

On Thu, Aug 11, 2016, at 10:36 AM, Goffredo Marocchi wrote:

Thanks Kevin, I think they have accepted that they do not need to
enter every segment of computing so the extra performance they could
get on some devices is not worth the risk and the complexity it
brings. Not everyone is trying to cram complex 3D experiences at 60-
90+ FPS onto console-like constrained devices, and I guess Rust is not
targeting that right now :).

On Thu, Aug 11, 2016 at 6:12 PM, Kevin Ballard via swift-evolution <swift-evolution@swift.org> wrote:

For anyone interested in reading more about Rust's decisions, here's
two links:

The email about abandoning segmented stacks:
https://mail.mozilla.org/pipermail/rust-dev/2013-November/006314.html

The RFC to remove green threading, with motivation:
https://github.com/aturon/rfcs/blob/remove-runtime/active/0000-remove-runtime.md

-Kevin Ballard



(Lily Ballard) #19

AIUI, fibers are basically coroutines. Even the Naughty Dog presentation
says that fibers are run on threads, and you have to make an explicit
call to switch between fibers. Looking at Ruby's Fiber type, that's also
an explicit coroutine, where you actually yield up a value when you
yield your fiber (which is exactly what coroutines do).

So basically, green threading is preemptive multithreading where the
preempting is done in user-space by the runtime (so it only happens at
specific points where your code calls back into the runtime, but it can
happen at any of those points), and multiple green threads get scheduled
onto the same OS thread, whereas fibers are cooperative multithreading
where your code explicitly yields back to the runtime to switch fibers.

Of course I could be wrong, but that's the impression I got after
reading a few different things about Fibers.

-Kevin

···

On Thu, Aug 11, 2016, at 10:54 AM, Goffredo Marocchi wrote:

Hello Kevin,
I may be wrong in equating support for fibers with green threads (and
the runtime cost of supporting them), but I have seen and linked to a
presentation on the use of fibers, and their more than trivial
benefits, in Naughty Dog's engine utilising the 8 Jaguar x86 cores in
the PlayStation 4 CPU. Although, like you said, it did not come for
free or without evident pain points for them.



(Goffredo Marocchi) #20

+Swift Evolution

By the way, I was genuine when I said thank you because reading how Rust
evolved in those two aspects is very insightful.
