Proposal: Universal dynamic dispatch for method calls

Yes it does. When message dispatch is done using the obj-c runtime, the compiler is free to omit the message send and use static dispatch if it believes it knows the concrete type of the receiver. This normally behaves the same, because dynamic dispatch with a known receiver type and static dispatch produce the same result. But if the method is dynamically replaced using the obj-c runtime methods, static dispatch will not invoke the dynamically overridden method. The most common way you see this is with KVO. If you try to KVO an @objc property on a Swift object, it may work sometimes and not work at other times, depending on when the compiler believes it can safely use static dispatch. This is why Swift has a whole keyword called `dynamic` whose job is to say "no really, use dynamic dispatch for every single access to this method/property, I don't care that you know for a fact it's a Foo and not a subclass, just trust me on this".
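For concreteness, here is a minimal sketch of the scenario just described, in today's Swift spelling (`Model` and `Observer` are hypothetical names):

import Foundation

class Model: NSObject {
    // `dynamic` forces every access through objc_msgSend, so the setter
    // that KVO swizzles in at runtime is never bypassed by static dispatch,
    // regardless of what the optimizer knows about the concrete type.
    @objc dynamic var name: String = ""
}

class Observer: NSObject {
    override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                               change: [NSKeyValueChangeKey: Any]?,
                               context: UnsafeMutableRawPointer?) {
        print("observed a change to \(keyPath ?? "?")")
    }
}

let model = Model()
let observer = Observer()
model.addObserver(observer, forKeyPath: "name", options: [.new], context: nil)
model.name = "updated"   // reliably triggers the observer
model.removeObserver(observer, forKeyPath: "name")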

More generally, I don't see there being any real difference between

extension Proto {
    func someMethodNotDefinedInTheProto() { /* ... */ }
}

and saying

func someFuncThatUsesTheProto<T: Proto>(x: T)

except that one is invoked using method syntax and one is invoked using function syntax. Method invocation syntax does not inherently mean "dynamic dispatch" any more than function syntax does.
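To make the static resolution concrete, here is a minimal sketch in current syntax (`Greeter` and `Person` are hypothetical names). A protocol extension method that is not a requirement is resolved against the static type, so a conforming type's lookalike method is ignored when the receiver is only known to be the protocol:

protocol Greeter {}

extension Greeter {
    // Not a requirement of Greeter, so calls made through the protocol
    // are resolved statically to this implementation.
    func greet() -> String { return "hello from the extension" }
}

struct Person: Greeter {
    func greet() -> String { return "hello from Person" }
}

func greetGenerically<T: Greeter>(_ x: T) -> String {
    return x.greet()   // T is only known to be a Greeter here
}

let p = Person()
print(p.greet())            // "hello from Person" (static type is Person)
print(greetGenerically(p))  // "hello from the extension"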

-Kevin Ballard

···

On Thu, Dec 10, 2015, at 02:24 AM, Brent Royal-Gordon wrote:

Swift loves to dispatch things statically and does so wherever possible. In some cases—such as value types not having inheritance—language features are clearly designed the way they are specifically to allow them to be statically dispatched. But Swift never uses static dispatch where dynamic dispatch would have given a different result. This contrasts with, for instance, a non-virtual C++ method’s behavior of “ha ha ha, I’m just going to ignore your override for speed." You could write a Swift compiler which dispatched everything dynamically, and you would never see any difference in semantics.

Thank you for writing this up. I have a few concerns:

1. The usage of `final` seems to be conflating class behavior and protocol behavior. Your stated reason for preferring `final` boils down to viewing Swift from a virtual dispatch worldview, but the keyword doesn't make any sense here when viewed from a static dispatch worldview. I say it doesn't make sense based on the existing meaning of `final`, where it applies to classes only, and restricts subclasses.
2. I also worry that `final` will be confusing when you consider protocol inheritance; if a protocol P has an extension method foo(), and a protocol Q inherits P and also defines foo(), the usage of `final` suggests that should be disallowed when in fact it's perfectly fine.
3. I still don't understand the point of @incoherent. I feel like it's just going to be extra boilerplate that very few people are actually going to understand the reason for, and almost everyone will just add it when the compiler yells at them. Or, since you propose not even offering it as a Fix-It, people will think Swift simply doesn't support having the same method name on a type and on a protocol, and will view that as an annoying limitation.
4. I also think @incoherent only makes any sense at all if you view Swift from a dynamic dispatch worldview (not even virtual dispatch; protocol extensions don't have a virtual function table, so a virtual dispatch view should be fine with current behavior). Which is to say, there's simply no conflict when viewed from a static dispatch worldview. So this attribute is going to be required for everyone, even when we don't see it as being in conflict.

I do think it's valid to say that you can't tell from looking at a protocol extension which methods are default implementations and which ones are new methods that aren't part of the protocol, and that's why I proposed the `default` keyword. But using that keyword also means no @incoherent (because there's no implicit acknowledgement of a "conflict" like with the `final` keyword, it simply seeks to disambiguate which methods are default implementations).

-Kevin

···

On Thu, Dec 10, 2015, at 02:09 AM, Brent Royal-Gordon wrote:

> The details of the solution are tricky, but I like this general approach with “final” (or whatever the right keyword is). It passes the smell test for me.

I’ve written up a draft of about half of a formal proposal at <https://github.com/brentdax/swift-evolution/blob/master/proposals/0000-require-final-on-protocol-extension-methods.md>. It provides a bit more detail and hopefully a more complete rationale for why I favor the design I’ve proposed.

--
Brent Royal-Gordon
Architechies

Totally unrelated to your point, but FYI “dynamic” is a declaration modifier, not a keyword. “var dynamic = 42” is perfectly legal in Swift.

This is one of the advantages of having a keyword at the start of every decl and statement: we get decl modifiers without taking keywords. You can see the list of them here (search for DeclModifier):

-Chris

···

On Dec 10, 2015, at 4:31 PM, Kevin Ballard via swift-evolution <swift-evolution@swift.org> wrote:

This is why Swift has a whole keyword called `dynamic` whose job is…

Swift loves to dispatch things statically and does so wherever possible. In some cases—such as value types not having inheritance—language features are clearly designed the way they are specifically to allow them to be statically dispatched. But Swift never uses static dispatch where dynamic dispatch would have given a different result. This contrasts with, for instance, a non-virtual C++ method’s behavior of “ha ha ha, I’m just going to ignore your override for speed." You could write a Swift compiler which dispatched everything dynamically, and you would never see any difference in semantics.

Yes it does. When message dispatch is done using the obj-c runtime, the compiler is free to omit the message send and use static dispatch if it believes it knows the concrete type of the receiver. This normally behaves the same, because dynamic dispatch with a known receiver type and static dispatch produce the same result. But if the method is dynamically replaced using the obj-c runtime methods, static dispatch will not invoke the dynamically overridden method. The most common way you see this is with KVO. If you try to KVO an @objc property on a Swift object, it may work sometimes and not work at other times, depending on when the compiler believes it can safely use static dispatch. This is why Swift has a whole keyword called `dynamic` whose job is to say "no really, use dynamic dispatch for every single access to this method/property, I don't care that you know for a fact it's a Foo and not a subclass, just trust me on this".

Okay, time to introduce some precise definitions so we don’t talk past each other.

There are three types of dispatch:

* Static. The compiler determines the exact function to execute.
* Virtual (I was previously calling this “dynamic”). The compiler determines the position in the instance’s vtable of the function to execute. (I don’t know if Swift actually calls this data structure a “vtable”, but you get my meaning.)
* Dynamic. The compiler determines the selector to send to the instance, thereby causing a function to execute.
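As a point of reference, all three can coexist on a single class; a small sketch in modern syntax (`Animal` is a hypothetical name):

import Foundation

class Animal: NSObject {
    // Static: `final` forbids overrides, so the compiler may emit a direct call.
    final func legCount() -> Int { return 4 }

    // Virtual: overridable, dispatched through the class's vtable slot.
    func speak() -> String { return "..." }

    // Dynamic: `dynamic` forces an objc_msgSend by selector on every call.
    @objc dynamic func describeSelf() -> String { return "an animal" }
}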

Swift prefers virtual dispatch. Unless you request dynamic dispatch with `dynamic`, or you’re using members written in Objective-C (which aren’t included in the vtables Swift uses for virtual dispatch), Swift always behaves as if you’re going to get at least virtual dispatch. In some cases it uses static dispatch, but only where the language’s semantics guarantee that virtual dispatch would give the same result. Examples of this include `final` and `static` members (which forbid overriding the members in question, and thus prevent a mismatch between the results of virtual dispatch and static) and value types (which don’t support inheritance, so there’s no way to introduce a mismatch).

Again, the sole exception to this is protocol extensions. Unlike any other construct in the language, protocol extension methods are dispatched statically in a situation where a virtual dispatch would cause different results. No compiler error prevents this mismatch.

More generally, I don't see there being any real difference between

extension Proto {
    func someMethodNotDefinedInTheProto() { /* ... */ }
}

and saying

func someFuncThatUsesTheProto<T: Proto>(x: T)

except that one is invoked using method syntax and one is invoked using function syntax. Method invocation syntax does not inherently mean "dynamic dispatch" any more than function syntax does.

I’m sorry, I just don’t agree with you on this. In Swift, generics and overloads are statically resolved at compile time. Other than explicit branching constructs like `if` and `switch`, the *only* place in Swift where a dynamic, *runtime* type—as opposed to the static, *compile-time* type—chooses which code to run is in the self position of a member access. Swift may choose to use static dispatch in certain cases, but the language is designed to prevent any difference in semantics based on that decision.

Protocol extensions are the weird outlier here—the only case in which the static and virtual dispatch behaviors are different, and Swift chooses the static dispatch behavior.

···

--
Brent Royal-Gordon
Architechies

Method invocation syntax does not inherently mean "dynamic dispatch" any more than function syntax does.

In languages now in widespread use — Java, Ruby, Javascript, C#, what I know of Python — that is _precisely_ what it means. In all those languages, as far as I know:

* *all* uses of the dot invocation syntax either use dynamic dispatch, or have something on the left whose compile-time and runtime types are identical; and
* *all* instances of dynamic dispatch in the language either use the dot invocation syntax, or have an implicit “self.” / “this.” that uses it.

In short, programmers are likely to bring to Swift the assumption that dot invocation = dynamic dispatch.

Our difference of perspective on this single point [rimshot] is probably the source of our larger disagreement on this entire thread.

Cheers, P

···

On Dec 10, 2015, at 6:31 PM, Kevin Ballard via swift-evolution <swift-evolution@swift.org> wrote:

Method invocation syntax does not inherently mean "dynamic dispatch" any more than function syntax does. …

I know almost nothing about C#, so let's ignore that for a moment.

The other languages you listed don't have some notion that dot-syntax
means dynamic dispatch; rather, they're just dynamically-dispatched
languages in general. Java, Ruby, and Python are object-oriented
languages, so all interactions with objects are always dynamic.
JavaScript is a prototype-based language, but the same basic idea
applies. Python has a bunch of global functions that are actually
dynamically dispatched based on the argument runtime type (e.g.
`len(foo)` is actually `foo.__len__()`). JavaScript, Python, and Ruby
are all scripting languages so they don't even know at compile-time what
a function invocation resolves to; all function calls are resolved using
the same scoping rules that affect variables, and in Python and
JavaScript (not sure about Ruby) these functions are actually defined on
an object that's automatically in the scope, e.g. in Python the `len()`
function is actually `__builtins__.len()`, and you can say
`__builtins__.len = None` if you want to break any code that tries to
call the function.

So really there are just two classes of language that you've called out
that are predominantly dynamic dispatch: pure-OOP languages (like Java)
and scripting languages. And it really shouldn't be a surprise that
they're dynamically dispatched. But that's not inherent in the syntax.

Meanwhile, there are languages that use dot-notation for static dispatch.
The first language on my list is Swift, because that's absolutely true
for structs and enums. It's even somewhat true for generics; I say
"somewhat" because the behavior for generics is identical whether it's
virtual dispatch or static dispatch, and in fact it uses virtual
dispatch for the basic version of the function and uses static dispatch
for all specializations, so it's just as valid to say that dispatch on
generics is static as it is to say that it's virtual.
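A trivial illustration of dot syntax resolving statically (`Point` is a hypothetical name); structs have no inheritance, so the call below can always be resolved at compile time:

struct Point {
    var x = 0.0, y = 0.0
    // No subclassing is possible, so this is always a direct call.
    func magnitude() -> Double { return (x * x + y * y).squareRoot() }
}

let p = Point(x: 3, y: 4)
print(p.magnitude())   // 5.0, dispatched statically despite the dot syntax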

Beyond that, Rust uses static dispatch (with optional virtual dispatch
using trait objects, but most uses of traits—i.e. in generics—are still
static dispatch). C++ uses static dispatch by default and requires the
`virtual` keyword to enable virtual dispatch. I'm sure there are other
examples too, but I don't have a comprehensive knowledge of all
programming languages that use dot-syntax.

We're also only considering single-dispatch at the moment, where the
method is dispatched based on the receiver type. There are languages
that are multiple-dispatch, and even Swift supports this using
function/method overloading. I don't know enough about multiple-dispatch
languages like Dylan and Julia to know if they use static or dynamic
dispatch though.

Overall, my point is that when you say that "programmers are likely to
bring to Swift the assumption that dot invocation = dynamic dispatch",
you're speaking from a very limited point of view. People who only have
experience with Java, Ruby, Python, and JavaScript might think this,
although they also might think that all functions are resolved
dynamically at runtime, or that everything is an object, and those are
certainly not true. People who come from a different background, such as
C++ or Rust, may think that dot invocation == static by default. Or
people might come from a background that doesn't have dot invocation at
all, or maybe they'll come from a mixed background and recognize that
there's a wide variety of dispatching and method resolution strategies
and that each language does things its own way.

-Kevin Ballard

···

On Thu, Dec 10, 2015, at 09:27 PM, Paul Cantrell wrote:

Method invocation syntax does not inherently mean "dynamic dispatch"
any more than function syntax does.

In languages now in widespread use — Java, Ruby, Javascript, C#, what
I know of Python — that is _precisely_ what it means. In all those
languages, as far as I know:

* *all* uses of the dot invocation syntax either use dynamic
   dispatch, or have something on the left whose compile-time and
   runtime types are identical; and
* *all* instances of dynamic dispatch in the language either use the
   dot invocation syntax, or have an implicit “self.” / “this.” that
   uses it.

In short, programmers are likely to bring to Swift the assumption that
dot invocation = dynamic dispatch.

You think that Swift prefers virtual dispatch. I think it prefers static.

I think what's really going on here is that _in most cases_ there's no observable difference between static dispatch and virtual dispatch. If you think of Swift as an OOP language with a powerful value-typed system added on, then you'll probably think Swift prefers virtual dispatch. If you think of Swift as a value-typed language with an OOP layer added, then you'll probably think Swift prefers static dispatch. In reality, Swift is a hybrid language and it uses different types of dispatch in different situations as appropriate. In the OOP subset of the language, Swift is mostly virtual-dispatch, with a bit of dynamic-dispatch thrown in, and some opt-in static dispatch. In the value-typed subset of the language, Swift is static-dispatch.

Protocols are where it gets a little weird, as it's actually a combination of static and virtual dispatch, and runtime type information. When using generics, semantically speaking, the receiver type is statically resolved, and then it will use whatever dispatch the actual method in question uses (e.g. static for structs and values, virtual or dynamic for classes). Technically it does actually use virtual dispatch in the un-specialized case and then specializes under optimization to match the described semantic behavior, but this is not an observable distinction. When using a protocol as a "protocol object" (i.e. a value typed as the protocol itself) it is of course always virtual dispatch (which may turn into dynamic dispatch for dynamic methods on classes). And the runtime type information part of protocols is the fact that you can always query the runtime type of the contained value, both checking what concrete type a protocol object has, and checking what protocols a value conforms to.

And finally for protocol extensions. These are strictly statically dispatched, all the way. Which makes sense, because there's no virtual function table (or in Swift terms, protocol witness table) to put the methods into. Extensions can provide default implementations of protocol methods because the type that conforms to the protocol puts the extension method implementation into its own protocol witness table (and they only do this if the type doesn't already implement the method itself). Since the protocol witness table only contains things defined in the protocol itself, protocol extension methods that aren't part of the protocol don't get put anywhere. So invoking one of those methods has no possible virtual function table to check; the only thing it can do is statically invoke the method from the extension. And this is why you can't override them (or rather, why your override isn't called if the method resolution is done via the protocol).
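The difference is easy to demonstrate with a sketch (`P` and `S` are hypothetical names): a default implementation of a requirement lands in the witness table and can be replaced by the conforming type, while a non-requirement extension method cannot:

protocol P {
    func inProtocol() -> String   // a requirement: gets a witness table slot
}

extension P {
    func inProtocol() -> String { return "P default" }         // default implementation
    func onlyInExtension() -> String { return "P extension" }  // not a requirement
}

struct S: P {
    func inProtocol() -> String { return "S" }        // fills S's witness table slot
    func onlyInExtension() -> String { return "S" }   // merely shadows; there is no slot
}

let value: P = S()
print(value.inProtocol())       // "S": dispatched through the witness table
print(value.onlyInExtension())  // "P extension": statically resolved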

The only way to make protocol extension methods work via virtual dispatch is to define a new protocol witness table for the extension itself, but types that were defined without being able to see the extension won't know to create and populate this protocol witness table. So now you need to be able to figure out if a protocol witness table exists at all before you can invoke it, and if it doesn't you have to fall back to static dispatch. Even worse, this means that even if the type already defines an appropriate override, because there's no protocol witness table the override can't get called. And that is a recipe for serious confusion, because you see that a method foo() is defined in an extension for protocol P, and you see that type T conforms to P and implements foo(), but calling foo() via the protocol will never invoke T.foo(). And there's no way for the user to know whether it will or will not work unless they know exactly which module P.foo() was defined in and exactly which module T.foo() was defined in and whether the module that defined T.foo() could see the module that defined P.foo() (and things get more complicated if the conformance of T: P was declared in yet a third module). This problem doesn't happen with protocols today because any type T that conforms to P by definition knows all the methods/properties that P defines and therefore can populate its protocol witness table.

The only other solution that comes to mind is turning all protocol dispatch into dynamic dispatch, which I hope you'll agree is not a good idea.

-Kevin Ballard

···

On Thu, Dec 10, 2015, at 10:35 PM, Brent Royal-Gordon wrote:

Okay, time to introduce some precise definitions so we don’t talk past each other. … Swift prefers virtual dispatch. … Protocol extensions are the weird outlier here—the only case in which the static and virtual dispatch behaviors are different, and Swift chooses the static dispatch behavior.

(emphasis mine)

I know that this is a bit philosophical, but let me suggest a “next level down” way to look at this. Static and dynamic are *both* great after all, and if you’re looking to type-cast languages, you need to consider them both in light of their semantics, but also factor in their compilation strategy and the programmer model that they all provide. Let me give you some examples, but keep in mind that this is a narrow view and just MHO:

1. C: Static compilation model, static semantics. While it does provide indirect function pointers, C does everything possible to punish their use (ever see the non-typedef'd prototype for signal(3/7)?), and is almost always statically compiled. It provides a very “static centric” programming model. This is great in terms of predictability - it makes it trivial to “predict” what your code will look like at a machine level.

2. Javascript: Completely dynamic compilation model, completely dynamic semantics. No one talks about statically compiling javascript, because the result of doing so would be a really really slow executable. Javascript performance hinges on dynamic profile information to be able to efficiently execute a program. This provides a very “dynamic centric” programming model, with no ability to understand how your code executes at a machine level.

3. C++: C++ is a step up from C in terms of introducing dynamism into the model with virtual functions. Sadly, C++ also provides a hostile model for static optimizability - the existence of placement new prevents a lot of interesting devirtualization opportunities, and generally makes the compiler’s life difficult. OTOH, like C, C++ provides a very predictable model: C++ programmers assume that C constructs are static, but virtual methods will be dynamically dispatched. This is correct because (except for narrow cases) the compiler has to use dynamic dispatch for C++ virtual methods. The good news here is that its dynamism is completely opt in, so C++ preserves all of the predictability, performance, and static-compilability of C while providing a higher level programming model. If virtual methods are ever actually a performance problem, a C++ programmer has ways to deal with that, directly in their code.

4. Java: Java makes nearly "everything" an object (no structs or other non-primitive value types), and all methods default to being “virtual” (in the C++ sense). Java also introduces interfaces, which offer an added dimension of dynamic dispatch. To cope with this, Java assumes a JIT compilation model, which can use dynamic behavior to de-virtualize the (almost always) monomorphic calls into checked direct calls. This works out really well in practice, because JIT compilers are great at telling when a program with apparently very dynamic semantics actually has static semantics in practice (e.g. a dynamic call has a single receiver). OTOH, since the compilation model assumes a JIT, this means that purely “AOT” static compilers (which have no profile information, no knowledge of class loaders, etc) necessarily produce inferior code. It also means that Java doesn’t “scale down” well to small embedded systems that can’t support a JIT, like a bootloader.

5. Objective-C: Objective-C provides a hybrid model which favors predictability due to its static compilation model (similar in some ways to C++). The C-like constructs provide C-like performance, and the “messaging” constructs are never “devirtualized”, so they provide very predictable performance characteristics. Because it is predictable, if the cost of a message send ever becomes an issue in practice, the programmer has many patterns to deal with it (including "imp caching", and also including the ability to define the problem away by rewriting code in terms of C constructs). The end result of this is that programmers write code which uses C-level features where performance matters and dynamism doesn’t, but uses ObjC features where dynamism is important or where performance doesn’t matter.

While it would be possible to implement a JIT compiler for ObjC, I’d expect the wins to be low, because the “hot” code which may be hinging on these dynamic features is likely to already be optimized by hand.

6. GoLang: From this narrow discussion and perspective, Go has a hybrid model that has similar characteristics to Objective-C 2013 (which introduced modules, but didn’t yet have generics). It assumes static compilation and provides a very predictable hybrid programming model. Its func’s are statically dispatched, but its interfaces are dynamically dispatched. It doesn’t provide guaranteed dynamic dispatch (or “classes”) like ObjC, but it provides even more dynamic features in other areas (e.g. it requires a cycle-collecting garbage collector). Its “interface{}” type is pretty equivalent to “id” (e.g. all uses of it are dynamically dispatched or must be downcasted), and it encourages use of it in the same places that Objective-C does. Go introduces checked downcasts, which introduce some run-time overhead, but also provide safety compared to Objective-C. Go thankfully introduces a replacement for the imperative constructs in C, which defines away a bunch of C problems that Objective-C inherited, and it certainly is prettier!

… I can go on about other languages, but I have probably already gotten myself into enough trouble. :-)

With this as context, let’s talk about Swift:

Swift is another case of a hybrid model: its semantics provide predictability between obviously static (structs, enums, and global funcs) and obviously dynamic (classes, protocols, and closures) constructs. A focus of Swift (like Java and Javascript) is to provide an apparently simple programming model. However, Swift also intentionally "cheats" in its global design by mixing in a few tricks to make the dynamic parts of the language optimizable by a static compiler in many common cases, without requiring profiling or other dynamic information. For example, the Swift compiler can tell if methods in non-public classes are never overridden (and non-public is the default, for a lot of good reasons) - thus treating them as final. This allows eliminating much of the overhead of dynamic dispatch without requiring a JIT. Consider an “app”: because it never needs to have public classes, this is incredibly powerful - the design of the swift package manager extends this even further (in principle, not done yet) to external libraries. Further, Swift’s generics provide a static performance model similar to C++ templates in release builds (though I agree we need to do more to really follow through on this) -- while Swift existentials (values of protocol type) provide a balance by giving a highly dynamic model.

The upshot of this is that Swift isn’t squarely in either of the static or dynamic camps: it aims to provide a very predictable performance model (someone writing a bootloader or firmware can stick to using Swift structs and have a simple guarantee of no dynamic overhead or runtime dependence) while also providing an expressive and clean high level programming model - simplifying learning and the common case where programmers don’t care to count cycles. If anything, I’d say that Swift is an “opportunistic” language, in that it provides a very dynamic “default" programming model, where you don’t have to think about the fact that a static compiler is able to transparently provide great performance - without needing the overhead of a JIT.

Finally, while it is possible that a JIT compiler might be interesting someday in the Swift space, if we do things right, it will never be “worth it” because programmers will have enough ability to reason about performance at their fingertips. This means that there should be no Java or Javascript-magnitude "performance delta" sitting on the table waiting for a JIT to scoop up. We’ll see how it works out long term, but I think we’re doing pretty well so far.

TL;DR: What I’m really getting at is that the old static vs dynamic trope is at the very least only half of the story. You really need to include the compilation model and thus the resultant programmer model into the story, and the programmer model is what really matters, IMHO.

-Chris

···

On Dec 11, 2015, at 8:56 PM, Kevin Ballard via swift-evolution <swift-evolution@swift.org> wrote:

You think that Swift prefers virtual dispatch. I think it prefers static.

I think what's really going on here is that _in most cases_ there's no observable difference between static dispatch and virtual dispatch. If you think of Swift as an OOP language with a powerful value-typed system added on, then you'll probably think Swift prefers virtual dispatch. If you think of Swift as a value-typed language with an OOP layer added, then you'll probably think Swift prefers static dispatch. In reality, Swift is a hybrid language and it uses different types of dispatch in different situations as appropriate.

Thanks, Chris, for this writeup! It’s full of useful insight about language design, and also a helpful window into the thinking of the core team.

In this writeup, I think we have a way to unify the seemingly competing lines of thought, and have a way to move forward with a common understanding even though we bring different viewpoints.

Quoting some relevant snippets:

Swift isn’t squarely in either of the static or dynamic camps: it aims to provide a very predictable performance model … while also providing an expressive and clean high level programming model. A focus of Swift … is to provide an apparently simple programming model. However, Swift also intentionally "cheats" in its global design by mixing in a few tricks to make the dynamic parts of the language optimizable by a static compiler in many common cases, without requiring profiling or other dynamic information.

I’d say that Swift is an “opportunistic” language, in that it provides a very dynamic “default" programming model, where you don’t have to think about the fact that a static compiler is able to transparently provide great performance - without needing the overhead of a JIT.

You really need to include the compilation model and thus the resultant programmer model into the story, and the programmer model is what really matters, IMHO.

First, two clarification requests for Chris on two things I imagine might lead to confusion on this thread:

When you say “programmer model,” I understand you to mean "how a Swift programmer thinks about the language’s semantics while writing Swift code, without regard to how they’re implemented in the compiler.”

When you say “dynamic,” I take that to mean any kind of dispatch based on runtime type — whether implemented using vtables a la C++, message dispatch a la Objective-C, string-based lookup in a hash a la Javascript, or anything else that uses something’s runtime type to resolve a method call.

Do I understand you correctly?

• • •

On this thread, there are (I think?) two related goals at hand:

1. Allow dynamic dispatch of protocol extension methods even when the method does not appear in the extended protocol.
2. Provide a good mental model of the language for programmers, and prevent programmer errors caused by misunderstandings about dispatch rules (if such misunderstandings do indeed exist in the wild).

I’ll copy and paste what Chris wrote into a “Swift philosophy” checklist for Brent’s proposal, and for any others working toward these goals. Chris, please correct me if I’m putting words in your mouth!
Provide a programmer model that:

* is high level
* is expressive and clean
* is dynamic by default
* doesn’t require a programmer to think about the fact that a static compiler is able to transparently provide great performance

Provide a performance model that:

* is predictable
* makes the dynamic parts of the language optimizable by a static compiler in many common cases
* does not require profiling or other dynamic information
* does not require JIT compilation

How do we resolve tension between these goals? The programmer model is what really matters, but we cannot reason about it without considering its impact on the compilation model. We should give the compiler opportunities to “cheat” in its optimization whenever we can do so without undermining the programmer model.

That’s a clear set of priorities. Assuming, that is, that I don’t misunderstand Chris, and that we’re willing to follow his lead!

Cheers,

Paul

–––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
https://innig.net · @inthehands · http://siestaframework.com/

···

On Dec 12, 2015, at 1:45 AM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

Swift is another case of a hybrid model … TL;DR: What I’m really getting at is that the old static vs dynamic trope is at the very least only half of the story. You really need to include the compilation model and thus the resultant programmer model into the story, and the programmer model is what really matters, IMHO.

3. C++: C++ is a step up from C in terms of introducing dynamism into the model with virtual functions. Sadly, C++ also provides a hostile model for static optimizability - the existence of placement new prevents a lot of interesting devirtualization opportunities, and generally makes the compiler’s life difficult.

Interesting, I haven’t heard that placement new is problematic before. What are the problems involved? (Feel free to reply off-list.) Is it legal to use placement new to replace an existing instance and change its vtable pointer, or do aliasing rules prohibit that?

Swift is another case of a hybrid model: its semantics provide predictability between obviously static (structs, enums, and global funcs) and obviously dynamic (classes, protocols, and closures) constructs.

Yeah. It’s also pretty cool how a consequence of the runtime generics model is that all value types can be manipulated reflectively, without penalizing static code. I think the key concept here is separating in-memory data from metadata needed for runtime manipulation (value witness tables, protocol conformances etc). I hope more languages design around this in the future.
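A small illustration of that reflective access, using the standard library's Mirror (`Pair` is a hypothetical name):

struct Pair {
    var a = 1
    var b = 2
}

// Mirror walks the value's runtime type metadata, even though Pair is a
// plain struct with no class or protocol machinery attached.
let mirror = Mirror(reflecting: Pair())
for child in mirror.children {
    print(child.label ?? "?", child.value)   // prints "a 1" then "b 2"
}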

A focus of Swift (like Java and Javascript) is to provide an apparently simple programming model. However, Swift also intentionally "cheats" in its global design by mixing in a few tricks to make the dynamic parts of the language optimizable by a static compiler in many common cases, without requiring profiling or other dynamic information. For example, the Swift compiler can tell if methods in non-public classes are never overridden (and non-public is the default, for a lot of good reasons) - thus treating them as final. This allows eliminating much of the overhead of dynamic dispatch without requiring a JIT. Consider an “app”: because it never needs to have public classes, this is incredibly powerful - the design of the swift package manager extends this even further (in principle, not done yet) to external libraries. Further, Swift’s generics provide a static performance model similar to C++ templates in release builds (though I agree we need to do more to really follow through on this) -- while Swift existentials (values of protocol type) provide a balance by giving a highly dynamic model.

The upshot of this is that Swift isn’t squarely in either of the static or dynamic camps: it aims to provide a very predictable performance model (someone writing a bootloader or firmware can stick to using Swift structs and have a simple guarantee of no dynamic overhead or runtime dependence)

To be fair, we still emit implicit heap allocations for value types whose size isn’t known. So your boot loader will have to avoid generics (and non-@noescape closures), at least :-)

Actually, for code within a single module, are we always able to fully instantiate all generics?

Finally, while it is possible that a JIT compiler might be interesting someday in the Swift space, if we do things right, it will never be “worth it” because programmers will have enough ability to reason about performance at their fingertips. This means that there should be no Java or Javascript-magnitude "performance delta" sitting on the table waiting for a JIT to scoop up. We’ll see how it works out long term, but I think we’re doing pretty well so far.

JITs can teach us a lot about optimizing for compile time though, which would help REPL usage, tooling and scripting.

Slava

···

On Dec 11, 2015, at 11:45 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

3. C++: C++ is a step up from C in terms of introducing dynamism into the model with virtual functions. Sadly, C++ also provides a hostile model for static optimizability - the existence of placement new prevents a lot of interesting devirtualization opportunities, and generally makes the compiler’s life difficult.

Interesting, I haven’t heard that placement new is problematic before. What are the problems involved? (Feel free to reply off-list.) Is it legal to use placement new to replace an existing instance and change its vtable pointer, or do aliasing rules prohibit that?

It looks like there was a recent discussion on reddit that explained more:

The upshot of this is that Swift isn’t squarely in either of the static or dynamic camps: it aims to provide a very predictable performance model (someone writing a bootloader or firmware can stick to using Swift structs and have a simple guarantee of no dynamic overhead or runtime dependence)

To be fair, we still emit implicit heap allocations for value types whose size isn’t known. So your boot loader will have to avoid generics (and non-@noescape closures), at least :-)

Yes. I would expect such system code to be built in a mode that would warn or error when any of these occurred. These are all statically knowable situations.

Actually, for code within a single module, are we always able to fully instantiate all generics?

We have the mechanics & design to do that, but I don’t think we have any user-visible way to expose it. There is certainly more to be done in any case.

Finally, while it is possible that a JIT compiler might be interesting someday in the Swift space, if we do things right, it will never be “worth it” because programmers will have enough ability to reason about performance at their fingertips. This means that there should be no Java or Javascript-magnitude "performance delta" sitting on the table waiting for a JIT to scoop up. We’ll see how it works out long term, but I think we’re doing pretty well so far.

JITs can teach us a lot about optimizing for compile time though, which would help REPL usage, tooling and scripting.

Absolutely. I’m not anti-JIT at all, and in fact our REPL and #! script mode use a JIT (as I’m sure you know).

I was trying to convey that building a model that depends on a JIT for performance means it is very difficult to practically target spaces where a JIT won’t work (e.g. because of space constraints). If you have a model that doesn’t rely on a JIT, then a JIT can provide value add.

-Chris

···

On Dec 12, 2015, at 3:36 PM, Slava Pestov <spestov@apple.com> wrote:

On Dec 11, 2015, at 11:45 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

random note: my previous email was very high level, so I’ve made an effort to make this more concrete and include examples, to avoid confusion.

Swift isn’t squarely in either of the static or dynamic camps: it aims to provide a very predictable performance model … while also providing an expressive and clean high level programming model. A focus of Swift … is to provide an apparently simple programming model. However, Swift also intentionally "cheats" in its global design by mixing in a few tricks to make the dynamic parts of the language optimizable by a static compiler in many common cases, without requiring profiling or other dynamic information.

I’d say that Swift is an “opportunistic” language, in that it provides a very dynamic “default" programming model, where you don’t have to think about the fact that a static compiler is able to transparently provide great performance - without needing the overhead of a JIT.

You really need to include the compilation model and thus the resultant programmer model into the story, and the programmer model is what really matters, IMHO.

First, two clarification requests for Chris on two things I imagine might lead to confusion on this thread:

When you say “programmer model,” I understand you to mean "how a Swift programmer thinks about the language’s semantics while writing Swift code, without regard to how they’re implemented in the compiler.”

Yes. Except in extreme cases, the interesting question isn’t whether it is “possible” to do thing X in language Foo; it is to ask whether Foo “encourages” X and how it rewards it. For example, people can (and do!) implement v-table dynamic dispatch systems in C to manually build OO models, but C requires tons of boilerplate to do that, and rewards those efforts with lack of type checking, no optimization of those dispatch mechanisms, and a typically unpleasant debugger experience.

What I really care about is “what kind of code is written by a FooLang programmer in practice”, which is what I refer to as the programmer model encouraged by FooLang. This question requires you to integrate across large bodies of different code and think about the sort of programmer who wrote it (e.g. “systemsy” people often write different code than “scripty” people) and how FooLang’s design led to that happening. People end up writing code a certain way because of the many obvious and subtle incentives inherent in the language. When designing a programming language from scratch or considering adding a feature to an existing one, the “big” question is what the programmer model should be and whether a particular aggregation of features will provide it.

As a concrete example, consider “let”. A Swift goal is to “encourage" immutability, without “punishing” mutability (other languages take a position of making mutability very painful, or don’t care about immutability). This is why we use “let” as a keyword instead of “const” or “let mut". If it were longer than “var”, some people would just use var everywhere with the argument that consistency is better. Warning about vars that could be lets is another important aspect of this position.

As a more general example, Swift’s goal is to provide a scalable programming model, where it is easy, friendly and familiar for people new to Swift and/or new to programming. Its defaults are set up so that common mistakes don’t lead to bugs, and that forgetting to think about something shouldn’t paint you into a corner. OTOH, Swift doesn’t achieve this by being “watered down” for newbies; it does this by factoring the language so that power-user features can be learned at the appropriate point on the learning curve. “Niche” features for power users make sense when they enable new things being expressed, new markets to be addressed, or new performance wins to be had. This is key to Swift being able to scale from “low level system programming” all the way up to “scripting”, something I’m quite serious about.

If you’re interested in examples of niche power-user features, they could be things like inline assembly support, “#pragma pack” equivalents, enforced language subsets for constrained environments, or a single-ownership / borrow / move model to guarantee no ARC overhead or runtime interaction. So long as the feature doesn’t complicate the basic model for all Swift programmers, allowing more expert users to have more power and control is a (very) good thing IMO.

When you say “dynamic,” I take that to mean any kind of dispatch based on runtime type — whether implemented using vtables a la C++, message dispatch a la Objective-C, string-based lookup in a hash a la Javascript, or anything else that uses something’s runtime type to resolve a method call.

Do I understand you correctly?

Yes, I’d also include checked downcasting, since that relies on runtime type as well. It is admittedly a stretch, but I include mark and sweep GCs as well, since these need runtime type descriptors to be able to walk pointer graphs in the “mark" phase.
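
As a small concrete illustration of a checked downcast consulting runtime type:

  let any: Any = "hello"
  if let s = any as? String {   // `as?` inspects the value's runtime type metadata
      print("it really is a String: \(s)")
  }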

On this thread, there are (I think?) two related goals at hand:

1. Allow dynamic dispatch of protocol extension methods even when the method does not appear in the extended protocol (the sketch after this list illustrates the current static-dispatch behavior).
2. Provide a good mental model of the language for programmers, and prevent programmer errors caused by misunderstandings about dispatch rules (if such misunderstandings do indeed exist in the wild).
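
A minimal sketch of the behavior the first goal targets, using hypothetical types:

  protocol P { }
  extension P {
      func whoAmI() -> String { return "P" }   // not a requirement of P
  }
  struct S: P {
      func whoAmI() -> String { return "S" }
  }

  let s = S()
  print(s.whoAmI())          // "S" — concrete type known, S's method is chosen
  print((s as P).whoAmI())   // "P" — the method isn't a protocol requirement, so the
                             //       static type P wins; no dynamic dispatch occurs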

I’ll copy and paste what Chris wrote into a “Swift philosophy” checklist for Brent’s proposal, and for any others working toward these goals. Chris, please correct me if I’m putting words in your mouth!
Provide a programmer model that:
- is high level
- is expressive and clean
- is dynamic by default
- doesn’t require a programmer to think about the fact that a static compiler is able to transparently provide great performance

Provide a performance model that:
- is predictable
- makes the dynamic parts of the language optimizable by a static compiler in many common cases
- does not require profiling or other dynamic information
- does not require JIT compilation

Yes, this is a good summary.

How do we resolve tension between these goals? The programmer model is what really matters, but we cannot reason about it without considering its impact on the compilation model. We should give the compiler opportunities to “cheat” in its optimization whenever we can do so without undermining the programmer model.

I’d consider adding a keyword (really, a decl modifier) to make it clear what the behavior is. This provides predictability.

-Chris

···

On Dec 12, 2015, at 10:04 AM, Paul Cantrell <cantrell@pobox.com> wrote:

I realize I’m straying from the topic of the thread (and Brent’s neglected proposal, which I really do mean to think some more about), but how can I not chime in to these wonderful musings on language design?

When you say “programmer model,” I understand you to mean "how a Swift programmer thinks about the language’s semantics while writing Swift code, without regard to how they’re implemented in the compiler.”

Yes. Except in extreme cases, the interesting question isn’t whether it is “possible" to do thing X in language Foo, it is to ask whether Foo “encourages" X and how it rewards it.

Yes! When students ask why they should take Theory of Computation, part of my answer is that it’s good to get a really deep handle on the question of what’s possible in a language, and how very different that is from the question of what’s elegant in a language. The Church-Turing Thesis closes the door on a whole category of questions about what a given language can do: algorithmically, all these languages we work with are equivalent! It’s a really freeing insight once you’ve wrapped your head around it.

What I really care about is “what kind of code is written by a FooLang programmer in practice”, which is what I refer to as the programmer model encouraged by FooLang.

When designing a programming language from scratch or considering adding a feature to an existing one, the “big” question is what the programmer model should be and whether a particular aggregation of features will provide it.

Thanks for this. I was thinking “programmer model” meant only the programmer’s mental model of the language — but you’re talking about something broader and deeper: the style, the culture, the patterns of thought, and the aesthetics that arise from the experience of working with a particular language.

That’s wonderful. And daunting.

So … how do you test this? How do you evaluate language features for it? I think of these questions about protocol extensions, and trying to predict the resulting programmer model seems a fool’s errand.

This is why we use “let” as a keyword instead of “const” or “let mut". If it were longer than “var”, some people would just use var everywhere with the argument that consistency is better.

I love this example. Yes, of course, we programmers would all concoct some post-hoc justification for doing what’s comfortable to us.

Swift doesn’t achieve this by being “watered down” for newbies, it does this by factoring the language so that power-user features can be learned at the appropriate point on the learning curve. “Niche” features for power users make sense when they enable new things to be expressed, new markets to be addressed, or new performance wins to be had. This is key to Swift being able to scale from “low level system programming” all the way up to “scripting”, something I’m quite serious about.

The other half of this is that the language doesn’t impose any cognitive burden on those who don’t use the niche / expert features. I don’t want to be an expert in everything all the time; I want to be able to focus on only the tools appropriate to the problem at hand. I don’t want to have to worry about bumping into the unshielded circular saw every time I pick up a screwdriver, even if I do know how to use a circular saw.

I like what Swift has done on this front so far. UnsafePointer is a great example. Swift can still provide bare memory access without making it ubiquitous. Take that, C++!

On which note: is there thought of eventually bootstrapping the Swift compiler?

Cheers,

Paul

I realize I’m straying from the topic of the thread (and Brent’s neglected proposal, which I really do mean to think some more about), but how can I not chime in to these wonderful musings on language design?

No problem, I’m taking time to pontificate here for the benefit of the community, hopefully it will pay itself back over time, because people understand the rationale / thought process that led to Swift better :-)

When you say “programmer model,” I understand you to mean "how a Swift programmer thinks about the language’s semantics while writing Swift code, without regard to how they’re implemented in the compiler.”

Yes. Except in extreme cases, the interesting question isn’t whether it is “possible" to do thing X in language Foo, it is to ask whether Foo “encourages" X and how it rewards it.

Yes! When students ask why they should take Theory of Computation, part of my answer is that it’s good to get a really deep handle on the question of what’s possible in a language, and how very different that is from the question of what’s elegant in a language. The Church-Turing Thesis closes the door on a whole category of questions about what a given language can do: algorithmically, all these languages we work with are equivalent!

Yep, almost. I’m still hoping to get an infinite tape someday :-)

Thanks for this. I was thinking “programmer model” meant only the programmer’s mental model of the language — but you’re talking about something broader and deeper: the style, the culture, the patterns of thought, and the aesthetics that arise from the experience of working with a particular language.

Right.

So … how do you test this?

You can only test it by looking at a large enough body of code and seeing what problems people face. Any language that is used widely will show evidence of the problems people are having. There are shallow problems like “I have to type a few extra characters that are redundant and it annoys me”, and large problems like “Two years into my project, I decided to throw it away and rewrite it because it had terrible performance / didn’t scale / was too buggy / couldn’t be maintained / etc". I don’t believe that there is ever a metric of “ultimate success", but the more big problems people have, the more work there is left to be done.

The good news is that we, as programmers, are a strongly opinionated group and if something irritates us we complain about it :-). It is somewhat funny that (through selection bias) I have one of the largest lists of gripes about Swift, because I see a pretty broad range of what people are doing and what isn’t working well (a result of reading almost everything written about Swift, as well as tracking many, many bug reports and feature requests). This drives my personal priorities, and explains why I obsess about weird little things like getting implicit conversions for optionals right, how the IUO model works, and making sure the core type checker can be fixed, but prefer to push off “simple” syntactic sugar for later when other pieces come in.

How do you evaluate language features for it? I think of these questions about protocol extensions, and trying to predict the resulting programmer model seems a fool’s errand.

Adding a feature can produce surprising outcomes. A classic historical example is when C++ added templates to the language without realizing they were a Turing-complete meta-language. Sometime later this was discovered and a new field of template metaprogramming came into being. Today, there are differing opinions about whether this was a good or bad thing for the C++ programmer model.

That said, most features have pretty predictable effects, because most features are highly precedented in other systems, and we can see their results and the emergent issues with them. Learning from history is extremely important. You can also think about the feature in terms of common metrics, by asking things like “what is the error of omission?”, which occurs when someone fails to think about the feature. For example, if methods defaulted to final, then the error of omission would be that someone didn’t think about overridability, and then discovered later that they actually wanted it. If symbols defaulted to public, then people would naturally export way too much stuff, because they wouldn’t think about marking them internal, etc.
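
To make the omission errors concrete, a minimal sketch with hypothetical classes:

  // Swift's actual default: methods are overridable unless marked final.
  class Formatter {
      func format(_ s: String) -> String { return "* " + s }
  }

  // Forgetting `final` lets a subclass quietly depend on overriding behavior
  // the base author never designed for:
  class LoudFormatter: Formatter {
      override func format(_ s: String) -> String { return super.format(s) + "!" }
  }

  // Had methods defaulted to final instead, the omission error would be a
  // subclass author discovering too late that `format` cannot be overridden.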

Swift doesn’t achieve this by being “watered down” for newbies, it does this by factoring the language so that power-user features can be learned at the appropriate point on the learning curve. “Niche” features for power users make sense when they enable new things to be expressed, new markets to be addressed, or new performance wins to be had. This is key to Swift being able to scale from “low level system programming” all the way up to “scripting”, something I’m quite serious about.

The other half of this is that the language doesn’t impose any cognitive burden on those who don’t use the niche / expert features. I don’t want to be an expert in everything all the time; I want to be able to focus on only the tools appropriate to the problem at hand. I don’t want to have to worry about bumping into the unshielded circular saw every time I pick up a screwdriver, even if I do know how to use a circular saw.

I like what Swift has done on this front so far. UnsafePointer is a great example. Swift can still provide bare memory access without making it ubiquitous. Take that, C++!

Right!

On which note: is there thought of eventually bootstrapping the Swift compiler?

There are no short term plans. Unless you’d consider rewriting all of LLVM as part of the project (something that would be awesome, but that I wouldn’t recommend :-), we’d need Swift to be able to import C++ APIs. I’m personally hopeful that we’ll be able to tackle at least some of that in Swift 4, but we’ll see - no planning can be done for Swift 4 until Swift 3 starts to wind down.

-Chris

···

On Dec 13, 2015, at 9:34 PM, Paul Cantrell <cantrell@pobox.com> wrote:

I forgot the most important part. The most important aspect of evaluating something new is to expose it to ridiculously smart people, to see what they think.

For best effect, they should come from diverse backgrounds and perspectives, and be willing to share their thoughts in a clear and direct way. This is one of the biggest benefits of all of Swift being open source - public design and open debate directly lead to a better programming language.

-Chris

···

On Dec 13, 2015, at 10:03 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

How do you evaluate language features for it? I think of these questions about protocol extensions, and trying to predict the resulting programmer model seems a fool’s errand.

Adding a feature can produce surprising outcomes.

No problem, I’m taking time to pontificate here for the benefit of the community, hopefully it will pay itself back over time, because people understand the rationale / thought process that led to Swift better :-)

I appreciate these philosophical musings about language design greatly! They make for very interesting reading and definitely shed more light on some of the decisions made in the design of Swift.

You can also think about the feature in terms of common metrics, by asking things like “what is the error of omission?”, which occurs when someone fails to think about the feature. For example, if methods defaulted to final, then the error of omission would be that someone didn’t think about overridability, and then discovered later that they actually wanted it.

In this example I think it is reasonable to consider “what is the error of omission?” from the reverse standpoint. Because methods do not default to final, somebody may not think about inheritance / overridability and fail to specify final. The class or method may be one that really should not be inheritable / overridable, or it may be one where this is reasonable but the implementation is not well designed to support it. In this case they may later discover that they have buggy subclasses upon which a lot of code depends. IMHO this is much more serious than discovering that one should have allowed inheritance / overridability, which can be added with a non-breaking change (from a semantic point of view; maybe implementation is not so simple when ABI resilience is a requirement).

Inheritance is a pretty complex tool to wield *without* prior consideration IMHO. Swift already offers a lot of tools that reduce the need for inheritance and will hopefully offer more in the future (such as improved protocols / generics, better support for composition through synthesized forwarding, etc). Why not *require* some forethought to use inheritance? This would provide subtle guidance towards other solutions where appropriate just as Swift subtly guides users towards immutability.

As an aside, final is actually the right choice for the majority of classes I encounter in iOS apps. This may not always be the case in system frameworks but they *should* receive much more careful design than application level code.

Please don’t take this as pedantic. I’m genuinely curious to hear more about why you apply “error of omission” one way and not the other in this case.

Thanks,
Matthew

You can also think about the feature in terms of common metrics, by asking things like “what is the error of omission?”, which occurs when someone fails to think about the feature. For example, if methods defaulted to final, then the error of omission would be that someone didn’t think about overridability, and then discovered later that they actually wanted it.

In this example I think it is reasonable to consider “what is the error of omission?” from the reverse standpoint.

Yes, absolutely.

Because methods do not default to final somebody may not think about inheritance / overridability and fail to specify final. The class or method may be one that really should not be inheritable / overridable or it may be one where this is reasonable, …

Understood, I wasn’t trying to present a well-rounded analysis of this decision, I just wanted to use it as a simple example.

-Chris

···

On Dec 14, 2015, at 8:23 AM, Matthew Johnson <matthew@anandabits.com> wrote:

No problem, I’m taking time to pontificate here for the benefit of the community, hopefully it will pay itself back over time, because people understand the rationale / thought process that led to Swift better :-)

Cheers to that. It’s helpful to get these philosophical thoughts from the core team, as well as little “smell checks” on specific proposals — both for taste and for feasibility. I don’t think it will stop anyone from sharing opinions (programmers, strong opinions, like you said), but it does help guide discussion.

I see a pretty broad range of what people are doing and what isn’t working well … This drives my personal priorities, and explains why I obsess about weird little things like getting implicit conversions for optionals right, how the IUO model works…

It’s no “weird little thing” — that’s been huge. Confusing implicit optional conversions (or lack thereof) + lack of unwrapping conveniences + too many things unnecessarily marked optional in Cocoa all made optionals quite maddening in Swift 1.0. When I first tried the language, I thought the whole approach to optionals might be a mistake.

Yet with improvements on all those fronts, I find working with optionals in Swift 2 quite pleasant. In 1.0, when optionals forced me to stop and think, it was usually about the language and how to work around it; in 2.x, when optionals force me to stop and think, it’s usually about my code, what I’m modeling with it, and where there are gaps in my reasoning. Turns out the basic optionals approach was solid all along, but needed the right surrounding details to make it play out well. Fiddly details had a big impact on the language experience.

Still, it seems like a lot of people fall back on forced unwrapping rather than trying to fully engage with the type system and think through their unwrappings. Is this a legacy of 1.x? Or does the language still nudge that way? I see a lot of instances of “foo!” in the wild, especially from relative beginners, that seem to be a reflexive reaction to a compiler error and not a carefully considered assertion about invariants guaranteeing safe unwrapping. This discussion makes me wonder: conversely to the decision of making “let” as short as “var,” perhaps “foo!” is too easy to type. Should the compiler remove fixits that suggest forced / implicit unwraps? Should it even be something ugly like “forceUnwrap!(foo)”? (OK, probably not. But there may be more gentle ways to tweak the incentives.)

So there’s the notion of the “programmer model” playing out in practice.

Adding a feature can produce surprising outcomes. A classic historical example is when C++ added templates to the language without realizing they were a Turing-complete meta-language. Sometime later this was discovered and a new field of template metaprogramming came into being.

I remember my mixture of delight & horror when I first learned that! (I was an intern for HP’s dev tools group back in the mid-90s, and spent a summer trying to find breaking test cases for their C++ compiler. Templates made it like shooting fish in a barrel — which is nothing against the compiler devs, who were awesome, but just a comment on the deep darkness of the corners of C++.)

That experience makes me wonder whether in some cases the Swift proposal process might put the cart before the horse by having a feature written up before it’s implemented. With some of these proposals, at least the more novel ones where the history of other languages isn’t as strong a guide, it could be valuable to have a phase where it’s prototyped on a branch and we all spend a little time playing with a feature before it’s officially accepted.

One of my favorite features of Swift so far has been its willingness to make breaking changes for the health of the language. But it would be nice to have those breaking changes happen _before_ a release when possible!

I forgot the most important part. The most important aspect of evaluating something new is to expose it to ridiculously smart people, to see what they think.

Well, I don’t have the impression that the Swift core team is exactly hurting on _that_ front. But…

This is one of the biggest benefits of all of swift being open source - public design and open debate directly leads to a better programming language.

…yes, hopefully many eyes bring value that’s complementary to the intelligence & expertise of the core team. There’s also a lot to be said for the sense of ownership and investment that comes from involving people in the decision making. That certainly pays dividends over time, in so many different community endeavors.

I’m grateful and excited to be involved in thinking about the language, as I’m sure are many others on this list. When it comes right down to it, I trust the core team to do good work because you always have — but it’s fun to be involved, and I do hope that involvement indeed proves valuable to the language.

Cheers,

Paul

–––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
https://innig.net • @inthehands • http://siestaframework.com/

Understood, I wasn’t trying to present a well-rounded analysis of this decision, I just wanted to use it as a simple example.

Makes sense. If you're ever in the mood to share more on this I would find it very interesting (even if it doesn't convince me :-) ).

Matthew

I see a pretty broad range of what people are doing and what isn’t working well … This drives my personal priorities, and explains why I obsess about weird little things like getting implicit conversions for optionals right, how the IUO model works…

It’s no “weird little thing” — that’s been huge. Confusing implicit optional conversions (or lack thereof) + lack of unwrapping conveniences + too many things unnecessarily marked optional in Cocoa all made optionals quite maddening in Swift 1.0. When I first tried the language, I thought the whole approach to optionals might be a mistake.

Yet with improvements on all those fronts, I find working with optionals in Swift 2 quite pleasant. In 1.0, when optionals forced me to stop and think, it was usually about the language and how to work around it; in 2.x, when optionals force me to stop and think, it’s usually about my code, what I’m modeling with it, and where there are gaps in my reasoning. Turns out the basic optionals approach was solid all along, but needed the right surrounding details to make it play out well. Fiddly details had a big impact on the language experience.

Right, but what I’m getting at is that there is more work to be done in Swift 3 (once Swift 2.2 is out of the way). I find it deeply unfortunate that stuff like this still haunts us:

  let x = foo() // foo returns a T!
  let y = [x, x] // without looking, does this produce "[T!]" or "[T]"?

There are other similar problems where the implicit promotion from T to T? interacts with same-type constraints in unexpected ways, for example around the ?? operator. There are also the insane typechecker complexity and performance issues that arise from these implicit conversions. These need to be fixed, as they underlie many of the symptoms that people observe.
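
For readers who haven't hit this, a minimal sketch of the ?? interaction (the standard library provides both a (T?, T) -> T and a (T?, T?) -> T? overload):

  let opt: Int? = nil
  let a = opt ?? 5             // non-optional default: a is Int
  let b = opt ?? Optional(5)   // optional default: b is Int?, via the T -> T? promotion
  let c = opt ?? nil           // also Int? — easy to write without noticing the
                               // result is still optional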

Still, it seems like a lot of people fall back on forced unwrapping rather than trying to fully engage with the type system and think through their unwrappings. Is this a legacy of 1.x? Or does the language still nudge that way? I see a lot of instances of “foo!” in the wild, especially from relative beginners, that seem to be a reflexive reaction to a compiler error and not a carefully considered assertion about invariants guaranteeing safe unwrapping.

Unclear. I’m aware of many unfortunate uses of IUOs that are the result of language limitations that I’m optimistic about fixing in Swift 3 (e.g. two-phase initialization situations like awakeFromNib that force a property to be an IUO or optional unnecessarily), but I’m not aware of pervasive use of force unwraps. Maybe we’re talking about the same thing, where the developer decided to use T? instead of T!.

This discussion makes me wonder: conversely to the decision of making “let” as short as “var,” perhaps “foo!” is too easy to type. Should the compiler remove fixits that suggest forced / implicit unwraps? Should it even be something ugly like “forceUnwrap!(foo)”? (OK, probably not. But there may be more gentle ways to tweak the incentives.) So there’s the notion of the “programmer model” playing out in practice.

It depends on “how evil” you consider force unwrap to be. If you draw an analogy to C, the C type system has a notion of const pointers. It is a deeply flawed design for a number of reasons :-), but it does allow modeling some useful things. However, if you took away the ability to "cast away" const (const_cast in C++ nomenclature), then the model wouldn’t work (too many cases would be impossible to express). I put force unwrap in the same sort of bucket: without it, optionals would force really unnatural code in corner cases. It is “bad” in some sense, but its presence is visible and greppable enough to make it carry weight. The fact that ! is a unifying scary thing with predictable semantics in swift is a good thing IMO. From my perspective, I think the Swift community has absorbed this well enough :-)
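
A minimal sketch of the two reactions under discussion, with lookupName() as a hypothetical stand-in for any optional-returning API:

  func lookupName() -> String? { return nil }   // hypothetical API

  // Reflexive reaction: silence the compiler; traps at runtime if the value is nil.
  // let name = lookupName()!

  // Engaging with the type system: the nil case is handled deliberately.
  if let name = lookupName() {
      print("hello, \(name)")
  } else {
      print("no name available")
  }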

Here is another (different but supportive) way to look at why we treat unsafety in Swift the way we do:

With force unwrap as an example, consider an API like UIImage(named:"foo”). It obviously can fail if “foo.png" is missing, but when used in an app context, an overwhelming use-case is loading an image out of your app bundle. In that case, the only way it can fail is if your app is somehow mangled. Should we require developers to write recovery code to handle that situation?
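
A minimal sketch of both sides of that API (assuming a UIKit app with a "logo" asset in its bundle):

  import UIKit

  // Bundle asset: failure means the app itself is mangled, so trapping via ! is the
  // pragmatic choice rather than writing untestable recovery code.
  let logo = UIImage(named: "logo")!

  // Untrusted name: failure is expected, so the caller recovers explicitly.
  func image(named name: String) -> UIImage {
      // assumption: an empty image is an acceptable fallback here
      return UIImage(named: name) ?? UIImage()
  }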

To feature creep the discussion even more, lets talk about object allocation in general. In principle, malloc(16 bytes) can fail and return nil, which means that allocation of any class type can fail. Should we model this as saying that all classes have a failable initializer, and expect callers to write recovery code to handle this situation? If you’re coming from an ObjC perspective, should a class be expected to handle the situation when NSObject’s -init method returns nil?
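
A minimal sketch of what that thought experiment would look like if modeled in the language:

  // If plain allocation could fail, every init would be failable and every caller
  // would be forced to write recovery code like this:
  class Node {
      init?() {
          // imagine allocation itself returning nil here
      }
  }

  guard let node = Node() else {
      fatalError("allocation failed")   // "recovery" nobody can meaningfully write or test
  }
  print(node)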

You can predict my opinion based on the current Swift design: the answer to both questions is no. In the first case, we want the API to allow the developer to write failure-handling code in the situations they care about, and in the situations they don’t care about they can use !. In the latter case, we don’t think that primitive object allocation should ever fail (and if it does, it should be handled by the runtime or some OS service like purgeable memory), and thus the app developer should never have to think about it.

This isn’t out of laziness: “error handling” and “recovery” code not only needs to be written, but it needs to be *correct*. Unless there is a good way to test the code that is written, it is better to not write it in the first place. Foisting complexity onto a caller (which is what UIImage is doing) is something that should only be done when the caller may actually be able to write useful recovery code, and this only works (from a global system design perspective) if the developer has an efficient way to say “no really, I know what I’m doing in this case, leave me alone”. This is where ! comes in. Similarly, IUOs are a way to balance an equation involving the reality that we’ll need to continue importing unaudited APIs for a long time, as well as a solution for situations where direct initialization of a value is impractical.

This sort of thought process and design is what got us to the current Swift approach. It balances many conflicting goals, aiming to produce a programming model that leads to reliable code being written the first time. In the cases when it isn’t reliable, it is hopefully testable, e.g. by “failing fast” (https://en.wikipedia.org/wiki/Fail-fast).

Adding a feature can produce surprising outcomes. A classic historical example is when C++ added templates to the language without realizing they were a Turing-complete meta-language. Sometime later this was discovered and a new field of template metaprogramming came into being.

I remember my mixture of delight & horror when I first learned that! (I was an intern for HP’s dev tools group back in the mid-90s, and spent a summer trying to find breaking test cases for their C++ compiler. Templates made it like shooting fish in a barrel — which is nothing against the compiler devs, who were awesome, but just a comment on the deep darkness of the corners of C++.)

Sadly, templates aren’t the only area of modern C++ that have that characteristic… :-) :-)

That experience makes me wonder whether in some cases the Swift proposal process might put the cart before the horse by having a feature written up before it’s implemented. With some of these proposals, at least the more novel ones where the history of other languages isn’t as strong a guide, it could be valuable to have a phase where it’s prototyped on a branch and we all spend a little time playing with a feature before it’s officially accepted.

I’m of two minds about this. On the one hand, it can be challenging that people are proposing lots of changes that are more “personal wishlist” items than things they plan to implement and contribute themselves. On the other hand, we *want* the best ideas from the community, and don’t want to stymie or overly “control” the direction of Swift if it means that we don’t listen to everyone. It’s a hard problem, one that we’ll have to figure out as a community.

Another way of looking at it: Just because you’re a hard core compiler engineer, it doesn’t mean your ideas are great. Just because you’re not a hard core compiler engineer, it doesn’t mean your ideas are bad.

One of my favorite features of Swift so far has been its willingness to make breaking changes for the health of the language. But it would be nice to have those breaking changes happen _before_ a release when possible!

+1. I think that this is the essential thing that enables Swift to be successful over the long term. Swift releases are time bound (to a generally yearly cadence), Swift is still young, and we are all learning along the way. Locking it down too early would be bad for its long term health -- but it also clearly needs to settle over time (and sooner is better than later).

Overall, we knew that it would be a really bad idea to lock down Swift before it was open source. There are a lot of smart people at Apple of course, but there are also a lot of smart people outside, and we want to draw on the best ideas from wherever we can get them.

I forgot the most important part. The most important aspect of evaluating something new is to expose it to ridiculously smart people, to see what they think.

Well, I don’t have the impression that the Swift core team is exactly hurting on _that_ front. But…

Frankly, one of my biggest surprises since we’ve open sourced Swift is how “shy” some of the smartest engineers are. Just to pick on one person, did you notice that Slava covertly fixed 91% of the outstanding practicalswift compiler crashers today? Sheesh, he makes it look easy! Fortunately for all of us, Slava isn’t the only shy one…

This is one of the biggest benefits of all of Swift being open source - public design and open debate directly lead to a better programming language.

…yes, hopefully many eyes bring value that’s complementary to the intelligence & expertise of the core team. There’s also a lot to be said for the sense of ownership and investment that comes from involving people in the decision making. That certainly pays dividends over time, in so many different community endeavors.

Yes it does. The thing about design in general and language design in particular is that the obviously good ideas and obviously bad ideas are both “obvious". The ones that need the most debate are the ones that fall in between. I’ll observe that most ideas fall in the middle :-)

I’m grateful and excited to be involved in thinking about the language, as I’m sure are many others on this list. When it comes right down to it, I trust the core team to do good work because you always have — but it’s fun to be involved, and I do hope that involvement indeed proves valuable to the language.

I’m glad you’re here!

-Chris

···

On Dec 14, 2015, at 1:46 PM, Paul Cantrell <cantrell@pobox.com> wrote:
