Why is Swift an iOS-only language?

And Elements lets you target Linux natively, via .NET/Mono or via JVM — whichever API/platform you prefer... ;)

Potato, potato. Code written for the normal, "reference" if you will, implementation of Swift might very well not run on this. Silver certainly doesn't enable you to "use Swift for other platforms" not supported by upstream; at best, it allows you to write code that looks a lot like Swift.

Additionally, it's a commercial closed source product, so you should probably at least mention you work for them when advertising it.

9 Likes

@marc_hoffman Sorry, but Silver lost me at the first screenshot. The code in those screenshots looks like it was translated from Java/C# with no attention to Swift's design guidelines whatsoever. If it wasn't for the .swift extension, I would never have known they contained Swift code.

I'm a language implementor. I'm always excited to see interesting new language implementations. I think Silver is a neat project, and I'd be happy to talk to you about ways you could efficiently implement full value semantics in a JVM or similar environment. I certainly don't begrudge the existence of a second implementation of Swift, although as a project maintainer, it does pain me a little to see that effort not going into improving the main implementation. (And I think SIL would help a lot with that efficient-value-semantics problem!)

But... a programming language is not just a grammar and a type system. There's a lot of room for variation between implementations, but basic semantics are part of the language, and that includes core library types. Many typical programs will not work the same way in the main Swift implementation and in Silver, and to me, that does make them not implementations of the same language. So, speaking just for myself, I'm not very happy about Silver calling itself a Swift implementation until it fixes that problem.

26 Likes

Now back to the original question: why is Swift an iOS-only language?

Perhaps there’s some insight to be gained from Stack Overflow Trends:

The iPhone was and remains the killer app, and it initially drove the adoption of Objective-C. After the initial gold rush subsided, interest in the platform stabilized around 2012, while its primary language peaked in mid-2011 and has been in steady decline (in terms of SO question share) ever since. Shortly after its introduction, Swift succeeded in cleanly replacing Objective-C as THE language for the platform, and has held that position since 2015.

Programming languages backed by the platform vendor have a built-in audience of the majority of developers for that platform, which enables a language to become an overnight success. That is no accident: such a language is usually backed by the full might of the platform vendor's in-house developers, who work very hard to make the language well suited for the primary domain, while also adapting the platform to better fit the language. The smoothness with which Swift dethroned Objective-C is a testament to the excellent job Apple did in making it work great for iOS development.

For a language to break out of its initial niche (however large) into programming at large is a multi-year effort, and Swift is simply too immature to have a serious shot at it at this point in time.

It would be very interesting to hear a coherent vision for the future of Swift directly from the horse's mouth, because my impression is that the SE community is quite out of sync with the efforts currently in progress, largely behind closed doors.

6 Likes

It would be nice to have an official Ubuntu package for Swift.

Also:

In this case it's the other way around: it would be nice to see more support for Fedora on swift.org.

It's an interesting question: where does the language end and the (standard) library begin? However, in every language I've worked in that has arrays, the array type has been considered to be part of the language no matter where its implementation resides.

If the following program prints [1, 2, 4] on a platform, I would say the platform is a buggy implementation of Swift, or not Swift at all.

var a = [1, 2, 3]
var b = a         // value semantics: b is an independent copy of a
b[2] = 4          // mutating b must not affect a
print("\(a)")     // a correct implementation prints [1, 2, 3]
2 Likes

I'd certainly love to hear your suggestions for how to do this efficiently (read: probably with copy-on-write semantics, not copying the entire array data every time) within the confines of the CLR (.NET) runtime. All the other platforms we support are doable, I think, but on the CLR I don't see a clean way to do it that would hold up when mixing such a type with code not under our compiler's control (e.g. passing it to Visual C# or Visual Basic code).

And trust me, I would love to see this changed, if we can.

I hear you, but look at it this way: whatever effort we are putting into our Swift front-end — if we weren't doing that, it would not be going towards Apple's compiler, but towards something entirely different. IOW, it's not effort "lost" on the side of Apple's compiler. And since we're supporting quite a few platforms Apple's compiler will probably never support (.NET and JVM, in particular, I don't see happening, ever), I'd say that's still a net positive for the Swift community at large?

Maybe, but does SIL get me to IL or Java bytecode? No. In the end, on those two platforms, we're limited by what the runtime allows and supports.

That said, yes, I hope we can address the array/dictionary issue at some point.

thanx!

At least in the case of the Android sample, it probably was. ;). I'd appreciate your honest feedback on what exactly makes it look non-swifty in your opinion (within the confines of the OS APIs, which obviously are what they are), though.

which, mind you, have changed every other week over the past three years, and that particular code was probably converted during the Swift 1.0 days. But point taken.

thanx,
marc

Not when most of the Swift community is using the reference Swift compiler.

I would rather call Silver a Swift-inspired language for other platforms than an actual implementation of "Swift". I don't see Swift being like ECMAScript or Lisp, where there are many different variants of a language specification. Swift is a language with a single reference implementation, since it's not a formally specified language, and probably never will be.

And when you start having differences among various "Swift" compilers, you're going to start creating different communities, and at that point is it really Swift you're talking about, or a Swift-language-family community? That's something I don't know if the proper Swift community really wants to see. If anything, I think we would want to see things like the TensorFlow project, where it's a "branch" (really almost a fork) of the main project, but at the same time many of the changes being done in the TensorFlow project are planned to be proposed for inclusion in the upstream branch. That way the version of Swift that ships with TensorFlow has minimal, if any, difference from the upstream Swift compiler, and those changes should be specialized to the domain TensorFlow operates in.

But even with the TensorFlow model, many people were (and probably still are) worried that it will create two distinct "Swifts", with incompatible communities.

Hmm. Having more options is always a good thing, IMHO.

But virtually every other language (that isn't super-niche or vendor-locked-in, or both) has multiple implementations, and — in many cases — drastic variations. How many Pascal dialects are out there, for example? There are separate C# compilers (Microsoft's, ours, and Mono's), and probably dozens of C/C++ ones. (In fact, Swift is built on the very foundation of someone thinking they could build a better C/C++/Obj-C compiler, and doing so. If the LLVM guys had said "you know, there's GCC, it's fine, let's not", there'd probably be no Swift today.)

Programming language ecosystems thrive by having different implementations and options.

1 Like

Again, I don't object at all to the idea of having a second implementation of Swift, and of course two different implementations will always have some minor differences. The problem is that replacing value semantics with reference semantics is not really a minor difference at all; it's changing a very core and very intentional aspect of the language design. If you were implementing Java, and you suddenly decided that byte should be an unsigned type, you wouldn't really be implementing Java anymore — and that's a far less important and far more justifiable decision than what Silver has done with Swift.

5 Likes

As for actually implementing mutable value types efficiently in a JVM environment:

The first thing is that you have to accept that you're going to have some extra overhead because the system isn't really designed for uniqueness testing. Given that, I assume that you're creating a class for every value type, maybe with some peephole to avoid that overhead for types that can be represented as a single scalar. I would suggest adding a flag to that class indicating whether the object is not uniquely referenced. The flag is one-way: once it's been set, you'll never be able to modify that particular object again; that's the big sacrifice you make relative to a ref-counted environment. The rules are then:

  • An inout parameter (including self for a mutating method) is an ordinary object reference with a precondition and postcondition that the object is uniquely referenced. The caller ensures uniqueness before making the call, then just passes the current value.
  • self in a non-mutating method is an ordinary object reference with a postcondition that the object reference is uniquely referenced if it was uniquely referenced on entry. This rule lets you call non-mutating methods on mutable variables without cloning.
  • Other parameters receive values in whatever state and may set the non-unique bit on them if necessary (e.g. if the usage of the value is non-affine).
  • Constructors of value types should return a unique reference.
  • Other return values may be in any uniqueness state.
  • Whole-value assignment into an inout parameter does a member-wise assignment.
  • Passing an inout parameter as an inout argument just forwards the object reference.
  • Modifying a field of an inout parameter just applies the ordinary mutable variable rule to the field of the object reference.
  • Whole-value assignment into an ordinary mutable variable replaces the object reference.
  • Passing an ordinary mutable variable as an inout argument first ensures uniqueness of the variable's current value and then passes it as the object reference.
  • Modifying a field of an ordinary mutable variable first ensures uniqueness of the field's current value and then applies the ordinary mutable variable rule to that object.
  • Whole-value copying of an inout parameter or self in a non-mutating method clones the object.
  • Whole-value copying of any other variable copies the object reference and sets the non-unique flag.
  • Cloning an object sets the non-unique flag on all the fields.

You'll want a data-flow-aware optimizer so that you can do things like (1) return the value of a variable without setting the non-unique bit on it and (2) avoid redundant uniqueness checks when e.g. making a series of mutating method calls on the same variable.

Note that the flag doesn't need to be volatile because it's only checked during a modification and so any race on the flag would necessarily be an illegal race with the modification.
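Putting the rules above together, here is a minimal sketch in Java (standing in for generated JVM code) of what the boxed representation and the uniqueness checks might look like. Pair, Box, ensureUnique, and the other names are illustrative, not from any real compiler:

final class Box {
    boolean nonUnique;     // one-way flag: once set, this object is never mutated in place again
    long value;
    Box(long value) { this.value = value; }
    Box cloneBox() { return new Box(value); }  // only a scalar field, so nothing further to flag
}

// Generated class for a hypothetical `struct Pair { var x: Box; var y: Box }`.
final class Pair {
    boolean nonUnique;
    Box x, y;
    Pair(Box x, Box y) { this.x = x; this.y = y; }  // constructors return unique references

    // Cloning yields a unique object but marks all fields non-unique,
    // since both copies now reference them.
    Pair clonePair() {
        x.nonUnique = true;
        y.nonUnique = true;
        return new Pair(x, y);
    }

    // A `mutating func`: `self` arrives uniquely referenced (the caller
    // ran ensureUnique first), so in-place mutation is safe once the
    // field itself has been made unique.
    void setX(long v) {
        if (x.nonUnique) x = x.cloneBox();  // ensure uniqueness of the field before mutating it
        x.value = v;
    }
}

final class ValueRuntime {
    // Called before mutating an ordinary variable or passing it inout:
    // clones only when the value may be shared.
    static Pair ensureUnique(Pair p) { return p.nonUnique ? p.clonePair() : p; }

    // Whole-value copy out of an ordinary variable: share the reference
    // and set the one-way flag.
    static Pair copy(Pair p) { p.nonUnique = true; return p; }
}

// Usage, mirroring `var b = a; a.setX(10)`:
//   Pair a = new Pair(new Box(1), new Box(2));  // unique
//   Pair b = ValueRuntime.copy(a);              // `var b = a`: shares and flags
//   a = ValueRuntime.ensureUnique(a);           // clones before mutating, since b exists
//   a.setX(10);                                 // b still sees x == 1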

12 Likes

Yep, and that's where it falls apart, because any IL code compiled with a compiler that's not ours can interact with these types and break that flag. IOW, I could write a function that returns a [String], and that function then gets called from, say, Visual C# or Visual Basic, and that IL code knows nothing about the flag and happily copies the struct around...

Correct. If you want to allow in-place mutation at all, you have to impose some high-level rules and assume that there isn't any code violating them. Even with reference types, you have to assume that there isn't malicious code grabbing references and then arbitrarily changing them asynchronously, or else all invariants go out the window. In my career, I have focused on native compilers that interoperate with C, so admitting that I'm dependent on foreign code not deliberately screwing with me has always been par for the course.

If you're talking about intended interaction points with other languages, I would recommend using defensive copying in the external-facing functions. Swift does the same thing when e.g. bridging NSArray to Array (and even then, we rely on the assumption that somebody hasn't maliciously crafted a mutable NSArray subclass for which -copy doesn't create an immutable instance). The expense of these additional measures is why it's a good idea to require the interaction points to be explicitly declared, as Swift does with @objc.
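To make that concrete, here is a minimal Java sketch of defensive copying at a declared interaction point; SwiftArray, exportNames, and importNames are hypothetical names, not part of any real runtime:

import java.util.Arrays;

final class SwiftArray {
    boolean nonUnique;     // uniqueness flag used internally (see the sketch above)
    String[] storage;
    SwiftArray(String[] storage) { this.storage = storage; }
}

final class Bridge {
    // External-facing entry point callable from C#/VB (or plain Java):
    // hand out a fresh plain array, so foreign code can never alias the
    // internal storage or invalidate the uniqueness flag.
    static String[] exportNames(SwiftArray internal) {
        return Arrays.copyOf(internal.storage, internal.storage.length);
    }

    // Copy on the way in as well: the caller may retain the array it
    // passed and mutate it after this call returns.
    static SwiftArray importNames(String[] external) {
        return new SwiftArray(Arrays.copyOf(external, external.length));
    }
}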

2 Likes

I am.

Yeah, but for .NET there's no concept of "that's external, for other languages", like there is with Swift vs Cocoa. In .NET, all types happily live in the same space. Any method I'd expose (returning, say, a [String]) to be called from Silver would also be callable from VC#, or vice versa. For a .NET language to be a good citizen, it needs to interact with the same object space.

On the plus side, it also means you don't have silly conversions back and forth like you have between Swift and ObjC (where really, if you watch the session on this from this year's WWDC, you need to think three times about ANY Cocoa API you call that uses arrays or even strings, because you can easily end up in non-toll-free-bridging hell without ever realizing it). Silver doesn't have that, not when you use it for Cocoa, either: its strings bridge toll-free to Cocoa APIs and back ;)

Public Swift functions are global symbols that anyone can just declare and call from C. Our implementation depends for correctness on nobody doing that without following the ABI. You could absolutely just apply the same principle and give your internal-only functions weird names so that people won't accidentally use them. But you don't really need to, because both the JVM and CLR support function identifiers at the implementation level that cannot be written in ordinary Java or .NET source code.
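For instance, on the JVM, a compiler can emit method names that are legal in bytecode but unwritable in Java source. A hedged sketch using the ASM bytecode library (assuming org.ow2.asm is on the classpath; the class and method names here are made up):

import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

final class HiddenNameDemo {
    // Emits a class `demo.Hidden` with a static method whose name contains
    // '#' and ':' -- legal in class files, but impossible to declare or
    // call directly from Java source, so only generated code binds to it.
    static byte[] emit() {
        ClassWriter cw = new ClassWriter(0);
        cw.visit(Opcodes.V1_8, Opcodes.ACC_PUBLIC, "demo/Hidden", null,
                 "java/lang/Object", null);
        MethodVisitor mv = cw.visitMethod(
                Opcodes.ACC_PUBLIC | Opcodes.ACC_STATIC,
                "swift#internal:helper", "()V", null, null);
        mv.visitCode();
        mv.visitInsn(Opcodes.RETURN);
        mv.visitMaxs(0, 0);
        mv.visitEnd();
        cw.visitEnd();
        return cw.toByteArray();
    }
}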

3 Likes

You can have value types on the JVM; Scala does this, for example. The general way this works is that the struct is expanded into its constituents by the compiler, e.g.:

struct S {
    var i: Int
    var c: SomeClass
}

When used, S gets expanded (recursively), e.g.:

var s = S(...)
f(s)
let ss = [S]()

Becomes:

// var s = S(...)
var s$i = ...
var s$c = ...

// f(s)
f(s$i, s$c)

// let ss = [S]()
let ss$i = [Int]()
let ss$c = [SomeClass]()

This allows mutation of a struct:

// s.i = 0
s$i = 0

PS: For the JVM, it is conventional to use $ in generated names.
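As a hedged illustration, the expansion above might render into Java roughly like this (the real compiler would emit bytecode directly; S, SomeClass, and f are the names from the example):

final class SomeClass {}

final class Demo {
    // f(s) after expansion: one parameter per field of S
    static void f(long s$i, SomeClass s$c) { /* body sees the exploded struct */ }

    static void use() {
        // var s = S(...)
        long s$i = 42;
        SomeClass s$c = new SomeClass();

        // f(s)
        f(s$i, s$c);

        // let ss = [S]() -> parallel arrays, one per field
        long[] ss$i = new long[0];
        SomeClass[] ss$c = new SomeClass[0];

        // s.i = 0 -> mutation updates the expanded local directly
        s$i = 0;
    }
}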

There's also work toward value classes in the JVM, which are required to be immutable and identity-insensitive. I don't know if any JVM implementations do so in practice, but in principle it ought to be possible for an implementation to turn methods that take and return immutable values into ones that behave like Swift inout arguments, updating a memory slot in-place instead.
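As a rough sketch of that idea, with an ordinary immutable Java class standing in for a future JVM value class (Point and withX are made-up names):

final class Point {                 // immutable and identity-insensitive
    final double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
    Point withX(double nx) { return new Point(nx, y); }  // "mutator" returns a fresh value
}

// Call site: semantically, each call produces a new value...
//   Point p = new Point(0, 0);
//   p = p.withX(1);
// ...but because the value has no identity, an implementation is free to
// overwrite p's storage slot in place, behaving like a Swift inout update.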

FWIW, I agree with John. IMO, Silver is best characterized as a "Swift inspired" language, but it is not an alternate implementation of Swift.

I say this because it can't run even the most basic Swift programs that use arrays correctly. It is true that Swift pushes a lot of language complexity into the standard library, but just as an implementation of Swift that doesn't provide the Int type "isn't Swift", an implementation that doesn't provide proper array semantics isn't either.

Given our design, the API provided by the standard library is intrinsically tied to the definition of the language platform.

-Chris

8 Likes