[Planning][Request] "constexpr" for Swift 5

The parameters for a fixed-size array type determine the type's size/stride, so how could the bounds not be needed during compile-time? The compiler can't lay out objects otherwise.

Swift is not C; it is perfectly capable of laying out objects at run time. It already has to do that for generic types and types with resilient members. That does, of course, have performance consequences, and those performance consequences might be unacceptable to you; but the fact that we can handle it means that we don't ultimately require a semantic concept of a constant expression, except inasmuch as we want to allow users to explicitly request guarantees about static layout.
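
A minimal sketch of that run-time layout in present-day Swift (the names are invented for the example): the size of a generic struct depends on its type argument, and unspecialized generic code discovers it at run time from the type's layout information.

struct Pair<T> {
  var first: T
  var second: T
}

// Without specialization, the compiler cannot know this number statically;
// it is obtained from T's layout information at run time.
func sizeOfPair<T>(_ type: T.Type) -> Int {
  return MemoryLayout<Pair<T>>.size
}

print(sizeOfPair(Int8.self)) // 2
print(sizeOfPair(Int.self))  // 16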

Doesn't this defeat the purpose of generic value parameters? We might as well use a regular parameter if there's no compile-time evaluation involved. In that case, fixed-sized arrays will be useless, because they'll be normal arrays with resizing disabled.

You're making huge leaps here. The primary purpose of a fixed-size array feature is to allow the array to be allocated "inline" in its context instead of "out-of-line" using heap-allocated copy-on-write buffers. There is no reason that that representation would not be supportable just because the array's bound is not statically known; the only thing that matters is whether the bound is consistent for all instances of the container.

That is, it would not be okay to have a type like:
struct Widget {
   let length: Int
   var array: [length x Int]
}
because the value of the bound cannot be computed independently of a specific value.

But it is absolutely okay to have a type like:
struct Widget {
   var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
}
It just means that the bound would get computed at runtime and, presumably, cached. The fact that this type's size isn't known statically does mean that the compiler has to be more pessimistic, but its values would still get allocated inline into their containers and even on the stack, using pretty much the same techniques as C99 VLAs.
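
To make the inline/out-of-line distinction above concrete, here is a rough sketch in today's Swift, with a homogeneous tuple standing in for the proposed fixed-size array: the Array field stores only a reference to its heap-allocated copy-on-write buffer, while the tuple's elements live directly inside the struct.

struct OutOfLine {
  var values: [Int]                  // a reference to a heap-allocated COW buffer
}

struct Inline {
  var values: (Int, Int, Int, Int)   // four Ints stored directly in the struct
}

print(MemoryLayout<OutOfLine>.size)  // 8 on a 64-bit platform
print(MemoryLayout<Inline>.size)     // 32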

I see your point. Dynamically-sized in-place allocation is something that completely escaped me when I was thinking of fixed-size arrays. I can say with confidence that a large portion of private-class-copy-on-write value types would greatly benefit from this and would finally be able to become true value types.

As far as I know, the pinnacle of uses for fixed-size arrays is having a compile-time pre-allocated space of the necessary size (either literally at compile-time if that's a static variable, or added to the pre-computed offset of the stack pointer in case of a local variable).

The difference between having to use dynamic offsets + alloca() and static offsets + a normal stack slot is noticeable but not nearly as extreme as you're imagining. And again, in most common cases we would absolutely be able to fold a bound statically and fall into the optimal path you're talking about. The critical guarantee, that the array does not get heap-allocated, is still absolutely intact.
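
For what it's worth, the standard library later gained an explicit form of this "runtime-sized, stack-preferred" allocation: withUnsafeTemporaryAllocation, added in Swift 5.6, long after this thread. A hedged sketch of the pattern:

func sumOfSquares(upTo n: Int) -> Int {
  // The count is only known at run time, but the scratch buffer is typically
  // placed on the stack, falling back to the heap only when it is very large.
  return withUnsafeTemporaryAllocation(of: Int.self, capacity: n) { buffer in
    for i in 0..<n { buffer[i] = i * i }
    return buffer.reduce(0, +)
  }
}

print(sumOfSquares(upTo: 10)) // 285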

Yet again, Swift (specifically - you in this case) is teaching me to trust the compiler to optimize, which is still an alien feeling to me even after all these years of heavy Swift usage. Damn you, C++ for corrupting my brain :grinning:.
In the specific case of having dynamic-sized in-place-allocated value types this will absolutely work. But this raises a chicken-and-the-egg problem: which is built in what: in-place allocated dynamic-sized value types, or specifically fixed-size arrays? On one hand I'm tempted to think that value types should be able to dynamically decide (inside the initializer) the exact size of the allocated memory (no less than the static size) that they occupy (no matter if on the heap, on the stack or anywhere else), after which they'd be able to access the "leftover" memory by a pointer and do whatever they want with it. This approach seems more logical, since this is essentially how fixed-size arrays would be implemented under the hood. But on the other hand, this does make use of unsafe pointers (and no part of Swift currently relies on unsafe pointers to function), so abstracting it away behind a magical fixed-size array seems safer (with a hope that a fixed-size array of UInt8 would be optimized down to exactly the first case).
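
For reference, the "decide the storage size up front and reach the extra space through a pointer" pattern already exists in today's standard library as ManagedBuffer, albeit as a heap-allocated class rather than a value type. A rough sketch (type and method names invented for the example; Int elements are trivial, so no cleanup is needed):

final class IntBuffer: ManagedBuffer<Int, Int> {   // the header stores the count
  static func make(count: Int) -> IntBuffer {
    let raw = IntBuffer.create(minimumCapacity: count) { _ in count }
    let buffer = unsafeDowncast(raw, to: IntBuffer.self)
    buffer.withUnsafeMutablePointerToElements {
      $0.initialize(repeating: 0, count: count)    // zero the tail-allocated elements
    }
    return buffer
  }
}

let buffer = IntBuffer.make(count: 5)
buffer.withUnsafeMutablePointerToElements { $0[2] = 42 }
print(buffer.header)                                       // 5
print(buffer.withUnsafeMutablePointerToElements { $0[2] }) // 42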

Value equality would still affect the type-checker, but I think we could pretty easily just say that all bound expressions are assumed to potentially resolve unequally unless they are literals or references to the same 'let' constant.

Shouldn't the type-checker use the Equatable protocol conformance to test for equality?

The Equatable protocol does guarantee reflexivity.

Moreover, as far as I know, Equatable is not recognized by the compiler in any way, so it's just a regular protocol.

That's not quite true: we synthesize Equatable instances in several places.

What would make it special? Some types would implement operator == to compare themselves to other types; that's beyond the scope of Equatable. What about those? And how are custom operator implementations going to serve this purpose at compile time? Or will it just ignore the semantics of the type and reduce it to a sequence of bits? Or maybe only a few hand-picked types will be supported?

The seemingly simple generic value parameter concept gets vastly complicated and/or poorly designed without an elaborate compile-time execution system... Unless I'm missing an obvious way out.

The only thing the compiler really *needs* to know is whether two types are known to be the same, i.e. whether two values are known to be the same.

I think having arbitrary value-type literals would be a great place to start. Currently there are only these types of literals:
  * nil
  * boolean
  * integer
  * floating-point
  * string, extended grapheme cluster, unicode scalar
  * array
  * dictionary

The last three are kind of weird because they're not really literals: they can contain dynamically generated values.
If value types were permitted to have a special kind of initializer (I'll call it a literal initializer for now), which only allows directly assigning parameters to stored properties or to self, with no other operations, then that initializer could be used to produce a compile-time literal of that value type. A similar special equality operator would only allow directly comparing stored properties between two literal-capable value types.

struct Foo {

  let one: Int

  let two: Float

  literal init(one: Int, two: Float) {
    self.one = one
    self.two = two
  }

  literal static func == (_ some: Foo, _ other: Foo) -> Bool {
    return some.one == other.one && some.two == other.two
  }

}

Only assignments would be allowed in the initializer, and only equality checks and boolean operations would be allowed inside the equality operator. These limitations would guarantee completely deterministic literal creation and equality comparison at compile time.
Types that conform to _BuiltinExpressibleBy*Literal would be magically equipped with both of these.
String, array and dictionary literals would be unavailable.

An elaborate compile-time execution system would not be sufficient here, because again, Swift is not C or C++: we need to be able to answer that question even in generic code rather than relying on the ability to fold all computations statically. We do not want to add an algebraic solver to the type-checker. The obvious alternative is to simply be conservatively correct by treating independent complex expressions as always yielding different values.

How exactly does generic type resolution happen? Obviously, it's not all compile-time, since it has to deal with existential containers. Without customizable generic resolution, I don't see a way to implement satisfactory generic value parameters. But if we settle on magical fixed-size arrays, we wouldn't need generic value parameters, we would only need to support constraining the size of the array with Comparable operators:

func foo<T>(_ array: T) where T: [Int], T.count == 5 {
  // ...
}

let array: [5 of Int] = [1, 2, 3, 4, 5]
foo(array)

The only hard constraint is that types need to be consistent, but that just means that we need to have a model in which bound expressions are evaluated exactly once at runtime (and of course typically folded at compile time).
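
As a side note, Swift already has "evaluate exactly once at run time, then cache" machinery that such a model could plausibly build on: static (and global) stored lets are initialized lazily, a single time, in a thread-safe way. A small illustration (the computation itself is made up):

import Foundation

enum Bounds {
  // Computed on first access, exactly once.
  static let bufferLength: Int = {
    print("computing bound")
    return ProcessInfo.processInfo.activeProcessorCount * 4
  }()
}

print(Bounds.bufferLength) // prints "computing bound", then the value
print(Bounds.bufferLength) // cached: just the value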

What exactly would it take to be able to execute a select piece of code at compile time? Taking the AST, converting it to LLVM IR, and feeding it to the MCJIT engine seems easy enough. But I'm pretty sure it's more tricky than that. Is there a special assumption or two made about the code that prevents this from happening?

We already have the ability to fold simple expressions in SIL; we would just make sure that it could handle anything we considered really important, and allow everything else to be handled dynamically.
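
That folding is easy to observe today (the flags are real; the file name is hypothetical, and the exact output shape isn't guaranteed): compiling the function below with `swiftc -O -emit-sil fold.swift` should show the optimized SIL returning the literal 30 rather than performing a multiply.

func area() -> Int {
  let width = 5
  let height = 6
  return width * height // folded to 30 in optimized SIL
}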

So, with some minor adjustments, we could get a well-defined subset of Swift that can be executed at compile-time to yield values that would pass as literals in any context?
This would some day allow relaxing the limitations on literal initializers and literal equality operators by pre-computing and caching values at compile-time outside the scope of the type checker, allowing the type checker to stay simple, while essentially allowing generics with complex resolution logic.

···

On Jul 31, 2017, at 10:09 PM, John McCall <rjmccall@apple.com> wrote:

On Jul 31, 2017, at 3:15 AM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com> wrote:

On Jul 31, 2017, at 7:10 AM, John McCall via swift-evolution <swift-evolution@swift.org> wrote:

On Jul 30, 2017, at 11:43 PM, Daryle Walker <darylew@mac.com> wrote:

John.

John.

Or do you mean that the bounds are integer literals? (That's what I have in the design document now.)


On Jul 30, 2017, at 8:51 PM, John McCall <rjmccall@apple.com> wrote:

On Jul 29, 2017, at 7:01 PM, Daryle Walker via swift-evolution <swift-evolution@swift.org> wrote:
The “constexpr” facility from C++ allows users to define constants and functions that are determined and usable at compile-time, for compile-time constructs but still usable at run-time. The facility is a key step for value-based generic parameters (and fixed-size arrays if you don’t want to be stuck with integer literals for bounds). Can figuring out Swift’s story here be part of Swift 5?

Note that there's no particular reason that value-based generic parameters, including fixed-size arrays, actually need to be constant expressions in Swift.

John.


Sure, and hence my point: suppose now `foo` is a function in the stdlib, and the stdlib authors have annotated the function so that it is `func foo(arr: fixed [Int])`. Then, any user who writes `var array = ...` could benefit from a performance boost because the compiler will no longer have to be pessimistic about copying in order to maintain COW semantics. This is why I made an explicit analogy to the design proposed for ownership in Swift, where end users don't have to understand it in order to benefit from the feature, because the functions they call can give sufficiently important hints to help the compiler avoid unnecessary copying.
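
As a refresher on the copy-on-write semantics being preserved here, a minimal sketch in today's Swift: the shared buffer is copied lazily, on the first mutation after a share, and pessimistic analysis can force such copies even when they are not strictly needed.

var a = [1, 2, 3]
var b = a    // no copy yet: a and b share one buffer
b.append(4)  // copy-on-write triggers here; the buffer is duplicated
print(a, b)  // [1, 2, 3] [1, 2, 3, 4]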

I don't think that you claimed that this should be a solution to the fixed-size array problem, but just in case, also note that it's not. We can't make any [Int] layout-compatible with C fixed-size arrays because the length has to be encoded in the type (as it cannot be encoded in the data itself).

I don't understand this point. Can you elaborate on what you mean here? Why does it have to be layout-compatible?

Then it seems that I have stricter requirements for fixed-size arrays than you do, and I'd be curious to hear what you want to get out of fixed-size arrays. If we compare a hypothetical `fixed [Int]` to a hypothetical `FixedSizeArray<T, N>`, these are some of the things that matter to me which `fixed` can't offer:

It doesn't allow you to specify the size of the array, it's just a promise that the array is some immutable size. For instance, a `fixed [CGFloat]` would be a terrible type to represent a vector, but a FixedSizeArray<CGFloat, 4> would be appropriate, at least for the backing storage. A raw tuple would be a poor choice because dynamically indexing into one is painful.
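
To illustrate the "dynamically indexing into a tuple is painful" point, this is roughly what it takes today (Double is used instead of CGFloat so the snippet needs no imports; it assumes a homogeneous tuple with no padding):

var v: (Double, Double, Double, Double) = (1, 2, 3, 4)
let i = 2
let element = withUnsafeBytes(of: &v) { raw in
  raw.load(fromByteOffset: i * MemoryLayout<Double>.stride, as: Double.self)
}
print(element) // 3.0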

Shouldn't this be solved by improvements to tuples? It's a highly desired feature (by me and at least a few others) to allow tuples to conform to protocols, whether fixed-size arrays are added or not. I expect that conforming homogeneous tuples to Collection and enabling subscripting is simply a matter of time.

I disagree; it seems to me that a homogeneous fixed-size sequence is its own concept, and there isn't any natural link between that concept and that of a tuple. The elements of a tuple are independent with no naturally-implied relationship; or put another way, tuple indices are nominal, not ordinal.

Importing C arrays as tuples is a big old hack that has never really worked out well.

John.

···

On Jul 31, 2017, at 9:54 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:
On Mon, Jul 31, 2017 at 11:45 AM, Félix Cloutier <felixcloutier@icloud.com <mailto:felixcloutier@icloud.com>> wrote:
`fixed` is only useful when the compiler can determine the size of the array statically. This makes it mostly useless as a storage qualifier if you received the array as a parameter (*even* if you received a `fixed` array), because you know that it has a constant size but you don't know what that size is.
Therefore, using a fixed-size array as a generic parameter (crucially, such as `fixed [fixed [Int]]`) is unlikely to work.
Even if that semantic hurdle is overcome, we'd still have no idea how much memory to allocate for the outer array's buffer to make it work.

As John McCall has replied, the array's bounds don't need to be statically known for fixed-size arrays to have benefits.

Even if `fixed [fixed [Int]]` could work, then each inner array could still have a different size, which is almost certainly not what you intend by nesting two fixed-size arrays.

That's fair, but why at that point wouldn't you make your own Matrix type of fixed size, which uses an Array of fixed size as the underlying storage?

Layout compatibility is important if you want to use fixed-size arrays to replace the clunky tuples that currently represent fixed-size arrays in structs exported from C (which is probably my one single biggest motivation for fixed-size arrays). You can't achieve layout compatibility if the size is part of the data instead of part of the type.

For me, that's an anti-goal, as IMO tuples are the most sensible way of bridging a lot of these fixed-size arrays from C. Quite simply, I'd argue that the most idiomatic way to represent four CGFloat instances is `(CGFloat, CGFloat, CGFloat, CGFloat)`. The solution to certain operations being clunky with tuples is to improve the ergonomics of tuples. For instance, if we need a shorthand to avoid typing all those `CGFloat`s, then add one: `(4 * CGFloat)`. If we need subscripting, then add it.

Besides, attaching fixed-size array semantics to an inherently variable-size Array is awkward. For `fixed` to be effective, it needs to disable methods that change the size of the array, or warn that you're using them. I don't like the cross-concern impact: now a keyword needs to know about method implementations to restrict them. It also has to work with extension methods on the Array type, and it shouldn't apply to just mutating functions because mutations that don't change the length of the array are fine.

The idea is that all facilities which would benefit from knowing that an array is of a fixed count would opt into that benefit by indicating as such. That is, all stdlib array methods that are guaranteed to preserve the size of the array would be annotated as such. Again, by analogy to the ownership manifesto's design where functions that take shared arguments could be optimized on the basis of such annotation. The rest would fall out naturally.

What would you use `fixed [Int]` for? Only as an optimization tool?

Yes.

Do we really want to make that guarantee about heap/stack allocation? C99’s VLAs are not very loop-friendly:

echo "int main() {
        for(int i = 0; i<1000000; i++) {
          int myArray[i * 1000]; myArray[0] = 32;
        }
        return 0;
      }" | clang -x c - && ./a.out

Segmentation Fault: 11

C compilers also do not inline code with VLAs by default. If you force it, you expose yourself to possible stack overflows:

echo "static inline void doSomething(int i) {
        int myArray[i * 1000]; myArray[0] = 32;
      }
      int main() {
        for(int i = 0; i<1000000; i++) {
          doSomething(i);
        }
      return 0;
      }" | clang -x c - && ./a.out

Segmentation Fault: 11

I wouldn’t like us to import these kinds of issues into Swift.

- Karl

···


Sorry for jumping late into this. About the topic of compile-time execution: I raised it pretty much at the beginning of Swift being open-sourced, and it still pops into my mind every time I see Jonathan Blow use it in his language.

So, for my own curiosity, how feasible is it for Swift to do this with the current compiler pipeline (SIL, LLVM…)? And by that I mean actually marking some function/file and letting it run at compile time. Would it be possible for the compiler to interpret arbitrary Swift code? It probably won't be easy, but I just want to know if it's closer to "it can be done" or "no way". From Blow's streams I know that this would probably have to work by compiling everything it can before interpreting the compile-time code, and I don't know how Swift would deal with that.
But ignoring that, I would be pretty happy if we could have things like the ones mentioned (including being able to do in plain Swift what Sourcery does and what the compiler does for Equatable, etc.).

Thanks

···

On Thu, Aug 10, 2017 at 1:10 PM, Tino Heth via swift-evolution <swift-evolution@swift.org> wrote:

Imho this topic was much better than that other one ;-) — and I just realised that the idea of metaprogramming built on top of reflection wasn't discussed in its own thread yet…
I fear "constexpr" is already burned, because people associate it with things like calculating Fibonacci numbers at compile time (which is kind of cool, but doesn't have much merit for most of us).

Right now, there is SE-185, which allows synthesising Equatable and Hashable.
To do so, nearly 1500 lines of C++ are needed, and even if we assume that two thirds are comments and whitespace, it's still a big piece of code that could only be written by someone with deep knowledge of C++ and the Swift compiler.
Compile-time metaprogramming could do the same, but in probably less than twenty lines of Swift that could be written rather easily by anyone who knows the language…

So, to update my list of things that might be added, there are also some points that are already there and whose implementation could have been simplified drastically:

- Forwarding of protocol conformance (Kotlin, for example, has this: when a member conforms to a protocol, you don't have to write a bunch of methods that just say "let my member do this")
- init with reduced boilerplate
- Subtyping for non-class types, including a "newtype" option
- Property behaviours

- Equatable, Hashable
- Encoding/Decoding

I still wonder why virtually no one else seems to be thrilled by the power of the idea… @gor.f.gyolchanyan, would you like to join an attempt to raise attention?

- Tino


--
Alejandro Martinez

The parameters for a fixed-size array type determine the type's size/stride, so how could the bounds not be needed during compile-time? The compiler can't lay out objects otherwise.

Swift is not C; it is perfectly capable of laying out objects at run time. It already has to do that for generic types and types with resilient members. That does, of course, have performance consequences, and those performance consequences might be unacceptable to you; but the fact that we can handle it means that we don't ultimately require a semantic concept of a constant expression, except inasmuch as we want to allow users to explicitly request guarantees about static layout.

Doesn't this defeat the purpose of generic value parameters? We might as well use a regular parameter if there's no compile-time evaluation involved. In that case, fixed-sized arrays will be useless, because they'll be normal arrays with resizing disabled.

You're making huge leaps here. The primary purpose of a fixed-size array feature is to allow the array to be allocated "inline" in its context instead of "out-of-line" using heap-allocated copy-on-write buffers. There is no reason that that representation would not be supportable just because the array's bound is not statically known; the only thing that matters is whether the bound is consistent for all instances of the container.

That is, it would not be okay to have a type like:
struct Widget {
   let length: Int
   var array: [length x Int]
}
because the value of the bound cannot be computed independently of a specific value.

But it is absolutely okay to have a type like:
struct Widget {
   var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
}
It just means that the bound would get computed at runtime and, presumably, cached. The fact that this type's size isn't known statically does mean that the compiler has to be more pessimistic, but its values would still get allocated inline into their containers and even on the stack, using pretty much the same techniques as C99 VLAs.

I see your point. Dynamically-sized in-place allocation is something that completely escaped me when I was thinking of fixed-size arrays. I can say with confidence that a large portion of private-class-copy-on-write value types would greatly benefit from this and would finally be able to become true value types.

To be clear, it's not obvious that using an inline array is always a good move for performance! But it would be a tool available for use when people felt it was important.

As far as I know, the pinnacle of uses for fixed-size arrays is having a compile-time pre-allocated space of the necessary size (either literally at compile-time if that's a static variable, or added to the pre-computed offset of the stack pointer in case of a local variable).

The difference between having to use dynamic offsets + alloca() and static offsets + a normal stack slot is noticeable but not nearly as extreme as you're imagining. And again, in most common cases we would absolutely be able to fold a bound statically and fall into the optimal path you're talking about. The critical guarantee, that the array does not get heap-allocated, is still absolutely intact.

Yet again, Swift (specifically - you in this case) is teaching me to trust the compiler to optimize, which is still an alien feeling to me even after all these years of heavy Swift usage. Damn you, C++ for corrupting my brain :grinning:.

Well. Trust but verify. :)

In the specific case of having dynamic-sized in-place-allocated value types this will absolutely work. But this raises a chicken-and-the-egg problem: which is built in what: in-place allocated dynamic-sized value types, or specifically fixed-size arrays? On one hand I'm tempted to think that value types should be able to dynamically decide (inside the initializer) the exact size of the allocated memory (no less than the static size) that they occupy (no matter if on the heap, on the stack or anywhere else), after which they'd be able to access the "leftover" memory by a pointer and do whatever they want with it. This approach seems more logical, since this is essentially how fixed-size arrays would be implemented under the hood. But on the other hand, this does make use of unsafe pointers (and no part of Swift currently relies on unsafe pointers to function), so abstracting it away behind a magical fixed-size array seems safer (with a hope that a fixed-size array of UInt8 would be optimized down to exactly the first case).

Representationally, I think we would have a builtin fixed-size array type. But "fixed-size" means "the size is an inherent part of the type", not "we actually know that size statically". Swift would just be able to use more optimal code-generation patterns for types whose bounds it was actually able to compute statically.

John.

···


No, this is a misunderstanding: I mean "you" as yourself, not "you" as the compiler. As far as I know, given a type variable, as in `let foo: NSObject.Type = NSArray.self`, you (the developer) can't use `foo` to instantiate a generic type.

Ah, yes, I did misunderstand what you were saying.

The question stands again: are there any specific reasons that you can't do that right now, or is it just because the need for this is not particularly evident?

There is no specific reason you can't do that right now. We could certainly just allow you to name a local constant in type contexts if it happens to be of metatype type — I don't know if that would be the best language design, but there's nothing inherent about the language model that makes it unreasonable. I do think we would at least demand that it be a constant and not a variable.
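
What you can do today is route that metatype value through a generic parameter and instantiate it there; what you can't do is write `foo` directly in type position. A minimal sketch (class names invented for the example):

class Base {
  required init() {}
}
class Derived: Base {}

func makeInstance<T: Base>(of type: T.Type) -> T {
  return type.init() // dynamic instantiation through the metatype value
}

let foo: Base.Type = Derived.self
let instance = makeInstance(of: foo)
print(type(of: instance)) // Derived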

John.

···

On Jul 31, 2017, at 11:29 PM, Félix Cloutier <felixcca@yahoo.ca> wrote:

Félix

On Jul 31, 2017, at 11:29, John McCall <rjmccall@apple.com> wrote:

On Jul 31, 2017, at 3:58 AM, Félix Cloutier <felixcca@yahoo.ca> wrote:
It seems to me that this applies just the same to type generic parameters. Is there any reason that you can't instantiate a generic type at runtime, or is it just because the need is not as evident as it would/could be with non-type generic parameters?

Are you under the impression that Swift does not, today, instantiate generic types at runtime?

A lot of this entire thread seems to be premised on really weird, incorrect assumptions about the Swift compilation model.

John.


The parameters for a fixed-size array type determine the type's size/stride, so how could the bounds not be needed during compile-time? The compiler can't lay out objects otherwise.

Swift is not C; it is perfectly capable of laying out objects at run time. It already has to do that for generic types and types with resilient members. That does, of course, have performance consequences, and those performance consequences might be unacceptable to you; but the fact that we can handle it means that we don't ultimately require a semantic concept of a constant expression, except inasmuch as we want to allow users to explicitly request guarantees about static layout.

Doesn't this defeat the purpose of generic value parameters? We might as well use a regular parameter if there's no compile-time evaluation involved. In that case, fixed-sized arrays will be useless, because they'll be normal arrays with resizing disabled.

You're making huge leaps here. The primary purpose of a fixed-size array feature is to allow the array to be allocated "inline" in its context instead of "out-of-line" using heap-allocated copy-on-write buffers. There is no reason that that representation would not be supportable just because the array's bound is not statically known; the only thing that matters is whether the bound is consistent for all instances of the container.

That is, it would not be okay to have a type like:
struct Widget {
   let length: Int
   var array: [length x Int]
}
because the value of the bound cannot be computed independently of a specific value.

But it is absolutely okay to have a type like:
struct Widget {
   var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
}
It just means that the bound would get computed at runtime and, presumably, cached. The fact that this type's size isn't known statically does mean that the compiler has to be more pessimistic, but its values would still get allocated inline into their containers and even on the stack, using pretty much the same techniques as C99 VLAs.

I see your point. Dynamically-sized in-place allocation is something that completely escaped me when I was thinking of fixed-size arrays. I can say with confidence that a large portion of private-class-copy-on-write value types would greatly benefit from this and would finally be able to become true value types.

To be clear, it's not obvious that using an inline array is always a good move for performance! But it would be a tool available for use when people felt it was important.

That's why I'm trying to push for a compile-time execution system. All these problems (among many others) could be designed out of existence, and the compiler would be incredibly simple in light of all the different specific features the community is asking for. But I do feel your urge to avoid inventing a bulldozer factory just for digging a hole in a sandbox. It doesn't have to be relied upon by the type checker or the generic resolution mechanism. It would be purely auxiliary. But it would single-handedly move a large chunk of the compiler into the stdlib, and a huge portion of the various little incidental proposals would fade away because they could then easily be implemented in Swift for specific purposes.

As far as I know, the pinnacle of uses for fixed-size arrays is having a compile-time pre-allocated space of the necessary size (either literally at compile-time if that's a static variable, or added to the pre-computed offset of the stack pointer in case of a local variable).

The difference between having to use dynamic offsets + alloca() and static offsets + a normal stack slot is noticeable but not nearly as extreme as you're imagining. And again, in most common cases we would absolutely be able to fold a bound statically and fall into the optimal path you're talking about. The critical guarantee, that the array does not get heap-allocated, is still absolutely intact.

Yet again, Swift (specifically - you in this case) is teaching me to trust the compiler to optimize, which is still an alien feeling to me even after all these years of heavy Swift usage. Damn you, C++ for corrupting my brain :grinning:.

Well. Trust but verify. :slightly_smiling_face:

The only good way I can think of doing that is hand-crafting a lightning-fast implementation in LLVM IR, then doing the same in Swift, decompiling the bitcode and then doing a diff. It's going to be super tedious and painful, but it seems to be the only way to prove that Swift can (hopefully, some day...) replace C++ in sheer performance potential.

In the specific case of having dynamic-sized in-place-allocated value types this will absolutely work. But this raises a chicken-and-the-egg problem: which is built in what: in-place allocated dynamic-sized value types, or specifically fixed-size arrays? On one hand I'm tempted to think that value types should be able to dynamically decide (inside the initializer) the exact size of the allocated memory (no less than the static size) that they occupy (no matter if on the heap, on the stack or anywhere else), after which they'd be able to access the "leftover" memory by a pointer and do whatever they want with it. This approach seems more logical, since this is essentially how fixed-size arrays would be implemented under the hood. But on the other hand, this does make use of unsafe pointers (and no part of Swift currently relies on unsafe pointers to function), so abstracting it away behind a magical fixed-size array seems safer (with a hope that a fixed-size array of UInt8 would be optimized down to exactly the first case).

Representationally, I think we would have a builtin fixed-size array type. But "fixed-size" means "the size is an inherent part of the type", not "we actually know that size statically". Swift would just be able to use more optimal code-generation patterns for types whose bounds it was actually able to compute statically.

Well, yeah, knowing its size statically is not a requirement, but having a guarantee of in-place allocation is. As long as non-escaped local fixed-size arrays live on the stack, I'm happy. :slightly_smiling_face:

···


Right. This distinction is subtle, but important, even when the tuple is homogenous.

For the mathematically-inclined, this is much like the distinction between R^2 (vector / fixed size array) and C (tuple / struct). They are isomorphic as vector spaces over R, but C has a bunch of extra structure that depends on one piece being real and the other being imaginary, whereas in R^2 the two components are always treated the same.

This even has calling convention implications; for efficiency, on most architectures, most of the time, it's best to pass a two-element vector as elements 0 and 1 of a vector register, but to pass a complex number as two scalars.
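
A small present-day echo of that point, using the SIMD types that arrived in Swift 5 (well after this thread); the values shown are for a typical 64-bit platform. The vector type gets vector alignment, while the tuple is just two scalars side by side:

print(MemoryLayout<SIMD2<Float>>.size, MemoryLayout<SIMD2<Float>>.alignment)     // 8 8
print(MemoryLayout<(Float, Float)>.size, MemoryLayout<(Float, Float)>.alignment) // 8 4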

- Steve

···


I expect that conforming homogeneous tuples to Collection and enabling subscripting is simply a matter of time.

I disagree; it seems to me that a homogeneous fixed-size sequence is its own concept, and there isn't any natural link between that concept and that of a tuple.

I'm relieved to read this response coming from an @apple.com address ;-).
Imho it's a terrible idea to introduce two different kinds of tuples:
Arrays are one of the building blocks of programming, and they deserve to be called by their name.
Tuples share more with structs than with arrays.

It doesn't allow you to specify the size of the array, it's just a promise that the array is some immutable size. For instance, a `fixed [CGFloat]` would be a terrible type to represent a vector, but a FixedSizeArray<CGFloat, 4> would be appropriate, at least for the backing storage. A raw tuple would be a poor choice because dynamically indexing into them is painful.

Shouldn't this be solved by improvements to tuples? It's a highly desired feature (by me and at least a few others) to allow tuples to conform to protocols, whether fixed-size arrays are added or not. I expect that conforming homogeneous tuples to Collection and enabling subscripting is simply a matter of time.

Of course, if this happens, the point is moot and we have the type of fixed-size arrays that I've been asking for. However, crystal-balling Chris's last comment on the matter, it looks like it might not be happening too soon <https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20170724/038225.html>.

`fixed` is only useful when the compiler can determine the size of the array statically. This makes it mostly useless as a storage qualifier if you received the array as a parameter (*even* if you received a `fixed` array), because you know that it has a constant size but you don't know what that size is.
Therefore, using a fixed-size array as a generic parameter (crucially, such as `fixed [fixed [Int]]`) is unlikely to work.
Even if that semantic hurdle is overcome, we'd still have no idea how much memory to allocate for the outer array's buffer to make it work.

As John McCall has replied, the array's bounds don't need to be statically known for fixed-size arrays to have benefits.

This (partially) applies to the first point, but it leaves a lot of holes. The size does not need to be statically known, but you need to know how much memory you're going to need ahead of allocating it. How much memory do you need for this?

var foo = fixed [Int]()
for i in 0..<param {
  foo.append(i)
}

Arrays can't work if elements don't have a fixed size. How big is an element in this example?

This is not the idea. The idea is more like

  let n = ...
  var foo = [Int x n](repeating: 13)

The bound value is still fundamentally part of the type of the variable; it's just that the actual value is not known statically.

John.

···

On Aug 2, 2017, at 11:44 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

On Jul 31, 2017, at 18:54, Xiaodi Wu <xiaodi.wu@gmail.com <mailto:xiaodi.wu@gmail.com>> wrote:

var foo = fixed [fixed [Int]]()
foo.append([1])
foo.append([2, 3])

Where do you allocate this array's storage, and how big is it?

struct Foo {
  var array: fixed [Int]
}

var foo = Foo()
foo.array.append(4)

In the general case, this makes `fixed` meaningful as a type annotation for parameters, but instantiating such an object in the first place would require you to give up a lot of flexibility in how it's populated. The unconstrained problem of finding how many elements you'll have in an array is uncomputable.

You could say that a fixed-size array stays as big as it was when it was instantiated, but this still causes problems with fixed-size arrays in structures (storage has already been allocated when the initializer runs), and with generic collections (you couldn't initialize a variable-sized array of fixed-size arrays, for instance).

Even if `fixed [fixed [Int]]` could work, then each inner array could still have a different size, which is almost certainly not what you intend by nesting two fixed-size arrays.

That's fair, but why at that point wouldn't you make your own Matrix type of fixed size, which uses an Array of fixed size as the underlying storage?

I'd do it if I needed to, but there are plenty of 2D arrays that are just that and don't need their own wrapper type to function, and I'd be happier to not do it.

Layout compatibility is important if you want to use fixed-size arrays to replace the clunky tuples that currently represent fixed-size arrays in structs exported from C (which is probably my one single biggest motivation for fixed-size arrays). You can't achieve layout compatibility if the size is part of the data instead of part of the type.

For me, that's an anti-goal, as IMO tuples are the most sensible way of bridging a lot of these fixed-size arrays from C. Quite simply, I'd argue that the most idiomatic way to represent four CGFloat instances is `(CGFloat, CGFloat, CGFloat, CGFloat)`. The solution to certain operations being clunky with tuples is to improve the ergonomics of tuples. For instance, if we need a shorthand to avoid typing all those `CGFloat`s, then add one: `(4 * CGFloat)`. If we need subscripting, then add it.

I think that any template-based fixed-size array structure would still need that kind of (4 * CGFloat) syntax to be implemented, or some other facility to allocate a parameterized amount of automatic storage. With that said, if we can implement fixed-size arrays with just that without having to wait for the restrictions on anonymous types to be lifted, to me, it's a fully acceptable solution.

Besides, attaching fixed-size array semantics to an inherently variable-size Array is awkward. For `fixed` to be effective, it needs to disable methods that change the size of the array, or warn that you're using them. I don't like the cross-concern impact: now a keyword needs to know about method implementations to restrict them. It also has to work with extension methods on the Array type, and it shouldn't apply to just mutating functions because mutations that don't change the length of the array are fine.

The idea is that all facilities which would benefit from knowing that an array has a fixed count would opt into that benefit by saying so. That is, all stdlib array methods that are guaranteed to preserve the size of the array would be annotated as such, again by analogy to the ownership manifesto's design, where functions that take shared arguments can be optimized on the basis of such an annotation. The rest would fall out naturally.

There are more problems to this that you may be able to iron out but that make the story more complex still. How do you annotate MutableCollection-based algorithms, or any other protocol-based extension that doesn't directly accept an array? What's the syntax of an Array extension method, which doesn't have an explicit self parameter, to mean that the function is mutating but doesn't change the array's bounds?
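
(A hedged observation on the size-preserving question: part of that distinction already falls out of the protocol hierarchy, because MutableCollection can permute elements while only RangeReplaceableCollection can change the count. A minimal sketch, with FourInts as an illustrative type whose backing Array is just a stand-in:)

struct FourInts: MutableCollection, RandomAccessCollection {
  private var storage = [0, 0, 0, 0]

  var startIndex: Int { return 0 }
  var endIndex: Int { return 4 }

  subscript(position: Int) -> Int {
    get { return storage[position] }
    set { storage[position] = newValue }
  }
}

var values = FourInts()
values[0] = 3; values[1] = 1; values[2] = 2; values[3] = 0
values.sort()        // fine: a MutableCollection algorithm, count preserved
// values.append(5)  // does not exist: no RangeReplaceableCollection conformance

That doesn't answer the annotation question for arbitrary extension methods, but it suggests the protocol split is where such a guarantee would naturally live.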


We probably would not make an absolute guarantee of stack allocation, no.

John.

···

On Aug 2, 2017, at 5:56 PM, Karl Wagner <razielim@gmail.com> wrote:

On 31. Jul 2017, at 21:09, John McCall via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Jul 31, 2017, at 3:15 AM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com <mailto:gor.f.gyolchanyan@icloud.com>> wrote:

On Jul 31, 2017, at 7:10 AM, John McCall via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Jul 30, 2017, at 11:43 PM, Daryle Walker <darylew@mac.com <mailto:darylew@mac.com>> wrote:
The parameters for a fixed-size array type determine the type's size/stride, so how could the bounds not be needed during compile-time? The compiler can't layout objects otherwise.

Swift is not C; it is perfectly capable of laying out objects at run time. It already has to do that for generic types and types with resilient members. That does, of course, have performance consequences, and those performance consequences might be unacceptable to you; but the fact that we can handle it means that we don't ultimately require a semantic concept of a constant expression, except inasmuch as we want to allow users to explicitly request guarantees about static layout.

Doesn't this defeat the purpose of generic value parameters? We might as well use a regular parameter if there's no compile-time evaluation involved. In that case, fixed-sized arrays will be useless, because they'll be normal arrays with resizing disabled.

You're making huge leaps here. The primary purpose of a fixed-size array feature is to allow the array to be allocated "inline" in its context instead of "out-of-line" using heap-allocated copy-on-write buffers. There is no reason that that representation would not be supportable just because the array's bound is not statically known; the only thing that matters is whether the bound is consistent for all instances of the container.

That is, it would not be okay to have a type like:
struct Widget {
   let length: Int
   var array: [length x Int]
}
because the value of the bound cannot be computed independently of a specific value.

But it is absolutely okay to have a type like:
struct Widget {
   var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
}
It just means that the bound would get computed at runtime and, presumably, cached. The fact that this type's size isn't known statically does mean that the compiler has to be more pessimistic, but its values would still get allocated inline into their containers and even on the stack, using pretty much the same techniques as C99 VLAs.

Do we really want to make that guarantee about heap/stack allocation? C99’s VLAs are not very loop-friendly:

echo "int main() {
        for(int i = 0; i<1000000; i++) {
          int myArray[i * 1000]; myArray[0] = 32;
        }
        return 0;
      }" | clang -x c - && ./a.out

Segmentation Fault: 11

C compilers also do not inline code with VLAs by default. If you force it, you expose yourself to possible stack overflows:

echo "static inline void doSomething(int i) {
        int myArray[i * 1000]; myArray[0] = 32;
      }
      int main() {
        for(int i = 0; i<1000000; i++) {
          doSomething(i);
        }
      return 0;
      }" | clang -x c - && ./a.out

Segmentation Fault: 11

I wouldn’t like us to import these kinds of issues in to Swift

Although I will note that the problem in your example has nothing to do with it being a loop and everything to do with it asking for an almost 4GB array. :)

John.

···

On Aug 2, 2017, at 6:10 PM, John McCall via swift-evolution <swift-evolution@swift.org> wrote:


We probably would not make an absolute guarantee of stack allocation, no.

Yeah, apologies - it was a bit of a poorly-written example.

The root cause, of course, is that the VLAs require new stack allocations each time, and the stack is only deallocated as one lump when the frame ends. A fixed-size object could be allocated once and re-used. Inlining prevents new stack frames being created and hence defers deallocation of those objects until the outer function ends, pushing up the high-water mark of the stack.

The problem also happens with an outer loop of only 10_000, so only 38MB. Still, enough to blow it up.

- Karl

···

On 3. Aug 2017, at 00:21, John McCall <rjmccall@apple.com> wrote:


We probably would not make an absolute guarantee of stack allocation, no.

Although I will note that the problem in your example has nothing to do with it being a loop and everything to do with it asking for an almost 4GB array. :)

John.

Just to point out that people may be missing the case of fixed-capacity, variable-sized arrays, which are an important component of a general system for optimizing variably-sized data structures. Implementing such an array without dynamic allocation requires language support.
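
(A hedged sketch of what "fixed capacity, variable size" looks like if you fake the inline storage by hand today: the capacity is baked into the type's stored properties, and only the count varies. Inline4 is an illustrative name, and the Optional padding is just to keep the example safe without unsafe pointers.)

struct Inline4<Element> {
  private var e0: Element? = nil
  private var e1: Element? = nil
  private var e2: Element? = nil
  private var e3: Element? = nil
  private(set) var count = 0

  var capacity: Int { return 4 }

  mutating func append(_ newElement: Element) {
    precondition(count < capacity, "capacity exceeded")
    switch count {
    case 0: e0 = newElement
    case 1: e1 = newElement
    case 2: e2 = newElement
    default: e3 = newElement
    }
    count += 1
  }

  subscript(index: Int) -> Element {
    precondition(index >= 0 && index < count, "index out of range")
    switch index {
    case 0: return e0!
    case 1: return e1!
    case 2: return e2!
    default: return e3!
    }
  }
}

var smallBuffer = Inline4<Int>()
smallBuffer.append(10)
smallBuffer.append(20)
print(smallBuffer.count, smallBuffer[1])   // 2 20

The language support mentioned above is exactly what would remove this boilerplate and the per-element Optional overhead.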

Cheers,

···

on Mon Jul 31 2017, John McCall <swift-evolution@swift.org> wrote:

I see your point. Dynamically-sized in-place allocation is something
that completely escaped me when I was thinking of fixed-size
arrays. I can say with confidence that a large portion of
private-class-copy-on-write value types would greatly benefit from
this and would finally be able to become true value types.

To be clear, it's not obvious that using an inline array is always a
good move for performance! But it would be a tool available for use
when people felt it was important.

--
-Dave

The parameters for a fixed-size array type determine the type's size/stride, so how could the bounds not be needed during compile-time? The compiler can't layout objects otherwise.

Swift is not C; it is perfectly capable of laying out objects at run time. It already has to do that for generic types and types with resilient members. That does, of course, have performance consequences, and those performance consequences might be unacceptable to you; but the fact that we can handle it means that we don't ultimately require a semantic concept of a constant expression, except inasmuch as we want to allow users to explicitly request guarantees about static layout.

Doesn't this defeat the purpose of generic value parameters? We might as well use a regular parameter if there's no compile-time evaluation involved. In that case, fixed-sized arrays will be useless, because they'll be normal arrays with resizing disabled.

You're making huge leaps here. The primary purpose of a fixed-size array feature is to allow the array to be allocated "inline" in its context instead of "out-of-line" using heap-allocated copy-on-write buffers. There is no reason that that representation would not be supportable just because the array's bound is not statically known; the only thing that matters is whether the bound is consistent for all instances of the container.

That is, it would not be okay to have a type like:
struct Widget {
   let length: Int
   var array: [length x Int]
}
because the value of the bound cannot be computed independently of a specific value.

But it is absolutely okay to have a type like:
struct Widget {
   var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
}
It just means that the bound would get computed at runtime and, presumably, cached. The fact that this type's size isn't known statically does mean that the compiler has to be more pessimistic, but its values would still get allocated inline into their containers and even on the stack, using pretty much the same techniques as C99 VLAs.

I see your point. Dynamically-sized in-place allocation is something that completely escaped me when I was thinking of fixed-size arrays. I can say with confidence that a large portion of private-class-copy-on-write value types would greatly benefit from this and would finally be able to become true value types.

To be clear, it's not obvious that using an inline array is always a good move for performance! But it would be a tool available for use when people felt it was important.

That's why I'm trying to push for a compile-time execution system. All these problems (among many others) could be designed out of existence, and the compiler would end up remarkably simple in light of all the different specific features that the community is asking for. But I do feel your urge to avoid inventing a bulldozer factory just for digging a hole in a sandbox. It doesn't have to be relied upon by the type checker or the generic resolution mechanism. It would be purely auxiliary.

FWIW, if we were having this conversation before maintaining source compatibility was such an important goal, I'd be far more willing to go along with just getting a feature implemented so that we can play now and tweak the design later. As it is, though — and even without that compatibility goal — realistically speaking, it'll be a year+ before anything major that we approve now is likely to make its way into an official toolchain. There's no need to rush the process, not until we near the end of the Swift 5 proposal timeframe.

Although I will give my +1 for having this discussion declared in-scope for Swift 5.

But that would single-handedly move a large chunk of the compiler into stdlib and a huge portion of various little incidental proposals would fade away because they can now easily be implemented in Swift for specific purposes.

Getting rid of as much "compiler magic" as possible is one of Swift's goals. If someone thought of a way to move the grammar spec itself into the stdlib, I think there are some of us who'd support that. (To be clear, while I think that'd be "winter-storm on Pluto"-cool, I'm not sure it'd actually be a good idea at this time... seems like that belongs, if anywhere, in a macro system proposal, which IMHO would be premature for quite a while yet.)

- Dave Sweeris

···

On Jul 31, 2017, at 1:37 PM, Gor Gyolchanyan via swift-evolution <swift-evolution@swift.org> wrote:

On Jul 31, 2017, at 11:23 PM, John McCall <rjmccall@apple.com> wrote:

On Jul 31, 2017, at 4:00 PM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com> wrote:

On Jul 31, 2017, at 10:09 PM, John McCall <rjmccall@apple.com> wrote:

On Jul 31, 2017, at 3:15 AM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com> wrote:

On Jul 31, 2017, at 7:10 AM, John McCall via swift-evolution <swift-evolution@swift.org> wrote:
On Jul 30, 2017, at 11:43 PM, Daryle Walker <darylew@mac.com> wrote:

The parameters for a fixed-size array type determine the type's size/stride, so how could the bounds not be needed during compile-time? The compiler can't layout objects otherwise.

Swift is not C; it is perfectly capable of laying out objects at run time. It already has to do that for generic types and types with resilient members. That does, of course, have performance consequences, and those performance consequences might be unacceptable to you; but the fact that we can handle it means that we don't ultimately require a semantic concept of a constant expression, except inasmuch as we want to allow users to explicitly request guarantees about static layout.

Doesn't this defeat the purpose of generic value parameters? We might as well use a regular parameter if there's no compile-time evaluation involved. In that case, fixed-sized arrays will be useless, because they'll be normal arrays with resizing disabled.

You're making huge leaps here. The primary purpose of a fixed-size array feature is to allow the array to be allocated "inline" in its context instead of "out-of-line" using heap-allocated copy-on-write buffers. There is no reason that that representation would not be supportable just because the array's bound is not statically known; the only thing that matters is whether the bound is consistent for all instances of the container.

That is, it would not be okay to have a type like:
struct Widget {
   let length: Int
   var array: [length x Int]
}
because the value of the bound cannot be computed independently of a specific value.

But it is absolutely okay to have a type like:
struct Widget {
   var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
}
It just means that the bound would get computed at runtime and, presumably, cached. The fact that this type's size isn't known statically does mean that the compiler has to be more pessimistic, but its values would still get allocated inline into their containers and even on the stack, using pretty much the same techniques as C99 VLAs.

I see your point. Dynamically-sized in-place allocation is something that completely escaped me when I was thinking of fixed-size arrays. I can say with confidence that a large portion of private-class-copy-on-write value types would greatly benefit from this and would finally be able to become true value types.

To be clear, it's not obvious that using an inline array is always a good move for performance! But it would be a tool available for use when people felt it was important.

That's why I'm trying to push for compile-time execution system. All these problems (among many others) could be designed out of existence and the compiler would be incredibly simple in the light of all the different specific features that the community is asking for. But I do feel your urge to avoid inventing a bulldozer factory just for digging a hole in a sandbox. It doesn't have to be relied upon by the type checker or generic resolution mechanism. It would be purely auxiliary. But that would single-handedly move a large chunk of the compiler into stdlib and a huge portion of various little incidental proposals would fade away because they can now easily be implemented in Swift for specific purposes.

My point here had nothing to do with compile-time vs. dynamic-time evaluation of array bounds. Inline array storage is not a performance panacea even if everything about them is static. The exact balance point will vary by element type, machine, and the overall load on the memory system in your program, but even for an array of bytes, as the size of the array grows it will eventually become the case that retaining a buffer pointer will be cheaper than copying the buffer contents.
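
(A hedged illustration of that balance point, using the ordinary copy-on-write Array from the standard library:)

var original = Array(repeating: 0, count: 1_000_000)

var copy = original   // cheap: only the buffer reference is copied (a retain)
copy[0] = 42          // the O(n) element copy is deferred to this first mutation

// A value with a million elements stored inline would pay that O(n) copy on
// every assignment, which is why inline storage eventually stops being a win
// as the element count grows.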

As far as I know, the pinnacle of uses for fixed-size arrays is having a compile-time pre-allocated space of the necessary size (either literally at compile-time if that's a static variable, or added to the pre-computed offset of the stack pointer in case of a local variable).

The difference between having to use dynamic offsets + alloca() and static offsets + a normal stack slot is noticeable but not nearly as extreme as you're imagining. And again, in most common cases we would absolutely be able to fold a bound statically and fall into the optimal path you're talking about. The critical guarantee, that the array does not get heap-allocated, is still absolutely intact.

Yet again, Swift (specifically - you in this case) is teaching me to trust the compiler to optimize, which is still an alien feeling to me even after all these years of heavy Swift usage. Damn you, C++ for corrupting my brain :grinning:.

Well. Trust but verify. :slightly_smiling_face:

The only good way I can think of doing that is hand-crafting a lightning-fast implementation in LLVM IR, then doing the same in Swift, decompiling the bitcode and then doing a diff. It's going to be super tedious and painful, but it seems to be the only way to prove that Swift can (hopefully, some day...) replace C++ in sheer performance potential.

Or just run a benchmark and complain if the performance isn't as good as you expect?
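
(In that spirit, a minimal, hedged benchmarking sketch; measure(_:_:) and the workload are illustrative, not a rigorous harness.)

import Dispatch

func measure(_ label: String, _ work: () -> Void) {
  let start = DispatchTime.now().uptimeNanoseconds
  work()
  let end = DispatchTime.now().uptimeNanoseconds
  print("\(label): \(Double(end - start) / 1_000_000) ms")
}

measure("sum of 1M ints") {
  var total = 0
  for i in 0..<1_000_000 { total += i }
  precondition(total == 499_999_500_000)   // keep the loop from being optimized away
}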

In the specific case of having dynamic-sized in-place-allocated value types this will absolutely work. But this raises a chicken-and-the-egg problem: which is built in what: in-place allocated dynamic-sized value types, or specifically fixed-size arrays? On one hand I'm tempted to think that value types should be able to dynamically decide (inside the initializer) the exact size of the allocated memory (no less than the static size) that they occupy (no matter if on the heap, on the stack or anywhere else), after which they'd be able to access the "leftover" memory by a pointer and do whatever they want with it. This approach seems more logical, since this is essentially how fixed-size arrays would be implemented under the hood. But on the other hand, this does make use of unsafe pointers (and no part of Swift currently relies on unsafe pointers to function), so abstracting it away behind a magical fixed-size array seems safer (with a hope that a fixed-size array of UInt8 would be optimized down to exactly the first case).

Representationally, I think we would have a builtin fixed-size array type. But "fixed-size" means "the size is an inherent part of the type", not "we actually know that size statically". Swift would just be able to use more optimal code-generation patterns for types whose bounds it was actually able to compute statically.

Well, yeah, knowing its size statically is not a requirement, but having a guarantee of in-place allocation is. As long as non-escaped local fixed-size arrays live on the stack, I'm happy. :slightly_smiling_face:

I figured.

John.

···

On Jul 31, 2017, at 4:37 PM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com> wrote:

On Jul 31, 2017, at 11:23 PM, John McCall <rjmccall@apple.com <mailto:rjmccall@apple.com>> wrote:

On Jul 31, 2017, at 4:00 PM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com <mailto:gor.f.gyolchanyan@icloud.com>> wrote:

On Jul 31, 2017, at 10:09 PM, John McCall <rjmccall@apple.com <mailto:rjmccall@apple.com>> wrote:

On Jul 31, 2017, at 3:15 AM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com <mailto:gor.f.gyolchanyan@icloud.com>> wrote:

On Jul 31, 2017, at 7:10 AM, John McCall via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Jul 30, 2017, at 11:43 PM, Daryle Walker <darylew@mac.com <mailto:darylew@mac.com>> wrote:

John.

Value equality would still affect the type-checker, but I think we could pretty easily just say that all bound expressions are assumed to potentially resolve unequally unless they are literals or references to the same 'let' constant.

Shouldn't the type-checker use the Equatable protocol conformance to test for equality?

The Equatable protocol does guarantee reflexivity.

Moreover, as far as I know, Equatable is not recognized by the compiler in any way, so it's just a regular protocol.

That's not quite true: we synthesize Equatable instances in several places.

What would make it special? Some types implement operator == to compare themselves to other types; that's beyond the scope of Equatable. What about those? And how are custom operator implementations going to serve this purpose at compile-time? Or will it just ignore the semantics of the type and reduce it to a sequence of bits? Or maybe only a few hand-picked types will be supported?

The seemingly simple generic value parameter concept gets vastly complicated and/or poorly designed without an elaborate compile-time execution system... Unless I'm missing an obvious way out.

The only thing the compiler really *needs* to know is whether two types are known to be the same, i.e. whether two values are known to be the same.

I think having arbitrary value-type literals would be a great place to start. Currently there are only these types of literals:
  * nil
  * boolean
  * integer
  * floating-point
  * string, extended grapheme cluster, unicode scalar
  * array
  * dictionary

The last three are kinda weird because they're not really literals: they can contain dynamically generated values.
If value types were permitted to have a special kind of initializer (I'll call it a literal initializer for now), which only allows directly assigning its parameters to stored properties or to self, with no other operations, then that initializer could be used to produce a compile-time literal of that value type. A similar special equality operator would only allow directly comparing stored properties between two literal-capable value types.

struct Foo {

  literal init(one: Int, two: Float) {
    self.one = one
    self.two = two
  }

  let one: Int

  let two: Float

  // Declared inside the type so that 'static' is valid; only equality checks
  // and boolean operations would be allowed in its body.
  literal static func == (_ some: Foo, _ other: Foo) -> Bool {
    return some.one == other.one && some.two == other.two
  }

}

Only assignment would be allowed in the initializer, and only equality checks and boolean operations would be allowed inside the equality operator. These limitations would guarantee completely deterministic literal creation and equality conformance at compile-time.
Types that conform to _BuiltinExpressibleBy*Literal would be magically equipped with both of these.
String, array and dictionary literals would be unavailable.

An elaborate compile-time execution system would not be sufficient here, because again, Swift is not C or C++: we need to be able to answer that question even in generic code rather than relying on the ability to fold all computations statically. We do not want to add an algebraic solver to the type-checker. The obvious alternative is to simply be conservatively correct by treating independent complex expressions as always yielding different values.

How exactly does generic type resolution happen? Obviously, it's not all compile-time, since it has to deal with existential containers. Without customizable generic resolution, I don't see a way to implement satisfactory generic value parameters. But if we settle on magical fixed-size arrays, we wouldn't need generic value parameters; we would only need to support constraining the size of the array with Comparable operators:

func foo<T>(_ array: T) where T: [Int], T.count == 5 {
  // ...
}

let array: [5 of Int] = [1, 2, 3, 4, 5]
foo(array)
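
(Not the proposed syntax, just a hedged sketch of the workaround available today: encode the value parameter as a type, so that the bound becomes part of the type. CountTag, Five, and FixedVector are illustrative names.)

protocol CountTag { static var count: Int { get } }
enum Five: CountTag { static let count = 5 }

struct FixedVector<N: CountTag, Element> {
  private var elements: [Element]

  init(repeating value: Element) {
    elements = Array(repeating: value, count: N.count)
  }

  subscript(index: Int) -> Element {
    get { return elements[index] }
    set { elements[index] = newValue }
  }
}

func foo(_ vector: FixedVector<Five, Int>) {
  // The "count == 5" constraint is carried by the type, not checked at run time.
}

foo(FixedVector<Five, Int>(repeating: 0))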

The only hard constraint is that types need to be consistent, but that just means that we need to have a model in which bound expressions are evaluated exactly once at runtime (and of course typically folded at compile time).
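
(A hedged sketch of that "evaluated exactly once" model in terms of something Swift already has: a static stored let, whose initializer runs at most once, lazily. computeBound and WidgetLayout are illustrative names.)

import Foundation

func computeBound() -> Int {
  print("computing bound")   // printed at most once
  return ProcessInfo.processInfo.activeProcessorCount * 4
}

enum WidgetLayout {
  static let arrayBound = computeBound()   // evaluated once, then cached
}

let a = WidgetLayout.arrayBound
let b = WidgetLayout.arrayBound
assert(a == b)   // every use sees the same, consistent value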

What exactly would it take to be able to execute a select piece of code at compile-time? Taking the AST, converting it to LLVM IR and feeding it to the MCJIT engine seems to be easy enough. But I'm pretty sure it's more tricky than that. Is there a special assumption or two made about the code that prevents this from happening?

We already have the ability to fold simple expressions in SIL; we would just make sure that could handle anything that we considered really important and allow everything else to be handled dynamically.

So, with some minor adjustments, we could get a well-defined subset of Swift that can be executed at compile-time to yield values that would pass as literals in any context?
This would some day allow relaxing the limitations on literal initializers and literal equality operators by pre-computing and caching values at compile-time outside the scope of the type checker, allowing the type checker to stay simple, while essentially allowing generics with complex resolution logic.

John.

John.

Or do you mean that the bounds are integer literals? (That's what I have in the design document now.)

Sent from my iPhone

On Jul 30, 2017, at 8:51 PM, John McCall <rjmccall@apple.com <mailto:rjmccall@apple.com>> wrote:

On Jul 29, 2017, at 7:01 PM, Daryle Walker via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:
The “constexpr” facility from C++ allows users to define constants and functions that are determined and usable at compile-time, for compile-time constructs but still usable at run-time. The facility is a key step for value-based generic parameters (and fixed-size arrays if you don’t want to be stuck with integer literals for bounds). Can figuring out Swift’s story here be part of Swift 5?

Note that there's no particular reason that value-based generic parameters, including fixed-size arrays, actually need to be constant expressions in Swift.

John.


The parameters for a fixed-size array type determine the type's size/stride, so how could the bounds not be needed during compile-time? The compiler can't layout objects otherwise.

Swift is not C; it is perfectly capable of laying out objects at run time. It already has to do that for generic types and types with resilient members. That does, of course, have performance consequences, and those performance consequences might be unacceptable to you; but the fact that we can handle it means that we don't ultimately require a semantic concept of a constant expression, except inasmuch as we want to allow users to explicitly request guarantees about static layout.

Doesn't this defeat the purpose of generic value parameters? We might as well use a regular parameter if there's no compile-time evaluation involved. In that case, fixed-sized arrays will be useless, because they'll be normal arrays with resizing disabled.

You're making huge leaps here. The primary purpose of a fixed-size array feature is to allow the array to be allocated "inline" in its context instead of "out-of-line" using heap-allocated copy-on-write buffers. There is no reason that that representation would not be supportable just because the array's bound is not statically known; the only thing that matters is whether the bound is consistent for all instances of the container.

That is, it would not be okay to have a type like:
struct Widget {
   let length: Int
   var array: [length x Int]
}
because the value of the bound cannot be computed independently of a specific value.

But it is absolutely okay to have a type like:
struct Widget {
   var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
}
It just means that the bound would get computed at runtime and, presumably, cached. The fact that this type's size isn't known statically does mean that the compiler has to be more pessimistic, but its values would still get allocated inline into their containers and even on the stack, using pretty much the same techniques as C99 VLAs.

I see your point. Dynamically-sized in-place allocation is something that completely escaped me when I was thinking of fixed-size arrays. I can say with confidence that a large portion of private-class-copy-on-write value types would greatly benefit from this and would finally be able to become true value types.

To be clear, it's not obvious that using an inline array is always a good move for performance! But it would be a tool available for use when people felt it was important.

That's why I'm trying to push for compile-time execution system. All these problems (among many others) could be designed out of existence and the compiler would be incredibly simple in the light of all the different specific features that the community is asking for. But I do feel your urge to avoid inventing a bulldozer factory just for digging a hole in a sandbox. It doesn't have to be relied upon by the type checker or generic resolution mechanism. It would be purely auxiliary. But that would single-handedly move a large chunk of the compiler into stdlib and a huge portion of various little incidental proposals would fade away because they can now easily be implemented in Swift for specific purposes.

My point here had nothing to do with compile-time vs. dynamic-time evaluation of array bounds. Inline array storage is not a performance panacea even if everything about them is static. The exact balance point will vary by element type, machine, and the overall load on the memory system in your program, but even for an array of bytes, as the size of the array grows it will eventually become the case that retaining a buffer pointer will be cheaper than copying the buffer contents.

Yeah, the compile-time stuff is completely off-topic at this point. I'll make a new thread with a more elaborate explanation of details some time in the near future. For now, considering the above conversation, I think implementing fixed-size arrays as a magical (and the only) way of in-place dynamic allocation is a great idea that would require very (relatively) little prerequisite work.

As far as I know, the pinnacle of uses for fixed-size arrays is having a compile-time pre-allocated space of the necessary size (either literally at compile-time if that's a static variable, or added to the pre-computed offset of the stack pointer in case of a local variable).

The difference between having to use dynamic offsets + alloca() and static offsets + a normal stack slot is noticeable but not nearly as extreme as you're imagining. And again, in most common cases we would absolutely be able to fold a bound statically and fall into the optimal path you're talking about. The critical guarantee, that the array does not get heap-allocated, is still absolutely intact.

Yet again, Swift (specifically - you in this case) is teaching me to trust the compiler to optimize, which is still an alien feeling to me even after all these years of heavy Swift usage. Damn you, C++ for corrupting my brain :grinning:.

Well. Trust but verify. :slightly_smiling_face:

The only good way I can think of doing that is hand-crafting a lightning-fast implementation LLVM IR, then doing the same in Swift, decompiling the bitcode and then doing a diff. It's going to be super tedious and painful, but it seems to be the only way to prove that Swift can (hopefully, some day...) replace C++ in sheer performance potential.

Or just run a benchmark and complain if the performance isn't as good as you expect?

That will work too, but field testing depends on the hardware, whereas code equivalence can prove performance superiority on a theoretical level. But that's probably overkill if the goal is to convert Crusty's cousin Bernie, the pre-standard C++ guy, to start using Swift. :slightly_smiling_face:

···

On Jul 31, 2017, at 11:52 PM, John McCall <rjmccall@apple.com> wrote:

On Jul 31, 2017, at 4:37 PM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com <mailto:gor.f.gyolchanyan@icloud.com>> wrote:

On Jul 31, 2017, at 11:23 PM, John McCall <rjmccall@apple.com <mailto:rjmccall@apple.com>> wrote:

On Jul 31, 2017, at 4:00 PM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com <mailto:gor.f.gyolchanyan@icloud.com>> wrote:

On Jul 31, 2017, at 10:09 PM, John McCall <rjmccall@apple.com <mailto:rjmccall@apple.com>> wrote:

On Jul 31, 2017, at 3:15 AM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com <mailto:gor.f.gyolchanyan@icloud.com>> wrote:

On Jul 31, 2017, at 7:10 AM, John McCall via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Jul 30, 2017, at 11:43 PM, Daryle Walker <darylew@mac.com <mailto:darylew@mac.com>> wrote:

In the specific case of having dynamic-sized in-place-allocated value types this will absolutely work. But this raises a chicken-and-the-egg problem: which is built in what: in-place allocated dynamic-sized value types, or specifically fixed-size arrays? On one hand I'm tempted to think that value types should be able to dynamically decide (inside the initializer) the exact size of the allocated memory (no less than the static size) that they occupy (no matter if on the heap, on the stack or anywhere else), after which they'd be able to access the "leftover" memory by a pointer and do whatever they want with it. This approach seems more logical, since this is essentially how fixed-size arrays would be implemented under the hood. But on the other hand, this does make use of unsafe pointers (and no part of Swift currently relies on unsafe pointers to function), so abstracting it away behind a magical fixed-size array seems safer (with a hope that a fixed-size array of UInt8 would be optimized down to exactly the first case).

Representationally, I think we would have a builtin fixed-sized array type that. But "fixed-size" means "the size is an inherent part of the type", not "we actually know that size statically". Swift would just be able to use more optimal code-generation patterns for types whose bounds it was actually able to compute statically.

Well, yeah, knowing its size statically is not a requirement, but having a guarantee of in-place allocation is. As long as non-escaped local fixed-size arrays live on the stack, I'm happy. :slightly_smiling_face:

I figured.

John.

John.


I was neutral on this, but after waking up I realized a problem. I want to use the LLVM type primitives to implement fixed-size arrays. Doing a run-time determination of layout and implementing it with alloca forfeits that (AFAIK). Unless the Swift run-time library comes with LLVM (which I doubt). Which means we do need compile-time constants after all.


Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT mac DOT com

`[Int x N]` solves all of the problems that I mentioned, and I'm happy with that syntax. In fact, I'm championing its merits versus an approach that doesn't include the number of elements. :)

Unless I got things wrong this entire time, the proposed spelling <https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20170731/038341.html> was `fixed [Int]`, with no mention of the number of elements.

Félix

···

On Aug 2, 2017, at 09:00, John McCall <rjmccall@apple.com> wrote:

var foo = fixed [Int]()
for i in 0..<param {
  foo.append(i)
}

Arrays can't work if elements don't have a fixed size. How big is an element in this example?

This is not the idea. The idea is more like

  let n = ...
  var foo = [Int x n](repeating: 13)

The bound value is still fundamentally part of the type of the variable; it's just that the actual value is not known statically.

John.

`[Int x N]` solves all of the problems that I mentioned, and I'm happy with that syntax. In fact, I'm championing its merits versus an approach that doesn't include the number of elements. :)

Unless I got things wrong this entire time, the proposed spelling <https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20170731/038341.html> was `fixed [Int]`, with no mention of the number of elements.

I agree that an array with a dynamic, value-specific but fixed bound seems basically pointless. It's more of a minor optimization hint than a real type.

John.

···

On Aug 2, 2017, at 12:17 PM, Félix Cloutier <felixcloutier@icloud.com> wrote:

Félix

On Aug 2, 2017, at 09:00, John McCall <rjmccall@apple.com <mailto:rjmccall@apple.com>> wrote:

var foo = fixed [Int]()
for i in 0..<param {
  foo.append(i)
}

Arrays can't work if elements don't have a fixed size. How big is an element in this example?

This is not the idea. The idea is more like

  let n = ...
  var foo = [Int x n](repeating: 13)

The bound value is still fundamentally part of the type of the variable; it's just that the actual value is not known statically.

John.