[Planning][Request] "constexpr" for Swift 5

The bound value is still fundamentally part of the type of the variable; it's just that the actual value is not known statically.

I don't know enough about the internals to prove such a conclusion ;-), but my intuition said that this would be possible…

My intuition also says that this might add little value, but much confusion ;-)

The parameters for a fixed-size array type determine the type's size/stride, so how could the bounds not be needed at compile time? The compiler can't lay out objects otherwise.

Swift is not C; it is perfectly capable of laying out objects at run time. It already has to do that for generic types and types with resilient members. That does, of course, have performance consequences, and those performance consequences might be unacceptable to you; but the fact that we can handle it means that we don't ultimately require a semantic concept of a constant expression, except inasmuch as we want to allow users to explicitly request guarantees about static layout.

Doesn't this defeat the purpose of generic value parameters? We might as well use a regular parameter if there's no compile-time evaluation involved. In that case, fixed-sized arrays will be useless, because they'll be normal arrays with resizing disabled.

You're making huge leaps here. The primary purpose of a fixed-size array feature is to allow the array to be allocated "inline" in its context instead of "out-of-line" using heap-allocated copy-on-write buffers. There is no reason that that representation would not be supportable just because the array's bound is not statically known; the only thing that matters is whether the bound is consistent for all instances of the container.

That is, it would not be okay to have a type like:
struct Widget {
   let length: Int
   var array: [length x Int]
}
because the value of the bound cannot be computed independently of a specific value.

But it is absolutely okay to have a type like:
struct Widget {
   var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
}
It just means that the bound would get computed at runtime and, presumably, cached. The fact that this type's size isn't known statically does mean that the compiler has to be more pessimistic, but its values would still get allocated inline into their containers and even on the stack, using pretty much the same techniques as C99 VLAs.
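
To make "inline" versus "out-of-line" concrete with what Swift already has today, compare a homogeneous tuple (stored directly in its container) with Array (which stores only a reference to its copy-on-write buffer). A rough sketch; the printed sizes are typical of 64-bit platforms, not guarantees:

struct InlineFour {
    var storage: (Int, Int, Int, Int)   // four elements stored inline in the struct
}

struct OutOfLineFour {
    var storage: [Int]                  // only a reference to a heap-allocated COW buffer
}

print(MemoryLayout<InlineFour>.size)    // typically 32 on a 64-bit platform
print(MemoryLayout<OutOfLineFour>.size) // typically 8: one pointer-sized word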

Do we really want to make that guarantee about heap/stack allocation? C99’s VLAs are not very loop-friendly:

echo "int main() {
        for(int i = 0; i<1000000; i++) {
          int myArray[i * 1000]; myArray[0] = 32;
        }
        return 0;
      }" | clang -x c - && ./a.out

Segmentation Fault: 11

C compilers also do not inline code with VLAs by default. If you force it, you expose yourself to possible stack overflows:

echo "static inline void doSomething(int i) {
        int myArray[i * 1000]; myArray[0] = 32;
      }
      int main() {
        for(int i = 0; i<1000000; i++) {
          doSomething(i);
        }
      return 0;
      }" | clang -x c - && ./a.out

Segmentation Fault: 11

I wouldn’t like us to import these kinds of issues into Swift.

We probably would not make an absolute guarantee of stack allocation, no.

Although I will note that the problem in your example has nothing to do with it being a loop and everything to do with it asking for an almost 4GB array. :)

John.

Yeah, apologies - it was a bit of a poorly-written example.

The root cause, of course, is that the VLAs require new stack allocations each time, and the stack is only deallocated as one lump when the frame ends.

That is true of alloca(), but not of VLAs. VLAs are freed when they go out of scope. Your example crashes only because it is doing a huge stack allocation.

#include <unistd.h>
#include <fcntl.h>

void foo(size_t n) {
  int fd = open("/dev/null", O_RDONLY);
  for (int i = 0; i < 10000000; ++i) {
    char buffer[n];      /* VLA: its storage is released at the end of each iteration */
    read(fd, buffer, n); /* use the buffer so the allocation isn't optimized away */
  }
  close(fd);
}

int main() {
  foo(100000); /* 100 KB per iteration, but at most one buffer is ever live */
}

A fixed-size object could be allocated once and re-used. Inlining prevents new stack frames being created and hence defers deallocation of those objects until the outer function ends, pushing up the high-water mark of the stack.

The problem also happens with an outer loop of only 10_000, so only 38MB. Still, enough to blow it up.

Operating systems generally impose limits on both the total size of the stack and the amount by which it can grow at once; 38MB is still likely large enough to run into those limits.

John.

···

On Aug 2, 2017, at 6:29 PM, Karl Wagner <razielim@gmail.com> wrote:

On 3. Aug 2017, at 00:21, John McCall <rjmccall@apple.com> wrote:

On Aug 2, 2017, at 6:10 PM, John McCall via swift-evolution <swift-evolution@swift.org> wrote:

On Aug 2, 2017, at 5:56 PM, Karl Wagner <razielim@gmail.com> wrote:

On 31. Jul 2017, at 21:09, John McCall via swift-evolution <swift-evolution@swift.org> wrote:

On Jul 31, 2017, at 3:15 AM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com> wrote:

On Jul 31, 2017, at 7:10 AM, John McCall via swift-evolution <swift-evolution@swift.org> wrote:

On Jul 30, 2017, at 11:43 PM, Daryle Walker <darylew@mac.com> wrote:

The root cause, of course, is that the VLAs require new stack allocations each time, and the stack is only deallocated as one lump when the frame ends.

That is true of alloca(), but not of VLAs. VLAs are freed when they go out of scope.

Learned something today.

Anyway, if the goal is stack allocation, I would prefer that we explored other ways to achieve it before jumping to a new array-type. I’m not really a fan of a future where [3; Double] is one type and (Double, Double, Double) is something else, and Array<Double> is yet another thing.

From what I’ve read so far, the problem with stack-allocating some Array that you can pass to another function and which otherwise does not escape, is that the function may make an escaping reference (e.g. assigning it to an ivar or global, or capturing it in a closure).

How about if the compiler treated every Array it receives in a function as being potentially stack-allocated. The first time you capture it, it will check and copy to the heap if necessary. All subsequent escapes (including passing to other functions) use the Array known to be allocated on the heap, avoiding further checking or copying within the function.

The same goes for Dictionary, and really any arbitrary value-type with COW storage. The memory that those types allocate is part of the value, so it would be cool if we could treat it like that.

- Karl

We are not going to design the Swift language around the goal of producing exact LLVM IR sequences. If you can't phrase this in real terms, it is irrelevant.

John.

···

On Aug 1, 2017, at 9:53 AM, Daryle Walker <darylew@mac.com> wrote:

On Jul 31, 2017, at 4:37 PM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com> wrote:

On Jul 31, 2017, at 11:23 PM, John McCall <rjmccall@apple.com> wrote:

On Jul 31, 2017, at 4:00 PM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com> wrote:

On Jul 31, 2017, at 10:09 PM, John McCall <rjmccall@apple.com> wrote:

On Jul 31, 2017, at 3:15 AM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com> wrote:

On Jul 31, 2017, at 7:10 AM, John McCall via swift-evolution <swift-evolution@swift.org> wrote:

On Jul 30, 2017, at 11:43 PM, Daryle Walker <darylew@mac.com> wrote:
The parameters for a fixed-size array type determine the type's size/stride, so how could the bounds not be needed at compile time? The compiler can't lay out objects otherwise.

Swift is not C; it is perfectly capable of laying out objects at run time. It already has to do that for generic types and types with resilient members. That does, of course, have performance consequences, and those performance consequences might be unacceptable to you; but the fact that we can handle it means that we don't ultimately require a semantic concept of a constant expression, except inasmuch as we want to allow users to explicitly request guarantees about static layout.

Doesn't this defeat the purpose of generic value parameters? We might as well use a regular parameter if there's no compile-time evaluation involved. In that case, fixed-sized arrays will be useless, because they'll be normal arrays with resizing disabled.

You're making huge leaps here. The primary purpose of a fixed-size array feature is to allow the array to be allocated "inline" in its context instead of "out-of-line" using heap-allocated copy-on-write buffers. There is no reason that that representation would not be supportable just because the array's bound is not statically known; the only thing that matters is whether the bound is consistent for all instances of the container.

That is, it would not be okay to have a type like:
struct Widget {
   let length: Int
   var array: [length x Int]
}
because the value of the bound cannot be computed independently of a specific value.

But it is absolutely okay to have a type like:
struct Widget {
   var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
}
It just means that the bound would get computed at runtime and, presumably, cached. The fact that this type's size isn't known statically does mean that the compiler has to be more pessimistic, but its values would still get allocated inline into their containers and even on the stack, using pretty much the same techniques as C99 VLAs.

I see your point. Dynamically-sized in-place allocation is something that completely escaped me when I was thinking of fixed-size arrays. I can say with confidence that a large portion of private-class-copy-on-write value types would greatly benefit from this and would finally be able to become true value types.

To be clear, it's not obvious that using an inline array is always a good move for performance! But it would be a tool available for use when people felt it was important.

That's why I'm trying to push for a compile-time execution system. All these problems (among many others) could be designed out of existence, and the compiler would be incredibly simple in the light of all the different specific features that the community is asking for. But I do feel your urge to avoid inventing a bulldozer factory just for digging a hole in a sandbox. It doesn't have to be relied upon by the type checker or generic resolution mechanism. It would be purely auxiliary. But that would single-handedly move a large chunk of the compiler into the stdlib, and a huge portion of various little incidental proposals would fade away because they could then easily be implemented in Swift for specific purposes.

As far as I know, the pinnacle of uses for fixed-size arrays is having a compile-time pre-allocated space of the necessary size (either literally at compile-time if that's a static variable, or added to the pre-computed offset of the stack pointer in case of a local variable).

The difference between having to use dynamic offsets + alloca() and static offsets + a normal stack slot is noticeable but not nearly as extreme as you're imagining. And again, in most common cases we would absolutely be able to fold a bound statically and fall into the optimal path you're talking about. The critical guarantee, that the array does not get heap-allocated, is still absolutely intact.

Yet again, Swift (specifically - you in this case) is teaching me to trust the compiler to optimize, which is still an alien feeling to me even after all these years of heavy Swift usage. Damn you, C++ for corrupting my brain :grinning:.

Well. Trust but verify. :slightly_smiling_face:

The only good way I can think of doing that is hand-crafting a lightning-fast implementation in LLVM IR, then doing the same in Swift, decompiling the bitcode and then doing a diff. It's going to be super tedious and painful, but it seems to be the only way to prove that Swift can (hopefully, some day...) replace C++ in sheer performance potential.

In the specific case of having dynamic-sized in-place-allocated value types this will absolutely work. But this raises a chicken-and-the-egg problem: which is built in what: in-place allocated dynamic-sized value types, or specifically fixed-size arrays? On one hand I'm tempted to think that value types should be able to dynamically decide (inside the initializer) the exact size of the allocated memory (no less than the static size) that they occupy (no matter if on the heap, on the stack or anywhere else), after which they'd be able to access the "leftover" memory by a pointer and do whatever they want with it. This approach seems more logical, since this is essentially how fixed-size arrays would be implemented under the hood. But on the other hand, this does make use of unsafe pointers (and no part of Swift currently relies on unsafe pointers to function), so abstracting it away behind a magical fixed-size array seems safer (with a hope that a fixed-size array of UInt8 would be optimized down to exactly the first case).

Representationally, I think we would have a builtin fixed-size array type. But "fixed-size" means "the size is an inherent part of the type", not "we actually know that size statically". Swift would just be able to use more optimal code-generation patterns for types whose bounds it was actually able to compute statically.

Well, yeah, knowing its size statically is not a requirement, but having a guarantee of in-place allocation is. As long as non-escaped local fixed-size arrays live on the stack, I'm happy. :slightly_smiling_face:

I was neutral on this, but after waking up I realized a problem. I want to use the LLVM type primitives to implement fixed-size arrays. Doing a run-time determination of layout and implementing it with alloca forfeits that (AFAIK). Unless the Swift run-time library comes with LLVM (which I doubt). Which means we do need compile-time constants after all.


Yay! Welcome to the club! And by that, I mean: please take a look at the new thread I started about compile-time facilities. :slightly_smiling_face:

···


The root cause, of course, is that the VLAs require new stack allocations each time, and the stack is only deallocated as one lump when the frame ends.

That is true of alloca(), but not of VLAs. VLAs are freed when they go out of scope.

Learned something today.

Anyway, if the goal is stack allocation, I would prefer that we explored other ways to achieve it before jumping to a new array-type. I’m not really a fan of a future where [3; Double] is one type and (Double, Double, Double) is something else, and Array<Double> is yet another thing.

They are completely different things.

[3; Double] is three *contiguous* Doubles which may or may not live on the stack.

(Double, Double, Double) is three Doubles bound to a single variable *name*, which the compiler can rearrange for optimal performance and may or may not live on the stack.

Array<Double> is a vector of Doubles that can dynamically grow and always lives in the heap.

From what I’ve read so far, the problem with stack-allocating some Array that you can pass to another function and which otherwise does not escape, is that the function may make an escaping reference (e.g. assigning it to an ivar or global, or capturing it in a closure).

How about if the compiler treated every Array it receives in a function as being potentially stack-allocated. The first time you capture it, it will check and copy to the heap if necessary. All subsequent escapes (including passing to other functions) use the Array known to be allocated on the heap, avoiding further checking or copying within the function.

The same goes for Dictionary, and really any arbitrary value-type with COW storage. The memory that those types allocate is part of the value, so it would be cool if we could treat it like that.

This is not true. FSAs have nothing to do with automatic storage, their static size only makes them *eligible* to live on the stack, as tuples are now. The defining quality of FSAs is that they are static and contiguous.

···

On Thu, Aug 3, 2017 at 8:20 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org> wrote:

To be clear, there is no Swift value type that guarantees that the order in which fields are laid out is the same as the order in which they're declared. It's not just tuples. (Structs imported from C are always laid out with their C layout, of course.)
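
If MemoryLayout.offset(of:) is available in your toolchain, a quick check along these lines shows whatever layout the compiler actually picked; the Mixed struct here is made up, and the resulting offsets are an implementation detail that can change between compiler versions:

struct Mixed {
    var flag: Bool
    var value: Double
    var count: Int8
}

// Offsets of the stored properties as currently laid out; nothing in the
// language promises these follow declaration order or stay stable.
print(MemoryLayout<Mixed>.offset(of: \Mixed.flag) ?? -1)
print(MemoryLayout<Mixed>.offset(of: \Mixed.value) ?? -1)
print(MemoryLayout<Mixed>.offset(of: \Mixed.count) ?? -1)
print(MemoryLayout<Mixed>.size, MemoryLayout<Mixed>.stride)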

···


As far as I can tell, currently, all arrays live on the heap.

···

On Aug 3, 2017, at 7:03 PM, Robert Bennett via swift-evolution <swift-evolution@swift.org> wrote:

Where do constant Arrays currently live? I hope the answer is on the stack, since their size doesn’t change.



The root cause, of course, is that the VLAs require new stack allocations each time, and the stack is only deallocated as one lump when the frame ends.

That is true of alloca(), but not of VLAs. VLAs are freed when they go out of scope.

Learned something today.

Anyway, if the goal is stack allocation, I would prefer that we explored other ways to achieve it before jumping to a new array-type. I’m not really a fan of a future where [3; Double] is one type and (Double, Double, Double) is something else, and Array<Double> is yet another thing.

They are completely different things.

[3; Double] is three contiguous Doubles which may or may not live on the stack.

(Double, Double, Double) is three Doubles bound to a single variable name, which the compiler can rearrange for optimal performance and may or may not live on the stack.

Array<Double> is a vector of Doubles that can dynamically grow and always lives in the heap.

Yeah, I understand that — the problem I have is that I’m not sure it’s going to be obvious to everybody else when they should use which. We need to balance semantic purity against simplicity and ease-of-learning.

For example, I’m not sure many users are aware that tuple elements in Swift don’t have an ordered relationship; they certainly give that impression. This is especially confusing as “In mathematics a tuple is a finite ordered list (sequence) of elements.” [https://en.wikipedia.org/wiki/Tuple]

From what I’ve read so far, the problem with stack-allocating some Array that you can pass to another function and which otherwise does not escape, is that the function may make an escaping reference (e.g. assigning it to an ivar or global, or capturing it in a closure).

How about if the compiler treated every Array it receives in a function as being potentially stack-allocated. The first time you capture it, it will check and copy to the heap if necessary. All subsequent escapes (including passing to other functions) use the Array known to be allocated on the heap, avoiding further checking or copying within the function.

The same goes for Dictionary, and really any arbitrary value-type with COW storage. The memory that those types allocate is part of the value, so it would be cool if we could treat it like that.

This is not true. FSAs have nothing to do with automatic storage, their static size only makes them eligible to live on the stack, as tuples are now. The defining quality of FSAs is that they are static and contiguous.

Really, the only practical difference between a FSA and a tuple is the memory layout. Do most users really need to care about that? For those users that do, wouldn’t an @-attribute be a less intrusive change to the language than a whole new list-style type?

For example, the benefit to having Collection-conforming tuples as our FSAs would be that they don’t necessarily have to be contiguous. If you have a large multi-dimensional list of Bools or some other tiny type, you might also benefit from those layout optimisations.

I like the simplicity in telling people:

- If you need a dynamically-sized list, use an Array
- If you need a fixed-sized list, use a tuple (and it will get an optimised layout. You can override this with an attribute, similar to @fixed_layout or @inlineable).

- Karl

···

On 4. Aug 2017, at 02:44, Taylor Swift via swift-evolution <swift-evolution@swift.org> wrote:
On Thu, Aug 3, 2017 at 8:20 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org> wrote:

Don’t small arrays live on the stack?

···

On 4 Aug 2017, at 06:35, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

As far as I can tell, currently, all arrays live on the heap.


Just about every system programming language has all three of these, since you can’t really stop these “similar” types from co-existing. The third type uses remote storage, while the first two are scoped storage. A heterogeneous product type template has to include homogeneous product types as a subset. And instruction generators can produce different code between tuples and arrays; are you willing to forfeit one set of optimizations?

···

On Aug 3, 2017, at 8:20 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org> wrote:

The root cause, of course, is that the VLAs require new stack allocations each time, and the stack is only deallocated as one lump when the frame ends.

That is true of alloca(), but not of VLAs. VLAs are freed when they go out of scope.

Learned something today.

Anyway, if the goal is stack allocation, I would prefer that we explored other ways to achieve it before jumping to a new array-type. I’m not really a fan of a future where [3; Double] is one type and (Double, Double, Double) is something else, and Array<Double> is yet another thing.



Actually, if you do a lot of graphics programming like I do, the memory layout is very, *very* important. Swift may not care about layout, but many APIs that it interacts with do.

Is @fixed_layout actually planned to be part of the language? I was under the impression it’s just a placeholder attribute. Either way, I’d appreciate not having to write Float sixteen times for a 4x4 matrix type.
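
For reference, the tuple-based workaround today ends up looking something like the sketch below (Matrix4x4 and its subscript are made up for illustration). It leans on homogeneous tuples currently being laid out contiguously, which, as noted above, is how things happen to work rather than a documented guarantee:

struct Matrix4x4 {
    var storage: (Float, Float, Float, Float,
                  Float, Float, Float, Float,
                  Float, Float, Float, Float,
                  Float, Float, Float, Float)

    init(repeating value: Float) {
        storage = (value, value, value, value,
                   value, value, value, value,
                   value, value, value, value,
                   value, value, value, value)
    }

    // Element access through the raw-bytes APIs, assuming contiguous tuple layout.
    subscript(row: Int, column: Int) -> Float {
        get {
            precondition((0..<4).contains(row) && (0..<4).contains(column))
            var copy = storage
            return withUnsafeBytes(of: &copy) { raw in
                raw.load(fromByteOffset: (row * 4 + column) * MemoryLayout<Float>.stride,
                         as: Float.self)
            }
        }
        set {
            precondition((0..<4).contains(row) && (0..<4).contains(column))
            withUnsafeMutableBytes(of: &storage) { raw in
                raw.storeBytes(of: newValue,
                               toByteOffset: (row * 4 + column) * MemoryLayout<Float>.stride,
                               as: Float.self)
            }
        }
    }
}

var identity = Matrix4x4(repeating: 0)
identity[0, 0] = 1; identity[1, 1] = 1; identity[2, 2] = 1; identity[3, 3] = 1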

···

On Thu, Aug 3, 2017 at 11:17 PM, Karl Wagner <razielim@gmail.com> wrote:


Actually, if you do a lot of graphics programming like I do, the memory layout is very, very important. Swift may not care about layout, but many APIs that it interacts with do.

Sure; I’m well-aware of how important it can be to decide on an appropriate memory layout. I’m very much in favour of opting-in to contiguous layout for tuples.

Is @fixed_layout actually planned to be part of the language? I was under the impression it’s just a placeholder attribute. Either way, I’d appreciate not having to write Float sixteen times for a 4x4 matrix type.

AFAIK @fixed_layout is a placeholder attribute. And I’m also very much in favour of a shorthand for declaring a fixed-size list.

I just don’t see why we need to introduce this new kind of list-like thing in order to get what we need. It makes it harder to project a coherent message about when to use which data-type.

- Karl

I've never seen the Swift compiler put array storage on automatic storage, even for small arrays. I don't think that it has much to do with their size, though (for any array that is not incredibly large).
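
One crude way to eyeball this (purely a heuristic; address ranges vary by platform and nothing here is guaranteed) is to compare the address of the array's element storage with the address of an ordinary local variable:

func inspectStorage() {
    var local = 0
    let numbers = [1, 2, 3, 4]
    withUnsafePointer(to: &local) { stackAddress in
        numbers.withUnsafeBufferPointer { elements in
            // On typical platforms the two addresses fall in very different
            // regions, suggesting the element buffer lives on the heap.
            print("local variable at:", stackAddress)
            print("array elements at:", elements.baseAddress!)
        }
    }
}
inspectStorage()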

···

On Aug 3, 2017, at 11:18 PM, David Hart <david@hartbit.com> wrote:

Don’t small arrays live on the stack?


So, I’m getting into this thread kind of late, and I’ve only skimmed most of it, but…

A special FSA on the stack seems like the wrong direction. Wouldn’t it make more sense to have *all* value types that don’t change in size — including `let` Arrays — live on the stack? In which case, FSA would merely act like a normal `let` Array, without RangeReplaceableCollection conformance, whose elements could be changed via subscripting. I know nothing about the underlying implementation details of Swift, so I may be way off base here.

···

On Aug 4, 2017, at 2:18 AM, David Hart <david@hartbit.com> wrote:

Don’t small arrays live on the stack?


It isn’t about being LLVM-specific; the same applies to any similar system. The instruction generator has certain primitives, like 16-bit integers or 32-bit floats. LLVM (and probably its rivals) also has aggregate primitives, heterogeneous and homogeneous (and the latter as standard and vector-unit variants). I want to use those primitives when possible. Deferring sized allocations until run time, after it’s too late for sized-array-specific generated instructions, means that the array is probably implemented with general buffer-pointer and length instructions. Any opportunity for IR-level optimization of the type is gone.

How often do you expect a statically sized array to need said size determined at run-time (with a function) versus a compile-time specification (with an integer literal or “constexpr” expression)? This may enable a 1% solution that anti-optimizes the 99% case.

···

On Aug 1, 2017, at 2:58 PM, John McCall <rjmccall@apple.com> wrote:

On Aug 1, 2017, at 9:53 AM, Daryle Walker <darylew@mac.com> wrote:

On Jul 31, 2017, at 4:37 PM, Gor Gyolchanyan <gor.f.gyolchanyan@icloud.com> wrote:

Well, yeah, knowing its size statically is not a requirement, but having a guarantee of in-place allocation is. As long as non-escaped local fixed-size arrays live on the stack, I'm happy. :slightly_smiling_face:

I was neutral on this, but after waking up I realized a problem. I want to use the LLVM type primitives to implement fixed-size arrays. Doing a run-time determination of layout and implementing it with alloca forfeits that (AFAIK). Unless the Swift run-time library comes with LLVM (which I doubt). Which means we do need compile-time constants after all.

We are not going to design the Swift language around the goal of producing exact LLVM IR sequences. If you can't phrase this in real terms, it is irrelevant.



For what it’s worth, I’d be happy with just subscripts on tuples and some form of shorthand for their size. Maybe

(Float ... 5)

or something like that. That would obviate the need for an attribute too.

···

On Thu, Aug 3, 2017 at 11:48 PM, Karl Wagner <razielim@gmail.com> wrote:


No, that doesn’t work. In many cases you want to mutate the elements of the array without changing its size. For example, a Camera struct which contains a matrix buffer, and some of the matrices get updated on each frame that the camera moves. The matrix buffer also stores all of the camera’s stored properties, so what would be conceptually stored properties are actually computed properties that get and set a Float at an offset into the buffer. Of course this could all be avoided if we had fixed layout guarantees in the language, and then the Camera struct could *be* the matrix buffer and dispense with the getters and setters instead of managing a heap buffer.
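
A sketch of the workaround being described, with made-up property names, to make the cost concrete: everything that is conceptually a stored property becomes a computed accessor into one heap-allocated Float buffer.

struct Camera {
    // 16 floats of matrix data followed by a few scalar parameters,
    // all packed into a single heap-allocated buffer.
    private var buffer = [Float](repeating: 0, count: 20)

    // Conceptually stored properties, implemented as accessors over
    // fixed offsets into the buffer.
    var fieldOfView: Float {
        get { return buffer[16] }
        set { buffer[16] = newValue }
    }

    var nearPlane: Float {
        get { return buffer[17] }
        set { buffer[17] = newValue }
    }

    // The projection matrix occupies the first 16 slots.
    var projection: [Float] {
        get { return Array(buffer[0..<16]) }
        set { buffer.replaceSubrange(0..<16, with: newValue) }
    }
}

var camera = Camera()
camera.fieldOfView = 60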

···

On Fri, Aug 4, 2017 at 11:02 AM, Robert Bennett via swift-evolution <swift-evolution@swift.org> wrote:


If the array type is ultimately written with a constant bound, it will reliably end up having a constant static size for the same reason that (Either<Int?, String>, Float) has a constant static size despite tuples, optionals, and Either all being generic types: the compiler automatically does this sort of deep substitution when it's computing type layouts.

Now, a generic method on all bounded array types would not know the size of 'self', for two reasons: it wouldn't know the bound, and it wouldn't know the layout of the element type. But of course we do have optimizations to generate specialized implementations of generic functions, and the specialized implementation would obviously be able to compute a static size of 'self' again. Moreover, a language design which required bounds to always be constant would only help this situation in an essentially trivial way: by outlawing such a method from being defined in the first place.
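
To ground that with something runnable today (Either below is just a stand-in definition, and the printed numbers are platform-dependent), the fully substituted type has a single fixed layout even though every type involved is generic:

enum Either<Left, Right> {
    case left(Left)
    case right(Right)
}

print(MemoryLayout<(Either<Int?, String>, Float)>.size)
print(MemoryLayout<(Either<Int?, String>, Float)>.stride)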

John.
