Floating point numbers: implicit conversion

Hello,

About a year ago I suggested allowing implicit conversion between floating-point number types,
that is, between Float, CGFloat, Double, and Float80.
(Note that CGFloat IS a Double on 64-bit systems, by the way.)

For example, I am currently building things with SceneKit 3D, which obviously involves a lot
of floating-point arithmetic, using functions from several libraries, where
some use Float, others Double or CGFloat, etc.
That wouldn't be so bad, were it not that all the work I do contains
a lot of unavoidable explicit floating-point conversions, like so:

let ypos = CGFloat(1080.0 - (yGravity * yfactor)) // Double expression converted to CGFloat

camera1.reorientate(
    SCNVector3(x: Float(motionGravityY * -0.08),
               y: Float(motionGravityX * -0.1),
               z: roll))

This is tedious and makes source less readable.

With implicit floating-point number conversion it could be like this:

var float1, float2, float3: Float
var double1, double2, double3: Double
var cgfloat1, cgfloat2, cgfloat3: CGFloat

float1 = cgfloat2 * double3 + float1 // implicit conversion should be allowed

Here everything in the expression would be promoted to the highest-precision
type present (Double here), which type-wise reads:
Float = (CGFloat implicitly promoted to Double) * Double + (Float implicitly promoted to Double)
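For comparison, here is a minimal runnable sketch of what today's rules require (Float and Double only, since CGFloat needs CoreGraphics; variable names are illustrative):

```swift
var float1: Float = 1.0
let double3: Double = 3.0

// Today, every mixed-type operand must be widened by hand and the
// result narrowed back explicitly:
float1 = Float(Double(float1) * double3 + Double(float1))
print(float1)  // 4.0
```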

Also, implicit conversion when passing function parameters would be very convenient, e.g.

This function:
       func someMath(p1: Float, p2: Float, result: inout Float) {…}

could then be called without explicit conversion like so:

        someMath(p1: double1, p2: cgfloat1, result: &double3)

// Yes, also on inout parameters: the conversion happens twice, once at the call and once on return.

As I vaguely remember, there were objections to this implicit conversion of FP numbers
because it was (as viewed back then) too complicated to implement?

Note that people who regularly work with floating-point numbers are
well aware of the precision loss when converting, e.g.,
from Double to Float, or indeed between other numeric types;
no problem.

For those not desiring such flexibility, there could be a new compiler option
that disallows this freedom of implicit floating-point number conversion,

or at least (suppressible) compiler warnings about precision loss,
e.g. when doing this: float = double.


Kind Regards from Speyer, Germany
TedvG

Implicit promotion has been brought up on the list before, many times over
many years. The scale and implications of the change are not to be
underestimated.

To give a taste of what would be involved, consider that new integer
protocols were recently implemented that allow heterogeneous comparison;
these have proved to be tricky to implement in a way that preserves user
expectations in the context of integer literals. (I will write shortly with
thoughts on revisiting certain specifics.)
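To make the reference concrete: the heterogeneous comparisons from the new integer protocols (SE-0104) can be seen in a small sketch like this:

```swift
let small: Int8 = -1
let big: UInt64 = 1

// Heterogeneous comparison from the new integer protocols (SE-0104):
// different integer types compare by value, without converting either side.
print(small < big)                       // true, not a bit-pattern comparison
print((42 as UInt32) == (42 as Int64))   // true
```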

Implicit promotion would be much more complicated. There is little point in
discussing whether such a feature in the abstract is desirable or not. It
would be necessary to have a detailed proposed design and evaluate whether
the specific design is desirable, in light of its interactions with other
parts of the system. For one, I think it’s important that no code that is
currently legal produce a different result: this in itself is not trivial
to achieve.

···

On Fri, Jun 16, 2017 at 11:08 Ted F.A. van Gaalen via swift-evolution < swift-evolution@swift.org> wrote:

_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution

Has it been proposed to eventually make CGFloat into a typealias for Double to reduce the amount of explicit conversion necessary? (I realize as a proposal this would be better suited for swift-corelibs-foundation.)

- For Apple platforms, eventually CGFloat will *always* be a Double value, as Swift does not have a 32-bit OS X runtime, and my understanding is that new builds are no longer accepted via the App Store for 32-bit iOS versions. I believe this would limit Apple platform impact to 32-bit iOS apps shipped outside the App Store which have upgraded to some future version of Xcode.

- Within swift-corelibs-foundation, I believe CGFloat is only used for CGPoint and NSAffineTransform. How useful is it to have these be 32-bit on a 32-bit target, where they aren't being kept as floats for compatibility?
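The platform dependence behind this is visible in how CGFloat-like storage gets selected per architecture. A sketch of the `#if arch` pattern (the name NativeFloat is illustrative, not the SDK's):

```swift
// Sketch of how a CGFloat-like wrapper could select its storage per
// architecture; this mirrors the overlay's approach, but NativeFloat
// is an illustrative name, not the SDK's.
#if arch(i386) || arch(arm)
typealias NativeFloat = Float    // 32-bit targets
#else
typealias NativeFloat = Double   // 64-bit targets
#endif
print(MemoryLayout<NativeFloat>.size)  // 8 on 64-bit platforms
```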

-DW

···

On Jun 16, 2017, at 10:19 AM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:


watchOS is still 32-bit. That makes "eventually" longer than you would like, I think.

···

On Jun 16, 2017, at 10:35 AM, David Waite via swift-evolution <swift-evolution@swift.org> wrote:


--
Greg Parker gparker@apple.com <mailto:gparker@apple.com> Runtime Wrangler

Meh, that's a pretty long eventually. Thanks for shooting it down gently ;-)

-DW

···

On Jun 16, 2017, at 4:34 PM, Greg Parker <gparker@apple.com <mailto:gparker@apple.com>> wrote:


Hi Xiaodi

In 1966-1970, PL/I, a statically typed, procedural, block-structured programming language
far ahead of its time, with the purpose of giving "all things to all programmers",
was introduced by IBM as its main (mainframe) programming flagship.

I've worked with PL/I very often during my life. It is truly general-purpose and has tons
of features, and is because of that quite overwhelming for starters like me in 1976,
coming from Fortran as my first programming language. PL/I is a bit like riding a gigantic,
powerful motorcycle without much instruction from the start, and without training wheels
back then, of course. It took some time to master PL/I (no screens, let alone IDEs, back
then), however, with the great reward of being in control of one of the most powerful
programming languages on the planet.

You might ask: what's the context of this somewhat nostalgic excursion? :o)

PL/I also has many data types, like Binary, Bit, Character, Complex, Decimal,
Fixed, Float, Picture, etc., of various lengths and storage.

Implicit data type conversion (coercion) between almost all PL/I data types has
been available in PL/I right from the start. No problem, that is, if you know what you are doing.
What exactly to expect from conversions in PL/I is always predictable, also
because it is described in detail in the programming manuals.
Furthermore, PL/I diagnostics and warnings are excellent.

So, implicit data type conversion (coercion) was successfully implemented and used in an
already complex programming language (in some cases far more advanced than Swift, apart from
OOP) almost 40 years ago…
It is now 2017 and you're telling me that coercion is too difficult to implement in Swift??

Implicit promotion has been brought up on the list before, many times over many years. The scale and implications of the change are not to be underestimated.

It should not be so difficult, I think, as it simply replaces
explicit casts, and the compiler can detect the needed conversions easily,
be it in assignments, expression operands, or call parameters.

To give a taste of what would be involved, consider that new integer protocols were recently implemented that allow heterogeneous comparison; these have proved to be tricky to implement in a way that preserves user expectations in the context of integer literals. (I will write shortly with thoughts on revisiting certain specifics.)

Yes, I can't react to this without more information; I don't see this in the context of "user expectations" either.

By the way, I was merely discussing floating-point conversions; why bring in integers?

Implicit promotion would be much more complicated.

Why do you think so?
float = double // compiler-inferred type conversion; what's so difficult about this assignment?
Currently you'd have to use
float = Float(double) // and you're doing (explicitly) exactly the same thing.

There is little point in discussing whether such a feature in the abstract is desirable or not.

Abstract? I don't think so: I have described this subject fairly concretely,
illustrated with some examples.
To me, and certainly also to many others, this feature is desirable.
As you say yourself,
"it has been brought forward so many times".
Obviously many desire this feature and find it important.

It would be necessary to have a detailed proposed design and evaluate whether the specific design is desirable, in light of its interactions with other parts of the system.

After gathering more insight, yes.

For one, I think it’s important that no code that is currently legal produce a different result: this in itself is not trivial to achieve.

As you know, Swift currently does not facilitate implicit conversion (coercion),
which implies that current code (which of course does not contain implicit conversions)
will function exactly as it does now when compiled with a Swift version in which
implicit conversions were implemented.

Kind Regards,
TedvG

···

On 16. Jun 2017, at 18:19, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:


There is little point in discussing whether such a feature in the abstract
is desirable or not.

Abstract? I don't think so: I have described this subject fairly
concretely, illustrated with some examples.
To me, and certainly also to many others, this feature is desirable.
As you say yourself,
"it has been brought forward so many times".
Obviously many desire this feature and find it important.

Ted, you'll find no disagreement from me that this feature is desirable.
Now, having agreed on that, how do you intend to design and implement it?
How would you expand the type system to accommodate these rules? How would
you formalize the revised rules for type inference? What new protocols
would you add that allow non-builtin types to participate? What existing
code would break, and how would you migrate it? There's no way to proceed
with the conversation until someone proposes a design.


For one, I think it’s important that no code that is currently legal
produce a different result: this in itself is not trivial to achieve.

As you know, Swift currently does not facilitate implicit conversion (coercion),
which implies that current code (which of course does not contain implicit conversions)
will function exactly as it does now,

That is not implied. Swift has type inference, which means that, with the
implementation of implicit promotion, the inferred type of certain
expressions that are legal today will change. For instance, integer
literals have no type of their own and are inferred to be of some type or
another based on context. When combined with bitwise operators, the
presence or absence of overloads that allow heterogeneous operations can
change the result of code that compiles both before and after the
implementation of the feature. This is just one example of why implicit
integer promotion in a strictly typed language with type inference and
without certain generics features is very hard.
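The interaction can be seen with the heterogeneous shift operators SE-0104 did add: mixed operand types are accepted, while a bare literal still takes its type from context. A small sketch:

```swift
let word: UInt64 = 1
let shift: Int8 = 3

// SE-0104's heterogeneous shift: mixed operand types are fine, and the
// result keeps the left operand's type.
let r = word << shift
print(r, type(of: r))        // 8 UInt64
print(type(of: word << 3))   // UInt64: the bare literal adapts to context
```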


···

On Sat, Jun 17, 2017 at 3:21 PM, Ted F.A. van Gaalen <tedvgiosdev@gmail.com> wrote:


Kind Regards,
TedvG
www.tedvg.com

I'm not sure which generic features you're referring to. Would you (or anyone else who knows) mind elaborating?

- Dave Sweeris

···

On Jun 17, 2017, at 16:16, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:


In Swift, all types and all operators are implemented in the standard
library. How do you express the idea that, when you add values of disparate
types T and U, the result should be of the type with greater precision? You
need to be able to spell this somehow.

···

On Sat, Jun 17, 2017 at 22:39 David Sweeris <davesweeris@mac.com> wrote:


Oh, ok... I thought you meant "conditional conformance" or something concrete :-D

Off the top of my head, with "literals as generic parameters":

protocol Addable {
  associatedtype BitsOfPrecision: IntegerLiteral
  static func + <T: Addable> (_: Self, _: T) -> T
    where T.BitsOfPrecision > BitsOfPrecision
  static func + <T: Addable> (_: Self, _: T) -> Self
    where T.BitsOfPrecision <= BitsOfPrecision
}

Although, come to think of it, I suppose that's a bit more than simply using literals as types. Still, it's all information that's available at compile time, though.

- Dave Sweeris

···

On Jun 17, 2017, at 20:43, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:


To be slightly less coy: You need to be able to say that one value type is a subtype of another. Then you can say Int8 is a subtype of Int16, and the compiler knows that it can convert any Int8 to an Int16 but not vice versa. This adds lots of complexity and makes parts of the compiler that are currently far too slow even slower, but it's not difficult to imagine, just difficult to practically implement given the current state of the Swift compiler.
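The "who picks the result type" problem can be made concrete even without subtyping: a generic heterogeneous helper compiles today, but the caller, not the compiler, must name the wider type. A hypothetical helper, not a standard API:

```swift
// Hypothetical helper: generics accept two different floating-point
// types, but Swift has no way to say "return whichever of T and U has
// greater precision", so the caller must name the result type R.
func add<T: BinaryFloatingPoint, U: BinaryFloatingPoint, R: BinaryFloatingPoint>(
    _ lhs: T, _ rhs: U, as _: R.Type
) -> R {
    return R(lhs) + R(rhs)
}

let sum = add(Float(1.5), 2.25 as Double, as: Double.self)
print(sum)  // 3.75
```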

···

On Jun 17, 2017, at 8:43 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

How do you express the idea that, when you add values of disparate types T and U, the result should be of the type with greater precision? You need to be able to spell this somehow.

--
Brent Royal-Gordon
Architechies


And, without integer literals as generic parameters, how would you express
this operation?

···

On Sat, Jun 17, 2017 at 23:01 David Sweeris <davesweeris@mac.com> wrote:



To be slightly less coy:

:)

You need to be able to say that one value type is a subtype of another.
Then you can say Int8 is a subtype of Int16, and the compiler knows that it
can convert any Int8 to an Int16 but not vice versa. This adds lots of
complexity and makes parts of the compiler that are currently far too slow
even slower, but it's not difficult to imagine, just difficult to
practically implement given the current state of the Swift compiler.

And then there are follow-on issues. As but a single example, consider
integer literals, which are of course commonly used with these operations.
Take the following statements:

var x = 42 as UInt32
let y = x + 2

What is the inferred type of 2? Currently, that would be UInt32. What is
the inferred type of y? That would also be UInt32. So, it's tempting to
say, let's keep this rule that integer literals are inferred to be of the
same type. But now:

let z = x + (-2)

What is the inferred type of -2? If it's UInt32, then this expression is a
compile-time error and we've ruled out integer promotion for a common use
case. If OTOH it's the default IntegerLiteralType (Int), what is the type
of z? It would have to be Int.

Now suppose x were instead of type UInt64. What would be the type of z, if
-2 is inferred to be of type Int? The answer would have to be
DoubleWidth<Int64>. That is clearly overkill for subtracting 2. So let's
say instead that the literal is inferred to be of the smallest type that
can represent the value (i.e. Int8). If so, then what is the result of this
computation?

let a = x / ~0

Currently, in Swift 3, ~0 is equal to UInt32.max. But if we have a rule
that the literal should be inferred to be the smallest type that can
represent the value, then the result of this computation _changes_. That
won't do. So let's say instead that the literal is inferred to be of the
same type as the other operand, unless it is not representable as such, in
which case it is then of the smallest type that can represent the value.
Firstly, and critically, this is not very easy to reason about. Secondly,
it still does not solve another problem with the smallest-type rule.
Consider this example:

let b = x / -64

If I import a library that exposes Int7 (the standard library itself has an
internal Int63 type and, I think, other Int{2**n - 1} types as well), then
the type of b would change!

Of all the alternatives here, it would seem that disallowing integer
promotion with literals is after all the most straightforward answer.
However, it is not a satisfying one.
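For concreteness, the behaviour of these examples in today's Swift can be checked directly (a sketch; the compile-error case is commented out):

```swift
let x = 42 as UInt32
let y = x + 2        // literal 2 inferred as UInt32; y: UInt32 == 44
// let z = x + (-2)  // error today: a negative literal cannot be a UInt32
let a = x / ~0       // ~0 inferred as UInt32, i.e. UInt32.max; a == 0
assert(y == 44 && a == 0)
```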

My point here, at this point, is not to drive at a consensus answer for
this particular issue, but to point out that what we are discussing is
essentially a major change to the type system. As such, and because Swift
already has so many rich features, _numerous_ such questions will arise
about how any such change interacts with other parts of the type system.
For these questions, sometimes there is not an "obvious" solution, and
there is no guarantee that there will even be a single fully satisfying
solution even after full consideration. That makes this a _very_ difficult
topic, very difficult indeed.

To evaluate whether any such undertaking is a good idea, we would need to
discuss a fully thought-out design and consider very carefully how it
changes all the other moving parts of the type system; it is not enough to
say merely that the feature is a good one.

···

On Sun, Jun 18, 2017 at 00:02 Brent Royal-Gordon <brent@architechies.com> wrote:

On Jun 17, 2017, at 8:43 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

Off the top of my head? As the language stands now, maybe a ton of extensions so that it never actually hits the fully generic version?

extension Addable where Self == Int8 {...}
extension Addable where Self == Int16 {...}
extension Addable where Self == Int32 {
  static func + <T: Addable> (lhs: Self, rhs: Int8) -> Self {...}
  static func + <T: Addable> (lhs: Self, rhs: Int16) -> Self {...}
  static func + <T: Addable> (lhs: Self, rhs: Int32) -> Self {...}
  static func + <T: Addable> (lhs: Self, rhs: Int64) -> Int64 {...}
}
extension Addable where Self == Int64 {...}

Dunno if that'll compile... it might need an `_IntNNType` protocol for each integer type so that the where clause could be "where Self: _Int32Type" instead of "where Self == Int32" (of course, if such a thing were actually done, the obvious next step would be to make `UInt64` conform to `_Int8Type`, and see just how close you can get to wat https://www.destroyallsoftware.com/talks/wat).

Even if that works, though, it'll all come crashing down as soon as someone makes an `Int128`... oh, nuts! I forgot about `DoubleWidth`!

Yeah, I don't think I can just pull that particular implementation out of the air.

- Dave Sweeris

···

On Jun 17, 2017, at 21:18, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

And, without integer literals as generic parameters, how would you express this operation?

On Sat, Jun 17, 2017 at 23:01 David Sweeris <davesweeris@mac.com> wrote:

On Jun 17, 2017, at 20:43, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

In Swift, all types and all operators are implemented in the standard library. How do you express the idea that, when you add values of disparate types T and U, the result should be of the type with greater precision? You need to be able to spell this somehow.

Oh, ok... I thought you meant "conditional conformance" or something concrete :-D

Off the top of my head, with "literals as generic parameters",
protocol Addable {
  associatedtype BitsOfPrecision: IntegerLiteral
  static func + <T: Addable> (_: Self, _: T) -> T where T.BitsOfPrecision >
BitsOfPrecision
  static func + <T: Addable> (_: Self, _: T) -> Self where T.BitsOfPrecision <= BitsOfPrecision
}

Although, come to think of it, I suppose that's a bit more than simply using literals as types. Still, it's all information that's available at compile time, though.

- Dave Sweeris


What if one allowed all numeric conversions,
including conversions to types with smaller storage, and from signed to unsigned
(with clear compiler warnings, of course),
but trapped conversion errors at runtime (as throwable errors)?
Runtime errors would then be things like:
- assigning a negative value to an unsigned integer
- assigning a too-large value to a smaller integer type, e.g. UInt8 = 43234
E.g. doing this:

let n1 = 255 // inferred as Int
var u1: UInt8
u1 = n1 // this would be OK because 0 <= n1 <= 255
(currently this is not possible in Swift: "cannot assign value of type 'Int' to type 'UInt8'")
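Note that today's standard library already exposes both flavours of checked conversion, just with explicit spellings rather than the implicit form proposed here (a sketch):

```swift
let n1 = 255                  // inferred as Int
let u1 = UInt8(n1)            // checked at runtime: traps if the value were out of range
let u2 = UInt8(exactly: 300)  // returns nil instead of trapping when the value doesn't fit
assert(u1 == 255 && u2 == nil)
```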

Personally, I'd prefer that this be allowed and cause a runtime error when the conversion is not possible.
The compile-time warning could be "Warning: overflow might occur when assigning a value of type 'Int' to type 'UInt8'".
The same would apply to floats whose magnitude is out of range.

Then this would not be necessary:

You need to be able to say that one value type is a subtype of another. Then you can say Int8 is a subtype of Int16, and the compiler knows that it can convert any Int8 to an Int16 but not vice versa. This adds lots of complexity and makes parts of the compiler that are currently far too slow even slower, but it's not difficult to imagine, just difficult to practically implement given the current state of the Swift compiler.

And then there are follow-on issues. As but a single example, consider integer literals, which are of course commonly used with these operations. Take the following statements:

var x = 42 as UInt32
let y = x + 2

What is the inferred type of 2? Currently, that would be UInt32. What is the inferred type of y? That would also be UInt32. So, it's tempting to say, let's keep this rule that integer literals are inferred to be of the same type. But now:

let z = x + (-2)

What is the inferred type of -2? If it's UInt32, then this expression is a compile-time error and we've ruled out integer promotion for a common use case. If OTOH it's the default IntegerLiteralType (Int), what is the type of z? It would have to be Int.

One of the things IMHO I would change in Swift:
I'd prefer this rule:
*** within the scope of an expression, individual operands (variables and literals)
      should all be implicitly promoted to the lowest precision
     with which the complete expression can be evaluated ***
ergo:
the smallest-type rule as it stands should be replaced by the rule above,
or in other words:
  the smallest type possible within the scope of the complete expression.

Your example above would then not result in a compilation error (currently it does):
with this rule, x would be implicitly promoted to Int,
and the result *z* would be an inferred Int.

another example of what could be an allowable expression:

var result: Float = 0.0
result = float * integer * uint8 + double
// here, all operands should be implicitly promoted to Double before the complete expression evaluation.

//the evaluation of the expression results in a Double, which then is converted to a float during assignment to “result”
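For comparison, under the current rules the same expression needs explicit conversions, e.g. (a sketch with arbitrary example values):

```swift
let float: Float = 1.5
let integer: Int = 2
let uint8: UInt8 = 3
let double: Double = 0.25

var result: Float = 0.0
// Everything converted to Double by hand, then narrowed on assignment:
result = Float(Double(float) * Double(integer) * Double(uint8) + double)
assert(result == 9.25)
```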

To summarise: allow implicit conversions, but trap impossible conversions at runtime,
unless the compiler can already tell that it cannot work, e.g. in some cases with literals:
let n1 = 234543
var u1: UInt8 = n1 // compile-time overflow error

Now suppose x were instead of type UInt64. What would be the type of z, if -2 is inferred to be of type Int? The answer would have to be DoubleWidth<Int64>. That is clearly overkill for subtracting 2. So let's say instead that the literal is inferred to be of the smallest type that can represent the value (i.e. Int8). If so, then what is the result of this computation?

let a = x / ~0

Currently, in Swift 3, ~0 is equal to UInt32.max. But if we have a rule that the literal should be inferred to be the smallest type that can represent the value, then the result of this computation _changes_. That won't do. So let's say instead that the literal is inferred to be of the same type as the other operand, unless it is not representable as such, in which case it is then of the smallest type that can represent the value. Firstly, and critically, this is not very easy to reason about. Secondly, it still does not solve another problem with the smallest-type rule. Consider this example:

let b = x / -64

If I import a library that exposes Int7 (the standard library itself has an internal Int63 type and, I think, other Int{2**n - 1} types as well), then the type of b would change!

Of all the alternatives here, it would seem that disallowing integer promotion with literals is after all the most straightforward answer. However, it is not a satisfying one.

As written, I think the other (better?) option is catching conversion errors at runtime;
this would require programmers to have some common sense and awareness of what they're doing :o)

My point here, at this point, is not to drive at a consensus answer for this particular issue, but to point out that what we are discussing is essentially a major change to the type system. As such, and because Swift already has so many rich features, _numerous_ such questions will arise about how any such change interacts with other parts of the type system. For these questions, sometimes there is not an "obvious" solution, and there is no guarantee that there will even be a single fully satisfying solution even after full consideration. That makes this a _very_ difficult topic, very difficult indeed.

Yes, I agree, it is. It is one of the difficulties of a statically typed language (though a statically typed, precompiled language is, whether I like it or not, currently still the best option for fast apps). In a statically typed (OOP) language you need generics and protocols, but by going too far with these, one might paint oneself into a corner, as increasingly many features become interdependent. That could become problematic for further improvements to Swift, and for deciding what is a possible improvement and what is not.

To evaluate whether any such undertaking is a good idea, we would need to discuss a fully thought-out design and consider very carefully how it changes all the other moving parts of the type system; it is not enough to say merely that the feature is a good one.

thanks
TedvG

···

On 18. Jun 2017, at 08:04, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
On Sun, Jun 18, 2017 at 00:02 Brent Royal-Gordon <brent@architechies.com> wrote:

On Jun 17, 2017, at 8:43 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

You would have this produce different results than:

  let temp = float * integer * uint8
  result = temp + double

That would be extremely surprising to many unsuspecting users.
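To make that concrete, here is a sketch with values I picked so the difference is visible (the mixed-type lines are spelled with explicit conversions, since today's Swift requires them):

```swift
let f: Float = 16_777_215           // 2^24 - 1, the largest odd integer a Float can hold
let d: Double = 1.0

// Stepwise, entirely in Float:
let temp = f * 3                    // exact product 50_331_645 rounds down to 50_331_644
let r1 = temp + Float(d)            // 50_331_645 again rounds down: r1 == 50_331_644

// Whole expression promoted to Double first:
let r2 = Float(Double(f) * 3 + d)   // 50_331_646 is a tie; rounds to even: r2 == 50_331_648

assert(r1 != r2)                    // the two evaluation orders disagree
```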

Don’t get me wrong; I *really want* implicit promotions (I proposed one scheme for them way back when Swift was first unveiled publicly). But there’s a lot more subtlety around them than it seems (for example the C and C++ implicit promotion rules can easily be described on a half-sheet of paper, but are the source of *innumerable* bugs). I would rather have no implicit promotions than half-baked implicit promotions.

– Steve

···

On Jun 19, 2017, at 11:46 AM, Ted F.A. van Gaalen via swift-evolution <swift-evolution@swift.org> wrote:

var result: Float = 0.0
result = float * integer * uint8 + double
// here, all operands should be implicitly promoted to Double before the complete expression evaluation.


How expensive is it?

- Dave Sweeris

···

Sent from my iPhone

On Jun 19, 2017, at 13:44, John McCall via swift-evolution <swift-evolution@swift.org> wrote:

On Jun 19, 2017, at 1:58 PM, Stephen Canon via swift-evolution <swift-evolution@swift.org> wrote:
On Jun 19, 2017, at 11:46 AM, Ted F.A. van Gaalen via swift-evolution <swift-evolution@swift.org> wrote:

var result: Float = 0.0
result = float * integer * uint8 + double
// here, all operands should be implicitly promoted to Double before the complete expression evaluation.

You would have this produce different results than:

  let temp = float * integer * uint8
  result = temp + double

That would be extremely surprising to many unsuspecting users.

Don’t get me wrong; I *really want* implicit promotions (I proposed one scheme for them way back when Swift was first unveiled publicly).

I don't! At least not for floating point. It is important for both reliable behavior and performance that programmers understand and minimize the conversions they do between different floating-point types.


John.

···


If memory serves, it's not usually ruinously expensive on its own, but there tend to not be very many functional units for it, and it doesn't get pipelined very well. Essentially, micro-architects often assume that well-written FP code is not doing a significant number of FP conversions. Even if it were very cheap, it would still be an unnecessary operation in the pipeline.

It's a well-known source of performance bugs in C to accidentally use 1.0 instead of 1.0f in the middle of some complex expression that's heavily working with floats. A bunch of intermediate computations ends up getting done in double, and unlike the analogous situation with integers, it's not really possible for the compiler to automatically figure out that it can do them in float instead.

John.

···

On Jun 19, 2017, at 5:43 PM, David Sweeris <davesweeris@mac.com> wrote:

How expensive is it?

On most contemporary hardware, it’s comparable to a floating-point add or multiply. On current generation Intel, it’s actually a little bit more expensive than that. Not catastrophic, but expensive enough that you are throwing away half or more of your performance if you incur spurious conversions on every operation.

This is really common in C and C++ where a naked floating-point literal like 1.2 is double:

  float x;
  x *= 1.2;

Instead of a bare multiplication (current generation x86 hardware: 1 µop and 4 cycles latency) this produces a convert-to-double, multiplication, and convert-to-float (5 µops and 14 cycles latency per Agner Fog).
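(For what it's worth, Swift's literal inference already sidesteps this particular C trap, because a bare literal takes on the type of the other operand; a sketch:)

```swift
let x: Float = 1.5
let y = x * 1.2   // 1.2 is inferred as Float here; no round-trip through Double
assert(type(of: y) == Float.self)
```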

–Steve
