Implicit Type Conversion For Numerics Where Possible

Currently, one has to deal with explicit conversion between numerical types,
which in many cases is unnecessary, costs coding time
for things that are quite obvious,
and clutters the source, making it less readable.

Especially annoying is dealing all the time with the often unavoidable
intermixing of the floating point types CGFloat, Float, and Double.

Conversion between floating point types is always harmless, as
floating point types are essentially the same;
they differ only in precision.

Therefore, I would recommend allowing the following implicit type conversions:

- between all floating point types, e.g. Double, Float, CGFloat

- from any integer type to floating point types

- Also, personally, I wouldn’t mind assigning from a float to a (signed) integer,
because I know what I am doing: the fraction is lost,
and assigning a float that is too large to an integer would then cause
a run time error, which I can try/catch, of course.

- from unsigned integer to signed integer
(nothing is lost here, but overflow should cause a run time error)

but no implicit conversion:
- from integer to unsigned integer (losing the sign here)
- from a larger integer type to a smaller one, e.g. Int32 <- Int64 (truncation)

Note, however, that the compiler should issue warnings
when you do implicit conversions; for most programmers these warnings
are of the “Yeah, I know, don’t bug me.”
type, so one should be able to switch off this type of warning.
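
To make this concrete, here is what the explicit conversions look like in Swift today, with comments sketching what the proposed implicit rules would allow instead (the variable names are only illustrative):

import CoreGraphics   // for CGFloat on Apple platforms

let i: Int = 42
let f: Float = 1.5
let d: Double = 2.5

// Today, every mixed-type assignment or expression needs an explicit initializer:
let d1: Double  = Double(f)       // Float  -> Double
let g1: CGFloat = CGFloat(d)      // Double -> CGFloat
let d2: Double  = Double(i) * d   // Int must be converted before multiplying

// Under the proposal the initializer calls above could be dropped, e.g.
//     let d1: Double = f         // implicit Float -> Double, with a warning
// (illustrative only; not valid Swift today)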

Even a programmer with little experience simply knows
that bringing integers into the floating point domain
can cause precision loss.
He/she also knows that assigning a Double to a smaller floating
point type also causes precision loss;
the reverse is not true.
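
A small runnable illustration of that precision point, using Float’s 24-bit significand (the numbers are only illustrative):

let a = 16_777_216                    // 2^24: exactly representable as a Float
let b = 16_777_217                    // 2^24 + 1: not representable as a Float
print(Float(a) == Float(b))           // true: two different integers collapse to one Float
print(Double(b) == 16_777_217)        // true: Double still represents it exactly

let third = 1.0 / 3.0                 // Double
print(Double(Float(third)) == third)  // false: Double -> Float -> Double loses bits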

Very much interested in your opinion!

···

----
N.B. the above does not yet include
the fixed decimal numerical type as this type is not yet
available in Swift. However, it should be implemented
*as soon as possible* because the fixed decimal type
is really needed for applications working with financial data!
E.g.
var depositPromille: Decimal(10,3)
typealias Money = Decimal(20,2)
  
For more info on how this could be implemented
in Swift, please read a PL/1 manual (I grew up in this world),
like this one:

IBM Documentation

especially under the sub-topic “Data elements”

(However, don’t take everything for granted; PL/1 is still a very young language :o)
Unfortunately, OOP never made it into PL/1; with it, the language would be nearly perfect.)
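
Until such a type exists, one way to approximate fixed-decimal money is to keep a scaled integer yourself; a minimal sketch (the type and all names are hypothetical, not an existing or proposed API):

// Fixed-point money with two decimal places, stored as a scaled Int.
struct Decimal2 {
    private(set) var scaled: Int                    // value * 100, e.g. 19.99 -> 1999

    init(units: Int, hundredths: Int) {
        scaled = units * 100 + hundredths
    }

    private init(scaled: Int) { self.scaled = scaled }

    static func + (lhs: Decimal2, rhs: Decimal2) -> Decimal2 {
        return Decimal2(scaled: lhs.scaled + rhs.scaled)
    }
}

typealias Money = Decimal2
let price = Money(units: 19, hundredths: 99)        // 19.99
let total = price + Money(units: 0, hundredths: 5)  // 20.04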

Should I make a new swift-evolution topic for fixed decimal?

Kind Regards
TedvG

What you describe, all those cases where one fixes losing precision by simply "ignoring it", that's part of why I'm hesitant about simply throwing in C-like promotion rules into any language. Once you add implicit type coercions, even just between integer or floating point types, your language gains a hundred unspoken rules and little guard rails you have to cling to lest you slip and hit the next pitfall. Though you may be dismissive of information loss, it is a serious issue in coercions, and one with implications that are never completely grokked by experts and serve as yet another hindrance to novices trying to adopt the language.

So, I don't think coercion under this scheme is the complete end-all-be-all solution to this problem, [though it may certainly feel right]. Sure, it is always defined behavior to "downcast" a value of a lower bitwidth to one of a higher bitwidth, but to dismiss Int -> Float, Float -> Int, and Double -> Float, etc. coercions as mere trifles is an attitude I don't want enshrined in the language's type system.

Perhaps there is a middle ground. Say, one could declare conformance to a special kind of protocol declaring safe implicit convertibility (see: Idris' solution of having an `implicit` conversion mechanism). Or perhaps a good first step may be to not deal with information loss at all, and only keep the parts of this proposal that are always defined behavior.
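
A rough sketch of what an explicit (not yet implicit) form of such a convertibility protocol could look like in today’s Swift; the protocol and its members are hypothetical:

// Conformance asserts that the promotion to Wider never loses information.
protocol LosslesslyPromotable {
    associatedtype Wider
    var promoted: Wider { get }
}

extension Int32: LosslesslyPromotable {
    var promoted: Int64 { return Int64(self) }   // exact for every Int32
}

extension Float: LosslesslyPromotable {
    var promoted: Double { return Double(self) } // exact for every Float
}

let narrow: Int32 = 7
let wide: Int64 = narrow.promoted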

~Robert Widmann

On 2016/03/30, at 8:01, Ted F.A. van Gaalen via swift-evolution <swift-evolution@swift.org> wrote:

···


Sometimes, it's definitely desirable to have implicit conversion… but sometimes, it's not, so I think the current behavior is good.
I guess most of us have a zero tolerance policy for warnings, so those would be treated like the current errors.

It is possible to define operators that take numerics with different types as a workaround, and imho it wouldn't hurt to have a module that implements those operations.
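
As a sketch of that workaround, such a module would simply define mixed-type overloads by hand (these particular overloads are illustrative, not standard library operators):

// Mixed Int/Double arithmetic via explicit overloads.
func * (lhs: Double, rhs: Int) -> Double { return lhs * Double(rhs) }
func * (lhs: Int, rhs: Double) -> Double { return Double(lhs) * rhs }
func + (lhs: Double, rhs: Int) -> Double { return lhs + Double(rhs) }
func + (lhs: Int, rhs: Double) -> Double { return Double(lhs) + rhs }

let dInterval = 0.5
let i = 3
let dTemperature = dInterval * i   // compiles only because of the overload above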

Yes, I would also like to get to a model where “smaller things” can implicitly promote to “larger things”, when there is no loss of data (and in the case of CGFloat, we can be more loose with precision IMO).

This is covered under the guise of being able to define a subtype relationship between structs, which is something that I and many other people would love to have, but it is outside the scope of Swift 3.

-Chris

···

On Mar 30, 2016, at 5:01 AM, Ted F.A. van Gaalen via swift-evolution <swift-evolution@swift.org> wrote:

Currently, one has to deal with explicit conversion between numerical types,
which in many cases is unnecessary, costs coding time
for things that are quite obvious,
and clutters the source, making it less readable.

I’m in favour of implicit conversion for integers where no data can be lost (UInt32 to Int64, Int32 to Int64, etc.); in fact, I posted a similar thread a little while ago but can’t find it. There’s something being done with numbers, so this may be partly in the works.

I definitely think that implicit conversion for floating point should be avoided, as it can’t be guaranteed except in certain edge cases. For example, Javascript actually technically uses a double for all of its numeric types, effectively giving it a 52-bit (iirc) integer type, so in theory conversion of Int32 to Double is fine, and Int16 to Float might be as well, but I’m not certain whether it’s a good idea, as it’s not quite the same as just extending the value.
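
For a quick check of the distinction drawn here: every Int32 round-trips exactly through Double, while values near Int64.max do not (current Swift spellings; the values are only illustrative):

let big32 = Int32.max                  // 2^31 - 1 fits easily in Double’s 53-bit significand
print(Int32(Double(big32)) == big32)   // true: the round trip is exact for every Int32

let big64 = Int64.max                  // 2^63 - 1 does not fit in 53 bits
print(Double(big64))                   // the result is 2^63: the value has already been rounded up
// Converting that Double back to Int64 would trap, precisely because it rounded.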

···

On 30 Mar 2016, at 14:57, Developer via swift-evolution <swift-evolution@swift.org> wrote:


Hi Tino

Sometimes, it's definitely desirable to have implicit conversion… but sometimes, it's not, so I think the current behavior is good.
I guess most of us have a zero tolerance policy for warnings, so those would be treated like the current errors.

E.g. this should never be a problem at run time:
       aFloat = anInteger
neither should this:
       aCGFloat = aDouble

A warning is just that: a warning. It simply notifies the programmer, just in case he/she does not fully understand the implications.

E.g. in PL/1 these are not warnings but information-type messages.

It is possible to define operators that take numerics with different types as a workaround, and imho it wouldn't hurt to have a module that implements those operations.

Imho, that would add too much noise to Swift.

You could of course handle conversion errors with a try/catch.

There is also extensive info on how implicit data type conversion could be handled here:

http://odl.sysworks.biz/disk$vaxdocsep953/decw$book/d3ndaa10.p77.decw$book
(it is relatively new, from 1992, with conversion rules from ca. 1970, but don’t let that bother you)

I’ve worked with many other languages too, but most of my reference
material is from PL/1, the parent of all procedural programming languages.

However, writing a proposal with
“Replace Swift by PL/1 and add OOP features”
perhaps goes a bit too far :o)

TedvG

···

On 30.03.2016, at 15:07, Tino Heth <2th@gmx.de> wrote:

This is covered under the guise of being able to define a subtype relationship between structs, which is something that I and many other people would love to have, but it is outside the scope of Swift 3.

Should this be on the "Commonly Proposed" list or in the "Out of Scope" list in the Github repo? This topic keeps coming up.

···

--
Brent Royal-Gordon
Architechies

Thank you, Robert & Haravikk
Please allow me to respond in-line hereunder, thanks.
Ted.

I’m in favour of implicit conversion for integers where no data can be lost (UInt32 to Int64, Int32 to Int64 etc.), in fact I posted a similar thread a little while ago but can’t find it; there’s something being done with numbers so this may be partly in the works.

I definitely think that implicit conversion for floating point should be avoided, as it can’t be guaranteed

Why? and What cannot be guaranteed?

except in certain edge cases; for example, Javascript actually technically uses a double for all of its numeric types, effectively giving it a 52-bit (iirc) integer type,

awful, didn’t know that

so in theory conversion of Int32 to Double is fine, and Int16 to Float might be as well, but I’m not certain if it’s a good idea or not, as it’s not quite the same as just extending the value.

It would simply produce a float with less precision, so that an integer like
10000 becomes e.g. 9999.999999 (depending on magnitude, of course),
but that is normal in the floating point domain. E.g. also with:
     var v: Double = 10000.0 // Double to Double

v would have the same imprecision… and could be anywhere between 9999.9998…10000.00001
(a rough estimate, depending on magnitude and the floating point type used)

What you describe, all those cases where one fixes losing precision by simply "ignoring it", that's part of why I'm hesitant about simply throwing in C-like promotion rules into any language.

E.g. if I assign an Int to a Double, then I know very well what I am doing.
An often occurring simple example:
  for i in 0..<10
  {
      dTemperature = dInterval * i   // Double = Double * Int (not possible yet in Swift)
      foo(dTemperature)
  }

Here I still have to write:
      dTemperature = dInterval * Double(i)

However, Swift will accept:
      dTemperature = dInterval * 3   // 3 inferred as Double; could this be regarded as an implicit conversion?

Once you add implicit type coercions, even just between integer or floating point types, your language gains a hundred unspoken rules

Could you please explain these “unspoken rules” you mention in more detail?

and little guard rails you have to cling to lest you slip and hit the next pitfall.

I am counting on the average intelligence of programmers.

Though you may be dismissive of information loss, it is a serious issue in coercions, and one with implications that are never completely grokked by experts

In practice, the implications/effects/behavior of a programming language
cannot be fully predicted and understood; there are simply too many possibilities.
Functional programming attempts to solve this by trying to make everything mathematically
correct, but it fails for the aforementioned reason.

and serve as yet another hindrance to novices trying to adopt the language.

I don’t agree here. Even novices should have a good understanding
of the basic data types of a programming language.
Also note that the concepts of integer, natural, rational, irrational numbers, etc.
are very basic mathematics, as learned in high school
or your country’s equivalent education.

So aDouble = anInt should, in the programmer’s mind,
appear as an explicit conversion; that is, he/she should realize the consequences.
The same applies when doing it explicitly, like so:

     aDouble = Double(anInt)

Same effect: even a fool can use this without knowing the implications.

So, I don't think coercion under this scheme is the complete end-all-be-all solution to this problem, [though it may certainly feel right]. Sure, it is always defined behavior to "downcast" a value of a lower bitwidth to one of a higher bitwidth, but to dismiss Int -> Float, Float -> Int,

I wrote that I don’t want implicit conversion for Float -> Int.

and Double -> Float, etc. coercions as mere trifles is an attitude I don't want enshrined in the language's type system.

Could you give me an example where Double -> Float is problematic (apart from losing precision)?

Perhaps there is a middle ground. Say, one could declare conformance to a special kind of protocol declaring safe implicit convertibility (see: Idris' solution of having an `implicit` conversion mechanism).

Please spare me from this kind of contraption.

  -= side note =-
Thanks for bringing Idris to my attention. Investigating...
Idris is an FP language. I am not against it, but to me, FP is almost unreadable.
I doubt I will ever use it.
I use strictly OOD/OOP. It’s natural. Like in Smalltalk. Proven. Solid.
For now, the only reason I use protocols in Swift is to accommodate delegation/callbacks.
  -= end side note =-

Or perhaps a good first step may be to not deal with information loss at all, and only keep the parts of this proposal that are always defined behavior.

To me, there is no unintended information loss, because I know what I am doing regarding implicit conversion.
Then again, in all the cases for which I suggested implicit data type conversion, there is no data loss (apart from precision)

TedvG

···

On 30.03.2016, at 16:15, Haravikk <swift-evolution@haravikk.me> wrote:


Currently, one has to deal with explicit conversion between numerical types,
which in many cases is unnecessary, costs coding time
for things that are quite obvious,
and clutters the source, making it less readable.

Yes, I would also like to get to a model where “smaller things” can implicitly promote to “larger things”, when there is no loss of data (and in the case of CGFloat, we can be more loose with precision IMO).

Yes, I think so too.

This is covered under the guise of being able to define a subtype relationship between structs, which is something that I and many other people would love to have, but it is outside the scope of Swift 3.

Technically, I don’t quite comprehend this yet, but will look it up :o)

-Chris

Thanks, Chris. I will now clear my head by listening to Led Zeppelin
(definitely not ISO conformant) through my headphones at ca. 110 dB…
this of course blows right through all scope terminators known to man :o)

···

On 31.03.2016, at 00:12, Chris Lattner <clattner@apple.com> wrote:

On Mar 30, 2016, at 5:01 AM, Ted F.A. van Gaalen via swift-evolution <swift-evolution@swift.org> wrote:

I believe section 6.3 of the ISO/C99 specification describes its integer promotion rules and Appendix J describes undefined behavior as a consequence of integer and floating point coercion. I refer to these when I speak of "rules".

As long as data loss is an "unintended" effect of a certain class of coercions, I don't believe it deserves to be implicit. If you "know what you're doing", the preference so far has been to tell the compiler that and use the constructors provided in the Swift Standard Library to perform explicit truncation. Even in C, if you can be more specific with a cast in cases where you intend data loss, you probably should be.
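
For reference, the explicit constructors being referred to look like this in present-day Swift (the exact spellings have changed between Swift versions, so treat these as illustrative):

let wide: Int64 = 300

let narrowed = Int32(truncatingIfNeeded: wide)  // keeps the low 32 bits, never traps
let checked  = Int32(exactly: wide)             // Optional: nil if the value does not fit
let trapping = Int32(wide)                      // traps at run time if the value overflows

let d = 3.7
let i = Int(d)                                  // truncates toward zero: 3
let e = Int(exactly: d)                         // nil, because 3.7 is not an integer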

~Robert Widmann

On 2016/03/30, at 13:57, Ted F.A. van Gaalen <tedvgiosdev@gmail.com> wrote:

···


These all fail with a runtime error ("Playground execution aborted: Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)”, if you’re curious):
Int(Double.infinity)
Int(Double.NaN)
Int(Double.quietNaN)

And this loop runs for 1024 iterations before finding just one instance where the conversion from Int to Double and back to Int actually gives the right answer:
var i = UInt(Int.max) + 1        // 2^63 on a 64-bit platform
var remainder = UInt()
repeat {
    i -= 1
    let DoubleI = UInt(Double(i))                        // round trip through Double
    remainder = DoubleI > i ? DoubleI - i : i - DoubleI  // how far off the round trip is
} while remainder != 0                                   // stop at the first value that survives intact

- Dave Sweeris

···

On Mar 30, 2016, at 12:57 PM, Ted F.A. van Gaalen via swift-evolution <swift-evolution@swift.org> wrote:

Thank you, Robert & Haravikk
Please allow me to respond in-line hereunder, thanks.
Ted.

On 30.03.2016, at 16:15, Haravikk <swift-evolution@haravikk.me> wrote:

I’m in favour of implicit conversion for integers where no data can be lost (UInt32 to Int64, Int32 to Int64 etc.), in fact I posted a similar thread a little while ago but can’t find it; there’s something being done with numbers so this may be partly in the works.

I definitely think that implicit conversion for floating point should be avoided, as it can’t be guaranteed

Why? and What cannot be guaranteed?

You’re right! Feel free to send a PR for the swift-evolution repo, I’d be happy to merge it.

-Chris

···

On Mar 30, 2016, at 4:41 PM, Brent Royal-Gordon <brent@architechies.com> wrote:

This is covered under the guise of being able to define a subtype relationship between structs, which is something that I and many other people would love to have, but it is outside the scope of Swift 3.

Should this be on the "Commonly Proposed" list or in the "Out of Scope" list in the Github repo? This topic keeps coming up.

I believe section 6.3 of the ISO/C99 specification describes its integer promotion rules and Appendix J describes undefined behavior as a consequence of integer and floating point coercion. I refer to these when I speak of "rules”.

Although ISO compliance makes sense in a lot of cases, for programming languages
these rules are extremely bureaucratic, restrictive, and always far behind the fast developments in
IT. Would you like to see Swift be ISO compliant?
Then you could throw away perhaps more than half the language constructs
currently present in Swift.

@Chris:

Is there a desire/requirement to make Swift ISO compliant,
and thus restrict Swift’s flexibility? If so, to what extent?

As long as data loss is an "unintended" effect of a certain class of coercions, I don't believe it deserves to be implicit. If you "know what you're doing", the preference so far has been to tell the compiler that and use the constructors provided in the Swift Standard Library to perform explicit truncation. Even in C, if you can be more specific with a cast in cases where you intend data loss, you probably should be.

With all due respect, Robert: imho, I find this all too theoretical and bureaucratic, and tons of unnecessary overhead.
And I am telling the compiler implicitly:
aFloat = anInt // The compiler will use a builtin function to do the conversion. What can be wrong with that?
Again, in the cases I mentioned there is no data loss (precision excluded).

···

On 30.03.2016, at 20:29, Developer <devteam.codafi@gmail.com> wrote:


Rounding in a conversion is data loss, full stop. It will be extremely difficult (impossible, really) to convince folks that explicit conversion should not be required when the value might be changed.

If you restrict to cases where it is exact, implicit conversion is much more palatable, but you’re then left with a system in which Int32 can implicitly convert to Double, but not to Float. The added confusion of such a system is significant, but I can at least imagine that one could make a case for it.

What you’re much more likely to get traction with is reducing the need for conversions at all, implicit or explicit. The Integers prototype that Dave and Max have been working on (test/Prototypes/Integers.swift.gyb) starts to move in this direction by allowing heterogeneous integer types in shifts and comparisons. This is a great approach because it eliminates the need for conversion entirely, rather than making them implicit. Shifts and comparisons cover a large portion of the cases where the need for conversions is most painful, and this doesn’t introduce risk because the result type and value are clearly and unambiguously defined by the values involved (and the lhs type for shifts).
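
A sketch of what those heterogeneous comparisons look like with the revised integer protocols in place; no conversion is needed, and the comparison is done on the mathematical values (the values below are only examples):

let small: Int8 = -1
let big: UInt64 = 1

print(small < big)          // true: a negative signed value compares less than any unsigned value
print(UInt64.max > small)   // true, with no risk of a wrap-around bug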

– Steve

PS: personally as someone who has a firm grasp of low-level numerics, I find the need for explicit conversions annoying, but it’s only a very minor annoyance, and I appreciate the safety that it provides for users who wisely don’t want to waste mental capacity on the precise details of numeric ranges and conversions. On balance I think requiring them to be explicit is a significant win.

I would also be remiss if I didn’t observe that excessive conversions, implicit or explicit, are a significant performance hazard. Even in the best-case scenario, they often require a sign- or zero-extension instruction. They may also involve testing for out-of-range values and trapping, or (as with unsigned integer to floating-point on x86) a sequence of several instructions. To some extent, this is a case of “if it hurts, don’t do it”.

···

On Mar 30, 2016, at 1:35 PM, Ted F.A. van Gaalen via swift-evolution <swift-evolution@swift.org> wrote:

As long as data loss is an "unintended" effect of a certain class of coercions, I don't believe it deserves to be implicit. If you "know what you're doing", the preference so far has been to tell the compiler that and use the constructors provided in the Swift Standard Library to perform explicit truncation. Even in C, if you can be more specific with a cast in cases where you intend data loss, you probably should be.

With all due respect, Robert: imho, I find this all too theoretical and bureaucratic, and tons of unnecessary overhead.
And I am telling the compiler implicitly:
aFloat = anInt // The compiler will use a builtin function to do the conversion. What can be wrong with that?
Again, in the cases I mentioned there is no data loss (precision excluded).

I believe section 6.3 of the ISO/C99 specification describes its integer promotion rules and Appendix J describes undefined behavior as a consequence of integer and floating point coercion. I refer to these when I speak of "rules”.

Although ISO compliance makes sense in a lot of cases, for programming languages
these rules are extremely bureaucratic, restrictive, and always far behind the fast developments in
IT. Would you like to see Swift be ISO compliant?
Then you could throw away perhaps more than half the language constructs
currently present in Swift.

@Chris:

Is there a desire/requirement to make Swift ISO compliant,
and thus restrict Swift’s flexibility? If so, to what extent?

This is orthogonal to the discussion at hand.

As long as data loss is an "unintended" effect of a certain class of coercions, I don't believe it deserves to be implicit. If you "know what you're doing", the preference so far has been to tell the compiler that and use the constructors provided in the Swift Standard Library to perform explicit truncation. Even in C, if you can be more specific with a cast in cases where you intend data loss, you probably should be.

With all due respect, Robert: imho, I find this all too theoretical and bureaucratic, and tons of unnecessary overhead.
And I am telling the compiler implicitly:
aFloat = anInt // The compiler will use a builtin function to do the conversion. What can be wrong with that?
Again, in the cases I mentioned there is no data loss (precision excluded).

An example of “data loss”, then (adapted from the wonderful example given by Felix Cloutier in “Warning about data loss c++/c” on Stack Overflow). Be judicious about running this; it will spin for quite a while if you don’t kill it first.

import Darwin

// Walk down from INT_MAX and report every integer that does not survive a
// round trip through Float (above 2^24 there are a lot of them).
for i in Int(INT_MAX).stride(to: 0, by: -1) {
  let value : Float = Float(i)
  let ivalue : Int = Int(value)
  if (i != ivalue) {
    print("Integer \(i) is represented as \(ivalue) in a float\n")
  }
}

You may still argue, however, that loss of precision is not as egregious as full-on truncation, but it is still data loss all the same. If it is too technical and bureaucratic to insert casts to make your intent clear in either language (rather than what I assume is just silencing -Wconversion), I’ll take bureaucracy and safety over convenience please.

~Robert Widmann

···

On Mar 30, 2016, at 4:35 PM, Ted F.A. van Gaalen <tedvgiosdev@gmail.com> wrote:

On 30.03.2016, at 20:29, Developer <devteam.codafi@gmail.com <mailto:devteam.codafi@gmail.com>> wrote:

~Robert Widmann

2016/03/30 13:57、Ted F.A. van Gaalen <tedvgiosdev@gmail.com <mailto:tedvgiosdev@gmail.com>> のメッセージ:

Thank you, Robert & Haravikk
Please allow me to respond in-line hereunder, thanks.
Ted.

On 30.03.2016, at 16:15, Haravikk <swift-evolution@haravikk.me <mailto:swift-evolution@haravikk.me>> wrote:

I’m in favour of implicit conversion for integers where no data can be lost (UInt32 to Int64, Int32 to Int64 etc.), in fact I posted a similar thread a little while ago but can’t find it; there’s something being done with numbers so this may be partly in the works.

I definitely think that implicit conversion for floating point should be avoided, as it can’t be guaranteed

Why? and What cannot be guaranteed?

except in certain edge cases; for example, Javascript actually technically uses a double for all of its numeric types, effectively giving it a 52-bit (iirc) integer type,

awful, didn’t know that

so in theory conversion of Int32 to Double is fine, and Int16 to Float might be as well, but I’m not certain if it’s a good idea or not, as it’s not quite the same as just extending the value.

It simply would cause a float with less precision as an integer like
10000 -becomes e.g - 9999.999999, (depending on magnitude, of course)
but that is normal in a floating point domain; E.g. also with:
     var v:Double = 10000.0 // Double to Double

v would have the same imprecision… and could be anywhere between 9999.9998…10000.00001
(rough estimation, depending on magnitude and the floating point type used)

On 30 Mar 2016, at 14:57, Developer via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

What you describe, all those cases where one fixes losing precision by simply "ignoring it", that's part of why I'm hesitant about simply throwing in C-like promotion rules into any language.

E.g. if I assign an Int to a Double, then I know very well what I am doing.
often occurring simple example here:
  for i in 0..<10
        {
                dTemperature = dInterval * i / / Double = Double * Int (not possible yet in Swift)
               foo(dTemperature)
         }

      Here I still have to write:
                   dTemperature = dInterval * Double(i)

      However, Swift will accept:
                   dTemperature = dInterval * 3 // 3 inferred to Double. could be regarded as an implicit conversion?

Once you add implicit type coercions, even just between integer or floating point types, your language gains a hundred unspoken rules

Could you please explain these “unspoken rules” you mention more in detail?

and little guard rails you have to cling to lest you slip and hit the next pitfall.

I am counting on the average intelligence of programmers.

Though you may be dismissive of information loss, it is a serious issue in coercions, and one with implications that are never completely grokked by experts

In practice, the implications/effects/behavior of a programming language
cannot be fully predicted and understood, there are simply too many possibilities,
Functional Programming attempts to solve this, trying to make/do everything mathematically
correct but fails for the aforementioned reason.

and serve as yet another hindrance to novices trying to adopt the language.

I don’t agree here. Even novices should have a good understanding
of the basic data types of a programming language,
Also note that concepts of integer, natural, rational, irrational numbers etc.
is very basic mathematics as learned in high school.
or your country’s equivalent education.

So aDouble = anInt should -in the programmer’s mind-
appear as an explicit conversion, that is, he/she should realize the consequences.
The same applies also doing it explicitly like so:
       
     aDouble = Double(anInt)
Same effect: even a fool can use this as well and not knowing the implications.

So, I don't think coercion under this scheme is the complete end-all-be-all solution to this problem, [though it may certainly feel right]. Sure, it is always defined behavior to "downcast" a value of a lower bitwidth to one of a higher bitwidth, but to dismiss Int -> Float, Float -> Int,

I wrote that I don’t want implicit conversion for Float -> Int.

and Double -> Float, etc. coercions as mere trifles is an attitude I don't want enshrined in the language's type system.

Could you give me an example where Double -> Float is problematic (apart from loosing precision) ?

Perhaps there is a middle ground. Say, one could declare conformance to a special kind of protocol declaring safe implicit convertibility (see: Idris' solution of having an `implicit` conversion mechanism).

Please spare me from this kind of contraptions.

  -=side note: =-
Thanks for bringing Idris to my attention. Investigating...
Idris is a FP language. I am not against it, but to me, FP is almost unreadable.
I doubt if I will ever use it.
I use strictly OOD/OOP. It’s natural. Like in Smalltalk. Proven. Solid.
For now, the only reason I use protocols in Swift are to accommodate delegating/callbacks.
  -= end side note =-

Or perhaps a good first step may be to not deal with information loss at all, and only keep the parts of this proposal that are always defined behavior.

To me, there is no unintended information loss, because I know what I am doing regarding implicit conversion.
Then again, in all the cases for which I suggested implicit data type conversion, there is no data loss (apart from precision)

TedvG

~Robert Widmann



Ok, Dave, Developer, Steven, Howard:
very interesting, thank you for the in-depth responses!
I see the points; I am still learning every day.
I drop my case, as they say.
Kind Regards,
TedvG

···

On 30.03.2016, at 23:38, Developer <devteam.codafi@gmail.com> wrote:

On Mar 30, 2016, at 4:35 PM, Ted F.A. van Gaalen <tedvgiosdev@gmail.com> wrote:

On 30.03.2016, at 20:29, Developer <devteam.codafi@gmail.com> wrote:

I believe section 6.3 of the ISO/C99 specification describes its integer promotion rules and Appendix J describes undefined behavior as a consequence of integer and floating point coercion. I refer to these when I speak of "rules”.

Although ISO compliance makes sense in a lot of cases, for programming languages
these rules are extremely bureaucratic, restricting, and always far behind the fast developments
in IT. Would you like to see Swift become ISO compliant?
Then you could throw away perhaps more than half the language constructs
currently present in Swift.

@Chris:

Is there a desire/requirement to make Swift ISO compliant,
and therewith restrict Swift’s flexibility? If so, to what extent?

This is orthogonal to the discussion at hand.

As long as data loss is an "unintended" effect of a certain class of coercions, I don't believe it deserves to be implicit. If you "know what you're doing", the preference so far has been to tell the compiler that and use the constructors provided in the Swift Standard Library to perform explicit truncation. Even in C, if you can be more specific with a cast in cases where you intend data loss, you probably should be.
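
As a minimal sketch of the kind of explicit constructor being referred to (Swift 2.x spellings; the variable names are invented):

     let wide: Int64 = 0x1_0000_0001                   // 4_294_967_297, does not fit in Int32
     let narrow = Int32(truncatingBitPattern: wide)    // explicit truncation: narrow == 1
     // let bad = Int32(wide)                          // the plain initializer traps at run time on overflow

     let anInt = 3
     let aFloat = Float(anInt)                         // explicit, visible widening to floating point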

With all due respect, Robert: IMHO, I find this all too theoretical and bureaucratic, and tons of unnecessary overhead.
I am already telling the compiler implicitly:
aFloat = anInt // the compiler will use a built-in function to do the conversion; what can be wrong with that?
Again, in the cases I mentioned there is no data loss (precision excluded).

An example of “data loss”, then (adapted from the wonderful example given by Felix Cloutier in “Warning about data loss c++/c” on Stack Overflow). Be judicious running this; it will spin for quite a while if you don’t kill it first.

import Darwin

// Count down from the largest 32-bit integer, round-tripping each value through Float;
// any integer that Float cannot represent exactly is reported. The very first iteration
// already reports 2147483647 being stored as 2147483648.
for i in Int(INT_MAX).stride(to: 0, by: -1) {
  let value : Float = Float(i)
  let ivalue : Int = Int(value)
  if (i != ivalue) {
    print("Integer \(i) is represented as \(ivalue) in a float\n")
  }
}

You may still argue, however, that loss of precision is not as egregious as full-on truncation, but it is still data loss all the same. If it is too technical and bureaucratic to insert casts to make your intent clear in either language (rather than what I assume is just silencing -Wconversion), I’ll take bureaucracy and safety over convenience please.

~Robert Widmann


+1

I am strictly against implicit conversions where data loss may happen.
I am even against implicit conversions which are just expensive.

With regard to the argument of counting on the developer's intelligence: I think that is a task for compilers, just like type checking.
  
-Thorsten


Implicit conversion complicates method overloading, e.g. if you have both
m(Int) and m(Double), which does m(0) call? Sure, there are rules in other
languages that deal with this, but they are complicated. So my call is that
it is not worth the trouble.
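
A small sketch of that ambiguity, with a hypothetical pair of functions (Swift 2 syntax); today the call resolves through literal-type preference, but an implicit Int -> Double conversion would make both overloads applicable:

     func m(_ x: Int)    { print("m(Int)") }
     func m(_ x: Double) { print("m(Double)") }

     m(0)      // currently picks m(Int): an integer literal defaults to Int when both overloads fit
     m(0.0)    // picks m(Double)
     // With an implicit Int -> Double conversion, m(0) would also match m(Double),
     // so the language would need extra ranking rules, or reject the call as ambiguous.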


--
-- Howard.

Good point!

-Thorsten
