Thoughts on clarity of Double and Float type names?

There are lots of things that are unfortunate about CGFloat. I don’t see how they translate to Float and Double.

-Chris

···

On Jan 6, 2016, at 12:01 AM, Goffredo Marocchi <panajev@gmail.com> wrote:

Hello Chris,

When dealing with floating point values, wouldn't it be in our best interest to always be very specific about the accuracy of the floating point type variables we use, regardless of the device it runs on? This is the biggest problem I had with CGFloat: while it is nice at first to have a type that adapts to the word size of the device it runs on, I prefer to always have an explicit accuracy guarantee rather than worry about my CGFloat code changing in behaviour when it runs on a 32-bit device rather than a 64-bit one.

It’s worth noting that the definition of CGFloat is basically a historical curiosity. If we were starting from scratch today, CGFloat would be Double on all platforms, 32- and 64-bit alike. The 64-bit transition simply provided an opportunity to make the ABI-breaking change to a 64-bit type. There was never any desire or reason to have it match word size.
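
Concretely, you can check the split per platform with a one-liner (a sketch; MemoryLayout is the Swift 3 spelling of sizeof):

  import CoreGraphics
  print(MemoryLayout<CGFloat>.size)  // 8 on 64-bit platforms, 4 on 32-bit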

– Steve

···

On Jan 6, 2016, at 3:01 AM, Goffredo Marocchi via swift-evolution <swift-evolution@swift.org> wrote:

while it is nice at first to have a type that adapts to the word size of the device it runs on, I prefer to always have an explicit accuracy guarantee rather than worry about my CGFloat code changing in behaviour when it runs on a 32-bit device rather than a 64-bit one.

Int is the same size as Int64 on a 64-bit machine but the same size as Int32 on a 32-bit machine. By contrast, modern 32-bit architectures have FPUs that handle 64-bit and even 80-bit floating point types. Therefore, it does not make sense for Float to be Float32 on a 32-bit machine, as would be the case in one interpretation of what it means to mirror naming "conventions." However, if you interpret the convention to mean that Float should be the largest floating point type supported by the FPU, Float should actually be a typealias for Float80 even on some 32-bit machines. In neither interpretation does it mean that Float should simply be a typealias for what's now called Double.

Another issue to consider: a number like 42 is stored exactly regardless of whether you're using an Int32 or an Int64. However, a number like 1.1 is not stored exactly as a binary floating point type, and it's approximated *differently* as a Float than as a Double. Thus, it can be essential to consider what kind of floating point type you're using even in scenarios where the number is small, whereas the same is not true for integer types.
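
A concrete illustration (a sketch; printed values assume Swift's shortest round-trip formatting):

  let f: Float = 1.1
  let d: Double = 1.1
  print(Double(f))       // 1.100000023841858
  print(d)               // 1.1
  print(Double(f) == d)  // false: the two approximations differ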

···

On Mon, May 23, 2016 at 7:48 PM, David Sweeris via swift-evolution <swift-evolution@swift.org> wrote:

I'd prefer they mirror the integer type naming "conventions", that is, have explicit "Float32" and "Float64" types, with "Float" being a typealias for Float64.


On May 23, 2016, at 18:26, Adriano Ferreira via swift-evolution <swift-evolution@swift.org> wrote:

Hi everyone,

Is there any draft/proposal related to this suggestion?

Best,

— A

On Jan 4, 2016, at 3:58 PM, Alex Johnson via swift-evolution <swift-evolution@swift.org> wrote:

Hi all,

I'm curious how other members of the Swift community feel about the clarity of the "Double" and "Float" type names. It seems incongruous that the default type for integers is "Int", but the default type for floating point numbers is not "Float".

What if the name "Float" were given to the intrinsic, 64-bit floating point type? (And the existing "Float" and "Double" names were removed in favor of "Float32" and "Float64"?)

*Discussion:*

I understand the origins of these names in single- and double-precision IEEE floats. But this distinction feels like a holdover from C (and a 32-bit world), rather than a natural fit for Swift.

Here are some reasons to *keep Double and Float as they are* (numbered for easy reference, but otherwise unordered):

   1. "Double" and "Float" are more natural for developers who are "familiar with C-like languages."
   2. A corollary: A 64-bit "Float" type could be confusing to those developers.
   3. Another corollary: Swift needs to interoperate with Objective-C, and its "float" and "double" types.
   4. Renaming these types would open the door to bike-shedding every type name and keyword in the language.
   5. Changing the meaning of an existing type ("Float") would be a bit of a PITA for existing code (although an automated migration from "Float" to "Float32" and "Double" to "Float" should be possible).
   6. Renaming a fundamental type would take considerable effort.

Here are some reasons to *rename these types*:

   1. The default for a "float literal" in Swift is a 64-bit value. It would feel natural if that value were of type "Float" (see the snippet just after these lists).
   2. There are size-specific names for 32-bit ("Float32") and 64-bit ("Float64") floating point types. For cases where a size-specific type is needed, a size-specific name like "Float32" probably makes the intention of the code clearer (compared to just "Float").
   3. Apple's Objective-C APIs generally use aliased types like "CGFloat" rather than raw float or double types.
   4. There is precedent for "Float" types being 64-bit in other languages like Ruby, Python and Go (as long as the hardware supports it).
   5. What kind of a name for a type is "Double" anyways, amirite?

(that last one is a joke, BTW)
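
A quick illustration of reason 1, as promised above (using the Swift 3 spelling type(of:)):

  let x = 1.1
  print(type(of: x))  // Double: the default FloatLiteralType
  let y: Float = 1.1  // the 32-bit type requires an explicit annotation
  print(type(of: y))  // Float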

What do you think? Do you agree or disagree with any of my assessments? Are there any pros or cons that I've missed? Is the level of effort so large that it makes this change impractical? Is it a colossal waste of human effort to even consider a change like this?

Thanks for your time and attention,
Alex Johnson (@nonsensery)

Int is the same size as Int64 on a 64-bit machine but the same size as Int32 on a 32-bit machine. By contrast, modern 32-bit architectures have FPUs that handle 64-bit and even 80-bit floating point types. Therefore, it does not make sense for Float to be Float32 on a 32-bit machine, as would be the case in one interpretation of what it means to mirror naming "conventions." However, if you interpret the convention to mean that Float should be the largest floating point type supported by the FPU, Float should actually be a typealias for Float80 even on some 32-bit machines. In neither interpretation does it mean that Float should simply be a typealias for what's now called Double.

IIRC, `Int` is typealiased to the target’s biggest native/efficient/practical integer type, regardless of its bit-depth (I can’t think of any CPUs where those differ, although I believe some do exist). I don’t see why it shouldn’t be the same way with floats… IMHO, `Float` should be typealiased to the biggest native/efficient/practical floating point type, which I think is pretty universally Float64. I’m under the impression that Intel’s 80-bit format is intended to be an interim representation which is automatically converted to/from 64-bit, and loading & storing a full 80 bits is a non-trivial matter. I’m not even sure if the standard “math.h” functions are defined for Float80 arguments. If Float80 is just as native/efficient/practical as Float64, I wouldn’t object to Float being typealiased to Float80 on such platforms.

Another issue to consider: a number like 42 is stored exactly regardless of whether you're using an Int32 or an Int64. However, a number like 1.1 is not stored exactly as a binary floating point type, and it's approximated *differently* as a Float than as a Double. Thus, it can be essential to consider what kind of floating point type you're using even in scenarios where the number is small, whereas the same is not true for integer types.

Oh I know. I’m not arguing that floating point math isn’t messy, just that since we can use “Int” for when we don’t care and “IntXX” for when we do, we should also be able to use “Float” when we don’t care and “FloatXX” when we do. If someone’s worried about the exact value of “1.1”, they should be specifying the bit-depth anyway. Otherwise, give them the most precise type that works within the language’s goals.
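
For what it’s worth, Swift already exposes the 80-bit type on Intel targets, and its storage bears out the “non-trivial” point. A sketch (x86_64 only; Float80 is unavailable on ARM):

  #if arch(x86_64)
  let e: Float80 = 2.71828182845904523536
  print(e)                           // prints e to roughly 19 significant digits
  print(MemoryLayout<Float80>.size)  // 16: ten bytes of data, padded out in memory
  #endif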

Have we (meaning the list in general, not you & me in particular) had this conversation before? This feels familiar...

-Dave Sweeris

···

On May 23, 2016, at 8:18 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

> Int is the same size as Int64 on a 64-bit machine but the same size as Int32 on a 32-bit machine. By contrast, modern 32-bit architectures have FPUs that handle 64-bit and even 80-bit floating point types. Therefore, it does not make sense for Float to be Float32 on a 32-bit machine, as would be the case in one interpretation of what it means to mirror naming "conventions." However, if you interpret the convention to mean that Float should be the largest floating point type supported by the FPU, Float should actually be a typealias for Float80 even on some 32-bit machines. In neither interpretation does it mean that Float should simply be a typealias for what's now called Double.

IIRC, `Int` is typealiased to the target’s biggest native/efficient/practical integer type, regardless of its bit-depth (I can’t think of any CPUs where those differ, although I believe some do exist). I don’t see why it shouldn’t be the same way with floats… IMHO, `Float` should be typealiased to the biggest native/efficient/practical floating point type, which I think is pretty universally Float64. I’m under the impression that Intel’s 80-bit format is intended to be an interim representation which is automatically converted to/from 64-bit, and loading & storing a full 80 bits is a non-trivial matter. I’m not even sure if the standard “math.h” functions are defined for Float80 arguments. If Float80 is just as native/efficient/practical as Float64, I wouldn’t object to Float being typealiased to Float80 on such platforms.

> Another issue to consider: a number like 42 is stored exactly regardless of whether you're using an Int32 or an Int64. However, a number like 1.1 is not stored exactly as a binary floating point type, and it's approximated *differently* as a Float than as a Double. Thus, it can be essential to consider what kind of floating point type you're using even in scenarios where the number is small, whereas the same is not true for integer types.

Oh I know. I’m not arguing that floating point math isn’t messy, just that since we can use “Int” for when we don’t care and “IntXX” for when we do, we should also be able to use “Float” when we don’t care and “FloatXX” when we do. If someone’s worried about the exact value of “1.1”, they should be specifying the bit-depth anyway. Otherwise, give them the most precise type that works within the language’s goals.

I wouldn't be opposed to renaming Float and Double to Float32 and Float64, but I would care if Float were typealiased to different types on different platforms. That solution is a non-starter for me because something as simple as (1.1 + 1.1) would evaluate to a different result depending on the machine. That's a problem. An analogous issue does not come into play with Int because 1 + 1 == 2 regardless of the size of Int. Swift traps when the max value that can be stored in an Int is exceeded, so it is not possible to obtain two different results on two different machines.
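
To make the contrast concrete (a sketch; the overflow line traps identically on 32- and 64-bit):

  let a: Float = 1.1
  let b: Double = 1.1
  print(Double(a + a) == b + b)  // false: the same source text yields different sums per type
  let big = Int.max
  // let boom = big + 1          // would trap with "arithmetic overflow" rather than vary silently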

Have we (meaning the list in general, not you & me in particular) had this conversation before? This feels familiar...

It does, doesn't it? I've been reading this list for too long.

···

On Mon, May 23, 2016 at 9:40 PM, David Sweeris <davesweeris@mac.com> wrote:

On May 23, 2016, at 8:18 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

-Dave Sweeris

I just checked, and we have! In this very thread! I didn’t realize it was started almost 6 months ago…

Out of curiosity, are there plans for Swift's IntegerLiteralType & FloatLiteralType when CPUs eventually support 128-bit ints & floats? Will they still evaluate to “Int64” and “Double” by default, or will they become the bigger types?

- Dave Sweeris

···

On May 23, 2016, at 9:55 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Mon, May 23, 2016 at 9:40 PM, David Sweeris <davesweeris@mac.com> wrote:

Have we (meaning the list in general, not you & me in particular) had this conversation before? This feels familiar...

It does, doesn't it? I've been reading this list for too long.

In UIKit/Cocoa, there's CGFloat that does pretty much what you're asking (and it's a pain working with it in Swift, since it's Float on 32-bit devices while Swift defaults to Double, so you need casting all the time)... And I think the default behavior of Swift should be similar.
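
The casting in question looks like this (a minimal sketch using CoreGraphics directly):

  import CoreGraphics
  let d: Double = 12.5
  var width: CGFloat = 0
  width = CGFloat(d)                // explicit conversion required
  let back: Double = Double(width)  // and again in the other direction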

I wouldn't change the type names, since Double still is "double precision"; I'd just prefer a changed default behavior...

Charlie

···

On May 24, 2016, at 5:39 AM, David Sweeris via swift-evolution <swift-evolution@swift.org> wrote:

On May 23, 2016, at 9:55 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Mon, May 23, 2016 at 9:40 PM, David Sweeris <davesweeris@mac.com> wrote:

Have we (meaning the list in general, not you & me in particular) had this conversation before? This feels familiar...

It does, doesn't it? I've been reading this list for too long.

I just checked, and we have! In this very thread! I didn’t realize it was started almost 6 months ago…

Out of curiosity, are there plans for Swift's IntegerLiteralType & FloatLiteralType when CPUs eventually support 128-bit ints & floats? Will they still evaluate to “Int64” and “Double” by default, or will they become the bigger types?

- Dave Sweeris

[Charlie, this is more of a reply to the thread than to your message in particular.]

There is absolutely no good reason to have a “word size” floating-point type. We happen to have one on Apple systems (CGFloat), but that should be viewed as a historical curiosity, not as guidance that it’s a reasonable thing to do. If we were starting from scratch today, we would not have such a type.

1. Having explicit `Float32`, `Float64`, [`Float16`, `Float128`, … ] type names, by analogy to integer types, is great.

2. Making `Float` and `Double` unavailable in favor of these replacements would be a lot of churn for relatively little value, but it’s not a bad idea if you ignore the one-time pain of conversion. `Float32` and `Float64` are discoverable enough that this would be OK, IMO.

3.a. Making `Float` a “word size” type is bonkers. The last thing we want to do is create another type with the same difficulties as `CGFloat`.

3.b. Making `Float` be “the widest HW-supported type” is less bonkers, but still results in gratuitous cross-platform differences and very little real benefit. We’d also need to be careful about how we defined it, since we would *not* want it to be `Float80` on x86_64 (for performance reasons).

3.c. Making `Float` be an alias of `Float64` would just confuse people coming from a C-family language (as commonly implemented).

– Steve

···

On May 24, 2016, at 12:52 AM, Charlie Monroe via swift-evolution <swift-evolution@swift.org> wrote:

In UIKit/Cocoa, there's CGFloat that does pretty much what you're asking (and it's a pain working with it in Swift, since it's Float on 32-bit devices while Swift defaults to Double, so you need casting all the time)... And I think the default behavior of Swift should be similar.

I wouldn't change the type names, since Double still is "double precision"; I'd just prefer a changed default behavior...

Charlie

On May 24, 2016, at 5:39 AM, David Sweeris via swift-evolution <swift-evolution@swift.org> wrote:

On May 23, 2016, at 9:55 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Mon, May 23, 2016 at 9:40 PM, David Sweeris <davesweeris@mac.com> wrote:

Have we (meaning the list in general, not you & me in particular) had this conversation before? This feels familiar...

It does, doesn't it? I've been reading this list for too long.

I just checked, and we have! In this very thread! I didn’t realize it was started almost 6 months ago…

Out of curiosity, are there plans for Swift's IntegerLiteralType & FloatLiteralType when CPUs eventually support 128-bit ints & floats? Will they still evaluate to “Int64” and “Double” by default, or will they become the bigger types?

- Dave Sweeris

<http://thread.gmane.org/gmane.comp.lang.swift.evolution/2199/focus=18327>

Stephen Canon wrote:

Making `Float` be an alias of `Float64` would just confuse people coming from a C-family language (as commonly implemented).

To avoid confusion, and to allow for decimal floating-point types:

[stdlib/public/core/FloatingPointTypes.swift.gyb]

    public struct Binary32: BinaryFloatingPoint
    public struct Binary64: BinaryFloatingPoint

[stdlib/public/core/CTypes.swift]

    public typealias CFloat = Binary32
    public typealias CDouble = Binary64

You could also have:

    public struct Binary /// The default, cf. Int
    public typealias BinaryMax /// The largest, cf. IntMax

-- Ben

<http://thread.gmane.org/gmane.comp.lang.swift.evolution/2199/focus=18327>

I hope it's not too late to submit a proposal.

[stdlib/public/core/FloatingPointTypes.swift.gyb]

  public struct Float32: BinaryFloatingPoint
  public struct Float64: BinaryFloatingPoint
  public struct Float80: BinaryFloatingPoint

[stdlib/public/core/CTypes.swift]

  public typealias CFloat = Float32
  public typealias CDouble = Float64
  public typealias CLongDouble = Float80

[stdlib/public/core/Policy.swift]

  /// The default type for an otherwise-
  /// unconstrained floating point literal.
  public typealias FloatLiteralType = Float64

Clang importer example:

  /// The measurement value, represented as a
  /// double-precision floating-point number.
  public var doubleValue: CDouble { get }
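
For context, that Policy alias is what feeds unannotated literals into a conforming type. A sketch using the current protocol name (it was FloatLiteralConvertible at the time of this thread):

  struct Celsius: ExpressibleByFloatLiteral {
    var degrees: Float64
    init(floatLiteral value: FloatLiteralType) {
      degrees = Float64(value)
    }
  }
  let comfy: Celsius = 21.5  // the literal arrives as FloatLiteralType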

Alternatives:

* IEEE 754 names: `Binary64` (or `Bin64`), etc.

* DEC64 <http://dec64.com> as the default number type!

-- Ben

At some danger of going off on a tangent, I should point out that DEC64 has very little to recommend it. It’s not much more efficient performance-wise than IEEE-754 decimal types and has significantly less exponent range (it effectively throws away almost three bits in order to have the exponent fit in a byte; 2**56 is ~7.2E16, which means that it can represent some, but not all, 17-digit significands; the effective working precision is 16 digits, which actually requires only ~53.15 bits). Even if you weren’t going to use those extra bits for exponent, they could be profitably used for other purposes. The fact that the DEC64 scheme allows one to use byte operations has only a tiny benefit, and really only on x86.
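
The arithmetic behind those figures, as a quick check (pow and log2 come via Foundation):

  import Foundation
  print(pow(2.0, 56))  // 7.205759403792794e+16: some, but not all, 17-digit values
  print(log2(1e16))    // ~53.15: bits needed for 16 decimal digits
  print(log2(1e17))    // ~56.47: a full 17 digits needs more than 56 bits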

– Steve

···

On Jun 20, 2016, at 7:22 AM, Ben Rimmington via swift-evolution <swift-evolution@swift.org> wrote:

DEC64 <http://dec64.com> as the default number type!

Fair enough, I just thought the idea of a single number type was interesting.

As for the proposal, there's a FIXME comment which suggests the original plan:

<https://github.com/apple/swift/blob/master/stdlib/public/core/Policy.swift>

  //===----------------------------------------------------------------------===//
  // Aliases for floating point types
  //===----------------------------------------------------------------------===//
  // FIXME: it should be the other way round, Float = Float32, Double = Float64,
  // but the type checker loses sugar currently, and ends up displaying 'FloatXX'
  // in diagnostics.
  /// A 32-bit floating point type.
  public typealias Float32 = Float
  /// A 64-bit floating point type.
  public typealias Float64 = Double

I think CFloat and CDouble are clearer, so Float and Double could be made unavailable:

  @available(*, unavailable, renamed: "Float32")
  public typealias Float = Float32

  @available(*, unavailable, renamed: "Float64")
  public typealias Double = Float64

Clang importer already seems to be using the correct names:

<https://github.com/apple/swift/blob/master/include/swift/ClangImporter/BuiltinMappedTypes.def>
<https://github.com/apple/swift/blob/master/lib/ClangImporter/MappedTypes.def>
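
Those mappings are what you see when calling into C. A small sketch (fabs/fabsf are C standard library functions re-exported via Foundation):

  import Foundation
  let x: CFloat = -1.5      // CFloat == Float, i.e. C's float
  let y: CDouble = -1.5     // CDouble == Double, i.e. C's double
  print(fabsf(x), fabs(y))  // 1.5 1.5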

-- Ben

···

On 20 Jun 2016, at 15:44, Stephen Canon <scanon@apple.com> wrote:

At some danger of going off on a tangent, I should point out that DEC64 has very little to recommend it. It’s not much more efficient performance-wise than IEEE-754 decimal types and has significantly less exponent range (it effectively throws away almost three bits in order to have the exponent fit in a byte; 2**56 is ~7.2E16, which means that it can represent some, but not all, 17-digit significands; the effective working precision is 16 digits, which actually requires only ~53.15 bits). Even if you weren’t going to use those extra bits for exponent, they could be profitably used for other purposes. The fact that the DEC64 scheme allows one to use byte operations has only a tiny benefit, and really only on x86.

FWIW, I think it is extremely unlikely that we would go away from Float/Double.

-Chris

···

On Jun 20, 2016, at 4:22 AM, Ben Rimmington via swift-evolution <swift-evolution@swift.org> wrote:

<http://thread.gmane.org/gmane.comp.lang.swift.evolution/2199/focus=18327>

I hope it's not too late to submit a proposal.

[stdlib/public/core/FloatingPointTypes.swift.gyb]

  public struct Float32: BinaryFloatingPoint
  public struct Float64: BinaryFloatingPoint
  public struct Float80: BinaryFloatingPoint