[Re-Review] SE-0104: Protocol-oriented integers

Free functions generally work with autocomplete; however, they have some
disadvantages:

  1. You can't easily find them when browsing with something like SwiftDoc
or looking at headers via control-click in Xcode
  2. Some of the compiler writers have commented that free functions slow
the compiler down, and presumably therefore Xcode as well

···

On Thu, 23 Feb 2017 at 6:02 am, Stephen Canon <scanon@apple.com> wrote:

On Feb 22, 2017, at 10:48 AM, David Sweeris via swift-evolution <swift-evolution@swift.org> wrote:

Eh, maybe… At least in Xcode, autocomplete works for free functions. I was
just thinking about how people who already know about “signum” would expect
it to work. Like if a mathematician sits down to write something in Swift,
are they more likely to try “signum(x)” or “x.signum” first?

Honestly, as a mathematician I think either one is fine.

We like free functions in mathematics. x.signum is (slightly?) Swiftier.
Six of one, half dozen of the other, either one will be completely
satisfactory.

– Steve

--
-- Howard.

I assume that Number being renamed Numeric implies SignedNumber being
renamed SignedNumeric?

···

On Sat, Feb 25, 2017 at 09:06 Ben Rimmington via swift-evolution < swift-evolution@swift.org> wrote:

<https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md>

> On 24 Feb 2017, at 02:05, Ben Cohen wrote:
>
> Regarding the “Number” protocol: it was felt this is likely to cause
confusion with NSNumber, given the NS-prefix-dropping versions of other
Foundation types indicate a value-typed concrete type equivalent, which
this won’t be. The recommendation is to use “Numeric” instead.

Does the `Error` protocol cause confusion with the `NSError` class?

I think `Number` is better than `Numeric`, because it is consistent with
`SignedNumber`.

Error and NSError are actually related types, like String and NSString, and are used in roughly the same way. Numeric and NSNumber serve very different purposes: the former is a protocol for various operations, and the latter is a semi-type-erased class box around a *cough* numeric value that (deliberately) doesn't implement most of the operations in the protocol.

Jordan

···

On Feb 25, 2017, at 07:06, Ben Rimmington via swift-evolution <swift-evolution@swift.org> wrote:

<https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md>

On 24 Feb 2017, at 02:05, Ben Cohen wrote:

Regarding the “Number” protocol: it was felt this is likely to cause confusion with NSNumber, given the NS-prefix-dropping versions of other Foundation types indicate a value-typed concrete type equivalent, which this won’t be. The recommendation is to use “Numeric” instead.

Does the `Error` protocol cause confusion with the `NSError` class?

I think `Number` is better than `Numeric`, because it is consistent with `SignedNumber`.

I've followed this for a long time and work a lot with number/big num related code, and I must say I'm really excited about the way this is turning out!

Popcount & leadingZeroBits
- Placement: What's the rationale behind placing popcount & clz on fixed width integer instead of BinaryInteger? It seems implementing these would be trivial also for BigInt types.

An arbitrary-sized signed -1 can have a different popcount depending on the underlying buffer length (if there is a buffer), even if it’s the same type and the same value. The same goes for leading zeros, as the underlying representation is unbounded on the more significant side.
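
For concreteness, a minimal illustration using today's standard-library spellings (`nonzeroBitCount` / `leadingZeroBitCount`), which correspond to the proposal's `popcount` / `leadingZeroBits`; the BigInt remark is the part that has no fixed-width analogue:

let a: Int8  = -1
let b: Int64 = -1
a.nonzeroBitCount                // 8
b.nonzeroBitCount                // 64 -- same value, different answer
(1 as Int8).leadingZeroBitCount  // 7
(1 as Int64).leadingZeroBitCount // 63
// An arbitrary-precision BigInt has no principled width to use here,
// so neither quantity has a canonical value for a negative number.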

- Naming: Why does popcount retain the term of art? Considering it's relatively obscure it would seem numberOfOneBits or something along those lines would be a more consistent choice.

The thinking was something like: people who know it know it by this name and will be a little annoyed; people who don’t know what it is, and simply want their Stack Overflow snippet to work, will not be able to identify it under any other name. So a non-term-of-art name would not really benefit anyone, other than our naming guidelines.

Also, arguably shouldn't it be numberOfLeadingZeroBits?

There is countLeadingZeroBits for that.

I'm very happy with the inclusion of exposing these instructions btw, I've run into them lacking more than once before!

What would you say about popcount being a protocol requirement? If you’ve needed it, was it in the generic context or on concrete types?

FullWidth & ReportingOverflow
That's pretty clever there with the trailing argument :). Do you know whether there is any technical reason why we couldn't support a trailing 'argument label' without an actual argument directly in the language? If not I might want to write up a proposal for that, because I've run into wanting this for a long time. E.g. delegate methods would be a very common case: tableView(_:numberOfSections) is a lot more consistent with all other delegate methods.

Joe recently sent an email on behalf of Dave to start this very discussion.

Division on Number?
The intro of the proposal puts division under Number, while the detailed design puts it under BinaryInteger, which is it?

It is one of the most recent changes: division is *not* in Number. It was moved down the hierarchy to BinaryInteger. I’ll fix the proposal. Thanks!

Max

···

On Feb 21, 2017, at 1:15 PM, Patrick Pijnappel <patrickpijnappel@gmail.com> wrote:

On Wed, Feb 22, 2017 at 7:39 AM, Max Moiseev via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Feb 18, 2017, at 12:02 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

I assume the “SignedNumber” protocol is the same as the existing one in the standard library. That is to say, Strideable.Stride will now conform to Number and have operators.

SignedNumber will *not* be the same. It is just the same name.
Stride will have operators, yes. Strideable in general will not, unless it’s a _Pointer. (you can find the current implementation prototype here <https://github.com/apple/swift/blob/new-integer-protocols/stdlib/public/core/Stride.swift.gyb>).

Also minor nitpick, would it be too onerous to require Number.Magnitude to be Comparable? Currently it’s only Equatable and ExpressibleByIntegerLiteral.

Magnitude is supposed to conform to Arithmetic (or Number, or whatever it ends up being called), but the recursive constraints feature is missing, therefore we constrained it with the protocols that Arithmetic itself refines.
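
A rough sketch of the shape being described, with assumed names (not the actual standard-library declarations):

// Intended, once recursive protocol constraints are supported:
//   protocol Arithmetic { associatedtype Magnitude: Arithmetic ... }
// Workaround in the prototype: constrain Magnitude to the protocols
// that Arithmetic itself refines.
protocol Arithmetic: Equatable, ExpressibleByIntegerLiteral {
  associatedtype Magnitude: Equatable, ExpressibleByIntegerLiteral
  var magnitude: Magnitude { get }
  static func + (lhs: Self, rhs: Self) -> Self
}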

Why would you want Comparable?

Max


Hmm, with respect to both endianness and Jordan's comments re popcount (and
by extension, trailing zeros and leading zeros), I think these go to the
dual purpose of `BinaryInteger` (and its refinement, `FixedWidthInteger`).
Both concede by their very name that they are modeling not merely a numeric
type but one that has a certain machine representation. Endian conversions
aren't numeric operations, but they are appropriately operations that can
be generic over fixed width integers. I'm not convinced (though I haven't
thought about it for too long yet) of what `LittleEndian<Int>` would gain
us.

···

On Tue, Feb 21, 2017 at 2:12 PM, John McCall via swift-evolution < swift-evolution@swift.org> wrote:

On Feb 21, 2017, at 3:08 PM, John McCall via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 21, 2017, at 2:15 PM, Dave Abrahams via swift-evolution <swift-evolution@swift.org> wrote:
Sent from my moss-covered three-handled family gradunza

On Feb 21, 2017, at 9:04 AM, Jordan Rose <jordan_rose@apple.com> wrote:

[Proposal: https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md]

Hi, Max (and Dave). I did have some questions about this revision:

Arithmetic and SignedArithmetic protocols have been renamed
to Number and SignedNumber.

What happens to NSNumber here? It feels like the same problem as Character
and (NS)CharacterSet.

Endian-converting initializers and properties were added to
the FixedWidthInteger protocol.

This is the thing I have the biggest problem with. Endian conversions
aren't numeric operations, and you can't meaningfully mix numbers of
different endianness. That implies to me that numbers with different
endianness should have different types. I think there's a design to explore
with LittleEndian<Int> and BigEndian<Int>, and explicitly using those types
whenever you need to convert.

I disagree. Nobody actually wants to compute with numbers in the wrong
endianness for the machine. This is just used for corrections at the ends
of wire protocols, where static type has no meaning.

I think Jordan's suggestion is not that LittleEndian<Int> or
BigEndian<Int> would be arithmetic types, but that they would be different
types, primarily opaque, that can be explicitly converted to/from Int.
When you read something off the wire, you ask for the bytes as one of those
two types (as appropriate) and then convert to the underlying type.
Ideally, Int doesn't even conform to the "this type can be read off the
wire" protocol, eliminating the common mistake of serializing something
using native endianness.

Of course, you would not want LittleEndian<Int> to be directly
serializable either, because it is not a fixed-size type; but I think the
underlying point stands.

John.

John.

Here's a sketch of such a thing:

struct LittleEndian<Value: FixedWidthInteger> {
  private var storage: Value

  // The value in native byte order.
  public var value: Value {
#if _endian(little)   // the stdlib's (underscored) endianness condition
    return storage
#else
    return storage.byteSwapped
#endif
  }

  // The raw storage, always in little-endian order.
  public var bitPattern: Value {
    return storage
  }

  // Assumes a sister BigEndian<Value> type with the same shape.
  public var asBigEndian: BigEndian<Value> {
    return BigEndian(value: self.value)
  }

  public init(value: Value) {
#if _endian(little)
    storage = value
#else
    storage = value.byteSwapped
#endif
  }

  public init(bitPattern: Value) {
    storage = bitPattern
  }
}

I'm not saying this is the *right* solution, just that I suspect adding
Self-producing properties that change endianness is the wrong one.
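
To make the intended usage concrete, a hypothetical round trip through the sketch above (assuming the sister BigEndian<Value> type exists so the whole thing compiles):

let original: UInt32 = 0xDEADBEEF
let le = LittleEndian(value: original)
// `bitPattern` is the raw storage; written to memory in native byte order
// it is the little-endian encoding of `original` on any host.
let restored = LittleEndian(bitPattern: le.bitPattern).value
assert(restored == original)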

  /// The number of bits equal to 1 in this value's binary representation.
  ///
  /// For example, in a fixed-width integer type with a `bitWidth` value
of 8,
  /// the number 31 has five bits equal to 1.
  ///
  /// let x: Int8 = 0b0001_1111
  /// // x == 31
  /// // x.popcount == 5
  var popcount: Int { get }

Is this property actually useful enough to put into a protocol? I know
it's defaulted, but it's already an esoteric operation; it seems unlikely
that one would need it in a generic context. (It's also definable for
arbitrary UnsignedIntegers as well as arbitrary FixedWidthIntegers.)

The whole point is that you want to dispatch down to an LLVM instruction
for this and not rely on the optimizer to collapse your loop into one.

(I'm also still not happy with the non-Swifty name, but I see
"populationCount" or "numberOfOneBits" would probably be worse.)

Thanks in advance,
Jordan


I assume the “SignedNumber” protocol is the same as the existing one in the standard library. That is to say, Strideable.Stride will now conform to Number and have operators.

SignedNumber will *not* be the same. It is just the same name.
Stride will have operators, yes. Strideable in general will not, unless it’s a _Pointer. (you can find the current implementation prototype here <https://github.com/apple/swift/blob/new-integer-protocols/stdlib/public/core/Stride.swift.gyb>).

Currently, it’s difficult to work with Strideable because you can calculate distances between them (as type Strideable.Stride), but you can’t add those distances because Strideable.Stride is only constrained to conform to “SignedNumber”, which is a pretty useless protocol.

In the prototype, Strideable.Stride has now been changed, so it is constrained to “SignedArithmetic” (which, apparently, is to be renamed “SignedNumber”). So essentially, the existing SignedNumber has been re-parented to “Number” and gains operations such as addition. That’s great!

Ah.. I see what you mean. Thanks for the explanation. I think we usually just used Bound : Strideable, Bound.Stride : SignedInteger, but yes, you’re right, it will be simpler now.
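
For illustration, a small generic helper written against that older constraint spelling; the function is made up for the example:

func midpoint<Bound>(_ lo: Bound, _ hi: Bound) -> Bound
    where Bound: Strideable, Bound.Stride: SignedInteger {
  // distance(to:) yields a Bound.Stride; SignedInteger gives us the
  // arithmetic to halve it before feeding it back into advanced(by:).
  return lo.advanced(by: lo.distance(to: hi) / 2)
}

midpoint(10, 20)         // 15
midpoint(Int8(0), 100)   // 50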

I think it would be worth including Strideable in the “big picture” in the proposal/manifesto, showing how it fits in..

That can be a valuable addition, indeed. Thanks!

···

On Feb 21, 2017, at 2:04 PM, Karl Wagner <razielim@gmail.com> wrote:

On 21 Feb 2017, at 21:39, Max Moiseev <moiseev@apple.com <mailto:moiseev@apple.com>> wrote:

On Feb 18, 2017, at 12:02 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

Also minor nitpick, would it be too onerous to require Number.Magnitude to be Comparable? Currently it’s only Equatable and ExpressibleByIntegerLiteral.

Magnitude is supposed to conform to Arithmetic (or Number, or whatever it ends up being called), but the recursive constraints feature is missing, therefore we constrained it with the protocols that Arithmetic itself refines.

Why would you want Comparable?

Max

I suppose that leads me on to the question of why Number itself only requires that conformers be (Equatable & ExpressibleByIntegerLiteral) and does not require that they be Comparable.

If I must be able to create any “Number” out of thin air with an integer literal, is it not reasonable to also require that I am able to compare two instances?

Are there Number types which can’t be Comparable?

- Karl

Hmm, with respect to both endianness and Jordan's comments re popcount (and by extension, trailing zeros and leading zeros), I think these go to the dual purpose of `BinaryInteger` (and its refinement, `FixedWidthInteger`). Both concede by their very name that they are modeling not merely a numeric type but one that has a certain machine representation. Endian conversions aren't numeric operations, but they are appropriately operations that can be generic over fixed width integers. I'm not convinced (though I haven't thought about it for too long yet) of what `LittleEndian<Int>` would gain us.

It isolates non-native and native integer representations so that you can't easily mix them. It's an extremely common bug in serialization code to forget to byte-swap integers, and we should not naively introduce that bug into Swift by encouraging programmers to use the same type to represent them. The easiest way to achieve that is to try to prevent programmers from ever creating an Int32 whose bytes aren't in native order in the first place. That's not always possible — notably, C doesn't distinguish these in the type system, and there are C APIs that expect to be given a big-endian number — but it's easy enough for BigEndian to have an "init(bitPattern: T)" and a "var bitPattern: T" to cover for cases like that.

John.

···

On Feb 21, 2017, at 3:27 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Tue, Feb 21, 2017 at 2:12 PM, John McCall via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Feb 21, 2017, at 3:08 PM, John McCall via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Feb 21, 2017, at 2:15 PM, Dave Abrahams via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:
Sent from my moss-covered three-handled family gradunza

On Feb 21, 2017, at 9:04 AM, Jordan Rose <jordan_rose@apple.com <mailto:jordan_rose@apple.com>> wrote:

[Proposal: https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md]

Hi, Max (and Dave). I did have some questions about this revision:

Arithmetic and SignedArithmetic protocols have been renamed to Number and SignedNumber.

What happens to NSNumber here? It feels like the same problem as Character and (NS)CharacterSet.

Endian-converting initializers and properties were added to the FixedWidthInteger protocol.

This is the thing I have the biggest problem with. Endian conversions aren't numeric operations, and you can't meaningfully mix numbers of different endianness. That implies to me that numbers with different endianness should have different types. I think there's a design to explore with LittleEndian<Int> and BigEndian<Int>, and explicitly using those types whenever you need to convert.

I disagree. Nobody actually wants to compute with numbers in the wrong endianness for the machine. This is just used for corrections at the ends of wire protocols, where static type has no meaning.

I think Jordan's suggestion is not that LittleEndian<Int> or BigEndian<Int> would be arithmetic types, but that they would be different types, primarily opaque, that can be explicitly converted to/from Int. When you read something off the wire, you ask for the bytes as one of those two types (as appropriate) and then convert to the underlying type. Ideally, Int doesn't even conform to the "this type can be read off the wire" protocol, eliminating the common mistake of serializing something using native endianness.

Of course, you would not want LittleEndian<Int> to be directly serializable either, because it is not a fixed-size type; but I think the underlying point stands.

John.

John.

Here's a sketch of such a thing:

struct LittleEndian<Value: FixedWidthInteger> {
  private var storage: Value

  // The value in native byte order.
  public var value: Value {
#if _endian(little)   // the stdlib's (underscored) endianness condition
    return storage
#else
    return storage.byteSwapped
#endif
  }

  // The raw storage, always in little-endian order.
  public var bitPattern: Value {
    return storage
  }

  // Assumes a sister BigEndian<Value> type with the same shape.
  public var asBigEndian: BigEndian<Value> {
    return BigEndian(value: self.value)
  }

  public init(value: Value) {
#if _endian(little)
    storage = value
#else
    storage = value.byteSwapped
#endif
  }

  public init(bitPattern: Value) {
    storage = bitPattern
  }
}

I'm not saying this is the right solution, just that I suspect adding Self-producing properties that change endianness is the wrong one.

  /// The number of bits equal to 1 in this value's binary representation.
  ///
  /// For example, in a fixed-width integer type with a `bitWidth` value of 8,
  /// the number 31 has five bits equal to 1.
  ///
  /// let x: Int8 = 0b0001_1111
  /// // x == 31
  /// // x.popcount == 5
  var popcount: Int { get }

Is this property actually useful enough to put into a protocol? I know it's defaulted, but it's already an esoteric operation; it seems unlikely that one would need it in a generic context. (It's also definable for arbitrary UnsignedIntegers as well as arbitrary FixedWidthIntegers.)

The whole point is that you want to dispatch down to an LLVM instruction for this and not rely on the optimizer to collapse your loop into one.

(I'm also still not happy with the non-Swifty name, but I see "populationCount" or "numberOfOneBits" would probably be worse.)

Thanks in advance,
Jordan


I've followed this for a long time and work a lot with number/big num
related code, and I must say I'm really excited about the way this is
turning out!

*Popcount & leadingZeroBits*
- *Placement:* What's the rationale behind placing popcount & clz on
fixed width integer instead of BinaryInteger? It seems implementing these
would be trivial also for BigInt types.

An arbitrary-sized signed -1 can have a different popcount depending on the
underlying buffer length (if there is a buffer), even if it’s the same type
and the same value. The same goes for leading zeros, as the underlying
representation is unbounded on the more significant side.

- *Naming:* Why does popcount retain the term of art? Considering it's
relatively obscure it would seem numberOfOneBits or something along those
lines would be a more consistent choice.

The thinking was something like: people who know it know it by this name
and will be a little annoyed; people who don’t know what it is, and simply
want their Stack Overflow snippet to work, will not be able to identify it
under any other name. So a non-term-of-art name would not really benefit
anyone, other than our naming guidelines.

Also, arguably shouldn't it be numberOfLeadingZeroBits?

There is countLeadingZeroBits for that.

I'm very happy with the inclusion of exposing these instructions btw, I've
run into them lacking more than once before!

What would you say about popcount being a protocol requirement? If you’ve
needed it, was it in the generic context or on concrete types?

So I for one have needed it. The question Jordan asked was interesting,
because on reflection I've only needed it so far on concrete types. That
said, in part that must be chalked up to integer generics not being really
usable in the past.

On balance, MHO is that it's worthwhile to make it a protocol requirement
(defaulted of course). It makes sense that all fixed-width integers have a
popcount (and leadingZeroBits), just as they all have a bitWidth. Now, some
fixed-width integers might not have a very optimized way of computing that
popcount, but still. It is not out of the question that one would want to
do some work based on the bits in a generic integer that has a fixed number
of bits.
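
As one data point, here is the kind of generic use that comes up, written with today's stdlib spelling `nonzeroBitCount` standing in for the proposal's `popcount`:

// Hamming distance between two values of any fixed-width integer type.
func hammingDistance<T: FixedWidthInteger>(_ a: T, _ b: T) -> Int {
  return (a ^ b).nonzeroBitCount
}

hammingDistance(0b1010 as UInt8, 0b0110)  // 2
hammingDistance(-1 as Int64, 0)           // 64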

*FullWidth & ReportingOverflow*

···

On Tue, Feb 21, 2017 at 3:27 PM, Max Moiseev via swift-evolution < swift-evolution@swift.org> wrote:

On Feb 21, 2017, at 1:15 PM, Patrick Pijnappel <patrickpijnappel@gmail.com> wrote:
That's pretty clever there with the trailing argument :). Do you know
whether there is any technical reason why we couldn't support a trailing
'argument label' without actual argument directly in the language? If not I
might want to write up a proposal for that, because I've run into wanting
this for a long time. E.g. delegate methods would be a very common case:
tableView(_:numberOfSections) is a lot more consistent with all other
delegate methods.

Joe recently sent an email on behalf of Dave to start this very discussion.

*Division on Number?*
The intro of the proposal puts division under Number, while the detailed
design puts it under BinaryInteger, which is it?

It is one of the most recent changes: division is *not* in Number. It
was moved down the hierarchy to BinaryInteger. I’ll fix the proposal.
Thanks!

Max

On Wed, Feb 22, 2017 at 7:39 AM, Max Moiseev via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 18, 2017, at 12:02 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org> wrote:

I assume the “SignedNumber” protocol is the same as the existing one in
the standard library. That is to say, Strideable.Stride will now conform to
Number and have operators.

SignedNumber will *not* be the same. It is just the same name.
Stride will have operators, yes. Strideable in general will not, unless
it’s a _Pointer. (you can find the current implementation prototype here
<https://github.com/apple/swift/blob/new-integer-protocols/stdlib/public/core/Stride.swift.gyb>).

Also minor nitpick, would it be too onerous to require Number.Magnitude
to be Comparable? Currently it’s only Equatable and
ExpressibleByIntegerLiteral.

Magnitude is supposed to conform to Arithmetic (or Number, or whatever it
ends up being called), but the recursive constraints feature is missing,
therefore we constrained it with the protocols that Arithmetic itself
refines.

Why would you want Comparable?

Max


I assume the “SignedNumber” protocol is the same as the existing one in
the standard library. That is to say, Strideable.Stride will now conform to
Number and have operators.

SignedNumber will *not* be the same. It is just the same name.
Stride will have operators, yes. Strideable in general will not, unless
it’s a _Pointer. (you can find the current implementation prototype here
<https://github.com/apple/swift/blob/new-integer-protocols/stdlib/public/core/Stride.swift.gyb>).

Currently, it’s difficult to work with Strideable because you can
calculate distances between them (as type Strideable.Stride), but you can’t
add those distances because Strideable.Stride is only constrained to
conform to “SignedNumber”, which is a pretty useless protocol.

In the prototype, Strideable.Stride has now been changed, so it is
constrained to “SignedArithmetic” (which, apparently, is to be renamed
“SignedNumber”). So essentially, the existing SignedNumber has been
re-parented to “Number” and gains operations such as addition. That’s great!

I think it would be worth including Strideable in the “big picture” in the
proposal/manifesto, showing how it fits in..

Also minor nitpick, would it be too onerous to require Number.Magnitude to
be Comparable? Currently it’s only Equatable and
ExpressibleByIntegerLiteral.

Magnitude is supposed to conform to Arithmetic (or Number, or whatever it
ends up being called), but the recursive constraints feature is missing,
therefore we constrained it with the protocols that Arithmetic itself
refines.

Why would you want Comparable?

Max

I suppose that leads me on to the question of why Number itself only
requires that conformers be (Equatable & ExpressibleByIntegerLiteral) and
does not require that they be Comparable.

If I must be able to create any “Number” out of thin air with an integer
literal, is it not reasonable to also require that I am able to compare two
instances?

Are there Number types which can’t be Comparable?

Complex numbers. I believe `Number` is designed to allow a complex number
type to conform.
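
A toy sketch of why: a complex type can satisfy Equatable and ExpressibleByIntegerLiteral and supply +, *, and a magnitude, but admits no ordering consistent with its arithmetic. The names here are illustrative, not the proposal's:

struct Complex: Equatable, ExpressibleByIntegerLiteral {
  var real, imaginary: Double

  init(real: Double, imaginary: Double = 0) {
    self.real = real
    self.imaginary = imaginary
  }
  init(integerLiteral value: Int) { self.init(real: Double(value)) }

  static func + (lhs: Complex, rhs: Complex) -> Complex {
    return Complex(real: lhs.real + rhs.real,
                   imaginary: lhs.imaginary + rhs.imaginary)
  }
  static func * (lhs: Complex, rhs: Complex) -> Complex {
    return Complex(real: lhs.real * rhs.real - lhs.imaginary * rhs.imaginary,
                   imaginary: lhs.real * rhs.imaginary + lhs.imaginary * rhs.real)
  }

  // `i` and `1` have the same magnitude but are distinct, so no total `<`
  // can respect both equality and multiplication here.
  var magnitude: Double {
    return (real * real + imaginary * imaginary).squareRoot()
  }
}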

···

On Tue, Feb 21, 2017 at 4:04 PM, Karl Wagner via swift-evolution < swift-evolution@swift.org> wrote:

On 21 Feb 2017, at 21:39, Max Moiseev <moiseev@apple.com> wrote:
On Feb 18, 2017, at 12:02 PM, Karl Wagner via swift-evolution <swift-evolution@swift.org> wrote:

- Karl


An arbitrary-sized signed -1 can have a different popcount depending on the
underlying buffer length (if there is a buffer), even if it’s the same type
and the same value. The same goes for leading zeros, as the underlying
representation is unbounded on the more significant side.

Sensible, why didn't I think of that :).

Also, arguably shouldn't it be numberOfLeadingZeroBits?

There is countLeadingZeroBits for that.

I mean whether leadingZeroBits should be named numberOfLeadingZeroBits; I
don't see any countLeadingZeroBits.

Joe recently sent an email on behalf of Dave to start this very discussion.

Yeah, that's the one I was responding to! I was talking more concretely
about language support.

···

On Wed, Feb 22, 2017 at 10:18 AM, John McCall via swift-evolution < swift-evolution@swift.org> wrote:

On Feb 21, 2017, at 5:39 PM, Dave Abrahams <dabrahams@apple.com> wrote:
Sent from my moss-covered three-handled family gradunza

On Feb 21, 2017, at 10:08 AM, John McCall <rjmccall@apple.com> wrote:

On Feb 21, 2017, at 2:15 PM, Dave Abrahams via swift-evolution <swift-evolution@swift.org> wrote:

Sent from my moss-covered three-handled family gradunza

On Feb 21, 2017, at 9:04 AM, Jordan Rose <jordan_rose@apple.com> wrote:

[Proposal: https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md]

Hi, Max (and Dave). I did have some questions about this revision:

Arithmetic and SignedArithmetic protocols have been renamed
to Number and SignedNumber.

What happens to NSNumber here? It feels like the same problem as Character
and (NS)CharacterSet.

Endian-converting initializers and properties were added to
the FixedWidthInteger protocol.

This is the thing I have the biggest problem with. Endian conversions
aren't numeric operations, and you can't meaningfully mix numbers of
different endianness. That implies to me that numbers with different
endianness should have different types. I think there's a design to explore
with LittleEndian<Int> and BigEndian<Int>, and explicitly using those types
whenever you need to convert.

I disagree. Nobody actually wants to compute with numbers in the wrong
endianness for the machine. This is just used for corrections at the ends
of wire protocols, where static type has no meaning.

I think Jordan's suggestion is not that LittleEndian<Int> or
BigEndian<Int> would be arithmetic types, but that they would be different
types, primarily opaque, that can be explicitly converted to/from Int.
When you read something off the wire, you ask for the bytes as one of those
two types (as appropriate) and then convert to the underlying type.
Ideally, Int doesn't even conform to the "this type can be read off the
wire" protocol, eliminating the common mistake of serializing something
using native endianness.

Still, you have to implement those somehow. How do you do that without
this functionality in the Integer API? Turtles have to stop somewhere. We
could de-emphasize these APIs by making them static, but "x.yyyy" already
*is* a place of reduced emphasis for integers.

That's fair. I don't object to having methods for these as long as they
aren't the encouraged way of working with byte-swapped integers.

John.

John.

Here's a sketch of such a thing:

struct LittleEndian<Value: FixedWidthInteger> {
  private var storage: Value

  // The value in native byte order.
  public var value: Value {
#if _endian(little)   // the stdlib's (underscored) endianness condition
    return storage
#else
    return storage.byteSwapped
#endif
  }

  // The raw storage, always in little-endian order.
  public var bitPattern: Value {
    return storage
  }

  // Assumes a sister BigEndian<Value> type with the same shape.
  public var asBigEndian: BigEndian<Value> {
    return BigEndian(value: self.value)
  }

  public init(value: Value) {
#if _endian(little)
    storage = value
#else
    storage = value.byteSwapped
#endif
  }

  public init(bitPattern: Value) {
    storage = bitPattern
  }
}

I'm not saying this is the *right* solution, just that I suspect adding
Self-producing properties that change endianness is the wrong one.

  /// The number of bits equal to 1 in this value's binary representation.
  ///
  /// For example, in a fixed-width integer type with a `bitWidth` value
of 8,
  /// the number 31 has five bits equal to 1.
  ///
  /// let x: Int8 = 0b0001_1111
  /// // x == 31
  /// // x.popcount == 5
  var popcount: Int { get }

Is this property actually useful enough to put into a protocol? I know
it's defaulted, but it's already an esoteric operation; it seems unlikely
that one would need it in a generic context. (It's also definable for
arbitrary UnsignedIntegers as well as arbitrary FixedWidthIntegers.)

The whole point is that you want to dispatch down to an LLVM instruction
for this and not rely on the optimizer to collapse your loop into one.

(I'm also still not happy with the non-Swifty name, but I see
"populationCount" or "numberOfOneBits" would probably be worse.)

Thanks in advance,
Jordan


There is another option to avoid the extra types, which is to stop trying to force the disambiguation through argument labels, accept an ambiguous call and have the context disambiguate:

// compilation error: ambiguous
let x = i.multiplied(by: j)
// unambiguously overflow-checked
let (y,overflow) = i.multiplied(by: j)
// unambiguously full-width
let z: DoubleWidth = i.multiplied(by: j)

Ambiguity is bad when you want to distinguish between the “usual one” versus other more specialized versions. So if you really had a regular trapping `adding`, but then also wanted to accommodate the overflow-reporting version when a user explicitly requests it, then the argument label is a clear win. This is a slightly bogus example though, because we explicitly don’t have things like `adding`, we have a `static +` instead. Where the disambiguation is needed is instead between two less common variants as described above.

It’d need a new language feature to support it, but what about having “default” resolutions for overloaded functions?
    default func multiplied(by other: Self) -> Self // `default` means try resolving ambiguities with this version first. The overloaded versions are only considered if the type-checker can’t make this version work.

This feature is not strictly required in this case, as we moved away from using `multiplied` of type (Self, Self) -> Self to using a proper `static func +`. So the ambiguity will *not* happen in the most common case, when you want to multiply two numbers of some type and get a result of the same type. Ambiguity will only become a problem in what I believe to be a much less frequent case, when you want to do something special, like catching the overflow explicitly or getting the full result in the form of DoubleWidth<T>.

···

On Feb 22, 2017, at 8:47 AM, David Sweeris via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 22, 2017, at 8:01 AM, Ben Cohen via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

    func multiplied(by other: Self) -> (partialValue: Self, overflow: ArithmeticOverflow)
    func multiplied(by other: Self) -> DoubleWidth<Self>

// signature matches default implementation, use that
let x = i.multiplied(by: j)
// default version doesn’t return a tuple, so try the overloads… matches the overflow-checked function
let (y,overflow) = i.multiplied(by: j)
// default version doesn’t return a DoubleWidth, so try the overloads… matches the double-width function
let z: DoubleWidth = i.multiplied(by: j)

- Dave Sweeris

I assume that Number being renamed Numeric implies SignedNumber being renamed SignedNumeric?

That is correct.

···

On Feb 25, 2017, at 8:28 AM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

On Sat, Feb 25, 2017 at 09:06 Ben Rimmington via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:
<https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md>

> On 24 Feb 2017, at 02:05, Ben Cohen wrote:
>
> Regarding the “Number” protocol: it was felt this is likely to cause confusion with NSNumber, given the NS-prefix-dropping versions of other Foundation types indicate a value-typed concrete type equivalent, which this won’t be. The recommendation is to use “Numeric” instead.

Does the `Error` protocol cause confusion with the `NSError` class?

I think `Number` is better than `Numeric`, because it is consistent with `SignedNumber`.

However, without doing anything, Numeric itself could serve the same type-erased box purpose. Since most arithmetic operations require `Self` arguments, they naturally wouldn't be available on the Numeric existential (unless someone designed and implemented an extension that handled up-conversion among arbitrary numeric types in the vein of Scheme's number tower or something like that). This seems like one place where that behavior would be desirable.

-Joe

···

On Feb 27, 2017, at 11:05 AM, Jordan Rose via swift-evolution <swift-evolution@swift.org> wrote:

On Feb 25, 2017, at 07:06, Ben Rimmington via swift-evolution <swift-evolution@swift.org> wrote:

<https://github.com/apple/swift-evolution/blob/master/proposals/0104-improved-integers.md>

On 24 Feb 2017, at 02:05, Ben Cohen wrote:

Regarding the “Number” protocol: it was felt this is likely to cause confusion with NSNumber, given the NS-prefix-dropping versions of other Foundation types indicate a value-typed concrete type equivalent, which this won’t be. The recommendation is to use “Numeric” instead.

Does the `Error` protocol cause confusion with the `NSError` class?

I think `Number` is better than `Numeric`, because it is consistent with `SignedNumber`.

Error and NSError are actually related types, like String and NSString, and are used in roughly the same way. Numeric and NSNumber serve very different purposes: the former is a protocol for various operations, and the latter is a semi-type-erased class box around a *cough* numeric value that (deliberately) doesn't implement most of the operations in the protocol.

True, but I think the only reason we're not using operators for the other `multiplied` variants is that they need the third parameter. So we could instead specify:

  default static func * (lhs: Self, rhs: Self) -> Self
  static func * (lhs: Self, rhs: Self) -> (partialValue: Self, overflow: ArithmeticOverflow)
  static func * (lhs: Self, rhs: Self) -> DoubleWidth<Self>

And not have to worry about these names at all.
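
Return-type-based resolution of this kind does already work in Swift when the context pins the type, even without the hypothetical `default` keyword; a toy demonstration with a plain function rather than the proposed operators:

func combine(_ a: Int, _ b: Int) -> Int { return a &+ b }
func combine(_ a: Int, _ b: Int) -> (partialValue: Int, overflow: Bool) {
  return a.addingReportingOverflow(b)
}

let sum: Int = combine(1, 2)                   // picks the Int overload
let (partial, overflow) = combine(Int.max, 1)  // picks the tuple overload
// A bare `combine(1, 2)` with no context is ambiguous, which is exactly
// the trade-off being discussed.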

(On the other hand, the reversed DoubleWidth division would still be a problem. But isn't there a proposal in the works to add `\` to the operator characters?)

···

On Feb 22, 2017, at 10:55 AM, Max Moiseev via swift-evolution <swift-evolution@swift.org> wrote:

    default func multiplied(by other: Self) -> Self // `default` means try resolving ambiguities with this version first. The overloaded versions are only considered if the type-checker can’t make this version work.

This feature is not strictly required in this case, as we moved away from using `multiplied` of type (Self, Self) -> Self to using a proper `static func +`. So the ambiguity will *not* happen in the most common case, when you want to multiply two numbers of some type and get a result of the same type. Ambiguity will only become a problem in what I believe to be a much less frequent case, when you want to do something special, like catching the overflow explicitly or getting the full result in the form of DoubleWidth<T>.

--
Brent Royal-Gordon
Architechies