The current protocols ExpressibleByIntegerLiteral and
ExpressibleByFloatLiteral are simple and work well, but they don't support
arbitrary-precision literal values. Replacing those protocols is a
non-goal, as they provide a simple interface that works well for most cases.
Honestly, I don't think I agree with this. I see no particular reason to
like our current protocols; they break down as soon as your type gets
larger than the largest standard library integer/float type, which
undermines one of their main use cases.
Right. I think the existing ones should, if at all possible, be revised to
support arbitrary precision literal values--or at least very, very large
precision literal values (as it can be argued that, even for a BigInt, the
ability to specify a value of arbitrarily many digits as a _literal_ would
be rarely used).
I've been toying with a different approach in my head for a few weeks. The
`BinaryInteger` protocol contains the concept of a `words` collection,
which expresses any integer type as a collection of `UInt`s containing a
signed two's-complement representation of the integer. That means any
`BinaryInteger` already contains code to handle a `words` collection. If we
exposed that more directly in some way, then `ExpressibleByIntegerLiteral`
could leverage that conformance.
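To illustrate what `words` already buys us, here's a quick demonstration
(assuming the SE-0104 design; output is for a 64-bit platform):

let x: Int64 = -2
for word in x.words {
    print(String(word, radix: 16))
}
// Prints "fffffffffffffffe": the two's-complement bit pattern of -2,
// least-significant word first.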
One approach would be to extract the `words` collection into a
higher-level protocol:
protocol BinaryIntegerSource {
    associatedtype Words: Collection where Iterator.Element == UInt
    var words: Words { get }
}
Then we could make `BinaryInteger` refine it and accept any such source:
protocol BinaryInteger: BinaryIntegerSource {
    ...
    init<T : BinaryIntegerSource>(_ source: T)
    ...
}
And introduce a new `IntegerLiteral` type which is a
`BinaryIntegerSource`, but not a `BinaryInteger` (so you can't do
arithmetic with it):
struct IntegerLiteral: BinaryIntegerSource {
    typealias Words = …
    var words: Words { … }
}
And now, you can say something like:
struct Int128: ExpressibleByIntegerLiteral {
    fileprivate var _value: DoubleWidth<Int64>

    init(integerLiteral value: IntegerLiteral) {
        _value = DoubleWidth(value)
    }
}
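For instance, you could then write the following, where the digits are
2^127 - 1 (`Int128.max`), a value no `Int`-backed literal type can carry
today:

let x: Int128 = 170141183460469231731687303715884105727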
And everything ought to do what it's supposed to. You could still use a
different type if you didn't need anything larger than, say, `Int`. I don't
believe this would require any changes to the compiler; `IntegerLiteral`
could conform to `_ExpressibleByBuiltinIntegerLiteral`, which would allow
it to represent integers up to the current limit of 1024 bits + 1 sign bit.
(There are a few similar approaches we could take, like exposing an
`init(words:)` constructor in `BinaryInteger` and having the
`IntegerLiteral` behave as a `Words` collection, but all of them basically
involve bootstrapping into `BinaryInteger` through the `Words` type.)
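To make that last alternative concrete, it might look roughly like this
(the exact constraint spelling is mine):

protocol BinaryInteger /* ... */ {
    // Words are UInts, least-significant first, two's-complement.
    init<W: Collection>(words: W) where W.Iterator.Element == UInt
}

// IntegerLiteral would itself behave as a Collection of UInt:
extension Int128 {
    init(integerLiteral value: IntegerLiteral) {
        self.init(words: value)
    }
}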
I *think* that the not-yet-implemented `BinaryFloatingPoint.init<
Source: BinaryFloatingPoint>(_ value: Source)` initializers could be
leveraged in a similar way—create a `BinaryFloatingPointSource` protocol
and a `BinaryFloatLiteral` type that conforms to it—but I'm less certain of
that because I don't really understand how this universal float conversion
is supposed to work. Plus, the universal float conversion is still just a
TODO comment right now.
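If I had to guess at a shape, it might be something like the following,
though again this is pure speculation on my part (every name is invented):

protocol BinaryFloatingPointSource {
    associatedtype SignificandWords: Collection where Iterator.Element == UInt
    var sign: FloatingPointSign { get }
    var exponent: Int { get }
    var significandWords: SignificandWords { get }
}

struct BinaryFloatLiteral: BinaryFloatingPointSource {
    // Filled in by the compiler from the literal's source text.
    …
}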
Hmm, I wonder if less is more.
First, we will soon have DoubleWidth types in the stdlib, which I would
hope means that they will be usable out of the box as integer literal
types. This would cover a lot of use cases, I'd imagine, as it would
trivially get you 128-bit and 256-bit types. Though they might be less
efficient for arithmetic, they should be perfectly suitable for
initializing a value from a literal.
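That is, something like this ought to fall out for free (assuming
DoubleWidth ships as proposed and is blessed as a literal type; `BigNum` is
just a stand-in):

typealias Int128 = DoubleWidth<Int64>
typealias Int256 = DoubleWidth<Int128>

struct BigNum: ExpressibleByIntegerLiteral {
    var storage: Int256
    init(integerLiteral value: Int256) { storage = value }
}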
For larger than 256 bits, could we not recover almost all of the benefits
by exposing Int2048 as an integer literal type? Given that Float80 has not
been abused, I don't think it's the case that offering Int2048 means people
will reach for it to do arithmetic when a smaller type will do.
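Concretely, and purely hypothetically, a BigInt author could then write:

// Both `BigInt` and `Int2048` are hypothetical types here.
extension BigInt: ExpressibleByIntegerLiteral {
    public init(integerLiteral value: Int2048) {
        self.init(value)  // ordinary BinaryInteger-to-BigInt conversion
    }
}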
(This would leave non-binary floats in the lurch, but we're pretty much
doing that already—try initializing `Decimal` through its
`ExpressibleByFloatLiteral` conformance sometime and you'll see what I
mean. I would support changing its name to
`ExpressibleByBinaryFloatLiteral`.)
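For anyone who hasn't tried it, here's the sort of thing I mean (the exact
digits printed depend on the platform's Double-to-Decimal conversion):

import Foundation

let viaLiteral: Decimal = 3.133            // routes through Double first
let viaString = Decimal(string: "3.133")!  // parses the decimal text exactly

print(viaLiteral)  // 3.132999999999999488 (or similar round-off)
print(viaString)   // 3.133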
This is actually one of the greatest deficiencies I see in
`ExpressibleByFloatLiteral`. Here, I would disagree with you and say that
I'd like to see `ExpressibleByFloatLiteral` improved precisely because of
its very poor functionality for `Decimal`. IMO, even if one argues that
floating-point literals are only meant to work in binary, so that the
current design is technically "correct," it's still unfortunate and
unjustifiable that `0.1` doesn't mean 0.1.
Honestly, I'd even prefer to allow `String` as a floating point literal
type (because after all that's what Decimal is really doing under the hood)
than to just give up on this aspect of float literals altogether. I don't
have a good answer here, but I would hate to see us lose this opportunity
to fix the deficiency for real.
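Sketching that out, with every name here invented (today's builtin literal
protocols don't permit anything like this):

import Foundation

// Hypothetical: the compiler hands the literal's source text to the type.
protocol ExpressibleByNumericStringLiteral {
    init(numericStringLiteral value: String)
}

extension Decimal: ExpressibleByNumericStringLiteral {
    init(numericStringLiteral value: String) {
        self = Decimal(string: value)!  // exact decimal parse; 0.1 means 0.1
    }
}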
These leave our current integer and floating-point literal size limits
(1025-bit signed integers and 80-bit floats) in place, but those are
implementation details and could be changed. In practice, I very much hope
the compiler will try to optimize initialization from literals aggressively.
--
Brent Royal-Gordon
Architechies