BTW, I've been investigating other Decimal number implementations in Swift, and they seem to be successful (I think) in converting from floating-point literals. I'm going to have a closer look at their implementations -- perhaps they just don't know that what they're doing is not possible.
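For context, the reason this is usually considered impossible: an ExpressibleByFloatLiteral initializer is handed the nearest binary Double (the default FloatLiteralType), never the source text of the literal. A quick plain-Swift illustration, nothing package-specific:

import Foundation

// A float literal reaches an initializer as the nearest binary Double,
// not as the characters "0.1", so the exact decimal 0.1 is already gone.
let d: Double = 0.1
print(d)                           // "0.1" (the shortest round-tripping string)
print(String(format: "%.20f", d))  // 0.10000000000000000555... (the actual stored value)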
Here are some of the test cases that appear to be passing:
let values: [(String, Float)] = [
("1.0", 1.0),
("0.5", 0.5),
("0.25", 0.25),
("50.", 50.0),
("50000", 50000.0),
("0.001", 0.001),
("12.34", 12.34),
("0.15625", 5.0 * 0.03125),
("3.1415925", Float.pi),
("31415.926", Float.pi * 10000.0),
("94247.77", Float.pi * 30000.0)
And a similar set with Double:
let values: [(String, Double)] = [
("1.0", 1.0),
("0.5", 0.5),
("50", 50.0),
("50000", 50000.0),
("1e-3", 0.001),
("0.25", 0.25),
("12.34", 12.34),
("0.15625", 5.0 * 0.03125),
("0.3333333333333333", 1.0 / 3.0),
("3.141592653589793", Double.pi),
("31415.926535897932", Double.pi * 10000.0),
("94247.7796076938", Double.pi * 30000.0)
Can someone tell me a test case that would fail? Here are more "impossible" test cases that pass:
let values: [BigDecimal] = [
2.5,
0.3,
0.001,
]
let expectedSum = BigDecimal(2.801)
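The sum case boils down to ordinary + and == on BigDecimal (which I'm assuming any arithmetic decimal type provides), roughly:

// Hedged sketch: if the float literals arrive as the decimal values they
// were written as, 2.5 + 0.3 + 0.001 should come out as exactly 2.801.
let sum = values.dropFirst().reduce(values[0], +)
assert(sum == expectedSum, "\(sum) is not \(expectedSum)")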