I think Java used to (until around 2012 or thereabouts) implement `String` like our `Substring` type. That is, a string held a character buffer combined with an offset and a length. Calling `.substring` on a string returned a `String` (but again, one shaped like our `Substring`) sharing the same buffer, just with a different offset and length.
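A minimal sketch of that old layout, with a hypothetical class name (`LensString`) since the real implementation lived inside `java.lang.String`:

```java
// A "string" is a shared char buffer plus an offset and a length;
// substring() returns a new view over the SAME buffer -- no copying.
final class LensString {
    private final char[] buffer; // shared between all substrings
    private final int offset;
    private final int length;

    LensString(String s) {
        this.buffer = s.toCharArray();
        this.offset = 0;
        this.length = buffer.length;
    }

    private LensString(char[] buffer, int offset, int length) {
        this.buffer = buffer;
        this.offset = offset;
        this.length = length;
    }

    // O(1): just a new offset/length over the shared buffer.
    LensString substring(int from, int to) {
        return new LensString(buffer, offset + from, to - from);
    }

    boolean sharesBufferWith(LensString other) {
        return this.buffer == other.buffer;
    }

    @Override public String toString() {
        return new String(buffer, offset, length);
    }
}
```

The upside is that `substring` is O(1); the downside is that a tiny surviving substring pins the entire original buffer in memory.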
They changed that because the overwhelming majority of string manipulations aren't parsers, scanners, and other cases where that optimisation matters. Their `String` is now like our `String`.
Swift opted for a middle ground. The Swift project realised that although Java made the right decision when they changed their `String`, there are still cases where keeping the old "lens" type makes sense. So Swift got two distinct types.
It has a different set of tradeoffs. For the most part, the inconvenience of dealing with two distinct types is mitigated by type inference and function overloads, but it sometimes surfaces to the user/programmer, like it did for the OP.
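The two-type design can be sketched in Java terms, with hypothetical names (`OwnedString` / `SubView`): slicing is cheap and returns a view over the parent's storage, and getting an independent string back is an explicit copy, the same step Swift makes you take with `String(someSubstring)`:

```java
// OwnedString owns its buffer outright; SubView is a cheap lens over a
// parent's buffer, and must be explicitly converted back to an owner.
final class OwnedString {
    final char[] storage; // this type owns its buffer

    OwnedString(String s) { storage = s.toCharArray(); }

    // O(1) slice: a lens over this string's storage, not a copy.
    SubView slice(int from, int to) {
        return new SubView(storage, from, to - from);
    }

    @Override public String toString() { return new String(storage); }
}

final class SubView {
    private final char[] parentStorage; // borrowed; keeps parent alive
    private final int offset, length;

    SubView(char[] parentStorage, int offset, int length) {
        this.parentStorage = parentStorage;
        this.offset = offset;
        this.length = length;
    }

    // The explicit, O(n) copy back to an owning type; after this the
    // parent's buffer is no longer referenced by the result.
    OwnedString toOwned() { return new OwnedString(toString()); }

    @Override public String toString() {
        return new String(parentStorage, offset, length);
    }
}
```

The type split is exactly where the inconvenience surfaces: an API that takes an `OwnedString` won't accept a `SubView` until you pay for the copy with `toOwned()`.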
I still think it is far preferable to having a single `String` type with `Substring` semantics.
As the Java team learnt the hard way.