An aside about the word "should", for curiosity's sake
Some time ago I put some more explicit thought into the word "should", and I noticed some rather obvious things that nonetheless seemed extremely important to me, because of how commonly the word is used in everyday English and how often I believe I see it contribute to conflicts.
Here's some Swift that captures an aspect of "should" that I think is important:
protocol SentenceSubject {
    func should<Goal>(
        _ strategy: Action,
        inOrderToAchieve goal: Goal
    ) -> PieceOfAdvice
}
if you have the subject:
struct Person: SentenceSubject { ... }
then you can create a piece of advice like this:
let you = Person()

let advice = you.should(
    goBuyYourTicketFromTheTeller,
    inOrderToAchieve: yourDesiredBusTrip
)
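For concreteness, here is one way the elided pieces might be filled in so that the example above actually compiles. The placeholder types and values (Action, PieceOfAdvice, BusTrip, and the two let constants) are illustrative choices of mine, not part of the original point:

struct Action {
    var description: String
}

struct PieceOfAdvice {
    var text: String
}

struct BusTrip: CustomStringConvertible {
    var destination: String
    var description: String { "your desired bus trip to \(destination)" }
}

// Filling in the `{ ... }` of Person from above:
struct Person: SentenceSubject {
    func should<Goal>(
        _ strategy: Action,
        inOrderToAchieve goal: Goal
    ) -> PieceOfAdvice {
        PieceOfAdvice(text: "You should \(strategy.description) in order to achieve \(goal).")
    }
}

let goBuyYourTicketFromTheTeller = Action(description: "go buy your ticket from the teller")
let yourDesiredBusTrip = BusTrip(destination: "Madrid")

With these declarations in scope (placed before the calls), advice.text comes out as "You should go buy your ticket from the teller in order to achieve your desired bus trip to Madrid."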
One of the "obvious" things I was referring to is the simple fact that "should" actually takes a second argument, namely inOrderToAchieve goal: Goal.
When I say that "should" contributes to conflicts, I'm referring to a specific, common "programming" error that I believe people make when using it. I'll clarify:
Since the "should" function is used very commonly but is also somewhat unwieldy, whenever you're within a domain where you can assume a particular value for the goal, it's also very common to define more ergonomic overloads of the form:
func should(_ strategy: Action) -> PieceOfAdvice
Example:
"I want to take the bus to Madrid." (this "macro" provides the overload should(_:))
"You should go buy your ticket from the teller in order to achieve your desired bus trip."
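In code, one hedged way to picture what that "macro" does (again, an illustration of mine, not something spelled out above) is an extension that supplies the shorter overload by forwarding to the full two-argument form with the context's assumed goal:

extension Person {
    // Only meaningful inside the bus-trip conversation, where the goal is
    // already fixed by "I want to take the bus to Madrid."
    func should(_ strategy: Action) -> PieceOfAdvice {
        should(strategy, inOrderToAchieve: yourDesiredBusTrip)
    }
}

// Within that context, the shorter call now resolves:
let shortAdvice = you.should(goBuyYourTicketFromTheTeller)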
The first thing that can go wrong is that someone tries to use one of these ergonomic overloads but finds that they are actually not in a context where such an overload exists (unlike the simple example with the bus, where it worked fine). I interpret @xwu's question as analogous to a compile-time error telling @anon9791410 that in his (Xiaodi's) current context there is no such overload should(_:).
This is actually not any kind of issue in my mind though. The listener emits a compile-time error and it gets worked out promptly.
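In Swift terms, the failure mode looks roughly like this (a hedged sketch, assuming the one-argument overload above is scoped to the bus-trip conversation, say its own file, and is not visible here):

// No goal has been assumed in this context, so the one-argument overload
// does not exist here and the compiler refuses to guess:
let stranger = Person()
let reply = stranger.should(goBuyYourTicketFromTheTeller)
// error (roughly): missing argument for parameter 'inOrderToAchieve' in call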
There's a worse (but also more common) outcome, which I think derives from a strange cultural aversion that we seem to have to this type of "compile-time error" in conversation (i.e., slowing down to clarify before responding). What I think I see happen often is that (because of the aversion to compile-time errors) the listener accepts the usage of the overload despite not having a sound way to resolve it to a particular meaning in the current context, and so tries to use some combination of unsound methods to come up with the best meaning they can, and then uses that to respond to the statement (probably erroneously). This is analogous to a "runtime bug". If both participants suffer from the aversion to using "compile-time errors" in conversation, then it is likely that these misunderstandings will stack up, as each person responds to the flawed response with another response born of faulty understanding mixed with a strange insistence on barreling ever forward.
(Analogy: Imagine SwiftGPT, a language model that accepts absolutely any "Swift" code that you write and compiles the "closest" correct program it can think of to what you've written. If what you write compiles, then it's guaranteed to come out identical, but if you have any compile-time error, then instead of being told about it you just give SwiftGPT license to change whatever it "needs" to in order to make it "work". What a nightmare! We'd never ship another stable thing! The point is, what a blessing compile-time errors are.)
(After writing that passage about SwiftGPT, I realized that maybe it could be built well enough to actually work fine, and that maybe that'll be more or less the future of programming…)
Concretely, I think that in the realm of politics in particular there is a lot of this kind of "should" thrown around without much precision about the second parameter (inOrderToAchieve:), and that a lot of arguments will go in circles until someone thinks to clarify what each person's goal is when they imprecisely say "we (as a country) should {xyz}". In the case where the people have different goals in mind, the argument won't be immediately resolved, but it will at least be clarified that the disagreement is actually about what the goals are, and not about the effectiveness of a particular strategy toward a particular goal, which can help the participants move forward. If/when they eventually converge on a shared goal, they can finally debate the effectiveness of different strategies toward that goal.
Thanks for reading!