Functional programming in Swift

Continuing the discussion from Higher Kinded Types (Monads, Functors, etc.):

I think this is a sure way to alienate a lot of people who could be lured to the advantages of functional programming. I realize the mind-blowing potential of results such as the Curry–Howard isomorphism and I respect the power of thinking formally, in well-defined abstractions, but emphasizing the math behind the concepts too early is often precisely what drives people away from FP.

From a teaching viewpoint, the right path for a lot of programmers goes from the particular to the abstract, not the other way around. So what usually works for functors is to introduce something like protocol Mappable, or to emphasize the shared properties of collections and optionals – not to open with a basic course on category theory.
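To make the "from the particular" route concrete, here is a minimal sketch of what such a Mappable protocol could look like in Swift (the protocol name comes from the post above; the mapped method name, chosen to avoid clashing with the standard library's map, is hypothetical):

```swift
// A "Mappable" protocol: anything that can apply a function to its element(s).
protocol Mappable {
    associatedtype Element
    // Without higher-kinded types we cannot express "Self with a different
    // Element", so this sketch restricts the transform to Element -> Element.
    func mapped(_ transform: (Element) -> Element) -> Self
}

extension Array: Mappable {
    func mapped(_ transform: (Element) -> Element) -> [Element] {
        map(transform)
    }
}

extension Optional: Mappable {
    func mapped(_ transform: (Wrapped) -> Wrapped) -> Wrapped? {
        map(transform)
    }
}
```

The restriction in the comment is exactly the limitation discussed in the original HKT thread: the full functor abstraction needs a way to talk about `Self<T>`.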

(The main reason I post is to split this discussion from the Higher Kinded Types.)

This highly depends on a person's background. I know Jan has some background in mathematics, which is one of the reasons I wrote the reply as it is.

In any case, I was not implying the necessity of introducing courses on category theory or emphasizing the mathematical concepts behind the terminology in FP. What I am against is concealing that relation or standing in the way of one's understanding of its existence.

Off-topic discussion

Doesn't this thread belong in #evolution:discuss?

I have changed the category on purpose. The original HKT discussion is a fine fit for #evolution, but this general discussion about FP in Swift is not, IMHO.

I see, OK. FWIW, I have dabbled in computer science a bit, but coming to FP from the practical side was much easier for me. This, I think, is also one of the reasons languages like Elm enjoy much more success among “ordinary” programmers than other FP languages that insist on more rigour right from the start.

Off-topic discussion

Not trying to nitpick, but to be honest #swift-users is more about "I have this pure Swift code, but I cannot get it right, can somebody guide me", and general discussions about Swift belong in, well, #evolution:discuss (at least they do for me), but it's your thread. ;)


I agree with you that going from the particular to the abstract is the way to comprehend the concepts in FP one at a time.
But I am a bit doubtful about whether the syntax of Swift would hinder the progress of comprehending FP. Swift is not designed primarily for FP; it supports multiple programming paradigms, like OOP, so its syntax is a bit of a compromise for FP compared with Haskell's. I don't mean that Swift's syntax design is bad; I just want to emphasize that Swift's design focus is not on FP, even though Swift does support FP, if not fully, at least for now. In addition, one important thing about FP is the laws governing its concepts, like the functor laws and monad laws: those laws are what make a functor a functor and a monad a monad.
In my humble opinion, how about learning FP by studying an FP language like Haskell, rather than from category theory? It is a lower level of abstraction and more practical. Then map the concepts and types between Swift and Haskell to gain more understanding; this way you only have to jump a little to catch the fruit. In Haskell, I can feel what "everything is a function" means, and, as someone said, a first approximation of category theory is the abstract algebra of functions: that viewpoint is a good way to unify lots of concepts floating around in FP.
By the way, there are some great beginner books for Haskell.
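The laws mentioned above can be checked on concrete values without any category theory; for instance, here is a quick check of the two functor laws for Optional's map in Swift:

```swift
// The two functor laws, checked on a concrete Optional value:
//   identity:    x.map { $0 }    == x
//   composition: x.map(f).map(g) == x.map { g(f($0)) }
let x: Int? = 42
let f = { (n: Int) in n + 1 }
let g = { (n: Int) in n * 2 }

let identityHolds = x.map { $0 } == x
let compositionHolds = x.map(f).map(g) == x.map { g(f($0)) }
// Both are true for Optional.map. A type could expose a map-shaped method
// that breaks these laws, which is why the laws, not the signature, are
// what make a functor a functor.
```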

I know I already linked this many times, but I think that the guys over at PointFree are doing a great job in teaching the basics of functional programming to the Swift community in a pragmatic way.

But there's a catch, and I think you already identified it. Functional programming is just about trying to program using pure functions, and all the crazy abstractions come after realizing that, by using pure functions, we can derive a lot of structure from some well-defined theoretical assumptions. But this drive towards pure functions actually follows from the drive to write code that's easier to understand and reason about, thus programming in a more "principled" way: pure functions provide the referential transparency and compositionality needed to make the complicated simple. To me, this idea comes before everything else.

In my experience, starting by showing examples of functional constructs for solving practical problems – to developers who already have some experience in other styles/paradigms – usually results in variations of the same objection: these functional constructs are equivalent to other, more structured or object-oriented constructs, but pointlessly abstract and with weird names.

Also in my experience, I was able to achieve some good results with seasoned developers the moment I emphasized the formal differences between the methods, and what the advantages are, from an abstract point of view, of developing in a more principled and pure way. It's just my experience, and certainly anecdotal, but I feel like the focus on some practical examples, typically on collections ("why should I map if I can for-in?"), optionals ("is that the Null Object pattern?"), or even the Result monad ("isn't it easier to catch exceptions at the top level?") is in fact what separates the communities so strongly. In my professional life I've been separately involved in discussions with OO people and FP people, and it's like they're talking completely different languages. There's very little cross-pollination between the communities, and it feels like a shame.

The moment you realize that mutability is almost always a bad idea, and that there's no point in passing around references when you really want to transfer information through values, it's the moment when you start programming (without realizing it) in a functional way, and the humble beginnings can really take you away, in time: thus, I think that focusing on first principles is a better way to start teaching FP, at least in my experience.

That being said, category theory is a whole different beast, and I don't think it's a useful tool to teach in an initial phase. After having laid the foundations of principled programming and purity, we discover some interesting structure while composing functions, and category theory provides a lot of already-done work on the matter, other than giving you the tools to further improve upon what's already done, or even to invent new stuff: it becomes the Swiss-army knife for constructing new abstractions, so its study is certainly useful (to a point: let's remember that category theory is really a theory about what's common among many mathematical disciplines, and we're really just interested in one, computation), but I would consider it in a second, or even third, phase.


I do think this is only half-true. There are some OOP people (or even "unprincipled programmers" who don't subscribe to any particular ideology) who will say "ugh, PFP and Monads etc. are just stupid ivory tower abstractions", but I also know a lot of people in OOP who try to play with pure functions, abstractions like monoids, etc. etc. Maybe not from a super principled rigorous standpoint and with not 100% concrete translations, but nonetheless, the interest is there.

By contrast, I find that there is a substantial part of the PFP (and even the non-pure FP community, such as Clojure or Erlang, although maybe to some lesser extent) who just perpetuate this "OOP is a disaster and should be abandoned completely" notion. Reasonable discussions about the respective merits of the two paradigms (or of any kinds of paradigms) are rare; and engineering tradeoffs are not often mentioned (although that is, sadly, an unfortunate reality of most online programming discussions). I also rarely see PFP people actually carefully consider key techniques popularised in OOP languages such as domain driven design (which could apply equally well to ADTs and doesn't need objects) or design patterns (funnily enough, some Haskell people rant against design patterns and don't understand that f <$> xs <*> ys <*> zs is also a design pattern).

Even if OOP were so horrible, it would still be bad PR: the PFP community still hasn't been able to explain why OOP is supposedly so horrible when 99% of the world's software works reasonably well and generates a lot of value while being written in those horrible languages, yet there is very little software actually written in something like Haskell, OCaml, or even Clojure. This mismatch just creates a lot of anger at languages like Haskell; IMHO, unfairly, because it's not Haskell's fault that some people try to treat it as a silver bullet.

There is value to PFP, but the question of how much more value there is to PFP, and at what cost, is rarely raised. This is ultimately a question about the marginal utility of using (more) PFP, which depends a lot on the problem domain.

More concretely:

Yeah, no. We should really (as a discipline) stop making these kinds of blanket statements. I'm not a big fan of unprincipled mutation myself, but there is a cost to avoiding mutation and we should make this clear.

I'll also say that a lot of "silver bullet" programmers tend to forget that while it's cool and useful to create beautiful models and abstractions, you don't actually always know what the right abstraction is. Sometimes you stumble upon it and then it's great and you should enshrine it, but a lot of the time, assumptions just shift under your feet all the time.

(All that said, I still think functional programming is great.)


I'm not sure what your point is. Is it "everything is ok and mostly equivalent"? I'll try to dissect some of your statements; sorry if I forget something important, but I don't have the energy to embark on this kind of discussion in full force. But considering the tone of your answer, I'll bite for now.

Mathematics, logic, and the effort to program in a principled way are not an ideology: I never met a single functional programmer who assumed that FP is the only true way. Fascism and communism are ideologies; trying to approach computation in a rigorous and formal way is not.

And I'm happy about that. A principled approach has nothing to do with the coding paradigm or style: you can be principled even in a procedural-only world.

In my opinion, and from what I've seen in my professional career, OOP is mostly treated as a de facto way to solve programming problems, without any real thought on the matter. Also, OOP is a confused and confusing term, and is often interpreted in different ways by different people. Usually FP people know or have a background in OOP, and have no reason to even consider trying to solve any problem in an OOP way (whatever that means), but they'll use reference types when it makes sense: where their language allows, they'll use reference types as a tool to pass around references, so they don't refuse classes.

I agree on this one: DDD is an area from which the FP community can draw a lot of ideas.

I completely disagree. I don't know what your source is on this, that makes you say with such confidence that a construction based on applicative functors is a design pattern, but I don't see any resemblance between many functional abstractions and constructions and the design patterns of OOP. Care to elaborate on how that particular example is a design pattern, like those in the Gang of Four?

The fact that Haskell is often treated as a silver bullet (wrongly, in my opinion) is a different matter from your request to explain why OOP is so bad. It's a long story, and I don't want to elaborate further right now, but I'll just observe that usage rates and consensus are not relevant factors when considering the quality and effectiveness of a particular technology (or anything, really).

A question like this was already posed 30 years ago, in regard to OOP, and even before that, when strict structured programming was proposed. A principled approach has nothing to do with the problem domain, and my point (which is also the point of most of the people who eventually end up programming in a functional way) is that this approach naturally leans towards FP: I might be wrong, but I still haven't seen any relevant counterexample.

This is a blanket statement in the same way that "throwing yourself out a window is almost always bad" is a blanket statement: there might be situations where it's not true, but they're rare, and usually the statement is absolutely valid.

Of course there is a cost in immutability, and I wrote almost always. But a problem in the programming community as a whole is certainly not "too much immutability": if anything, it's the opposite, and by a large margin.

Unless you actively study the very concept of "abstraction" in itself, which is one of the goals of category theory.

You've expressed very clearly one of the major problems in programming nowadays: most of it is accidental. I guess some like it that way: I, along with many people, certainly don't.

That's fair. I think this is an important discussion. My point is not that "everything is ok", it's that "engineering involves substantial trade-offs and I find it very counterproductive to pretend that it's anything other than that". In particular, I believe that any kind of unqualified "FP is superior to OOP" statement (without further context) is harmful to FP itself because it makes FP advocates look as if they're full of themselves.

I think a lot of the things you mention - HKT, immutability, purity, even things like dependent types - are very useful tools to have, but the goal of software engineering is not to use nice tools, it's to use tools effectively in order to solve a particular problem.

Yes, it is. The insistence that every programming effort, no matter the context, needs to be approached in a "principled" manner is an ideology. It's one that I find myself agreeing with a lot of the time, but not always. Of course, mathematics and logic, in themselves, are not ideologies, but the insistence that every problem be approached as a mathematical one is.

This will depend a lot on your background. In the Ruby or Javascript world, I don't find this to be the case; if anything, I find that many Ruby programs could have benefited from a better understanding of OOP; in any case these two communities are very open about FP approaches. More importantly, further down you write:

so maybe OOP programmers are not knowledgeable about FP, but FP programmers are knowledgeable about OOP and never even consider it? I find this attitude to be very troubling. What if nobody else in your team knows PFP?

To this quote I'll add two things: First of all, the original idea of OOP as envisioned by Alan Kay is clear enough to me, even if you have to make some effort to actually implement it in your run-of-the-mill OOP languages (it's much easier in e.g. Smalltalk or, surprisingly, even Erlang). Second, there is this kind of mistaken notion among some PFP practitioners that terms must be very clearly and rigorously defined or they may not be used. This completely ignores the important role that metaphors play in human understanding. Even mathematics is full of metaphors. The fact is that programs must be understood by humans, so if thinking in metaphors of independent objects that communicate with their collaborators helps us design systems, I don't care whether there's a mathematical formulation behind it or not. I hope you're not one of those people who suggest that there is no value to the humanities and social sciences just because it's very difficult to rigorously define every term.

Wikipedia defines "design pattern" as "a general, reusable solution to a commonly occurring problem within a given context in software design". The pattern that I described provides a solution for the problem of having to lift an n-argument function over a particular functor; you know this because you recognised the pattern. Wikipedia also notes that a pattern is not an immediately reusable piece of software; this applies here because (unless you use code generation of some sort) your "n" might in theory be arbitrarily large, so you can write lift2, lift3, etc. functions, but not some generic liftN function.

Why do you recognise this pattern? Because you have seen it multiple times before. I sincerely doubt that, every time you see this kind of notation, you have to re-derive its meaning from first principles. So, as a design pattern, this construction gives you a general solution to a common problem, much as the Strategy pattern gives you a general solution to a common problem in Java. If you've never seen a Strategy, its use will seem puzzling, just as you will be confused the first time you see a Haskell example like the one above. In this sense, a pattern, to me, is part of a "shared language" of practitioners that is not part of the language syntax per se.
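For Swift readers, a rough equivalent of that Haskell notation is a hand-written family of lift functions; this is a sketch for Optional (the lift2/lift3/addOptionals names are hypothetical illustrations, not standard library API):

```swift
// Lifting an n-argument function into the Optional context plays the role
// that `f <$> xs <*> ys` plays in Haskell.
func lift2<A, B, C>(_ f: @escaping (A, B) -> C) -> (A?, B?) -> C? {
    { xs, ys in xs.flatMap { x in ys.map { y in f(x, y) } } }
}

// As noted above, each arity needs its own function (lift3, lift4, ...);
// there is no generic liftN without variadic generics.
func lift3<A, B, C, D>(_ f: @escaping (A, B, C) -> D) -> (A?, B?, C?) -> D? {
    { xs, ys, zs in
        xs.flatMap { x in ys.flatMap { y in zs.map { z in f(x, y, z) } } }
    }
}

let addOptionals = lift2 { (a: Int, b: Int) in a + b }
let sum = addOptionals(1, 2)     // Optional(3)
let none = addOptionals(1, nil)  // nil: any missing argument short-circuits
```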

This kind of statement is repeated so often that I believe it's necessary to address it directly: while it's true that the popularity of a thing doesn't indicate superiority, I also believe that things that are really, really horrible cannot endure long, especially not when the target audience consists of generally well-educated, motivated people with a high willingness to learn. The "OOP is actually horrible but most programmers are not smart / knowledgeable enough to realise it" attitude is, IMHO, a really elitist way of thinking.

Maybe OOP is not a global optimum of any sort, but it very probably is in some way a local optimum or people would have moved away.

How about:

  • we're writing a quick prototype and there's no need to prevent mutation
  • we're writing some bash script
  • there is a published algorithm that uses mutation (locally) and it would be a waste of time to rewrite it not to use mutation, when there is a published proof for the correctness of the result anyway
  • you have a team of very junior developers who are not very comfortable with FP and you have to ship something soon
  • you write two versions of your code, one with and one without mutation and somehow realise that the first one is just much more readable
  • you write two versions of your code and somehow the one with mutation is much faster (order of magnitude)

etc. etc.

I really think that your "mutation is almost always bad, certainly as often as throwing yourself out a window is" is a very, very bold assertion, and I furthermore think that these kinds of assertions turn people away from FP because they do not, on the surface, appear reasonable. It also appears especially puzzling in the context of a language like Swift which very explicitly provides you with both alternatives (which is a major step forward from e.g. Java in which you have to go to greater lengths to make things actually immutable).

I don't understand how that relates to what I said, namely that reality is messy.

I think it's not so much that some people like it that way, it's that somehow, you can never remove all accidents from code; certainly not if you have to deal with weird, ever-changing business requirements, varying degrees of expertise within a team, bugs in technology and outside libraries / APIs, etc. etc.

I don't think that every problem should be approached as a mathematical one, far from that, and I don't see this in the FP community: what I see, in fact, is much more pragmatism, something that you actually find a lot less in the OOP one, because a single function is a much simpler and smaller concept than an object. But I think that for many problems in software development, a mathematical approach is generally a very good one, even if it just gives some insights.

That's a different point. Coding is in general a team effort, and the team should proceed as much as possible as a whole. But an OOP background (or any background, really) should be a strength, and not an arbitrary boundary.

The confusion around OOP is not a matter of metaphors: it's really about the fact that, as a paradigm (a set of tools, approaches and patterns for solving problems), OOP has a lot of interpretations that are often incompatible. I'm familiar with the original Alan Kay definition, that's what I tried to use in the Objective-C era, and funnily that was my true gateway towards FP. But I've known maybe 2 people in my life that have maybe heard about Alan Kay and what he talked about when he coined the expression "object-oriented programming". We've probably had different experiences with the OOP community: fair enough.

I still don't see the resemblance. Lifting a function over a functor is the solution, not the problem: the problem is constructing an instance from multiple instances that are only available in a functor context. The applicative is then an abstraction that can be used to represent this lifting operation. Lifting a function over a functor is a design pattern, in this context, but not the applicative definition itself and, most importantly, the laws behind it.

You cited the Strategy pattern: a thing that achieves something similar in FP is just currying a function, which you then prepare appropriately. The difference is that currying is a technique at a much lower level than the Strategy pattern, one that can be used to solve a variety of problems often solved in an OOP context with different patterns. It's not that every possible solution to every possible problem is a design pattern: the shared language called design patterns is needed in the OOP community to overcome the limited abstraction power of some (wildly popular) languages.
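As a sketch of that point, a Strategy-style pluggable behaviour in Swift can be just a partially applied function (all names here are hypothetical illustrations):

```swift
// A "strategy" is just a function value; partial application prepares it
// before it is passed around.
func makeFormatter(prefix: String) -> (String) -> String {
    { message in "\(prefix): \(message)" }
}

let errorFormatter = makeFormatter(prefix: "ERROR")
let infoFormatter = makeFormatter(prefix: "INFO")

// The consumer sees only a function, not a class hierarchy of strategies.
func report(_ message: String, using format: (String) -> String) -> String {
    format(message)
}

let line = report("disk full", using: errorFormatter)  // "ERROR: disk full"
```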

I'd rather talk abstractions than design patterns. "Applicative functor" is not a design pattern in the same way as a "Monoid" is not a design pattern.

I'm not sure what you mean here. The problem is not the motivation or intelligence of developers: the problem is that OOP is the de facto default way of thinking for many, many developers. FP is not something that you start from: it's something you arrive at when you realize that most of your software problems are about transforming data, and the most natural building block for representing this transformation is a function. On the contrary, OOP is something that you start from when approaching a problem in software development, and then you try to mold objects to somehow express your computation, often defaulting to design patterns because you realize that you have almost no other fixed point of reference.

Most of the things you cited here can be analyzed in depth, and depend on the context. For example, it's clear that if you have junior developers and you must ship soon you're probably not going to produce state-of-the-art code, FP or not.

Or also: I never found that a mutable version of some code was more readable, but when I say "mutability" I'm not referring to "local" mutability, that is, using local variables to implement an algorithm. I do that all the time. I'm talking about global, out-of-scope, at-a-distance effectful computation, like passing huge mutable references around: that's what I'm referring to when I say that mutability is almost always a bad idea.

Ditto. I definitely use local mutable variables in Swift (not that often actually, I still find it frequently preferable to use constants even in local contexts), and I like and use the inout capabilities of Swift in full force.

Of course you can't remove every accident, but depending on your approach, you can be careful and try to avoid them, or you can be reckless and code in an actively harmful way.

If I'm an expert, seasoned climber and I decide to attempt an extremely hard climb, and I then slip and fall, that's a human error. But if I don't know anything about climbing and I still attempt the same climb, with no experience and no equipment, and then I slip and fall, I'm just an idiot.

From what you write it seems like you've had unfruitful conversations with FP people who assumed that a strict pure functional approach is the only true way. I don't intend to support any of those assumptions. I simply wrote the following:

FP comes (for many, not everyone, I reckon) after the realization that we need a more principled approach to programming (I wrote more, not only exclusively), and sits at the end of a path, not at the beginning, and I simply think that, when trying to teach FP to someone, this concept should be at the front of every discussion.

@ExFalsoQuodlibet, I'd be very interested to see some examples of when you consider mutability to be good vs. when it is not. Thanks in advance!


IMO the biggest thing missing from the various Swift FP libraries is simple, practical examples of how to use and exploit these algebras. The PointFree guys' stuff covers some of this, but for someone just starting out with FP, there are too many basic gaps that aren't easily bridged, and hence most just don't bother.

As a Swift community we probably need something akin to Professor Frisby's Mostly Adequate Guide to FP, something to which I'd happily contribute.

As to which FP library such an effort would be centred on is naturally open for debate.

Side note: it's too easy to get lost in the shortcomings of Swift e.g. HKTs, and end up overlooking what features Swift already has to enable FP.

+1 for the Mostly adequate guide

Well, I actually use mutability a lot in Swift, thanks to the fact that a struct is a value type: you can safely make its properties var, because mutating them only affects that particular copy of the value, never some shared reference.
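Value semantics are what make this safe, because assignment copies the whole value; a minimal illustration:

```swift
// Mutating a struct variable never affects other copies of the value.
struct Point {
    var x: Double
    var y: Double
}

let a = Point(x: 0, y: 0)
var b = a      // `b` is an independent copy, not a shared reference
b.x = 10       // only `b` changes; `a` still has x == 0
```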

I also like methods that take inout parameters, they feel easier to work with when you're really just transforming some data and returning it: for example, I'm a great fan of SE-0171.
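SE-0171 ("Reduce with inout") is the proposal that added reduce(into:), which passes the accumulator to the combining closure as inout instead of copying it at each step; a small example:

```swift
// reduce(into:) from SE-0171: the accumulator is mutated in place (inout),
// avoiding a fresh dictionary copy on every iteration.
let words = ["apple", "avocado", "banana", "cherry"]
let grouped = words.reduce(into: [Character: [String]]()) { result, word in
    if let first = word.first {
        result[first, default: []].append(word)
    }
}
// grouped == ["a": ["apple", "avocado"], "b": ["banana"], "c": ["cherry"]]
```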

But even locally, I almost never find algorithms based on mutating variables more readable, because it seems to me that having to keep track of the value of a variable at a certain stage of a computation is always harder to follow and understand than alternatives based on well-defined constants, or even recursion.

Also, Swift has some nice features that let you avoid var, like late initialization, for example in the following code...

var value = 0

if someCondition && someOtherCondition {
    // some code
    value = 42
} else {
    // some code
    value = 43
}

// code that uses "value"

...I don't really need to make value a var, I can just write:

let value: Int

if someCondition && someOtherCondition {
    // some code
    value = 42
} else {
    // some code
    value = 43
}

// code that uses "value"

Swift is smart enough to understand that value is initialized on every possible code path, so I can make it a let even if it's initialized in the branches of an if-else.


The challenges here of course will be finding a few willing to assist with this; second only to picking which "popular" FP library it should be based on.

Anyway I'm up for the challenge assuming there's sufficient interest.