I generally try to avoid "jumping in" on threads like this, where it looks like it's OP vs the world, but this excerpt broke my self-restraint haha
There's absolutely no chance in hell that the operations work even remotely the same. If you (generic you) implement the same program the same way in both Haskell and Java, then you very clearly either don't know Haskell, or you don't know Java. I think this mentality stems from people whose entire programming career has consisted of exposure to exclusively procedural/OO languages. That leads to the sort of mindset where the first question about a new language is "how do I spell a `for` loop?", as if all programming languages are roughly identical, short of the spelling of some keywords and the naming of the standard library's types.
Take that mentality to a functional language like Haskell, or a logic programming language like Prolog, and your whole world will shatter instantly. What if I told you there was no `for` loop?
Additionally, even if all the implementations did share roughly the same design, you've still produced completely useless results. The fact of the matter is, even if you write Haskell like Java, no other Haskell programmer does. Your single data point on Haskell performance is bad not just because it's singular, but because it's not even representative of real Haskell programs.
Here's an example to consider:
- In Haskell, tree data structures would seem incredibly efficient, because Haskell's immutable (and therefore topologically sortable) memory graph makes garbage collection incredibly fast.
  - However, pointers to parents might be hard to implement. Not quite impossible (laziness lets you tie the knot), but since the structure is immutable, any update means rebuilding it, so in practice you'd reach for a zipper instead.
- In C, your full manual control over memory could allow you to write very optimized allocators that maximize locality and minimize cache misses.
  - If you get clever with unions, you could probably even remove a lot of references/pointer chasing, and only insert them into the data structure when they become necessary.
  - Naturally, this comes at the cost of a great deal of memory-management headache and a lot of complexity.
  - It's easy to get it wrong.
- In Java, tree data structures are incredibly easy to allocate, but too many mutations would produce a lot of garbage.
- In Swift, trees are pretty easy to construct.
  - The only common complication is the `parentNode` reference needing to be `weak` (see the first sketch after this list).
  - Swift has deterministic deinitialization guarantees that ARC must honour.
    - Pro: RAII. You can use classes to model other resources (threads, sockets, file handles, etc.), and have ARC automate the management of those other resources.
    - Pro: your app never keeps around unusable objects (garbage).
    - Con: ARC can cause delays when releasing large object graphs, because one `deinit` causes another `deinit` to run, which causes another... and so on. To uphold deterministic deinit guarantees, all of these deinits need to happen synchronously, blocking further progress of the program.
      - If deinit pauses start happening in your program, then you might need to sacrifice some of the perks by implementing a deinitialization pool (a term I just came up with, IDK what the common term for this is, but to be fair, it's quite rarely needed). Your deinitialization pool would hold strong references to the objects in question, and would titrate their deinitializations on a background thread (see the second sketch after this list).
        - Because garbage lives longer than it otherwise would have, you end up using more memory than strictly necessary, and you give up your deterministic deinit guarantee. A familiar situation to anyone who has dealt with a GC.
  - In a sense, ARC (minimal memory use, great at many small allocs/deallocs, bad at large clean-ups) is the opposite of a GC (really high memory requirements, bad at many small allocs/deallocs, great at large clean-ups).
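To make the Swift bullets concrete, here's a minimal sketch (the `TreeNode` name and the `print` in `deinit` are mine, purely for illustration) of a tree with a `weak` parent reference, plus the deterministic teardown behaviour described above:

```swift
final class TreeNode {
    var value: Int
    var children: [TreeNode] = []   // strong references downward
    weak var parentNode: TreeNode?  // weak upward reference breaks the retain cycle

    init(value: Int, parentNode: TreeNode? = nil) {
        self.value = value
        self.parentNode = parentNode
        parentNode?.children.append(self)
    }

    deinit {
        // Deterministic: dropping the last strong reference to a node
        // synchronously releases its children, whose deinits run before
        // control returns to whoever dropped that reference.
        print("deinit node \(value)")
    }
}

var root: TreeNode? = TreeNode(value: 0)
do {
    let child = TreeNode(value: 1, parentNode: root!)
    _ = TreeNode(value: 2, parentNode: child)
}
root = nil  // the whole tree is torn down right here, synchronously
```

If `parentNode` were strong instead of `weak`, every parent/child pair would form a retain cycle and none of those deinits would ever run.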
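And here's one way that deinitialization pool could look. This is my own sketch, not an established API, under the simplifying assumption that dumping the whole graph onto one background queue is good enough (rather than titrating individual releases):

```swift
import Dispatch

// A sketch of the "deinitialization pool" idea; `DeinitPool` is a made-up name.
final class DeinitPool {
    private let queue = DispatchQueue(label: "deinit-pool", qos: .utility)

    // Takes over the caller's strong reference. Once the caller drops its
    // own reference, the closure below holds the last one, so the entire
    // deinit cascade runs on the background queue instead of blocking
    // whichever thread discarded the object.
    func discard<T: AnyObject>(_ object: T) {
        queue.async {
            _ = object  // released when this closure finishes
        }
    }
}

// Usage, reusing TreeNode from the previous sketch:
let pool = DeinitPool()
var bigTree: TreeNode? = TreeNode(value: 0)
pool.discard(bigTree!)
bigTree = nil  // this thread moves on immediately; the deinits run on the pool's queue
```

Which is exactly the trade-off described above: the garbage outlives its last use and deinit timing is no longer deterministic, in exchange for not stalling the thread that dropped the object.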
As you can see, there's incredibly large variability in something as simple as a tree, deeply influenced by the large set of design choices each language has made. Naturally, this leads you to pick trees more frequently in some languages than in others. In Swift we reverse arrays all the time; in Haskell, you would almost always want to avoid reversing a list. Prescribing a "one size fits all" unified implementation design across a broad set of languages like this is complete nonsense.
The correct thing to do here, IMO, is to say "look programmers, here's the problem, here's the acceptance criteria, go solve it the best way you can", leaving each programmer the flexibility to think in the mindset that their language prefers.