If you try to research this question, you might come up with confusing and contradictory answers, strongly-stated opinions, and loads of technical jargon.
The answer depends on which perspective the questioner is coming from, and on what assumptions, expectations, and preconceived notions they bring with them.
Haskell is a high-level language: a tool for writing programs without the programmer having to specify implementation details, as far as is practical. This is typically the most productive way of writing most software, where performance must be 'adequate' rather than optimal.
Unlike most high-level languages, however, Haskell is type-erased and compiled. The GHC compiler does an excellent job of optimizing Haskell code to make it run much faster than it otherwise might. So as long as the developer is aware of the necessary caveats and pitfalls (more on those later), Haskell programs will run dramatically faster than their interpreted HLL counterparts.
It remains a fact that code written in clear, idiomatic Haskell will (at present) be slower than code written in a low-level language such as C, C++ or Rust, which would typically be used for performance-critical code. GHC cannot yet optimize all the overhead from the Haskell abstractions, and will lose out to code that runs close to the metal, typically with a slowdown of around 50% to 4x.
This leaves Haskell in a bit of a tight spot when it comes to performance: it competes well against other high-level languages in a domain where performance isn't that important, but can't quite keep up with the incumbents in the race for ultimate speed.
However, if someone is using Haskell for other reasons - to write more concise, maintainable code whose correctness is easier to be confident in - they will get decent performance and responsiveness 'for free'!
So what to do with performance-critical code? One option is to write it in a low-level language, calling into it from Haskell through the Foreign Function Interface. In doing so, the programmer accepts the sacrifice of expressivity and safety, and must factor in the overhead of the FFI and marshalling.
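As a minimal sketch of what the FFI route looks like, here is a foreign import of `cos` from C's math library. The marshalling here is trivial (Haskell's `Double` maps directly onto C's `double`); richer types require explicit marshalling via `Foreign.Marshal` and carry real overhead.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Bind C's cos directly; 'unsafe' skips some call overhead but must
-- only be used for calls that don't call back into Haskell or block.
foreign import ccall unsafe "math.h cos"
  c_cos :: Double -> Double

main :: IO ()
main = print (c_cos 0)  -- prints 1.0
```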
But there is another way. With a strong understanding of the GHC execution model, there is a set of techniques that can be used to write Haskell code that compiles to run at least as fast as equivalent C programs. This takes quite a bit of expertise to do, and it won't be quite as tidy as idiomatic Haskell, but it should still be more readable (to the practiced eye) and maintainable than a highly-tuned C implementation.
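As a small, illustrative taste of this style (not a full treatment): an explicit strict loop with bang patterns, which GHC's strictness analysis and worker-wrapper transformation compile down to a tight loop over unboxed machine integers, comparable to the equivalent C loop.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Sum of squares from 1 to n, written as an explicit tail-recursive
-- loop with strict accumulators. GHC compiles this to a loop over
-- unboxed Int#s with no allocation.
sumSquares :: Int -> Int
sumSquares n = go 0 1
  where
    go !acc !i
      | i > n     = acc
      | otherwise = go (acc + i * i) (i + 1)

main :: IO ()
main = print (sumSquares 1000)  -- prints 333833500
```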
A full rundown is beyond the scope of this post, but I'll link to some resources:
There are, however, some things that every production Haskell developer needs to be aware of. There is Haskell code that looks correct, but will run far slower, and use far more memory, than the worst you will come across in an interpreted language. The slowdown can run to several factors, with programs occasionally running out of stack space before they can get anything done!
This is due to a controversial design decision called laziness, compounded, in hindsight, by suboptimal library design and idiom choices, and it needs to be understood in order to guarantee consistently reasonable performance from your Haskell code.
I won't go into all the details here, but in short: never use foldl (use foldl' instead), know how to use bang patterns, and read this article by Michael Snoyman:
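To illustrate the classic pitfall: the lazy `foldl` builds up a huge chain of unevaluated thunks before anything is computed, while the strict `foldl'` forces the accumulator at each step and runs in constant space.

```haskell
import Data.List (foldl')

main :: IO ()
main = do
  -- Lazy version (don't do this): foldl (+) 0 [1 .. 1000000 :: Int]
  -- builds a million-deep thunk chain and may exhaust the stack
  -- when it is finally forced.

  -- Strict version: the accumulator is evaluated at every step,
  -- so this runs in constant space.
  print (foldl' (+) 0 [1 .. 1000000 :: Int])  -- prints 500000500000
```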
People seem to think there is inevitably a price to pay for writing high-level code. In fact the opposite is true. Theoretically speaking, high-level code is easier to optimize than low-level code. It simply provides more information to the compiler about the programmer's intent.
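GHC's list fusion is one concrete example of this. In the pipeline below, the high-level form states the intent directly, and (with optimization enabled) GHC's foldr/build fusion rewrites it into a single accumulator loop with no intermediate list allocated.

```haskell
-- Doubles each element and sums: with -O, the map and the sum fuse
-- into one loop, so no intermediate list is ever built.
doubledSum :: Int -> Int
doubledSum n = sum (map (* 2) [1 .. n])

main :: IO ()
main = print (doubledSum 100)  -- prints 10100
```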
As an illustration, let's take the case of Rust. Rust is considered on par with C and C++ in terms of performance, despite being somewhat higher level. Yet if you look through the Rust issues on GitHub, you'll see many instances of the 'I-slow' tag, marking unexploited optimization opportunities. This suggests Rust is set to get faster still.
How can this be? Surely everyone knows nothing beats hand-written C code?
I believe this to be a widely held myth. Optimization is more-or-less an exact science, and machines must eventually outperform humans. And ultimately, a perfect Haskell compiler will outperform a perfect optimizing C compiler.
Why? Because, as noted above, high-level code gives the compiler more information about the programmer's intent, and with it more freedom to optimize.
As long as your code is written with an eye to a few common pitfalls, Haskell is a fast language compared to its closest competitors, although not quite fast enough to compete in the speed stakes. Heavily hand-optimized Haskell can be competitive, and we can expect compiler advances to further narrow the gap for more idiomatic code in the coming years.
Originally published on arifordsham.com