Elixir is a relatively new programming language. José Valim, the language creator, pushed the first git commit 8 years ago, and the release of v1.0 was announced on September 18, 2014. Compare that to the age of other programming languages like Ruby and Python, which are a few decades old. Elixir is a young, fresh language.
Despite its young age, Elixir has been trusted by several companies to be part of their toolset. Some of those well-known companies that publicly shared they use Elixir to build their products are Pinterest, PagerDuty and Wistia.
The Elixir language and the Erlang ecosystem have a special relationship with a number of industries, which have been relying on and benefiting from them.
Although those industries have a special connection to Elixir, the language is being used, and is rising in popularity, in the most diverse areas.
Online content covering how companies switched their tech stack to Elixir and reduced their costs (or resource usage) by X% (where X is always an impressive number) while improving KPIs (throughput, requests per second, memory usage, etc.) is quite popular and creates lots of buzz around Elixir.
A few weeks ago, Devon Estes and Joe Armstrong exchanged tweets about companies using Elixir (and Erlang) in production that chose to keep this information private. The reason for the confidentiality was the belief that it gives them a competitive advantage. Those companies, which presumably have healthy revenue and know their business, consider Elixir (and Erlang), a programming language, one of their “secret sauces”: an advantage over competitors that aren’t using it.
How can a programming language become a competitive advantage? Does it make sense to hide that information? Well, like everything in life, the answer is probably: it depends.
I’ve been working with Elixir and Erlang for the last 6 years and I’m quite familiar with the benefits and advantages they have compared to other languages, so I decided to come up with a (non-exhaustive) list of “features” of Elixir and the Erlang environment and how they could potentially become business competitive advantages.
Elixir is a productive, modern programming language. It has useful features (like the pipe operator, protocols, built-in documentation, and a package manager), which make it a pleasure to use as a developer. However, since Elixir is built on top of the Erlang virtual machine, it’s also built on a battle-tested core, created by Ericsson. Erlang has been used to power distributed, fault-tolerant and highly-available systems for decades. Telecom systems around the world, Facebook Messenger, Bet365, IBM Cloudant, and WhatsApp have all benefited from its rock-solid design.
The combination of new and productive language features, excellent development tooling, and great documentation with the stability and predictability of proven technology makes Elixir a unique language.
Elixir has its own way of handling concurrency. Every concurrent activity in Elixir is a process. In contrast to OS processes, Elixir processes are lightweight and have a small memory footprint. The cost of spawning a new Elixir process is negligible, enabling a system to launch thousands (or even millions) of processes without compromising stability and responsiveness.
One of the advantages of spawning an arbitrary number of concurrent processes, when compared to the limitations of a system relying on threads, is the ability to define the concurrency based on the application domain, not based on the available resources. It means that if an application requires the creation of 100k concurrent processes to map its business domain, that’s completely fine with Elixir.
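As a minimal sketch of how cheap processes are, the snippet below spawns 100,000 of them (the count is arbitrary) and exchanges a message with one:

```elixir
# Spawn 100,000 lightweight processes; each waits for a single message.
# On the Erlang VM this takes well under a second and modest memory.
pids =
  for _ <- 1..100_000 do
    spawn(fn ->
      receive do
        {:ping, caller} -> send(caller, :pong)
      end
    end)
  end

# Exchange a message with one of them.
send(hd(pids), {:ping, self()})

receive do
  :pong -> IO.puts("got :pong from one of #{length(pids)} processes")
end
```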
You might wonder: “how can Elixir handle such a high number of processes without becoming completely unresponsive?”. Elixir comes with its own scheduler. This component is responsible for mapping the application concurrency (Elixir processes) to the available architecture parallelism (CPUs, cores).
The decoupling of concurrency from parallelism brings yet another feature: the ability to run the same application with optimal use of the available resources, without changing the code. Imagine you build an application today and it runs great on your 4-core notebook; when you deploy it to a 36-core production server, the Elixir scheduler will make sure all cores are fully used.
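A small illustration of that decoupling, using `Task.async_stream`: the code expresses the concurrency, and the runtime maps it onto however many scheduler threads (one per core, by default) the machine provides:

```elixir
# One scheduler thread per core by default; the same code uses them all,
# whether that's 4 on a notebook or 36 on a server.
IO.puts("schedulers online: #{System.schedulers_online()}")

# Run the work concurrently; the scheduler handles the parallelism.
squares =
  1..8
  |> Task.async_stream(fn n -> n * n end,
    max_concurrency: System.schedulers_online()
  )
  |> Enum.map(fn {:ok, value} -> value end)

IO.inspect(squares)  # [1, 4, 9, 16, 25, 36, 49, 64]
```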
The same feature that enables Elixir applications to use all the available processing units today also makes Elixir applications ready for the future, where SMP hardware architectures will keep multiplying the number of processing units.
Elixir is not always the fastest language out there. It will fall short in some specific scenarios when compared to other languages like Java, C, C++, etc. However, due to its concurrency model, Elixir can be fast, extremely fast, even when compared with such languages.
If you compare an application written in Java, which launches two threads to coordinate its concurrent and cooperative work, to an Elixir application which launches thousands of processes to perform the same work, Java can beat Elixir when running on a 2-core machine. However, when running the same applications on a 10-core machine, while Java will use 2 of the 10 cores, Elixir will make full use of all of them and will very likely be the faster application.
Nevertheless, performance is not only about how fast something happens once, but also about how consistently and repeatably it happens.
Phoenix, a web framework built with Elixir, measures response times in microseconds. Benchmarks of Phoenix applications show consistent response times and availability even as load increases, while staying light on memory and CPU usage.
Scaling applications is a recurrent concern in every company and development team. Even companies whose products are still in the development phase find themselves wondering whether their application will be able to scale once it’s live (10x? 100x? 1000x?).
The Elixir scheduler makes it easy for applications to vertically scale (scale by adding more power to a single machine). But what about scaling horizontally (scale by adding more machines)?
The great news is that Elixir has it covered as well. The same concurrency model that allows an application to spawn millions of processes in a single machine, can also be used to spawn those processes in different machines that belong to a cluster.
The support for distributed systems in Elixir goes beyond the ability to run processes in different machines, though.
The Elixir platform has its own protocol for serializing native data types (Erlang’s External Term Format), which optimizes the communication between different machines. That support removes the need to integrate an extra tool into the stack, like Protocol Buffers or Apache Thrift, to handle communication.
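That serialization is exposed directly in the standard library; a round trip looks like this:

```elixir
# Serialize a native data structure to Erlang's External Term Format
# (the same encoding used for inter-node messages) and decode it back.
data = %{user: "alice", scores: [10, 20, 30]}
binary = :erlang.term_to_binary(data)

decoded = :erlang.binary_to_term(binary)
IO.puts("round trip ok? #{decoded == data}")
```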
Another common problem when working with distributed systems is the need to find out on which machine a service is running. When dealing with such problems, other languages usually rely on external tools like Consul or etcd, which offer service discovery features. When building distributed applications in Elixir, developers don’t need to worry about that, since the Elixir platform includes a native service discovery mechanism.
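One of those built-in mechanisms is the `:global` registry: a process registers itself under a name, and any node in the cluster can look it up. A minimal single-node sketch (the `:my_service` name is just an example):

```elixir
# Register the current process under a cluster-wide name, then look it up.
# In a multi-node cluster, :global.whereis_name/1 works from any node.
:yes = :global.register_name(:my_service, self())

pid = :global.whereis_name(:my_service)
IO.puts("found service? #{pid == self()}")
```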
In most languages, scalability is achieved by adding extra tools, usually as an afterthought. Elixir deals with scalability from the beginning; it’s baked into the language and the platform.
Things do not always work as we expect; because of a programming bug or hardware failure, your application might not behave as imagined.
When dealing with errors, most languages promote a “let’s try to handle this error” approach. That’s defensive programming, and statistically speaking, it’s very hard for a developer or a team of developers to come up with all potential failure scenarios.
Elixir uses the “let it crash” approach when dealing with errors. You might be thinking: “that’s not a strategy, that’s giving up and failing hard”. I hear you, but the “let it crash” approach is not about allowing the entire system to crash; it is about allowing small parts of the system to fail and having a mechanism to recover from those small failures.
Elixir has the concept of “supervisors”, which goes hand in hand with the “let it crash” idea. As the name implies, “supervisors” are used to supervise other processes. When a supervised process crashes, a “supervisor” kicks in and takes measures to handle the crash.
Elixir applications are structured as supervision trees, where nodes are either supervisors or workers. “Supervisors” are responsible for starting child processes which are logically connected. An Elixir application can have dozens of “supervisor” processes.
A “supervisor” supports three different strategies to handle failure: :one_for_one (restart only the crashed child), :one_for_all (restart all children when one crashes) and :rest_for_one (restart the crashed child and every child started after it).
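A minimal sketch of a supervisor restarting a crashed child with the :one_for_one strategy (the `Worker` module is just an example):

```elixir
# A trivial worker holding its state in an Agent.
defmodule Worker do
  use Agent

  def start_link(_opts) do
    Agent.start_link(fn -> 0 end, name: __MODULE__)
  end
end

# Supervise it with :one_for_one: if it crashes, only it is restarted.
{:ok, _sup} = Supervisor.start_link([Worker], strategy: :one_for_one)

pid_before = Process.whereis(Worker)
Process.exit(pid_before, :kill)
Process.sleep(50)

# The supervisor has already started a fresh worker process.
pid_after = Process.whereis(Worker)
IO.puts("restarted? #{is_pid(pid_after) and pid_after != pid_before}")
```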
When relying on supervisors and the “let it crash” approach, developers are freed from the agony of trying to handle every failure scenario. However, the benefits are not limited to relieving the “developer agony”. Without defensive code to handle all potential edge cases, applications take less time to develop and test, and there is less chance of bugs being introduced by the defensive code that was added as an attempt to handle errors.
The “let it crash” approach empowers developers to focus on handling the happy path of the business domain, while leaving the edge cases to be gracefully handled by “supervisors”.
Elixir is a unique, powerful and productive language. It includes features that enable development teams to build distributed, fault-tolerant and highly-available systems. The “let it crash” philosophy empowers developers to concentrate on building the business domain instead of spending time handling edge cases. Elixir’s concurrency model allows companies to make optimal use of hardware resources, building applications that are fast and scalable.
If you ask me: Can the Elixir programming language be a competitive advantage?
I’ll certainly answer with: YES, it can!
Hi, I'm Elvio. I’m an independent software development consultant based in Berlin, Germany. I’ve spent the last 10+ years working on a wide range of software projects, from internal web dashboards to real-time applications used by millions of people.