The Biggest Problem with Elm

Charles Scalfani
8 min read · Nov 20, 2019


The biggest problem with Elm isn’t that the language lacks higher-level abstractions, as many Haskellers complain. Nor is it that the language keeps removing advanced features in favor of the beginner experience.

The FFI (Foreign Function Interface) mechanism, which allows developers to call Javascript from Elm, leaves a lot to be desired, especially compared to other functional languages. And while this mechanism would benefit from returning a Task instead of a Cmd, it too is not the biggest problem.

And it’s not that programmers are prohibited from writing Effects Managers, e.g. a Websocket Effects Manager which would support Websockets natively instead of forcing developers to use the weak FFI mechanism known as Ports.

It’s certainly not all the boilerplate code that you find yourself writing to interface to a JSON API on the backend. (For our 100,000+ LOC application, it’s around 10% of the codebase.)

The fact that Elm is the only language I’ve worked with in almost 4 decades that has NO official support for private libraries is still not its biggest problem.

These problems pale in comparison to the biggest reason we’re leaving Elm for PureScript: the Elm Architecture.

The Elm Architecture (TEA for short)

The Elm Architecture works well for lots of cases, but one size never fits all. And that’s why you see different libraries in other ecosystems to solve similar problems. Not so in Elm.

At first, you might think that having one architecture is a great thing. You get lots of benefits from having only a single hammer in your toolbelt. Once you master this lone hammer, you know exactly how to drive every nail you’ll encounter.

Also, when you look at someone else’s program, you already understand its structure because you already understand the architecture. This is the strength of a single architecture for all applications.

This approach is very attractive to two types of programmers, new ones and ones who develop simple applications. Those new to Elm find great comfort in being able to understand someone else’s code much quicker than they would in most other environments.

Those who only play with Elm or make simple standalone games or small applications, say 20,000 lines or less, will sing the praises of the anointed architecture and will defend it vehemently.

And it’s not that their arguments are without merit. A small hammer is a great tool if all you ever do is hang pictures. You don’t need and certainly do not want a heavy hammer. In fact, having an overpowered tool can make your life miserable.

But so can having an underpowered one.

Need is the Differentiator

As long as you have a simple problem to solve, simple tools are the best solution. But once you push a simple tool too far, you quickly learn the cost of its limitations.

The same small hammer that hangs your picture perfectly today will fail miserably when you try to use it to break up cement tomorrow. For that, you’ll need a very different hammer.

Does the eventual need for a jackhammer invalidate the small hammer? Absolutely not. We’re free to purchase a jackhammer and use whichever tool makes sense for the job.

But we’re not talking about multiple hammers here. In this world, we’re only allowed to use one specific, small hammer.

A Small Hammer for a Big Job

As your program grows in complexity, you find yourself creating modules, or really sets of modules, each containing its own Model, Update, View, and Subscriptions files.

The bigger the application, the more of these sets there will be. In our application we have about 50.

Except for the top-level update, we have to manage calling update in each of these lower-level Update modules. This is complicated further by the need to call Cmd.map so that each child’s commands are re-wrapped into the parent’s Msg type.

The same goes for calling view in the View module, where the child’s Html must be wrapped with Html.map. And like update, Subscriptions are complicated by having to call Sub.map.

All of this Plumbing boilerplate is necessary because TEA doesn’t have a mechanism for handling anything but a single Update, Model, View, and Subscription at the top-most level. The rest is up to you.
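As a sketch of what this plumbing looks like in practice (all module, type, and field names here are hypothetical, not our actual code), a parent wiring just two child pages into TEA already needs this much ceremony:

```elm
-- Hypothetical parent module wiring two child pages into TEA.
-- LoginPage and ProfilePage each have their own Model, Msg, and update.

type Msg
    = LoginMsg LoginPage.Msg
    | ProfileMsg ProfilePage.Msg

type alias Model =
    { loginPage : LoginPage.Model
    , profilePage : ProfilePage.Model
    }

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        LoginMsg subMsg ->
            let
                ( newPage, cmd ) =
                    LoginPage.update subMsg model.loginPage
            in
            -- Re-wrap the child's commands into the parent's Msg type
            ( { model | loginPage = newPage }, Cmd.map LoginMsg cmd )

        ProfileMsg subMsg ->
            let
                ( newPage, cmd ) =
                    ProfilePage.update subMsg model.profilePage
            in
            ( { model | profilePage = newPage }, Cmd.map ProfileMsg cmd )
```

Every new page adds another Msg constructor, another Model field, and another near-identical branch here, plus matching branches in view and subscriptions.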

This is only part of the problem.

State Management Nightmare

Imagine when your user navigates to a page, you need to make multiple calls to the backend to retrieve data. There are two choices: make the requests in parallel or serially.


If you make them serially, then it takes longer to render the page but your code is simpler. You make a call to the backend, handle the response when it finishes, and then make the next call. You repeat this until you’ve received everything you need to finally render the page.

The processing is simple:

Serial code
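A minimal sketch of the serial version (Users, Orders, fetchOrders, and the field names are hypothetical, not our actual code):

```elm
-- Hypothetical serial fetch: each response triggers the next request.
type Msg
    = GotUsers (Result Http.Error Users)
    | GotOrders (Result Http.Error Orders)

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        GotUsers (Ok users) ->
            -- Save the first response in the Model, then start the next call
            ( { model | users = Just users }, fetchOrders GotOrders )

        GotOrders (Ok orders) ->
            -- Last response: everything is now in the Model, so we can render
            ( { model | orders = Just orders, pageReady = True }, Cmd.none )

        _ ->
            -- Error handling elided
            ( model, Cmd.none )
```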

Here, we squirrel away the data retrieved each time we get the results from the previous call until we have all of the data we need to finally render the page.

This is an overly simplistic code example, but it illustrates the overhead of State Management in the Serial case.


If we make requests in parallel, we need to handle responses as they come in asynchronously and in no particular order:

Parallel code
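A hedged sketch of the parallel version, with hypothetical message and field names; the point is the repeated, easy-to-get-wrong checking:

```elm
-- Hypothetical parallel fetch: all requests are issued at once, so every
-- branch must check whether the OTHER responses have already arrived.
type Msg
    = GotUsers (Result Http.Error Users)
    | GotOrders (Result Http.Error Orders)
    | GotPrefs (Result Http.Error Prefs)

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        GotUsers (Ok users) ->
            let
                newModel =
                    { model | users = Just users }
            in
            -- Must remember to check every other pending response here
            case ( newModel.orders, newModel.prefs ) of
                ( Just _, Just _ ) ->
                    ( { newModel | pageReady = True }, Cmd.none )

                _ ->
                    ( newModel, Cmd.none )

        GotOrders (Ok orders) ->
            let
                newModel =
                    { model | orders = Just orders }
            in
            -- ...and this check must be kept in sync with the one above
            case ( newModel.users, newModel.prefs ) of
                ( Just _, Just _ ) ->
                    ( { newModel | pageReady = True }, Cmd.none )

                _ ->
                    ( newModel, Cmd.none )

        -- GotPrefs branch and error handling elided
        _ ->
            ( model, Cmd.none )
```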

Notice how we have to painstakingly check the results of all the other calls each time, an error-prone process for sure. No help from the compiler here if you accidentally type data2 instead of data3.

Those new to Elm may think that this code could be helped by an async library. That was my first reaction.

I thought I could just build a function to do this, but you can’t because update functions must return to the Elm Runtime before all of the async calls have completed.

If you try to do this in a library, you’ll quickly realize that you’re burdening the caller with managing yet another update function and Model. The limitations of TEA really become apparent here.

In Javascript, you can create a library to do parallel async calls. But that’s because in Javascript you can specify callbacks that will return to the exact scope where the originating call was made.

In Elm, we always have to exit our scope forcing us to manage State unnecessarily. We have to manually squirrel away temporary data into our Model that would normally be in scope upon returning from one of the async calls.

Constantly exiting our scope is a necessary requirement in TEA so that the Elm Runtime can continue.

Spaghetti Execution

We use Websockets in our app, but since we’re no longer allowed to write our own Effects Managers and since the Websocket support was recently removed from Elm, we are forced to do Websocket communication over Ports and write our Websocket code in Javascript.

This means that we have to listen for messages coming back from the Websocket using Subscriptions in every single place Websockets are used.

We can’t give Javascript a Message constructor that’s been properly mapped to prepare the completion message for delivery to the code that originated the call.

This functionality is reserved for Effects Managers. So instead, everyone who calls the backend must listen to all responses and only process ones with a matching initiator id. But now we have to do more State Management to make sure we’re expecting this call.
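A sketch of what this matching looks like (the port names, Model fields, and requestId scheme are assumptions for illustration, not our actual code):

```elm
port module Socket exposing (..)

import Json.Decode as Decode
import Json.Encode as Encode

-- Hypothetical ports: requests go out with an id, responses come back tagged.
port sendToSocket : Encode.Value -> Cmd msg
port socketResponses : (Decode.Value -> msg) -> Sub msg

type Msg
    = GotSocketMessage Decode.Value

type alias Model =
    { pendingRequestId : Maybe String
    , result : Maybe String
    }

responseDecoder : Decode.Decoder { requestId : String, body : String }
responseDecoder =
    Decode.map2 (\id body -> { requestId = id, body = body })
        (Decode.field "requestId" Decode.string)
        (Decode.field "body" Decode.string)

-- Every caller must subscribe to ALL socket responses...
subscriptions : Model -> Sub Msg
subscriptions _ =
    socketResponses GotSocketMessage

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        GotSocketMessage value ->
            case Decode.decodeValue responseDecoder value of
                Ok response ->
                    -- ...and discard the ones it didn't initiate, which
                    -- means tracking pending ids in the Model by hand.
                    if Just response.requestId == model.pendingRequestId then
                        ( { model | result = Just response.body, pendingRequestId = Nothing }
                        , Cmd.none
                        )

                    else
                        ( model, Cmd.none )

                Err _ ->
                    ( model, Cmd.none )
```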

To make matters worse, the procedure that developers must go through to see how a single request is sent and how its response is handled is frustrating and tedious. They must weave through dozens of files, Ports, Subscriptions, Messages, Models, Updates, Javascript code, etc., before they can piece together a picture of how the application works.

What can be done

So how can we fix this? We can’t.

The real problem isn’t that the hammer is too small. It’s that the job is too big. And the carpenter had no idea how limiting the small hammer would be. Not until it was too late.

Learning the limitations of a tool is really important. And that’s the point of this article. It’s to inform all those out there thinking about using Elm that this is a small hammer and depending on your technology choices and/or requirements, you may hit some hard walls.

You can work around them, but the cost is complexity and lines of code. If you have a large team and an equally sized budget, or you expect your codebase to remain stable, then the added complexity isn’t Technical Debt you’ll worry about.

But that was not the case for us. The complexity curve grew exponentially as we pushed TEA to its limits. We don’t have “NoRedInk dollars” or their staff, which help mitigate such limitations. And to make matters worse, our codebase is brand new and will be changing for the foreseeable future.

So we are forced to live with the Technical Debt or pay an even higher price by moving the application to PureScript. Time will tell.

How to Proceed

I suggest building prototypes of your application to overcome what I call Technological Hurdles. These are all the technologies that you haven’t tried before. One of our hurdles was Websockets via Port calls from Elm.

Also, create a Proof of Concept for your application. For example, if you have a 20 page app, build 1 or 2 pages, depth first. It’s the depth in TEA that’s the killer. Not the breadth.

Make sure you implement your Encoders, Decoders and Fuzzers as soon as possible. This will help you judge the level of effort necessary for working with JSON, which in languages like PureScript and Haskell is nearly zero (thanks to Generics).
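To make that comparison concrete, here is a representative sketch of what even a tiny record costs in Elm (the User type is hypothetical); in PureScript and Haskell the equivalent decoder and encoder are typically derived generically:

```elm
import Json.Decode as Decode exposing (Decoder)
import Json.Encode as Encode

type alias User =
    { name : String
    , age : Int
    }

-- Every field must be listed by hand, in both directions.
userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.field "name" Decode.string)
        (Decode.field "age" Decode.int)

encodeUser : User -> Encode.Value
encodeUser user =
    Encode.object
        [ ( "name", Encode.string user.name )
        , ( "age", Encode.int user.age )
        ]
```

Multiply this by every record in your API, plus the Fuzzers for testing, and the 10% figure above starts to make sense.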

We didn’t have the time to do these steps BEFORE we embarked on our project. Mostly because we had to toss out over 30,000 lines of code when we found out that Elm would be removing Native code capabilities from regular developers, which put our project woefully behind schedule.

If you take the time to do this, you should start to see areas where complexity is creeping in and boilerplate is polluting your codebase.

If you don’t see it or it’s manageable then my best advice is to move forward with caution as you would using any new technology stack. It could just be your application requirements are perfect for the small hammer.

Unfortunately, for us, we bought the small hammer and then proceeded to use it like a jackhammer. Don’t let this happen to you.

Best of luck.



Charles Scalfani

Software Engineer and Architect, Teacher, Writer, Filmmaker, Photographer, Artist…