Why Programmers Need Limits

Charles Scalfani
15 min read · Aug 6, 2016

Limits make for better Art, Design and Life.

We come from a culture of No Limits or Push the Limits, but actually we need limits. We’re better off with them, but they need to be the right limits.

Censorship for Better Music

When there were externally imposed limits on what you could say in a song or book or movie, writers had to rely on metaphors to make a particular point.

Take the 1928 Cole Porter classic, Let’s Do It (Let’s Fall in Love). We all know what they meant by “It” and it’s not “Fall in Love”. I suspect they had to add the parenthetical to the title to avoid censorship.

Fast forward to 2011 and look at Three 6 Mafia’s Slob on my Knob. Except for the first stanza which actually is metaphorical, the rest of the lyric is on the nose.

Putting the merits of the artistry (or lack thereof) aside for a moment, Cole Porter’s song alludes to what Three 6 Mafia’s song foists upon us in excruciating detail, leaving nothing to the imagination.

The problem is, if you don’t subscribe to lovemaking as described in Three 6 Mafia’s lyrics, you’re going to find the song at worst vulgar and at best missing the mark. But with Cole Porter’s, the listener can conjure up their own personal lovemaking fantasy.

So limitations can give things a greater appeal.

The Shark is not Working

Steven Spielberg’s original plan in telling the story of Jaws was to show the shark. But the mechanical shark was constantly broken. Most of the time, they couldn’t show the shark, the star of the movie.

This movie, which ushered in the era of the Blockbuster, wouldn’t exist in its current incarnation if mechanical difficulties hadn’t imposed limits on what Spielberg could do.

Why is this film far superior to one where the shark is shown? Because, once again, the missing parts are filled in by each viewer. They take their own personal phobias and project them onto the screen. So the fear is PERSONALIZED for each viewer.

Animators have known this for years. Play the crash off screen, then cut to the aftermath. This has two benefits. You don’t have to animate the crash and the crash happens in the mind of the viewer.

Most people think they’ve seen Bambi’s mother get shot. Not only do we not see her get shot, we never see her AFTER she’s been shot. Yet people will swear that they’ve seen both. It’s NEVER shown.

So limitations can make things better. A lot better.

Choices, Choices Everywhere

Imagine you’re a painter and I ask you to paint me a picture and all I say is “Paint me something beautiful. Something I’ll really like.”

Now you go into your studio and sit there staring at a blank canvas. And stare and stare unable to paint. Why?

Because there are too many possibilities. You can literally paint anything. I’ve placed NO limits on you. And this is the problem that’s paralyzed you. This is known as the Paradox of Choice.

If instead, I said to paint me a landscape that I’ll like, that would at least eliminate half of the infinite possibilities. Even though there are still infinite possibilities left, any thought of a portrait can quickly be dismissed.

If I went even further and said that I love seascapes with waves crashing on the beach during a golden sunset, there would still be an infinite number of possible paintings, but those limitations actually help you think about what to paint.

And before you know it, you’re able to start sketching out a seascape.

So limitations can make creation easier.

Hardware is Easier Than Software

With hardware, you will never see a transistor or capacitor that’s shared by multiple components in the computer. Resistors in the keyboard circuitry cannot be accessed, shared or affected by the graphics card.

The graphics card has its own resistors that it exclusively owns. Hardware engineers don’t do this because they want to sell more resistors. They do it because they have no choice.

The Laws of the Universe dictate that this cannot be done without wreaking havoc. The Universe is imposing rules on Hardware Engineers, i.e. limiting what’s possible.

These limitations make thinking about and working with hardware easier than software.

Nothing is Impossible in Software

Now cut to software where nearly anything is possible. There’s nothing limiting a Software Engineer from sharing a variable with every part of the program. This is known as a Global Variable.
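To make the hazard concrete, here is a minimal sketch of a global mutable variable. The names `balance`, `deposit` and `withdraw` are hypothetical, chosen only for illustration:

```typescript
// Nothing prevents any part of a program from touching a global.
let balance = 100; // global mutable state, visible to the whole program

function deposit(amount: number): void {
  balance += amount;
}

function withdraw(amount: number): void {
  balance -= amount;
}

deposit(50);
withdraw(30);
// balance is now 120, but ANY function anywhere could also have
// reassigned it, and every caller would silently see the change.
```

The danger is not these two functions; it’s that every other function in the program has the same unrestricted access.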

In Assembly Language programming, you can just jump to any point in the code and start executing it. You can do this at any time. You can even write to data causing the program to execute unintended code. This is a method employed by hackers who exploit Buffer Overflow vulnerabilities.

Typically, the Operating System limits what your program can do to things outside of your program. But there are no limitations placed on what your program can do to the code and data that it owns.

It’s the lack of limits that makes writing and maintaining software so difficult.

How to Properly Limit Software Development

We know that we need limits in Software Development and we know, from experience, that limitations in other creative endeavors can benefit us.

We also know we cannot rely on society to randomly censor our code or on some mechanical deficiencies to limit our paradigms. And we cannot expect the users to specify the requirements to such a degree that it necessitates appropriate limits in design.

We must limit ourselves. But we want to make sure that these limitations are beneficial. So what limits should we pick and how should we go about deciding?

To answer that question, we must rely on our experience and years of practice to guide us in our pursuit of the appropriate limits. But the most useful tool we have is our past failures.

The pain of our past actions, e.g. touching the stove, informs us well as to what limitations we should put on ourselves if we want to avoid similar agony.

Let my People Go

In the early days, people would write programs with code jumping around all over the place. This was called Spaghetti Code since following this sort of code was like following a single strand in a bowl of spaghetti.

The industry realized that this practice was counterproductive and at first banned the use of the GOTO statement from code that was written in languages that allowed it.

Eventually, new programming languages were sold on the merits of not supporting GOTOs. These were known as Structured Programming languages. And today, mainstream high-level languages are largely GOTO-less.

When this occurred, a few complained that the new languages were too restrictive and that if they just had GOTOs, they could write the code more easily.

But more progressive minds won and we have them to thank for the extinction of such a destructive tool.

What the progressive minds realized is that code is read more than it’s written or changed. So it may be a little less convenient for a small group of feet-draggers but in the long run we’d be far better off with this limitation.

Computers can still do GOTOs. In fact, they need to. It’s just that we, as an industry, decided to limit programmers from directly using them. All computer languages compile to code that uses GOTOs. But language designers created constructs that employed more disciplined branching, e.g. using a break statement to exit a for loop.
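A small sketch of that disciplined branching, using the break example above:

```typescript
// break is a disciplined jump: the compiled code still performs a
// goto under the covers, but the programmer can only jump to one
// well-defined place, the end of the loop.
const xs = [3, 7, 42, 9];
let found = -1;
for (const x of xs) {
  if (x > 10) {
    found = x;
    break; // structured exit instead of an arbitrary GOTO target
  }
}
// found is 42
```

The reader of this loop knows exactly where control can go, which is the whole point of the limitation.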

The Software Industry has greatly benefited from limitations being dictated by language designers.

Bring on the Shackles

So what are the GOTOs of today and what do language designers have in store for us unsuspecting programmers?

To answer this, we should look to the current problems that we encounter on a daily basis.

  1. Complexity
  2. Reusability
  3. Global Mutable State
  4. Dynamic Typing
  5. Testing
  6. The Demise of Moore’s Law

How do we limit what programmers are allowed to do to solve the above problems?


Complexity

Complexity grows over time. What starts out as a simple system will, over time, evolve into a complex one. What starts out as a complex system will, over time, evolve into a mess.

So how do we limit programmers to help reduce complexity?

For one, we can force programmers to write code that is fully decomposed. While this is difficult to do if not downright impossible, we can create languages that both encourage and reward this behavior.

Many Functional Programming languages, especially pure ones, do both of these things.

Writing programs as computations, i.e. functions, forces you to write things in a very decomposed fashion. It also forces you to think through your mental model of the problem.

We can also force limitations on what programmers can do in their functions, i.e. make all functions pure. Pure Functions are ones with no side-effects, i.e. they cannot access data outside of themselves.

Pure Functions only deal with the data that’s passed to them and then they compute their results and return them. Every time you call a Pure Function with the same inputs, it will ALWAYS produce the same outputs.

This makes reasoning about Pure Functions far easier than non-pure ones, since everything they can do is fully contained within the function. You can also unit test them more easily since they are self-contained units. And if the computation is expensive, you can cache their results. Given the same inputs, the outputs are always the same, which is a perfect scenario for a cache.
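A small sketch of both ideas. `ringArea` is a pure function, and `memoize` is a hypothetical caching helper written for illustration, not a library function:

```typescript
// A pure function: the output depends only on the inputs.
const ringArea = (outer: number, inner: number): number =>
  Math.PI * (outer * outer - inner * inner);

// Because equal inputs always give equal outputs, caching is safe.
function memoize<T>(fn: (...args: number[]) => T): (...args: number[]) => T {
  const cache = new Map<string, T>();
  return (...args: number[]) => {
    const key = args.join(",");
    if (!cache.has(key)) cache.set(key, fn(...args)); // compute once
    return cache.get(key) as T; // every later call hits the cache
  };
}

const cachedRingArea = memoize(ringArea);
```

This transformation is only valid because `ringArea` is pure; memoizing a function that reads outside state would return stale answers.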

Limiting programmers to only pure functions greatly limits complexity since functions can only have a local effect and helps developers naturally decompose their solutions.


Reusability

The industry has been wrestling with this problem for almost as long as programming has been around. First we had libraries, then Structured Programming, then Object-Oriented Inheritance.

All of these approaches have had limited appeal and success. But the one method that always works, and is employed by nearly every programmer, is Copy/Paste, aka copypasta.

If you’re copying and pasting your code you’re doing it wrong.

We cannot prevent programmers from copying and pasting as long as we are still writing programs as text, but what we can do is give them something better.

Functional Programming has standard features that are far better than copypasta, viz. Higher-order Functions, Currying and Composition.

Higher-order Functions allow programmers to pass not just data but functions as parameters. In languages that do not support this, the only solution is to copy and paste the function and then edit the logic. With Higher-order Functions, the logic can be passed as a parameter in the form of a function.
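A minimal sketch, with hypothetical names chosen for illustration:

```typescript
// Without higher-order functions you'd copy applyTwice and edit its
// body for each new operation; here the logic is a parameter.
const applyTwice = (f: (n: number) => number, x: number): number => f(f(x));

const increment = (n: number): number => n + 1;
const double = (n: number): number => n * 2;

applyTwice(increment, 5); // 7
applyTwice(double, 5); // 20
```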

Currying allows for parameters to be applied to a function, one at a time. This allows programmers to write generalized versions of their functions and then “bake in” some of the parameters to create more specialized versions.
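Sketched in code (the names `add10` and `add100` are illustrative):

```typescript
// A curried function takes its parameters one at a time.
const add = (x: number) => (y: number): number => x + y;

// "Bake in" the first parameter to create specialized versions.
const add10 = add(10);
const add100 = add(100);

add10(5); // 15
add100(5); // 105
```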

Composition allows programmers to assemble functions like Legos™ allowing them to reuse functionality that they or others have built in a pipeline where data flows from one function to the next. Unix pipes are a simplistic form of this.
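Here is a minimal sketch. `compose2` is a small hypothetical helper, not a standard library function:

```typescript
// compose2 feeds the output of g into f, like a two-stage Unix pipe.
const compose2 = <A, B, C>(f: (b: B) => C, g: (a: A) => B) => (x: A): C =>
  f(g(x));

const trim = (s: string): string => s.trim();
const toUpper = (s: string): string => s.toUpperCase();

// Reuse two existing functions by snapping them together.
const shout = compose2(toUpper, trim);
shout("  hello  "); // "HELLO"
```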

So while we cannot eliminate copypasta, we can make it unnecessary through language support and through code reviews that disallow it in our code bases.

Global Mutable State

This is probably the biggest problem in programming that most people don’t realize is a problem.

Did you ever wonder why most program glitches are fixed by rebooting your computer or restarting the offending application? That’s because of state. The program has corrupted its state.

Somewhere in the program, state was altered in an invalid way. These are some of the most difficult bugs to fix. Why? Because they are so hard to reproduce.

If you cannot reproduce it reliably, you have no way of knowing if you’ve actually fixed it. You may test your fix and it doesn’t happen. But is that because you fixed it or because it hasn’t happened yet?

Proper management of state is the most important thing you can do in your program to ensure reliability.

Functional Programming solves this problem by placing limits on programmers at the language level. Programmers cannot create mutable variables.

At first, this may seem like they’ve gone too far and you may be sharpening your pitchforks as you read this. But when you actually work with such systems, you can see that state can be managed while making all data structures immutable, i.e. once a variable has a value it can never change.

This doesn’t mean that state cannot change. It just means that to do so you have to pass the current state into a function which produces a new state. Before you bit-twiddlers get out your pitchforks again, you can rest assured that there are mechanisms for optimizing these operations under the covers via Structural Sharing.
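A minimal sketch of that pattern, assuming a toy counter state (real FP languages enforce immutability in the language itself; `Object.freeze` here only helps catch accidental mutation):

```typescript
// State never mutates; an update takes the current state and
// returns a brand-new one.
type CounterState = { readonly count: number };

const initial: CounterState = Object.freeze({ count: 0 });

const increment = (state: CounterState): CounterState =>
  Object.freeze({ ...state, count: state.count + 1 });

const next = increment(initial);
// initial.count is still 0; next.count is 1
```

Because the old state is untouched, no other part of the program can ever observe a half-updated value.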

Note that under the covers is where mutations happen. Just like the old days when GOTOs were eliminated, the compiler or runtime still does GOTOs. It’s just not available to the programmer.

And when side-effects must happen, Functional Languages have ways of containing potentially dangerous parts of your program. In good implementations, these parts of the code are clearly marked as dangerous and segregated from the pure code.

And when 98% of your code is side-effect-free, bugs that corrupt state can only be in the 2%. This gives the programmer a fighting chance to find these sorts of bugs, since the dangerous parts are corralled.

So by limiting programmers to only (or mostly) pure functions, we create safer, more reliable programs.

Dynamic Typing

There’s another long and old battle over Static Typing vs. Dynamic Typing. Static Typing is where the variable’s type is verified at compile time. Once you define the type, the compiler can help you make sure that you’re using it correctly.

The arguments against Static Typing are that it puts an unnecessary burden on the programmer and that it litters up the code with verbose type information. And this type information is syntactically noisy because it’s inline with the definitions of functions.

Dynamic Typing is where the variable’s type is neither specified nor verified at compile time. In fact, most languages that use Dynamic Typing aren’t compiled languages.

The arguments against Dynamic Typing are that while it cleans up your code drastically, misuses of a variable won’t be caught until the program is run. This means that despite all best efforts, type bugs will make it into production.

So which is best? Since we are considering limiting programmers here, you might expect an argument for Static Typing in spite of the downside. Well, yes, but wouldn’t it be nice if we could have the best of both worlds?

As it turns out, not all Static Typing systems are created equally. Many Functional Programming languages support Type Inference where the compiler can infer the types of functions you’re writing by the way you use them.

This means we can have Static Typing without all of the overhead of specifying types. Best practices dictate that types be specified rather than merely inferred, but in languages like Haskell and Elm, the syntax for type annotations is non-obtrusive and quite helpful.
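TypeScript offers a taste of the same idea. In this sketch, neither variable carries an annotation, yet the compiler infers both types and catches misuse before the program ever runs:

```typescript
// nums is inferred as number[] and total as number; no annotations needed.
const nums = [1, 2, 3];
const total = nums.reduce((acc, n) => acc + n, 0);

// total.toUpperCase(); // compile-time error: number has no toUpperCase
```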

Non-functional languages, i.e. Imperative Languages, that are statically typed tend to burden the programmer with specifying types with little return on that investment.

In contrast, Haskell’s and Elm’s type systems actually help the programmer code better and inform them at compile time when the program will not function properly.

So by limiting programmers to good Static Typing, the compiler can actually help detect bugs, infer types and aid in coding instead of burdening the developer with verbose, intrusive, type information.


Testing

Writing test code is the bane of the modern programmer’s existence. Many times, developers spend more time writing test code than the code they are testing.

Writing test code for functions that interface with Databases or Web Servers is difficult if not downright impossible to automate. Usually, there are 2 options.

  1. Don’t write tests
  2. Mock-up the Database or Server

Option #1 is obviously not great, but many people take this path since mocking up complex systems can be more time-consuming than writing the module you’re trying to test.

But if we limit our code to Pure Functions, then it cannot interface directly with the Database, because that would cause side-effects or mutations. We still have to access the Database, but now the dangerous code is confined to a very thin interface layer, leaving the majority of the module pure.

Testing pure functions is far easier. But we still have to write test code, the bane of our existence. Or do we?

It turns out that there are programs to automatically test your functional programs. The only thing the programmer must provide is the properties their functions must abide by, e.g. what the inverse function is. The Haskell automated tester is called QuickCheck.
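The flavor of property-based testing can be sketched by hand (this is a toy illustration, not the real QuickCheck): state one property, then hammer it with random inputs.

```typescript
// Property: reversing a list twice must give back the original list.
const reversed = (xs: number[]): number[] => [...xs].reverse();

let holds = true;
for (let trial = 0; trial < 100; trial++) {
  // Generate a random list of random length for each trial.
  const xs = Array.from({ length: Math.floor(Math.random() * 20) },
    () => Math.floor(Math.random() * 1000));
  const twice = reversed(reversed(xs));
  if (twice.length !== xs.length || !twice.every((x, i) => x === xs[i])) {
    holds = false; // a counterexample would falsify the property
  }
}
// holds stays true: the property survived every random trial
```

The programmer writes one property; the machine writes the hundred test cases.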

So by limiting the majority of the functions to be pure, we make testing far easier and in some cases purely trivial.

The Demise of Moore’s Law

Moore’s Law isn’t really a law but more of an observation that the number of transistors on a chip, and with it computing power, roughly doubles every 2 years.

This held true for over 50 years. But sadly, we have reached the limits of the current silicon technology. And it could take decades to develop a non-silicon-based technology to build computers with.

Until then, the best way to double the speed of your computer is to double the cores, i.e. the number of computing engines in the CPU. The problem is not that hardware manufacturers cannot give us more cores. The problem is batteries and software.

Doubling the computing power means doubling the power consumed by the CPU. This will drain batteries even faster than today’s chips do. Battery technology is lagging far behind the insatiable appetite of users.

So before we go adding more cores to drain our batteries, maybe we should optimize the use of the cores we already have. This is where software comes in. Currently, Imperative Languages make it very difficult to run your programs in parallel.

To do that today, the burden is on the developer. The program needs to be sliced and diced into parallel parts. This is not a simple task. And in fact, with languages like JavaScript, programmers cannot control this since their code cannot run in parallel, i.e. it’s Single Threaded.

But with Pure Functions, it doesn’t matter what order they are run. All that matters is that the inputs are available. This means that the compiler or runtime system can determine which functions to run and when.
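A small sketch of why order doesn’t matter for pure functions (the parallelism itself is left to a runtime; this only demonstrates the order-independence that makes it safe):

```typescript
// square is pure, so applying it across the inputs in any order
// yields identical results; only input availability matters.
const square = (n: number): number => n * n;
const inputs = [1, 2, 3, 4];

const forwards = inputs.map(square);
const backwards = [...inputs].reverse().map(square).reverse();
// both are [1, 4, 9, 16]
```

If `square` read or wrote shared state, the two orders could disagree, and a parallel runtime could not safely reorder the calls.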

By limiting functions to pure functions, it frees the programmer up from having to worry about parallelism.

Functional programs should be able to take better advantage of multi-core machines without added complexity for the developer.

Doing More with Less

As we have seen, imposing limits, when done properly, can dramatically improve our Art, Design and Life.

Hardware Engineers have greatly benefited from the natural limitations of their tools making their jobs easier and allowing them to make great advancements in the past decades.

Isn’t it time that we, Software Engineers, impose limits on ourselves so that we too can do more with less?


Update circa 2021: I have a book that will teach you what I was promoting 5 years ago, Functional Programming Made Easier: A Step-by-Step Guide.

If you want to join a community of web developers learning and helping each other to develop web apps using Functional Programming in Elm please check out my Facebook Group, Learn Elm Programming https://www.facebook.com/groups/learnelm/

My Twitter: @cscalfani


