“(pre)Maturely Optimize…” revisited

I wrote an article for Script Junkie (Microsoft) a while back, and it was just published this week: (pre)Maturely Optimize Your JavaScript. In it, I make several against-the-grain assertions, which, not surprisingly, have ruffled quite a few feathers.

To start out with, I’m attacking head-on the prevailing “fear” around doing anything in your code that even remotely looks like optimization, lest the magical “Premature Optimization is the root of all evil” fairy come and slap you silly. I argue that in fact “Immature Optimization is the root of all evil”, and that what we as devs should be most concerned about is learning how to optimize maturely. Mature optimization requires engineer-like thinking, critical reasoning skills, pragmatic wisdom, and above all, a sense of how to properly achieve balance.

Contrary to some of the negative outcry from a few vocal members of the community, I do not think I’m advocating for anything that would rightly be considered “premature”. Some people say that any optimization is “premature” if you haven’t first seen that code fail under load in real conditions. To me, this is like learning to drive by driving first, having accidents, and correcting your driving as you go along.

That doesn’t mean data and benchmarking aren’t important; of course they are.


But it equally means that the mature and right thing to do is to ALSO be aware of common pitfalls before you hit them, and strive to write code that avoids those bad patterns. Many developers shy away from such approaches because they (I believe wrongly) assume that all “code optimization” leads to uglier or harder-to-maintain code. While this may sometimes be true, to an extent, it’s not a hard objective fact, and there are plenty of optimizations that can become defaults in your coding style without significantly detracting from the readability of your code.

In fact, I believe the 7 before-and-after comparison snippets that I show in the article illustrate exactly that: with a few very minor changes, changes which do not render the code unmaintainable, you can improve the performance of your code, sometimes by a tiny amount and sometimes by a huge one.

In NO case would I advocate significantly uglier or harder-to-maintain code for only a 3-10% increase in speed in non-critical code. But a balanced, performance-savvy perspective on every line of code you write will tend to help you avoid the pitfalls, and let you focus your valuable energies on much more important tasks.

Also, notice: I am not focusing on esoteric and minute micro-performance details (as some do, and many assumed I was). I made no mention of things like array.push() vs. array[array.length], or str += "..." vs. str = arr.join(), or ++i vs. i++, or for (i=0; i < arr.length; i++) {...} vs. for (i=0, len=arr.length; i<len; i++) {...}, etc etc etc.
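(For anyone unfamiliar, here is a rough sketch of what those micro-level comparisons look like. These snippets are illustrative only, not from the article, and they are exactly the kind of trivia I’m deliberately skipping:)

    var arr = [], parts = ["a", "b", "c"], str = "", i, len;

    // array.push() vs. direct index assignment
    arr.push("item");
    arr[arr.length] = "item";

    // repeated string concatenation vs. building an array and joining
    str += "...";
    str = parts.join("");

    // uncached vs. cached loop length
    for (i = 0; i < arr.length; i++) { /* arr.length re-read every pass */ }
    for (i = 0, len = arr.length; i < len; i++) { /* length read once */ }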

The optimizations I focused on were broader, pattern-level and algorithmic in nature, rather than comparisons of native operators and functions against each other, which is a fruitless task these days, with the browser speed wars still raging strong.

Profiling, “profiling”

If you already have existing code that’s a potential candidate for some optimization TLC, as Paul Irish pointed out to me, the prudent thing to do is of course to employ a script profiler. There are several tools that can profile run-time JavaScript performance — for instance, IE9’s Developer Tools include a built-in profiler. Just yesterday, I used it to help my co-workers track down a critical performance bug in one of our client projects.
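And even without a full profiler, a crude timing harness can at least point you toward a hot spot. A minimal sketch (doSuspectWork() is just a hypothetical stand-in for whatever block you suspect is slow):

    var start = new Date().getTime();

    doSuspectWork(); // hypothetical: the code you suspect is slow

    var elapsed = new Date().getTime() - start;
    if (window.console && console.log) {
        console.log("suspect block took " + elapsed + "ms");
    }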

I wish I had mentioned the process of profiling specifically in the Script Junkie article. It wasn’t really the point of the article — the point was, once you’ve identified a block of code to attack, here are some easy ways to do it — but it would have been helpful to touch on briefly. Paul was right to point out that omission.

On the other hand, the other implied goal of my article was to say that sometimes, our own brains and critical thinking skills can be effective at “profiling” the code we’re writing, as we write it. I regularly find myself halfway through an implementation and realize that something I’ve done is likely to lead to a performance hit later. I find that it’s usually not too difficult to adjust my approach right as I’m coding.

How do I so easily recognize a performance anti-pattern as I’m writing it? I understand performance and code with a performance-savvy mentality as my default. I also have plenty of prior experience that helps me avoid repeating mistakes. And most of all, I subscribe to the belief that I should always be trying to learn from others who are knowledgeable and passionate about performance, to gain from their experience and wisdom as well.

This is the same mentality I’m advocating for other developers to adopt. I call that “Mature Optimization”.


I’ve made the claim a number of times, and I stand by it: the TCO (Total Cost of Ownership) for non-performant code is higher than for performant code. In other words: pay a little now, by taking the effort to write performant code (where practical) the first time, or pay more later, if/when you have to refactor that code to address performance concerns.

First of all, in the corporate world (where I’ve held many jobs), you often never get a second chance to come back and fix the poorly written code you had to throw together to get it out the door on deadline. We tell ourselves we’ll come back and fix the code later, but in reality, we rarely do. And if we do get to come back to it, it’s usually under extreme pressure because something fell apart in production under real load, and now your boss is pissed at you.

Not only does performance refactoring cost, in general, “double” the coding time (coding it once, then coding it again), but it also costs a lot more time when unfortunate but often inevitable regressions are introduced and must be corrected. Performance “refactoring” is more risky.

And why do I make that assertion?

Because unit-testing is not the de facto standard in the corporate world like it should be. In fact, of the dozen or so jobs I’ve held in the corporate web-application industry over my career, maybe 1 or 2 of them actually took unit tests seriously. Most of the time, integration tests (and not even comprehensive ones) were the best we could ever get our boss to approve time for us to write.

I know we’d all like to sit on happy island where unit-testing (or TDD) is a reality and we all have 100% test coverage. But it isn’t that way in the real world, by and large.

When you don’t have proper unit-tests, performance “refactoring” is very risky. The conditions under which you are forced to do it conspire against you, and you’re just bound to mess something up, usually in a subtle way you don’t realize just then.

And, btw, even if you had a full regression test-suite that ran, and immediately notified you that your “refactor” just broke something, you still have to spend more time (often costly time) tracking down the cause of that regression, and the regression that fix causes, and so on… down the rabbit hole we go.

It’s crazy to think that internally refactoring a function’s implementation details would cause other side effects, right? Sure, because in the corporate world, we get plenty of time to write 100% high quality code with no shortcuts or assumptions. We never stoop to using a quick global variable instead of figuring out the proper closure-scoping approach, when our boss is breathing heavy down our neck. Our code is always perfectly self-contained and well-patterned, right?

Except, no it isn’t, because that’s why we have this performance bug cropping up in the first place — because we had to cut corners to get code out the door.
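Here’s a contrived sketch of that kind of corner-cutting, and of how an innocent-looking perf “refactor” can then break code far away from the function being touched (all names here are hypothetical):

    // the shortcut: skipping `var` makes `count` an implicit global
    function processItems(items) {
        count = 0; // oops: no `var` -- the corner we cut under deadline
        for (var i = 0; i < items.length; i++) {
            count++;
            // ...process items[i]...
        }
    }

    // elsewhere, other code quietly grew to depend on that global
    function reportStatus() {
        return count + " items processed";
    }

    // later, a performance "refactor" properly scopes the variable...
    function processItems(items) {
        var count = 0; // now a local
        for (var i = 0, len = items.length; i < len; i++) {
            count++;
        }
    }
    // ...and reportStatus() now throws a ReferenceError: the global
    // `count` it relied on is never created anymore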


Some vocal critics have accused the code snippets in my article of being “premature optimization” because the improvements I suggest are “micro” or won’t have any noticeable impact on performance. I disagree with that assertion. And I created some jsPerf.com tests to illustrate why.

The first versions of my tests were a little rough and ill-formed, and Rick Waldron pointed out those flaws. I believe I’ve adjusted the tests to more accurately depict the point that each snippet comparison in the article was trying to make.

NOTE: these tests are not going to look like your typical tests where you compare two native operators or something low-level like that. In several cases, the algorithms between the test cases are intentionally quite different. It should be obvious that one will run much quicker than the other, because it’s doing a lot less work (or a quite different kind of task). The whole basis for these examples in the article is to show how a common, natural code pattern that’s perfectly semantic and self-descriptive or “object-oriented” or whatever can suffer from (sometimes subtle) performance degradation. And sometimes, an effective way to improve performance is to think about a slightly different pattern or algorithmic approach. I’m trying to help you “see the forest above the trees”.

  1. test case: function call, inner loop
  2. test case: bulk operations or iterators
  3. test case: $(this) or collection-faking
  4. test case: scope lookup, cached or not
  5. test case: prototype chain lookup, cached or not
  6. test case: nested property lookup, cached or not (see the sketch after this list)
  7. test case: dom append or dom-fragment append (see the sketch after this list)
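To give a flavor of what a couple of those comparisons are getting at, here are simplified sketches of the patterns behind test cases 6 and 7 (not the exact jsPerf code; app.config.settings is a made-up nested object, and the page is assumed to contain an element with id "list"):

    // nested property lookup, cached or not (test case 6, roughly)
    var app = { config: { settings: { pageSize: 10 } } };
    var i, val;
    for (i = 0; i < 1000; i++) {
        val = app.config.settings.pageSize;  // walks the whole chain each pass
    }
    var pageSize = app.config.settings.pageSize; // cache the lookup once
    for (i = 0; i < 1000; i++) {
        val = pageSize;                          // single local-variable read
    }

    // dom append vs. dom-fragment append (test case 7, roughly)
    var list = document.getElementById("list"), frag;
    for (i = 0; i < 1000; i++) {
        list.appendChild(document.createElement("li")); // live DOM touched every pass
    }
    frag = document.createDocumentFragment();
    for (i = 0; i < 1000; i++) {
        frag.appendChild(document.createElement("li")); // built off-DOM
    }
    list.appendChild(frag); // one live-DOM touch, total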

Each of those tests has a table below it showing the captured/averaged Browserscope test-run results, per browser. For instance, consider the test results (so far) for the first test case (“function call, inner loop”):

Consider the Chrome 9 results: 14,101 operations/sec vs. 15,607 operations/sec. That’s a difference of 1,506 operations/sec, which is 10.7% faster. What did I do between those two tests? I simply inlined the isOdd() implementation into the loop, instead of making a function call. Is 10.7% life-shattering or huge? No, probably not. But it’s an example of a very common coding pattern that is slower. With slightly more thought, in cases where it’s possible/appropriate to do so, we can speed things up by 10.7% without costing much by way of development time, and without the code becoming appreciably harder to maintain.
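Roughly, that before/after pattern looks like this (a simplified sketch, not the exact jsPerf test code):

    // before: a function call inside the inner loop
    function isOdd(num) {
        return (num % 2) === 1;
    }
    var count = 0, i;
    for (i = 0; i < 10000; i++) {
        if (isOdd(i)) { count++; }      // call overhead paid on every iteration
    }

    // after: the same check inlined into the loop body
    count = 0;
    for (i = 0; i < 10000; i++) {
        if ((i % 2) === 1) { count++; } // same logic, no per-iteration call
    }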

And, if you find that you are writing code that is a little uglier, or maybe is a little less self-explanatory, and you’re worried that this code may cause you (or some other dev) a maintenance problem later, there’s a sure-fire easy solution: WRITE A FRIGGIN’ COMMENT TO EXPLAIN WHY THE CODE NEEDS TO BE THAT WAY.

In fact, let me just say this: if you follow my advice and code with performance patterns in mind from the start, and you find yourself writing a piece of code that is just horribly ugly, and no amount of comments can adequately explain why it has to be that way…

STOP and back up. You have my permission to code NON-performantly in that situation, if you simply can’t do so in a reasonably clean or comment-explainable way. Sheesh, was that so hard? Am I really a crazy loon?


Quit listening to all the negative hype about “premature optimization”. It’s mostly hot air with no substance. It’s a bunch of developers following what other developers say, mostly blindly. And the few developers who actually have a reason for rallying against “premature optimization”, they’re probably mad about it not because the optimization was premature, but because it was immature.

So, let’s all just try to grow up and mature a little bit with how we approach performance-savvy coding. We’ll all get a lot more quality code written that way.
