I’ve had some troubling and challenging thoughts swirling around in my weird brain for the last couple of months, and then two days ago, my good friend and fellow Austin.JavaScript attendee Alex Sexton prodded me with this little jab, which I obviously take light-heartedly and in good spirits. But it definitely has me thinking even more. I decided it was time to jot down some of these thoughts.

Thinking about what, you ask? Optimization. Is it good or not? Is there such a thing as too much (or, as they say, “premature”) optimization? Does it really matter if you cut corners in optimization when, in the real world, few if any ever notice the difference? “If a tree falls in the forest and no one is around to hear it….”

It’s certainly no secret that I’m a complete performance optimization nerd. I consider myself a Padawan learner under the likes of the incomparable Steve Souders. And just look at projects like LABjs (“the performance script loader”) and 2static.it (cookie-less static resource loading), and you’ll see that I spend a lot of my time thinking (okay, obsessing) about these topics.

And they are not simple topics to ponder. In fact, they are quite complex. It’s not quite as easy as just comparing the “big-O” notation of two algorithms and declaring one to be the winner, as we were so fond of doing in CS 101. Survey the 40-something rules in YSlow and PageSpeed and you’ll see that many of them are contradictory! And that’s just the tip of the iceberg. Start considering the tradeoffs you make in your code when deciding how to write your for-loops, etc. There’s a lot to think about and balance.

Yeah, we’ve all got a long way to go at artfully balancing what we optimize and how we optimize it.

Duck for cover!

I take a lot of flak (both the fun kind and the not-fun kind) for being so obsessed with all facets of web performance optimization. But I can’t really help it; it’s kind of the default way that I think. I am, as the Velocity Conference so keenly states, a “Fast By Default” kind of developer. Very little bugs the bejeezus out of me more than seeing (or worse, experiencing) poorly performing code, especially when it’s so easy to at least start addressing such issues.

I’m not going to go into details on all the various ways that you can look at addressing the performance of code and architecture (that’s the stuff of many a book). But know that I’m talking about both the small things and the big things. This is both a macro-topic and a micro-topic.

It’s an issue of how your code performs vs. how your code is maintained. But it’s also an issue of weighing how much it matters that your resources are not being properly cached vs. meeting a tight customer deadline to take heat off the boss’s neck. Like I said, these are weighty topics with no clear answers, and many, many different factors weighing in.

For better or worse, I often find myself in the minority opinion on such topics, arguing for the principle of efficiency and performance even if the gains are not directly measurable or tangible. And I understand why many who disagree with me feel the way they do.

I work in those same pragmatic job environments where the release deadlines are of utmost importance, and the extra 200ms that an inefficient O(n²) search algorithm takes is not something anyone around me cares to spend an extra day addressing.
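To make that kind of tradeoff concrete, here’s a hypothetical sketch (not code from any actual project of mine) of the classic O(n²)-vs-O(n) search pattern: scanning one array for each element of another vs. building a lookup table once.

```javascript
// Hypothetical illustration: find which items in `a` also appear in `b`.

// O(n^2): for each element of `a`, linearly scan all of `b`
function intersectNaive(a, b) {
  var out = [];
  for (var i = 0; i < a.length; i++) {
    for (var j = 0; j < b.length; j++) {
      if (a[i] === b[j]) {
        out.push(a[i]);
        break;
      }
    }
  }
  return out;
}

// O(n): index `b` once in an object map, then do cheap lookups
// (object keys coerce to strings, which is fine for this sketch)
function intersectIndexed(a, b) {
  var seen = {}, out = [], i;
  for (i = 0; i < b.length; i++) seen[b[i]] = true;
  for (i = 0; i < a.length; i++) {
    if (seen[a[i]]) out.push(a[i]);
  }
  return out;
}
```

Both produce the same results; for small arrays nobody will ever notice the difference, which is exactly the point of contention.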

In a recent informal technology-related gathering, one such conversation sparked to life, and as usual, I found myself playing the devil (or devil’s advocate, depending on your perspective) and really trying to prod at the topic of whether or not coding with optimization as primary motivator (at the expense sometimes of code readability) was a valid approach to take.

The specific topic was Array.forEach() vs. a regular for-loop that manually iterates an array’s elements. But I’m not here to re-hash that whole conversation. It was the outcome of the conversation that made the bigger impression on me.

…or give me death!

I was arguing that even though the performance gains of the native for-loop might only equal a few ms at best (for most real-world data sets) over the Array.forEach syntax (function call overhead, blah blah), the fact that it was more efficient was sufficient justification, even without any tangible measurability, to accept the tradeoff of more complicated (less syntactically “pretty”) code. In fact, I believed then, and still do, that the pattern of writing efficient code pays off in the long run by all those little gains adding up.
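For context, here’s a minimal sketch of the two styles under discussion. Exact timings vary wildly by engine and data size, so treat this as illustrative, not a rigorous benchmark:

```javascript
// Build some sample data to iterate over.
var data = [];
for (var i = 0; i < 100000; i++) data.push(i);

// Style 1: Array.forEach -- more readable, but incurs a
// function call per element
var sum1 = 0;
data.forEach(function (n) {
  sum1 += n;
});

// Style 2: manual for-loop -- less "pretty", but no per-element
// call overhead (length is also cached up front)
var sum2 = 0;
for (var j = 0, len = data.length; j < len; j++) {
  sum2 += data[j];
}

// sum1 === sum2 -- the two styles compute identical results;
// the debate is purely about the cost of getting there.
```

The difference per iteration is tiny; the argument is about whether those tiny differences, multiplied across a codebase, are worth caring about.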

One astute member of the group quipped “you’ll never die by a thousand paper cuts” because “you never get all the cuts at once”. In other words, if I save those few ms there, and even in 15 other places in my code, I still will not get enough gains to even be measurable, or to constitute a performance problem worth addressing.

As this logic goes, the likelihood of having enough of those bad patterns all conspire at the same time to drag down the performance to a noticeable 200ms is so low that it’s not really a useful developer pattern (at least for the common man developer). “Let framework developers and JavaScript engine designers worry about such ‘weeds’ and just stick to the main path.”

Let me get right to my point in recounting this interchange (because it’s really not to re-visit this specific topic, but the overall lessons gleaned).

It’s not death-by-a-thousand-paper-cuts that worries me. It’s death by the 900 paper-cuts you don’t know about and have never thought about before.

This post, more than anything, is an advocacy plea to start really thinking about performance issues more in depth. What concerns me from my plethora of jobs in this industry so far in my young career is how widespread the apathy is to such issues. If we never think about such things, we’ll never be properly prepared to handle the ones that really do matter, especially when faced with a crisis requiring fast and decisive action to address the mess.


I did actually come away from that (and many other) conversations with a more complete picture of how to think about such balancing acts, and the value of tempering principles with pragmatics.

But I still hold fast to this ethos in my own development career: It is better to err on the side of more performant than on the side of less performant.

I know that not every minute performance optimization is going to be worth the uglier code. In fact, a lot won’t be “worth it”. It’s quite likely that I will “over” (or “prematurely”) optimize in my code. But that’s not going to cause more starving children in Ethiopia. It’s gonna make my code a little harder to maintain, perhaps, though I can help that with good comments.

But I’d rather write with a default attitude of attacking problems in the performant way, and then when allowed to refactor, find specific areas to simplify, than the other way around. I believe it’s far easier, and less risky, to refactor from more performant code to less performant code, than the other way around. I know a lot of people will disagree with that statement. But I truly believe it.

Overall, here’s my takeaway to leave you with: The TCO (total-cost-of-ownership) on non-performant code/architecture is much higher than on performant code/architecture.

If you invest those pains little-by-little as you write code (doing so in a judicious and balanced manner), you will spend less (time, money, etc.) in the long run. You trade the short-term pains for the long-term payoffs. That’s time-honored and tested wisdom, and I think it would behoove a lot of us to apply it to our jobs a little more often than we currently do.

This entry was written by getify, posted on Thursday, September 30, 2010 at 02:09 pm, filed under JavaScript, Performance Optimization, and UI Architecture.

3 Responses to “Optimize much?”

  • Benjamin says:

    Perfectly worded. I often get into a similar debate with other devs, both senior and junior, on my team (roughly 13 of us at this point). But it is not limited to code optimization; it’s also about bug fixing. We have a pretty good QA team that really rips our components apart, so bugs are found on an ongoing basis. If you address optimization and bug fixes as you go, then when you get to the end of the project you are not scrambling to get it all done.

  • Cedric Dugas says:

    I am not sure this can be true all the time, in all teams. I’m all for optimizing code, but it also depends on what kind of team you’ve got.

    If you are the only expert in JavaScript optimization, you place yourself as a ‘hero’ by using complex constructs in your code to save a few ms. When your team can’t edit your code, your company does not save money.

    Personally, I prefer easily readable code with good optimizations over the most optimized code that is harder to read.

  • getify says:

    @Benjamin — It’s sad and scary how pervasive the apathy toward (and avoidance of) such topics is. We must combat it one step at a time.

    @Cedric — I agree that not all teams are mature enough for this approach right now. But that doesn’t mean that you can’t start teaching your team better coding habits. Try brown-bag lunches at work once a month where you teach co-workers about proper (and optimized) coding. Try advocating for performance-optimization as a feature in your product cycles. Try anything to raise the level of performance awareness. Try…

    But don’t just keep a faux sense of security in “premature optimization is evil” and “we’ll optimize when we get around to it and it’s important enough to hurt if we don’t.” Those mindsets are responsible for a lot of really bad code in the world.
