Thinking about what, you ask? Optimization. Is it good or not? Is there such a thing as too much (or, as they say, “premature”) optimization? Does it really matter if you cut corners in optimization when, in the real world, few if any ever notice the difference? “If a tree falls in the forest and no one is around to hear it….”
It’s certainly no secret that I’m a complete performance optimization nerd. I consider myself a Padawan learner under the likes of the incomparable Steve Souders. And just look at projects like LABjs (“the performance script loader”) and 2static.it (cookie-less static resource loading), and you’ll see that I spend a lot of my time thinking (or rather, obsessing) about these topics.
And they are not simple topics to ponder. In fact, they are quite complex. It’s not quite as easy as just comparing the “big-O” notation of two algorithms and declaring one to be the winner, as we were so fond of doing in CS 101. Survey the 40-something rules in YSlow and PageSpeed and you’ll see that many of them are contradictory! And that’s just the tip of the iceberg. Start considering the tradeoffs you make in your code when deciding how to write your for-loops, etc. There’s a lot to think about and balance.
Yeah, we’ve all got a long way to go at artfully balancing what we optimize and how we optimize it.
Duck for cover!
I take a lot of flak (both the fun kind and the not-fun kind) for being so obsessed with all facets of web performance optimization. But I can’t really help it; it’s kind of the default way that I think. I am, as the Velocity Conference so keenly states, a “Fast By Default” kind of developer. Very little bugs the bejeezus out of me more than seeing (or worse, experiencing) poorly performing code, especially when it’s so easy to at least start addressing such issues.
I’m not going to go into details on all the various ways that you can look at addressing the performance of code and architecture (that’s the stuff of many a book). But know that I’m talking about both the small things and the big things. This is both a macro-topic and a micro-topic.
It’s an issue of how your code performs vs. how your code is maintained. But it’s also an issue of how much it matters that your resources are not being properly cached vs. meeting a tight customer deadline to take heat off the boss’s neck. Like I said, these are weighty topics with no clear answers, and many, many different factors weighing in.
For better or worse, I often find myself in the minority opinion on such topics, arguing for the principle of efficiency and performance even if the gains are not directly measurable or tangible. And I understand why many who feel the opposite of me feel the way they do.
I work in those same pragmatic job environments where the release deadlines are of utmost importance, and the extra 200ms that an inefficient O(n²) search algorithm takes is not something anyone around me cares to spend an extra day addressing.
In a recent informal technology-related gathering, one such conversation sparked to life, and as usual, I found myself playing the devil (or devil’s advocate, depending on your perspective), really trying to prod at whether coding with optimization as the primary motivator (sometimes at the expense of code readability) is a valid approach to take.
The specific topic was Array.forEach() vs doing a regular for-loop and manually iterating an array’s elements. But I’m not here to re-hash that whole conversation. The outcome of the conversation is what was more impactful to me.
…or give me death!
I was arguing that even though the performance gains of the native for-loop might only equal a few ms at best (for most real-world data sets) over the Array.forEach syntax (function call overhead, blah blah), the fact that it was more efficient was sufficient justification (without any tangible measurability) over the tradeoff of more complicated (less syntactically “pretty”) code. In fact, I believed then, and still do, that the pattern of writing efficient code pays off in the long run by all those little gains adding up.
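For concreteness, here’s a minimal sketch of the two styles in question (the data is made up for illustration, and this is not a benchmark):

```javascript
// Hypothetical array; in a real argument this would be real-world data.
var items = [1, 2, 3, 4, 5];

// Array.forEach: arguably prettier, but pays a function call per element.
var sum1 = 0;
items.forEach(function (n) {
  sum1 += n;
});

// Plain for-loop: more verbose, but no per-element call overhead, and the
// length lookup is hoisted out of the loop condition.
var sum2 = 0;
for (var i = 0, len = items.length; i < len; i++) {
  sum2 += items[i];
}

// Both compute the same result; the debate is purely over which
// tradeoff (readability vs. raw efficiency) should win by default.
```

The for-loop version is what I was defending: a few more characters of ceremony in exchange for the more efficient iteration pattern.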
One astute member of the group quipped “you’ll never die by a thousand paper cuts” because “you never get all the cuts at once”. In other words, even if I save those few ms there, and in 15 other places in my code, I still will not get enough gains to even be measurable, let alone a performance problem worth addressing.
Let me get right to my point in recounting this interchange (because it’s really not to re-visit this specific topic, but the overall lessons gleaned).
It’s not death-by-a-thousand-paper-cuts that worries me. It’s death by the 900 paper-cuts you don’t know about and have never thought about before.
This post, more than anything, is an advocacy plea to start really thinking about performance issues more in-depth. What concerns me, from my plethora of jobs in this industry so far in my young career, is how widespread the apathy is toward such issues. If we never think about such things, we’ll never be properly prepared to handle the ones that really do matter, especially when faced with a crisis requiring fast and decisive action to address the mess.
I did actually come away from that (and many other) conversations with a more complete picture of how to think about such balancing acts, and the value of tempering principles with pragmatics.
But I still hold fast to this ethos in my own development career: It is better to err on the side of more performant than on the side of less performant.
I know that not every minute performance optimization is going to be worth the uglier code. In fact, a lot won’t be “worth it”. It’s quite likely that I will “over” (or “prematurely”) optimize in my code. But that’s not going to cause more starving children in Ethiopia. It’s gonna make my code a little harder to maintain, perhaps, though I can help that with good comments.
But I’d rather write with a default attitude of attacking problems in the performant way, and then when allowed to refactor, find specific areas to simplify, than the other way around. I believe it’s far easier, and less risky, to refactor from more performant code to less performant code, than the other way around. I know a lot of people will disagree with that statement. But I truly believe it.
Overall, here’s my takeaway to leave you with: The TCO (total-cost-of-ownership) on non-performant code/architecture is much higher than on performant code/architecture.
If you invest those pains little-by-little as you write code (but doing so in a judicious and balanced manner), you will spend less (time, money, etc) in the long run. You trade the short term pains for the long term payoffs. That’s a time-honored and tested set of wisdom, and I think it would behoove a lot of us to apply that to our jobs a little more often than we currently do.