Optimize much?

I’ve had some troubling and challenging thoughts swirling around in my weird brain for the last couple of months, and then two days ago, my good friend and fellow Austin.JavaScript attendee Alex Sexton prodded me with this little jab, which I obviously take light-heartedly and in good spirits. But it definitely has me thinking even more. I decided it was time to jot down some of these thoughts.

Thinking about what, you ask? Optimization. Is it good or not? Is there such a thing as too much (or, as they say, “premature”) optimization? Does it really matter if you cut corners in optimization when, in the real world, few if any will ever notice the difference? “If a tree falls in the forest and no one is around to hear it….”

It’s certainly no secret that I’m a complete performance optimization nerd. I consider myself a Padawan learner under the likes of the incomparable Steve Souders. And just look at projects like LABjs (“the performance script loader”) and 2static.it (cookie-less static resource loading), and you’ll see that I spend a lot of my time thinking (okay, obsessing) about these topics.

And they are not simple topics to ponder. In fact, they are quite complex. It’s not quite as easy as just comparing the “big-O” notation of two algorithms and declaring one to be the winner, as we were so fond of doing in CS 101. Survey the 40-something rules in YSlow and PageSpeed and you’ll see that many of them are contradictory! And that’s just the tip of the iceberg. Start considering the tradeoffs you make in your code when deciding how to write your for-loops, etc. There’s a lot to think about and balance.
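To make that concrete with a contrived sketch (the function names and scenario here are mine, purely for illustration): the textbook O(log n) winner doesn’t automatically win in practice, because constant factors, data size, and the JS engine all weigh in.

```js
// Contrived sketch: big-O alone doesn't pick the winner.
// For small arrays, the "worse" O(n) linear scan can beat the
// O(log n) binary search because its per-step cost is so low.

function linearSearch(arr, target) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

function binarySearch(sortedArr, target) {
  var lo = 0, hi = sortedArr.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    if (sortedArr[mid] === target) return mid;
    if (sortedArr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

// Which one actually "wins" depends on array size, data shape,
// and the engine you're running in, not just the big-O labels.
```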

Yeah, we’ve all got a long way to go at artfully balancing what we optimize and how we optimize it.

Duck for cover!

I take a lot of flak (both the fun kind and the not-fun kind) for being so obsessed with all facets of web performance optimization. But I can’t really help it; it’s kind of the default way that I think. I am, as the Velocity Conference so keenly puts it, a “Fast By Default” kind of developer. Few things bug the bejeezus out of me more than seeing (or worse, experiencing) poorly performing code, especially when it’s so easy to at least start addressing such issues.

I’m not going to go into details on all the various ways that you can look at addressing the performance of code and architecture (that’s the stuff of many a book). But know that I’m talking about both the small things and the big things. This is both a macro-topic and a micro-topic.

It’s an issue of how your code performs vs. how your code is maintained. But it’s also an issue of how much it matters that your resources aren’t being properly cached vs. meeting a tight customer deadline to take heat off the boss’s neck. Like I said, these are weighty topics with no clear answers, and many, many different factors weighing in.

For better or worse, I often find myself in the minority opinion on such topics, arguing for the principle of efficiency and performance even when the gains aren’t as directly measurable or tangible. And I understand why many who disagree with me feel the way they do.

I work in those same pragmatic job environments where release deadlines are of utmost importance, and the extra 200ms that an inefficient O(n²) search algorithm takes is not something anyone around me cares to spend an extra day addressing.
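For the sake of illustration (a hypothetical sketch, not code from any real project of mine), that “extra day” is often just replacing a nested-loop lookup with a one-pass index:

```js
// Hypothetical example: matching each order to its user.

// The inefficient O(n^2) version: scan all users for every order.
function attachUsersSlow(orders, users) {
  for (var i = 0; i < orders.length; i++) {
    for (var j = 0; j < users.length; j++) {
      if (users[j].id === orders[i].userId) {
        orders[i].user = users[j];
        break;
      }
    }
  }
  return orders;
}

// The roughly O(n) version: build an id -> user index once,
// then do a constant-time lookup per order.
function attachUsersFast(orders, users) {
  var byId = {};
  for (var j = 0; j < users.length; j++) {
    byId[users[j].id] = users[j];
  }
  for (var i = 0; i < orders.length; i++) {
    orders[i].user = byId[orders[i].userId];
  }
  return orders;
}
```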

At a recent informal technology-related gathering, one such conversation sparked to life, and as usual I found myself playing the devil (or devil’s advocate, depending on your perspective), prodding at whether coding with optimization as the primary motivator (sometimes at the expense of code readability) is a valid approach to take.

The specific topic was Array.forEach() vs. a regular for-loop that manually iterates over an array’s elements. But I’m not here to re-hash that whole conversation. It’s the outcome that had the bigger impact on me.
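For context, here’s the shape of the tradeoff being argued (my own minimal sketch of the two styles, not a transcript of the conversation):

```js
var nums = [1, 2, 3, 4, 5];

// The "readable" form: one callback invocation per element.
var doubled = [];
nums.forEach(function (n) {
  doubled.push(n * 2);
});

// The hand-rolled form: no callback overhead, length cached up front.
var doubled2 = [];
for (var i = 0, len = nums.length; i < len; i++) {
  doubled2.push(nums[i] * 2);
}
```

Which of those two you reach for by default is exactly the readability-vs-performance question this whole post is circling.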

…or give me death!
