A Smart(er) Transpilation Strategy

Welcome aboard the fast-moving ES6 hype train! I assume you’re here because your Dilbert-like boss said at some point recently in a meeting, “I heard we need an ES6. Get on that right away.” So naturally you googled “ES6”, and a few minutes later, you googled “ES6 in IE6”. Yeah, good luck with that.

In all seriousness, ES6 is awesome and you should definitely start using it in your code bases right away. It’s not awesome just because of new features; it’s awesome because it makes your code more readable, learnable, reason-able, and understandable than ever before. Remember: code is first and foremost a means of communication with other humans, and only secondarily for the computer.

But as you’ve no doubt realized by now, ES6 has a bunch of new syntax that’s not going to work directly in all those old crappy browsers you still have to support. Does that mean you have to wait for a decade until all those fade away? Absolutely not!

Get With The Tools

You’re going to need to get really familiar with polyfills and transpilers — the two main tools that will pull you from the trailing edge of progress up to the leading edge, where you should have been all along!


if (!Number.isNaN) {
    Number.isNaN = function isNaN(x) {
        return x !== x; // NaN is the only value not equal to itself
    };
}

A polyfill is a piece of code you drop in that will patch an older environment with API/behavior that newer environments have, so your code can rely on that functionality wherever it runs.


// ES6:
function foo(...args) {
    // ..
}

// Transpiled:
function foo() {
    var args = [].slice.call(arguments);
    // ..
}

A transpiler is a tool that you run to convert your code from the newer syntax to the equivalent older syntax, so it will work wherever it runs.

Together, these two tools will solve all your “ES6 in IE6” woes! (well, sorta)

The Base Transpilation

But this post isn’t about how or why you should transpile. It’s about what strategy you should use for managing your transpilation(s) — yes, there could/should be more than one!

The default, which is what most people seem to be assuming, is that you should transpile your entire code base, from *.es6.js files to *.es5.js files, and then always and only serve those *.es5.js files to all browsers, unconditionally.

But, how much sense does that actually make? At last check, the newest releases of Microsoft Edge, Google Chrome, and Mozilla Firefox all have most of ES6 already implemented (heck, some are even working on ES7/2016 stuff already!). Edge is at almost 90%, and Chrome is close behind at 70-80% complete.

And what about your code base? Are you using 100% of all ES6 features? I doubt it. Odds are you’re using 30-40% of them. And I bet the set of features you’re using is mostly or entirely supported natively in those browsers’ latest releases.

So why would you ship the transpiled code to a browser that doesn’t need it and could instead run the original native ES6 code perfectly fine?

The FUD (Fear/Uncertainty/Doubt)

Some people are spreading some FUD about native ES6 being slower in those browsers than the transpiled equivalents. I call that FUD because I don’t think the performance differences matter compared to the code maintainability wins that ES6 brings. I also call it FUD because ES6 is brand new, and of course it’s not yet optimized to be as fast as an ES5 feature which has had 5+ years to be optimized.

I am quite confident that the native ES6 stuff in browsers will eventually be every bit as performant as the ES5 equivalents. And I think looking at a (possibly flawed) benchmark at this exact moment and saying “X is slower than Y” is betting against the future, which is a losing bet. Literally, next week, “X may be as fast or faster than Y”, but…

Who cares, since you already used last week’s benchmark to dictate your deployment strategy!?

Here’s why this strategy sucks…

But more importantly, here’s why you shouldn’t send that ES5 transpilation code to a browser that can already support the ES6 equivalent: the ES5 code is almost always larger, and sometimes significantly so.

If you look at the transpilation of ES6 generators for example, you’ll see for yourself just how painfully large/complex it is, plus the fact that it requires an additional not-small runtime library to support it. I for one don’t want to serve up transpiled generator code to even one browser for even one second longer than is absolutely necessary.
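For a sense of scale, here’s a tiny generator (a sketch of my own, not from any particular code base). These few lines of source become a full state machine in ES5, plus a runtime library like regenerator’s to drive it:

```javascript
// A tiny ES6 generator -- only a few lines of source, but its ES5
// transpilation balloons into a switch-based state machine that also
// drags in a not-small runtime library.
function *countTo(n) {
    for (var i = 1; i <= n; i++) {
        yield i;
    }
}

var nums = [];
for (var v of countTo(3)) {
    nums.push(v);
}
console.log(nums); // [ 1, 2, 3 ]
```

Run your favorite transpiler over even this toy example and compare the output size for yourself.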

Also, if this is your strategy, you probably have no plan in place for ever phasing out that transpiled code. That means once that app is deployed and your whole team moves on to other stuff, the older ES5 transpilation is going to keep being served forever, even long after all browsers you care about have full and optimized ES6 support.

Transpiling is great. But unconditionally serving up the fully transpiled files forever is a bad strategy.

The UA Version Transpilation

So, if you need to set up some conditions on what you transpile and (more importantly) what versions of the files you serve to the browser, what options do you have?

A nascent but potentially wave-making trend is the idea that your transpiler can decide for you what it transpiles and what it doesn’t.

How would it perform such a trick? First, it would need to have data baked into it (or at least fetchable) about what versions of which browsers have certain ES6 features reliably and stably implemented to spec. Such data purportedly exists in the form of “caniuse” or the “ES6-compat-tables”. But is it fully trustable? Color me not convinced.

Then, once it has this data, the transpiler needs you to tell it what browsers and version ranges you care to support. For example, if you say, “Latest Chrome, IE > 10”, it has to figure out the lowest common denominator (aka “IE10”) and transpile to that. Or, you might say something relative like “Chrome – 1, IE – 1” which means “one version back from latest on Chrome, same for IE”.
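To make the “lowest common denominator” idea concrete, here’s a rough sketch of what such a transpiler would have to compute internally. The support data, feature names, and version entries below are all illustrative stand-ins, not real compat data:

```javascript
// Hypothetical support data and target list -- the feature names and
// version entries here are illustrative only, not real compat data.
var supportData = {
    chrome: { 47: ["arrow-functions", "let-const", "generators"] },
    ie:     { 11: ["let-const"] }
};
var targets = [ ["chrome", 47], ["ie", 11] ];

// A feature can be left untranspiled only if EVERY target browser
// supports it natively -- the "lowest common denominator" rule.
function featuresToTranspile(usedFeatures) {
    return usedFeatures.filter(function(feat){
        return !targets.every(function(target){
            var browser = target[0], version = target[1];
            var supported = (supportData[browser] || {})[version] || [];
            return supported.indexOf(feat) !== -1;
        });
    });
}

console.log(featuresToTranspile(["arrow-functions", "let-const", "generators"]));
// [ 'arrow-functions', 'generators' ]
```

Notice that IE11’s weaker support drags down what Chrome 47 users get served, which is exactly problem #1 below.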

Sounds great, right?

Here’s why this strategy sucks…

There’s a whole bunch of reasons this is a bad strategy. Let me try to briefly overview them:

  1. Painting with just one brush.

    Whatever version information you give the transpiler, it has to figure out the worst ES6 support, and transpile to that. I can say for sure that “Chrome – 1” needs a different set of code transpiled for it than “IE – 1”.

    Unless you manually figure out what these different sets are, and run the transpiler once for each set — that’s a lot of manual work on your part, and a lot of maintenance upkeep — you’re going to get a one-size-(doesn’t)-fit-all transpilation that misses out on the potential best fits.

  2. Version ranges are only relative to “now” if “now” keeps updating.

    Relative version ranges like “IE – 1” are much better than fixed version ranges like “IE > 10”, because they in theory account for you shifting your browser support upward over time as more browser releases happen.

    However, this strategy only works if you keep your data updated. If the data is baked into the transpiler, that means making sure someone is actively upgrading the tool itself. If the data is fetched remotely, that means making sure that the URL always stays active and alive, and having a backup plan if the data fetch ever fails.

    But more importantly, it also means that you need to re-run your transpilation build each time the data gets updated. Except, how are you going to know when the data is updated? Are you just going to set up an automatic process that rebuilds once per week, even after no one on the team is actively maintaining the project? That’s dangerous.

    The fact is, teams roll off of projects and yet those projects survive them and stay live. If your plan for keeping your project as optimally delivered as possible involves human interaction for maintenance, that strategy is problematic and error-prone at best. You should look for a strategy that’s “set it and forget it”, because in reality, someday your app will be in that mode.

  3. Version != Feature.

    There’s ample precedent establishing that browser versions are not reliable, accurate stand-ins for, “is this feature available?” I don’t even feel like re-debating this point, because if you still feel like a version number (an arbitrary marketing label at best) really does reliably tell you about features, you’re just not thinking critically enough, and you should step away from this post and come back when you’re ready to think.

    Just recently, I had a Twitter exchange with @codepo8 where he was asking about the support in Chrome for the ... spread operator. Documentation and support data asserted this feature was released in Chrome 45+, but in his Chrome 47, it wasn’t supported. What gives!?

    It worked fine in my Chrome 47, so something must have been amiss. Ahh, flags. Damn flags. Years ago, I and many others flipped on an experimental-JavaScript flag in Chrome when they were first playing with ES6 support. I’d forgotten about the flag, honestly. But it turns out that Chrome had only released ... behind this flag. So, those with the flag on get it, and those without don’t.

    You’re never going to be able to predict or control these sorts of situations. Period. A browser version is a poor estimate for feature support. Especially when there’s a much better answer (more on that later!).

I could go on. There are other problems. But those big 3 should be enough to convince you that building conditional transpilation into the transpiler is a bad strategy.

We need conditional deployment, and we need to pre-build our transpilations so that burden isn’t happening at request time. But we need a better strategy than we’ve seen so far.

Feature-Tested Transpilation

The only reliable way to know if a feature exists and works properly in any given browser is to run a test. Let me say that again: The only reliable way to know if a feature exists and works properly in any given browser is to run a test.

You need to run a test in a browser to see if a feature is there (and working as expected). And once you’ve run that test, you can then decide (based on the pass/fail results) if it’s OK to load the native ES6 code that uses that feature, or if you need to still load the transpiled code.
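What does such a test look like? Here’s a sketch of one (my own illustration, not the library’s actual code): try to compile a snippet of new syntax, and treat a syntax error as “not supported”.

```javascript
// Sketch of a single syntax feature test: attempt to compile a small
// snippet of ES6 syntax via the Function constructor, and treat a
// thrown SyntaxError as "not supported".
function supportsArrowFunctions() {
    try {
        new Function("return () => 42;");
        return true;
    }
    catch (err) {
        return false;
    }
}

console.log(supportsArrowFunctions()); // true in an ES6-capable engine
```

Now multiply that by every syntax and API feature in ES6, with correctness edge cases for each, and you see why you want a library to do this.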

But you don’t want to figure out all those tests yourself, do you!? Of course not.

So I did it for you. es-feature-tests is a library that performs ES6 feature tests for you. It can test a browser or a Node environment.

But just having feature tests in a library only gets you part of the solution. You also need to know what ES6 features your specific code base needs. And figuring that out manually sucks and is error-prone.

So this library also includes a tool called testify which scans your ES6 code and figures out what features you’re using, and then produces the list of features that you actually need.

Feature Tests as a service

The es-feature-tests library is great, but it also isn’t a complete solution for efficient in-browser testing. You’d probably want to cache the results instead of re-testing on each page load, and you’d probably want multiple sites to share these cached results, and you’d probably want to run the tests off-thread (background) if possible, etc.

All of that important efficiency logic is already built for you: FeatureTests.io.

This site packages up the es-feature-tests library into a client-side library that automatically and efficiently performs and caches the tests, and makes those results shareable across all sites that browser loads.

All you need to do is:

<script src="https://featuretests.io/rs.js"></script>

<script>
    // use your test results here!
</script>


You could just use the es-feature-tests library yourself. But the FeatureTests.io library is a much more efficient path for the majority of use-cases.

How this strategy works

  1. Run testify in your build process, which produces the list of tests that should be performed to support your code base.
  2. Transpile your code during this build process, so you end up with (at least) *.es6.js source files and *.es5.js transpiled files pre-built and ready to go.
  3. Load the es-feature-tests library into a browser, hopefully by way of FeatureTests.io, on first page load of your app.
  4. Using the list of tests from testify and the test results from es-feature-tests, you get a true or false if this specific browser supports your specific native ES6 code or not. If true, load the rest of your application with the *.es6.js files. Otherwise, load the *.es5.js files.
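The decision in step 4 can be sketched as a tiny function. The names “neededFeatures” and “testResults” here are my own stand-ins for testify’s output and the feature-test pass/fail map, not the libraries’ documented formats:

```javascript
// Sketch only: "neededFeatures" stands in for testify's output, and
// "testResults" for the pass/fail map from the feature tests -- both
// shapes are assumptions for illustration.
function pickBundle(neededFeatures, testResults) {
    var allPass = neededFeatures.every(function(feat){
        return testResults[feat] === true;
    });
    return allPass ? "app.es6.js" : "app.es5.js";
}

console.log(pickBundle(
    ["arrow-functions", "template-strings"],
    { "arrow-functions": true, "template-strings": true }
)); // app.es6.js
```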

More Granular Strategy

The above strategy assumes an all-or-nothing approach for your code base. This may be fine, or it may be sub-optimal.

You may want to make these conditional deployment decisions on a per-file basis. For example, you may run testify against file1.js and get one set of tests that tell you what that file needs, and run it separately against file2.js and get a different set of tests for that file’s support.

Now, you have two different lists of test results to check, and in the browser, you can decide to load either the file1.es6.js or file1.es5.js independently from deciding between file2.es6.js and file2.es5.js.

Decide per-file what the most optimal code to load for that specific browser at that moment in time is. Complete control.
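A per-file version of that decision might look like the following sketch (all names hypothetical): each file carries its own required-feature list, so each gets its own es6-vs-es5 choice.

```javascript
// Per-file variant of the loading decision (names hypothetical): map
// each file's own required-feature list against the browser's results.
function pickFiles(fileTests, testResults) {
    return Object.keys(fileTests).map(function(name){
        var ok = fileTests[name].every(function(feat){
            return testResults[feat] === true;
        });
        return name + (ok ? ".es6.js" : ".es5.js");
    });
}

console.log(pickFiles(
    { file1: ["arrow-functions"], file2: ["generators"] },
    { "arrow-functions": true, "generators": false }
));
// [ 'file1.es6.js', 'file2.es5.js' ]
```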

Customizing Transpilation

The output list of tests from testify can be formatted to fit the configuration for transpilers like Babel. That means you could use the testify tool to figure out a specific set of transpilations that should occur, and then force Babel to perform only the ones needed.

You could, for instance, prevent the “generators” transpilation from occurring (whitelisting/blacklisting in Babel), meaning that resulting file could run in a browser that supports generators natively, even if it needed other ES6 transpilations to help.
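With Babel 5-era options, that configuration might look roughly like this (the option name and transformer name are from that era’s docs; check your transpiler’s current documentation before relying on them):

```json
{
  "blacklist": ["regenerator"]
}
```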

In other words, you can produce multiple transpilations for your files. For example, you could start with a file1.es6.js (original source file) and produce two transpiled files: file1.with-generators.js and file1.es5.js.

Now, in the browser, you could do 3-way conditional loading, like:

if (all_ES6) {
    // load file1.es6.js
}
else if (some_ES6_but_with_generators) {
    // load file1.with-generators.js
}
else {
    // load file1.es5.js
}

Here’s why this strategy rocks…

Customizing the code you deliver to what each browser can actually support, based on real tests rather than a guess, means you always run the most optimal code possible.

But more importantly, feature-tested transpilation is a winning strategy because it works as “set-it-and-forget-it”, for that inevitable time when you’re not paying close enough attention to the progress of browsers to perform version-oriented maintenance.

Since this strategy uses feature tests, those results are always, by definition, up to date with what the browsers in the wild are supporting. The best data you can possibly use is the data collected from the actual browser itself.

And that data will always be updated, so your decisions of code delivery will keep adjusting as time goes on, even if no one on your team ever touches the code again.

Here’s why this strategy sucks (or not)…

OK, so this feature-tested transpilation stuff may be great, but it does have one major flaw (according to some, anyway).

It requires an “extra round trip” to the server, because the first load of code needs to do the testing, and then a subsequent request must be made for the file(s) you decide you need.

Ouch, right!? Eh, not so fast.

Here’s why this strategy still rocks…

First, most major web applications know that it’s not optimal to ship your entire application all at once in a single file upfront, especially when a lot of that code may not be necessary right at first. A lot of web apps have for years used a “bootstrapper” technique, where they ship a small payload of code that handles the basic initial rendering/behavior, and then dynamically load more of the app’s code over time as needed or during network idle times.

The feature-tested transpilation strategy relies on you checking the test results (probably already run and cached, even!) from your bootstrapper, and then loading appropriate files from then on. So, if you have a bootstrapper already, this is an easy extension of what you already do. If you don’t have one, maybe you should consider if that would improve the performance of your application. Maybe test it!? And if you conclude that you need a bootstrapper, now you can do your ES6 feature testing!

Second, HTTP v2 is coming, fast. It’s rolling out in more and more large sites and browser installations. I don’t think it will take too long (maybe 12-18 months) before it’s the majority of web traffic. In HTTP v2, there’s no relevant extra penalty for making subsequent requests over the initial persistent socket connection. In fact, in HTTP v2, you’ll want to ship as many individual small files as you can, which makes it a perfect companion to this feature-tested transpilation strategy.

Third, if you really did have a concern about this extra round trip, then just don’t do it that way! That’s right, it’s not even required. How so!?

The first time the user loads your site, just serve up the base (transpiled) version of your code right away — yay, performance! Once the site is loaded and running, perform the ES6 tests using the es-feature-tests / FeatureTests.io capability. Then make the decision about which version(s) of the app files this browser should use, and set a simple cookie with that decision, setting its expiration to like 2 weeks or whatever. The next time this user tries to load the site, if you receive that cookie on the server, decide what files to send based on that value.
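The cookie step of that flow can be sketched like this (the cookie name and value format are my assumptions, not a prescribed scheme):

```javascript
// Sketch of the cookie step: once the feature-test verdict is in,
// build a cookie string the server can read on the NEXT page load to
// decide which files to send. Name and format are assumptions.
function buildSupportCookie(supportsES6, maxAgeDays) {
    var maxAge = maxAgeDays * 24 * 60 * 60; // seconds
    return "es6support=" + (supportsES6 ? "1" : "0") +
        "; path=/; max-age=" + maxAge;
}

console.log(buildSupportCookie(true, 14));
// es6support=1; path=/; max-age=1209600

// In the browser, after the tests have run:
// document.cookie = buildSupportCookie(ok, 14);
```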

Bam! No more extra-round-trip concerns. And you still make sure that over time, your users get the best code their browsers can support.


OK, that’s it for now. Hopefully I’ve given you some food for thought as to what kind of smart(er) strategies you could be using for your ES6 code delivery. And by the way, everything discussed here continues to be true beyond ES6 into ES7/2016, and ES…

Whatever you do, make sure you’re taking as much advantage of ES6+ as you can. Everyone who reads and writes your code will thank you. And if you do it smartly, so will your users!