This has certainly been an eventful day so far. I woke up this morning to an ominous tweet from @3rdEden. I thought surely I was reading it wrong, but I scrambled to install a copy of the FF4 nightly (aka “Minefield”) to confirm. Sadly, a few minutes later, I confirmed that it was in fact true: FF was no longer preserving execution order (as insertion order) on dynamically injected <script> nodes.

Then I thought to myself: surely this is just a regression bug… FF has always preserved execution order on <script> tags (hint: so has Opera). Also, the recent support for “async” and “defer” attributes was specifically designed around unpinning scripts from execution order and onload dependency blocking, so the natural assumption is that the desired and correct behavior for a “naked” <script> tag without such attributes would be to continue as it always has in FF: insertion-order execution.

So I figured I’d just file a bug with Mozilla and surely they’d just fix that little hiccup. Then the bombshell… I found the bug report that changes everything. In short, Mozilla intentionally removed support for preserving insertion-order execution of inserted scripts. Not an accidental regression bug in a nightly release, but a landmark and fundamental feature change they’ve made to the browser. And no, they didn’t engage anyone like me (or others that are highly invested in this area) in any discussions ahead of time to examine the impact such a change might have — they just made the change. Bam.

Why do we care?

Let’s back up. Why do we care that Mozilla is changing this behavior with the upcoming FF4? The reason I care, and you should too, is because it severely cripples script loaders (like LABjs). If you’re not aware, the affected script loading tricks/tactics are in several different loaders (including the “Order” plugin for RequireJS), and LABjs is in use on a number of high profile sites, including the New Twitter, Zappos, and Vimeo.

Before I go any further, let me explain exactly under what circumstances and how this change by Mozilla is going to change things for the script loader landscape, especially for LABjs.

The specific use case that is affected is: dynamic script loading of multiple scripts (one or more of which are from remote domains) which have execution order dependencies so their order must be preserved.

If you are only loading local scripts, or if you are loading scripts in parallel and you don’t care at all about their execution order (they are unrelated entirely), or if you are only loading one script, then you are probably not affected. So move along. Well, unless you use another site that does, and that site will now fail in FF4. Then I guess you should care. :)

Why is this relevant to LABjs?

So, let me try to explain, in simple boiled-down terms, why LABjs relied on the previous behavior in FF, and thus what the effect will be in FF4 going forward.

LABjs’ main goal is to allow you to load any script, from any location (local or remote), in parallel (for better performance), but maintain/enforce execution order if you need to because of dependencies. What does this mean? It means that LABjs needs to be able to download all scripts in parallel, but be in control of when/how those scripts execute. Because one of the later scripts might download quicker than the earlier scripts… but we need the later script to “wait” for the earlier script to finish downloading and run.

When you use <script> tags in your HTML, you already get this correct behavior by default. But LABjs is designed to replace those <script> tags with $LAB API calls that load the scripts dynamically and in parallel, which achieves (in some cases, much) better performance than just <script> tags alone.
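
For illustration, here is roughly what such a replacement looks like. The chained .script()/.wait() calls are the $LAB API; the file names and the init function are made up:

    // Instead of blocking <script> tags in the HTML:
    //   <script src="framework.js"></script>
    //   <script src="plugin.framework.js"></script>
    // ...the $LAB chain downloads both in parallel, while .wait()
    // enforces that framework.js executes before its dependent plugin.
    $LAB
      .script("framework.js").wait()   // execution barrier
      .script("plugin.framework.js")
      .wait(function () {
          // both scripts have now executed, in order
          initMyApp();                 // hypothetical init function
      });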

Unfortunately, not all browsers work the same way with respect to the loading and executing of such <script> tags. Prior to today, FF and Opera operated like we want, which is that you can inject as many <script> tags as you want, and the browser would make sure they download in parallel in whatever order, but will make sure they execute in proper (insertion) order. This is ideal and desired behavior from a script-loader (and performance) point of view. But the other browsers didn’t play so nicely, so this technique was only reliable in FF and Opera.
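
So in old FF and Opera, a loader could simply inject everything at once; a minimal sketch (placeholder URLs):

    var head = document.getElementsByTagName("head")[0];
    var urls = ["a.js", "b.js", "c.js"]; // placeholder URLs
    for (var i = 0; i < urls.length; i++) {
        var script = document.createElement("script");
        script.src = urls[i];
        // pre-FF4 Gecko and Opera: downloads happen in parallel, but
        // execution is guaranteed to follow insertion order
        head.appendChild(script);
    }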

In IE, Safari, and Chrome, if you inject two <script> tags dynamically, they may not execute in insertion order. Which means that if we want to download them in parallel but control their order, we have to find some other trick to do that. So, for those 3 browser families, I devised the “text/cache” preloading trick.

Put simply, in those 3 browser families, you can inject a <script> tag into the page with a fake `type` attribute of something like “text/cache” (or “foo/bar”, etc). Those browsers will download that script, and will fire the `onload` handler to let you know the script finished loading, but they will not execute the script since the type is not recognized. So, the “preloading” trick for these browsers is to download all scripts in parallel (caching them in the browser cache) using this fake type attribute, and then go back and re-insert new <script> tags in the proper desired execution order, using the correct “text/javascript” type. Each script is recalled from cache immediately and executed, then the next script is processed the same way, and so on.
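
Here is a boiled-down sketch of the two phases (the function names are mine; real LABjs does considerably more bookkeeping):

    // Phase 1: "preload" with a bogus type. IE/Safari/Chrome download
    // (and cache) the file and signal completion, but never execute it.
    function preload(url, onloaded) {
        var s = document.createElement("script"),
            fired = false;
        s.type = "text/cache"; // any unrecognized type works
        s.src = url;
        s.onload = s.onreadystatechange = function () {
            if (!fired && (!this.readyState ||
                    this.readyState === "loaded" ||
                    this.readyState === "complete")) {
                fired = true;
                onloaded(url);
            }
        };
        document.getElementsByTagName("head")[0].appendChild(s);
    }

    // Phase 2: once everything is cached, re-insert each script with the
    // real type, in dependency order; each one is pulled from cache and
    // executed. (LABjs actually waits for each script to execute before
    // inserting the next, to guarantee the ordering.)
    function execute(url) {
        var s = document.createElement("script");
        s.type = "text/javascript";
        s.src = url; // served from the browser cache, effectively instant
        document.getElementsByTagName("head")[0].appendChild(s);
    }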

Ugly

Is that trick pretty? Absolutely not. But it works. It works really well, by exploiting parallel downloading for sometimes HUGE performance gains. And yet it keeps a key feature of LABjs: preserving execution order if dependencies require it.

The problem is, this preloading “text/cache” trick does not work in FF or Opera! They will refuse to download a resource if the declared type of that resource is not something they recognize ahead of time. But take heart, because FF and Opera preserve execution order, so for those browser families, we just insert the <script> tags all at once, and rely on the browser to maintain the execution order.

So, that’s the uneasy but workable tension that LABjs has operated under for the last year and a half. It uses one trick for some browsers, and another behavior for other browsers, and between those two tricks, “preloading” (parallel loading) and execution order enforcement are possible. LABjs hides all the ugly details of how to do that under the covers and keeps the usage pretty simple and straightforward.

This balance of performance and dependency preservation was never (to my knowledge) achieved cross-browser in a script loader before LABjs, and is a big reason why LABjs is now powering many big sites like Twitter: performance and features are the ideal combination.

The fly in the ointment

By deciding that FF will no longer respect/enforce insertion-order execution, Mozilla has now killed the only “trick” that worked in the FF family of browsers for parallel-loading dynamic <script> tags while still ensuring execution order for dependencies. It’s important to note that <script> tags that appear in plain HTML source, even in the new FF4 world, still exhibit the desired behavior; what’s changed is that dynamically inserted <script> tags no longer do. BIG bummer.

What are the options for LABjs at this point?

  1. The most likely, but severely distasteful, option is that LABjs can be patched to detect that it is running inside of FF4 (only), and simply disable all “preloading”. What this means is that sites relying on LABjs will, in FF4, load AND execute completely serially (that is, one script at a time): the first script downloads and then executes, then the next script downloads and executes, and so on. You can see why the performance side of this equation is going to lose out BIG TIME. LABjs will still work, but all of the performance benefits will be completely lost.

    In fact, LABjs in FF4 will actually be LESS performant than if the site just used regular script tags. Of course, sites will probably still want to keep using LABjs to speed up browsing for visitors in all other browsers (especially IE), but they’ll have to face the sad fact that their FF4 visitors will see a greatly reduced performance profile. Pretty sad for such a cutting-edge browser to actually cause much worse performance.

  2. I could patch LABjs to, in FF4+ only, use cross-domain XHR to “preload” remote scripts (see the sketch just after this list). The problem this presents is that the way FF has implemented cross-domain XHR, it requires the remote server to opt in with special response headers, which means that ALL servers, especially CDNs, would have to be patched to start serving such headers before the cross-domain XHR option becomes viable. That’s not going to happen any time soon, even if we start now, and it’s certainly not going to happen before FF4 ships final.
  3. We can petition Mozilla to reconsider and restore this behavior to FF4 before its launch. I call on the community to help me in this respect. In fact, I strongly feel that Mozilla owes the community more responsible behavior: they should have engaged us in a discussion of such features before making such a fundamental and backwards-incompatible change. A serious dialog needs to happen among all the key players, and compromises need to be made, so that script loaders (and the many big sites who rely on them) are not left out in the cold. Edit: I apologize if the above language is inflammatory or offensive. I’m obviously distressed. And I wish I’d known about this sooner, so I could have helped inform Mozilla of the larger side-effects of the change as implemented.
  4. All browsers need to sit down and come up with and agree on a consistent behavior for the dynamic loading of resources (especially .js files) that takes performance concerns as well as dependency enforcement into account. If they all agreed on a single standard, LABjs (and other script loaders) could be drastically simplified or almost made entirely moot. This would be the best option. Script loaders like LABjs only exist right now because the browsers haven’t yet agreed on a consistent and performance-savvy way to handle such functionality. I hope one day they are unnecessary, but that’s going to require a lot of uncommon agreement/cooperation between the browsers.

    I have petitioned for such discussions to happen for well over a year. There’ve been a few emails exchanged among a few players in this space, but nothing has even begun to be dealt with to finally solve these issues.
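
For the record, here is a rough sketch of what the cross-domain XHR approach from option 2 could look like. The function names are mine (not LABjs API), and it only works if the remote server sends CORS response headers permitting your page’s origin:

    // Preload a remote script via cross-domain XHR (requires the remote
    // server to send CORS response headers!), holding the source for later.
    function xhrPreload(url, onloaded) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", url, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                onloaded(url, xhr.responseText);
            }
        };
        xhr.send(null);
    }

    // Execution: inject the fetched source as an inline script. Inline
    // scripts execute synchronously on insertion, so calling this in
    // dependency order preserves the execution order.
    function executeSource(scriptText) {
        var s = document.createElement("script");
        s.text = scriptText;
        document.getElementsByTagName("head")[0].appendChild(s);
    }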

I hereby renew my call for actual standards to specifically deal with how resources are dynamically loaded, including all adjacent functionality like execution order, onload event notifications, etc, and for ALL browsers to immediately agree to such standards and to publish backward patches.

Until that happens, performance-oriented script loaders like LABjs will still be a necessary evil in this web world, and we’ll continue to be at risk of browser nightly releases casually removing key features/behavior and throwing the whole script loader world into chaos.

I of course will keep you, the community, aware of the status of this issue as it moves forward. I sincerely hope that Mozilla will reconsider this situation and will engage in discussions to find a solution. I ask you, the community, to help me surface to Mozilla why that is such a critical conversation that needs to happen ASAP.

This entry was written by getify, posted on Tuesday, October 5, 2010 at 03:10 pm, filed under JavaScript and Misc.

19 Responses to “FF4, script loaders, and order preservation”

  • I agree and will petition for them to keep the order the way it is until they can provide an alternate solution. I understand where they want to go with it, but a solution still needs to be found.

    One surmountable task on the script-loader end is that most loaders tend to load an array of scripts in order, with some sort of marker (like .wait() in LABjs) to indicate when to pause.

    I believe strides could be made by using a named tree approach in loading. Something like:

    LabtreeLoad([
      {branch1: ['script1.js', 'script1a.js']},
      {branch2: ['script2.js']},
      {branch3: [{branch3a: ['script3.js', 'script3b.js']},
                 {branch3b: ['script3a.js']}]}
    ]);
    // This would start branch1, branch2, and branch3 asynchronously,
    // but load script1a.js after script1.js.
    // script2.js would load along with script1.js, script3.js, and script3a.js.
    // script3b.js would load after script3.js.

    You could even then use standard array operators to add, change, or augment the list. You can use the hash lookup for named lists. This becomes important in a dev environment that wraps content around content.

    If you had a named branch called ‘jQuery’, you could append all your plugins to it without caring what layer of the html creation wrapping you are in.

    At the bottom of the page you would call labTreeExecute() or something.

    This could also allow you handle the FF issue by calling only the affected branch scripts in a callback (If script.onload fires after initialization AND loading – not just after the data is loaded…)

    In fact, the only reason we haven’t used LABjs in our code is because of the missing tree structure that we need…

    Anyways, good luck and hope they change their mind!

  • Bri Lance says:

    This doesn’t really address the underlying issue or the need for a standard, but could you use a “real” MIME type other than text/javascript to get around the issue in Firefox 4? A quick test showed that creating a script node with type text/vbscript or text/html would bring in a script from another domain, but not execute it. It doesn’t, however, fire the onload event, so this may or may not be useful to you.
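
    The quick test amounted to something like this sketch (placeholder URL):

        // Probe: does a "real" but non-JS MIME type download without executing?
        var s = document.createElement("script");
        s.type = "text/vbscript"; // or "text/html"
        s.src = "http://other-domain.example.com/lib.js"; // placeholder URL
        s.onload = function () {
            // The file downloads (and doesn't execute), but in FF4 this
            // handler never fires, which is the catch mentioned above.
        };
        document.getElementsByTagName("head")[0].appendChild(s);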

  • getify says:

    @Dave — I certainly appreciate the suggestion. It’s a fairly radical departure from the API and the underlying functionality that LABjs currently provides. It’s much more along the lines of RequireJS. That doesn’t mean it’s not perfectly valid; I just mean that doing so would not be a trivial change, it would be a radical rewrite of LABjs and a refocus of its core goals.

    BTW, I wholeheartedly understand the need for “trees” and “dependency-management”. LABjs was always intended to be the “nuts and bolts” of a more complex and complete system for such. The LABjs server component I’ve been working on (forever now, it seems!) is definitely what I think will serve those use cases.

  • getify says:

    @Bri- it’s a good question. To the best of my recollection, FF (previous versions like 3 and before) wouldn’t even download such assets even with real MIME-types like “text/html”. So the real question would actually be backward-compatibility even if FF4 were found to support that. Would probably require some sort of forking behavior anyway.

    It’s also been suggested that I explore “preloading” scripts using Image tags. I’m leery of doing so for a variety of reasons, but I may be forced to explore that. It very well might be eventually fraught with the same MIME-type issues as the script tag, though.

  • James Burke says:

    I was going to suggest what Bri Lance suggested, perhaps ask the Mozilla folks to see if the script onload can fire even for resources that are not actually scripts. I am not sure how feasible that is, there may be no concept in the code that a non-script could be “loaded” via a script tag, but probably good to ask.

    Still makes detection hard, looking for a property that is only in FF4. I suppose one of the new ES5 things could be tested, or perhaps a newer CSS Moz value.

    I also thought the preflight requests with xdomain XHR calls only happened if you did something other than a simple GET, for instance if you add a custom HTTP header. However, just doing a plain XHR GET should not send a preflight request? That may be worth exploring more. Not sure if that populates the cache correctly, though.

  • 3rdEden says:

    For preloading you could also consider using Stoyan’s technique: http://www.phpied.com/preload-cssjavascript-without-execution/

    That technique actually allows the scripts to be loaded without execution, and Firefox supports the `onload` event on <object> tags.

  • Paul Rouget says:

    “We can petition Mozilla” ← no need. We got the message. I’ll work on it and see what we should do.

  • getify says:

    Thank you Paul! I’m really hopeful I/we can have a conversation with Mozilla so we can figure out how to address this in a way that works for everyone.

    Several commenters and tweets have suggested more exotic possible workarounds like using the <object> tag or Image objects to remotely fetch the scripts but not execute them. I don’t want to further complicate LABjs if I don’t have to, but of course I will do what it takes to make it work. But I’m concerned if I use one of those other hacks, then another future update of FF may disable those techniques too. So, I really feel like finding a solid and supportable solution rather than hacks is the best way to proceed.

  • getify says:

    @James — read the first few paragraphs of the CORS section here:

    http://www.nczonline.net/blog/2010/05/25/cross-domain-ajax-with-cross-origin-resource-sharing/

    AFAIK, even a simple GET request x-domain sends the “Origin” request-header and thus requires the server to respond with an authorization response header.
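
    In other words, the remote server would have to answer with something like the following response header before the browser would hand the script over to the page:

        Access-Control-Allow-Origin: http://yoursite.com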

  • Ian Hickson says:

    > I hereby renew my call for actual standards to specifically deal with how resources are dynamically loaded, including all adjacent functionality like execution order, onload event notifications, etc, and for ALL browsers to immediately agree to such standards and to publish backward patches.

    That’s exactly what happened here. The HTML spec requires the Firefox 4 behaviour.
    If you want to prefetch content, why don’t you just use <link rel=prefetch>? (Note that using “text/cache” is non-conforming.)

  • getify says:

    @Ian-
    I have asked for such specs (regarding how parallel dynamically loaded content is handled) from WHATWG and other players in the industry, and been mostly shot down for asking before. I’ve also looked for the spec you refer to, and haven’t found it.

    Can you please share exactly where in the spec it says that multiple dynamically inserted script tags must not have insertion-order execution? Also, can you inform on *why* that would be, because it makes much more sense for the performance standpoint and especially for script loaders that the order would be preserved.

    In addition, why do we need “async” and “defer” if the injection of a script tag basically does exactly the same thing? As I said in the article, it makes much more sense to me that the script tag without such attributes behave like it did before, and that you add one or both attributes if you want to flip on this “don’t respect the order” behavior.

    As to using <link rel=prefetch>, for that to be useful to script loaders:
    1) it must be supported in several/all browsers; to my knowledge, its support is pretty weak right now.
    2) it must support onload event emission so a script loader can definitively know when the resource has finished loading.

    If those things are not true, then script loaders can’t really rely on that solution.

  • Boris says:

    Can you use document.write to write the scripts out? Or do you need to not block the parser and/or run your stuff after parsing is done?

  • getify says:

    @Boris-
    Not only does document.write() have undesirable blocking behavior (a performance negative), it’s also not possible to run document.write() after the page has loaded, and loading after page-load is an important use case for a dynamic on-demand loader.

  • Boris says:

    Ok, so yes, the two issues I mentioned…

    The problem is that the interaction of the HTML5 parsing algorithm and HTML5 document.write behavior means that enforcing in-order execution is no longer web-compatible: if you try to do it, websites break. Hence the change in the Mozilla bug (and this part is mentioned in comment 0).

    I agree that it would be good to have something that supports your use case; we should add something like that…

  • Boris says:

    I guess the point being that nothing was “casual” here (in the “casually removing key features/behavior sense”). This was a change we were forced to make if we’re implementing the HTML5 spec on parsing… It’s unfortunate that it didn’t become clear earlier that the change needed to be made, I agree.

  • James Burke says:

    Right, so the xdomain XHR call would work, no preflight request if the server responds with the Access-Control-Allow-Origin header. So the trick is to get all the remote servers to send that header, which I think is unrealistic, and probably dangerous to encourage.

    They may think they are just enabling an xdomain script fetch, but if they are too careless with the value of Access-Control-Allow-Origin then they can open up their server to a whole range of xdomain requests. So probably unrealistic.

    So ideally the Firefox folks could look at enabling script onload even for non-script files. It does seem like expecting the old behavior going forward is unlikely.

  • Hi, I’m the person who wrote the Gecko patch being discussed.

    > Then I thought to myself: surely this is just a regression bug… FF has always preserved execution order on <script> tags (hint: so has Opera).

    That’s not a safe data point to draw forward-looking expectations from. IE and WebKit haven’t executed script-inserted external scripts in insertion order. When two browsers do one thing and two others do another, it should be no surprise to anyone that when standardization happens, at least two browsers need to change their behavior to comply.

    I can’t stress this enough: When you see that one set of browsers do one thing on a given point and another set of browsers does something else on the same point, you shouldn’t assume that these two sets of browser will retain differing behaviors forever.

    When you see one set of browsers doing one thing and another doing another, UA sniffing (LABjs doesn’t sniff the UA string but it sniffs “Gecko” from the presence of MozAppearance) is exactly the wrong thing to do. The right thing to do is to do something that works without browser sniffing in all current browsers. For cross-origin script library loads this means designing the script libraries in such a way that the act of evaluating a given library doesn’t cause the library to call into another library but only makes the API of the script available such that only calling the API potentially causes a cross-library call. (Yes, I’m saying that in the current environment, that LABjs probably shouldn’t have tried to provide cross-origin ordered loading of external scripts given that it couldn’t be provided without either UA-sniffing or making dangerous assumptions about the caching relationship of loads initiated by object/img elements and loads initiated by script elements.)

    The only case where UA sniffing is OK is sniffing for browser versions that are older than the current versions. You should never do UA sniffing that assumes a future release of a given browser retains its set of behaviors if those behaviors are currently inconsistent across browsers. Standardization will lead to behavior consolidation eventually, and your UA sniffing-based code breaks.

    > Also, the recent support for “async” and “defer” attributes was specifically designed around unpinning scripts from execution order and onload dependency blocking, so the natural assumption is that the desired and correct behavior for a “naked” <script> tag without such attributes would be to continue as it always has in FF: insertion-order execution.

    The async and defer attributes are meant for altering the defaults for parser-inserted scripts. To change the default behavior of script-inserted script, something else would be needed, because the absence of the async attribute won’t make shipped IE or WebKit load script-inserted script synchronously. (More about this later in this comment.)

    > In short, Mozilla intentionally removed support for preserving insertion-order execution of inserted scripts. Not an accidental regression bug in a nightly release, but a landmark and fundamental feature change they’ve made to the browser. And no, they didn’t engage anyone like me (or others that are highly invested in this area) in any discussions ahead of time to examine the impact such a change might have — they just made the change. Bam.

    The plans have been on display in the cellar of the planning office for at least nine months.

    A site broke with the HTML5 parser. The site used jQuery to start fetching an external script-inserted script. Then, without yielding back to the event loop, the site used jQuery’s globalEval to evaluate a script that called document.write(). jQuery’s globalEval works by creating a script element node, putting the script text to be evaluated into the element’s content, and inserting the element as a child of the head element of the document. In short, it uses a script-inserted inline script. Since Firefox maintained the insertion order of script-inserted inline scripts relative to script-inserted external scripts, the script-inserted inline script got blocked on the script-inserted external script and globalEval returned before actually evaluating anything. Later, after the external script-inserted script had loaded and been executed, the inline script-inserted script ran and called document.write(). At this point, the insertion point was no longer defined (per HTML5), so the call to document.write() implied a call to document.open() which blew away the whole document.
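
    Concretely, the breaking pattern boils down to something like this (URLs and code are illustrative, not the actual site’s):

        // 1) Start fetching a script-inserted *external* script.
        var ext = document.createElement("script");
        ext.src = "http://example.com/big-library.js"; // illustrative URL
        document.getElementsByTagName("head")[0].appendChild(ext);

        // 2) Without yielding to the event loop, evaluate inline code the
        // way jQuery's globalEval does: via a script-inserted *inline* script.
        var inline = document.createElement("script");
        inline.text = "document.write('<p>hello</p>');";
        document.getElementsByTagName("head")[0].appendChild(inline);

        // Old Gecko: the inline script blocked on the external one and ran
        // only after it, when the insertion point was no longer defined, so
        // the document.write() implied document.open() and blew the page away.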

    The reason why this “worked” with the old pre-HTML5 parser is that previously in Gecko (and WebKit) racy document.write() just inserted text into some timing-dependent point in the parser’s stream. HTML5 aligns with IE and makes such inherently racy writes not work.

    HTML5 (and the Gecko trunk and WebKit trunk) have protection against destructive document.writes roughly like IE. (It’s not clear what exactly IE does; the MS folks aren’t sharing the information.) The reason why the code for protecting against destructive writes didn’t kick in was that the code defends against destructive writes from external scripts but the script that called document.write() was categorized as an inline script.

    So I was doing what the standard says in order to fix site breakage. That’s about as righteous as browser engine changes can be.

    As for not engaging you, there were three reasons for not engaging you: First, I was unaware of you. (The Web is pretty big!) Second, I was in a hurry. Third, the change I was making made Gecko behave like WebKit and IE (well, not exactly apparently for the non-script type thing), so it was relatively safe to assume that sites wouldn’t break since sites on the Web usually are already taking IE-compatibility and WebKit-compatibility into account. Of course, such reasoning only works for estimating the breakage risk for sites that run the same code path in all browsers. If you UA-sniff, your code may break, so please, please, don’t UA sniff.

    This doesn’t mean that I don’t pay attention to what JavaScript developers are trying to do. Above, I discussed the badness of script-inserted external scripts blocking script-inserted inline scripts. Another aspect of the Gecko change was avoiding the badness of script-inserted external scripts blocking parser-inserted scripts. At least Steve Souders has been promoting asynchronous script loading for performance reasons. Even though I didn’t contact him ahead of time about this change, I expected the change to be a happy thing to people who subscribed to his school of asynchronous loading.

    As an example, after making the change to Gecko, I became aware of this article by Nicholas Zakas. If you have a third parser-inserted script after the recipe, the recipe doesn’t actually work as advertised in Firefox 3.6. The later parser-inserted script blocks on the script-inserted external script. With the change to Gecko, the recipe starts working as advertised in Firefox, too, in addition to already working in WebKit and IE. When performance recipes like this don’t actually work in Firefox, Firefox looks bad relative to IE and WebKit.

    > Let’s back up. Why do we care that Mozilla is changing this behavior with the upcoming FF4? The reason I care, and you should too, is because it severely cripples script loaders (like LABjs). If you’re not aware, the affected script loading tricks/tactics are in several different loaders (including the “Order” plugin for RequireJS), and LABjs is in use on a number of high profile sites, including the New Twitter, Zappos, and Vimeo.

    New Twitter works fine in Minefield. Vimeo seems to work on Mac. (There are problems on 64-bit Linux, but I’m guessing those are related to out-of-process plug-ins.) Zappos isn’t obviously broken, but I didn’t try to order anything from there.

    In fact, so far, there have been no reports of actual site breakage arising from the Gecko script loader change that would have reached me.

    > The specific use case that is affected is: dynamic script loading of multiple scripts (one or more of which are from remote domains) which have execution order dependencies so their order must be preserved.

    Since reports of concrete site brokenness are absent, could it be that sites aren’t actually relying on the in-order property that LABjs tries to provide?

    > LABjs’ main goal is to allow you to load any script, from any location (local or remote), in parallel (for better performance), but maintain/enforce execution order if you need to because of dependencies.

    Do authors actually need to? In practice that is. Aren’t libraries that one might want to load already designed so that they don’t have loading-time inter-dependencies if you wait until all the libs have loaded before calling the APIs they provide?

    > When you use <script> tags in your HTML, you already get this correct behavior by default. But LABjs is designed to replace those <script> tags with $LAB API calls that load the scripts dynamically and in parallel, which achieves (in some cases, much) better performance than just <script> tags alone.

    Am I inferring correctly that you wouldn’t want script-inserted scripts to block parser-inserted scripts?

    > Unfortunately, not all browsers work the same way with respect to the loading and executing of such <script> tags. Prior to today, FF and Opera operated like we want,

    Do you really want script-inserted external scripts to block script-inserted inline scripts and parser-inserted scripts? Or do you just want script-inserted external scripts to maintain order among themselves?

    > All browsers need to sit down and come up with and agree on a consistent behavior for the dynamic loading of resources (especially .js files) that takes performance concerns as well as dependency enforcement into account. If they all agreed on a single standard, LABjs (and other script loaders) could be drastically simplified or almost made entirely moot.

    This is exactly what’s happening. (Well, except the standard says “If the user agent does not support the scripting language given by the script block’s type for this script element, then the user agent must abort these steps at this point.”, so your bogus type trick doesn’t work according to HTML5.)

    > I have petitioned for such discussions to happen for well over a year. There’ve been a few emails exchanged among a few players in this space, but nothing has even begun to be dealt with to finally solve these issues.

    I now see that you posted to the WHATWG’s “help” mailing list in March. The WHATWG was already publishing a spec that covered the area and you didn’t ask for any changes, and you didn’t follow up to Hixie’s follow-up question, so the thread went nowhere.

    To engage with browser vendors on this topic, I encourage you to join the W3C HTML WG. (There’s a better chance of engaging Microsoft there than on any of the WHATWG lists.)

    > I hereby renew my call for actual standards to specifically deal with how resources are dynamically loaded, including all adjacent functionality like execution order, onload event notifications, etc, and for ALL browsers to immediately agree to such standards

    It seems you aren’t too happy now that such standardization and implementation is happening :-(

    > I of course will keep you, the community, aware of the status of this issue as it moves forward. I sincerely hope that Mozilla will reconsider this situation and will engage in discussions to find a solution.

    There are various options:

    1. Doing nothing. This wouldn’t address your use case.

    2. Standardizing IE’s and WebKit’s behavior for bogus types. We might have to do this, but this isn’t really a proper solution to the use case, so I’d rather not do this.

    3. Reverting the Gecko change and pushing for the old Gecko behavior to be standardized. This would re-introduce the problems the change was meant to solve, so I think this isn’t a real option, either. Furthermore, to get to interop via this path, IE and WebKit would have to change substantially.

    4. Making Gecko enforce insertion-order execution of script-inserted external scripts that don’t have the async attribute among themselves, and standardizing that. This solution looks attractive on the surface, but it isn’t a good one. A library like LABjs wouldn’t be able to capability-sniff whether scripts will execute in insertion order (even if you tried to load some scripts and saw in-order execution, you couldn’t be sure it wasn’t by chance). A solution that relied on UA sniffing wouldn’t put future IE or WebKit on the new code path when IE or WebKit started enforcing insertion-order execution.

    5. Adding a DOM-only boolean property called “ordered” to script element nodes. This property would default to false. JS libraries could capability-sniff for the presence of the DOM property to see if it is supported, and set it to true to request that script-inserted scripts with the property set execute in insertion order among themselves (but without blocking other scripts). If we address your use case at all, this is the best solution I’ve come up with so far. Note that this would give IE and WebKit a capability-sniffing upgrade path to the code path that doesn’t require bogus-type preloading.
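
    To make the capability sniffing concrete, a library could do something like this (remember, `ordered` is a proposal here, not a shipped API):

        var s = document.createElement("script");
        if (typeof s.ordered === "boolean") {
            // Proposed property is supported: opt in to insertion-order
            // execution among other opted-in script-inserted scripts.
            s.ordered = true;
            s.src = "a.js"; // placeholder URL
            document.getElementsByTagName("head")[0].appendChild(s);
        } else {
            // Fall back to bogus-type preloading (IE/WebKit) or serial loading.
        }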

    (I realize that it looks offensive to say “If we address your use case at all”, but doubting the need to address a use case is a healthy standard operating procedure. After all, so far actual breakage of concrete existing sites hasn’t been shown.)

    At this point, it would be best to take this discussion to the W3C HTML WG, since that’s the best way to engage other browser vendors.

  • debradeitch says:

    First, we all want to allow parallel downloading of JavaScript assets. That’s a worthy objective, but one that’s already being addressed by newer browsers. Although it’s an academically fascinating goal to try to squeeze parallelized JavaScript downloading out of older browsers, I don’t think it’s practically useful. Browsers are already solving this problem for us, so script loaders aren’t needed to help there.


  • getify says:

    @debradeitch-
    Even the latest versions of browsers cannot achieve the same performance as using a script loader. Why? Because the browser has to assume that some script will have a `document.write()` in it. This means that browsers cannot fire DOMContentLoaded (aka DOMready) until all requested scripts have executed. This creates the perception of a much slower loading page, because the user can’t (in most cases) interact with the page in any meaningful way until DOMContentLoaded has fired. Even if your page without a script loader objectively loads at about the same speed as my page with a script loader, users will feel like my site is more performant, because DOMContentLoaded will fire MUCH faster on my page than on yours.

    Also, there are lots of things many sites need to be able to do, like loading several scripts but choosing when (later) to execute some of them, and browsers provide no mechanism for this functionality. You may be content with “good enough” performance, but in most cases you are missing out on possible performance improvements by not using advanced techniques like that (lazy-loading, script preloading, etc).

    So, I can appreciate that you’d prefer to just resign this task to the browser. But I’m convinced the browser will never be able to do as well as a script loader until the browser completely removes `document.write()`. That probably will never happen, but even if it does, it’s years away. I’d rather have better performance, and better perceived performance, right now, on all my sites. If that’s not your priority, fine. But it is quite important to a lot of the rest of us.
