This post is the immediate “part 2” follow-up to Mozilla & LABjs: the story unfolds. Definitely read that one before proceeding here.

As I mentioned at the beginning of that previous post, I am writing to address the points raised in the comment by the Mozilla developer who was responsible for the change in script-load-order behavior. I already responded in part 1 to the overall (in my opinion) condescending tone of that comment; here I will address the specific points it made.

Dive in

That’s not a safe data point to draw forward-looking expectations from. IE and WebKit haven’t executed script-inserted external scripts in insertion order. When two browsers do one thing and two others do another, it should be no surprise to anyone that when standardization happens, at least two browsers need to change their behavior to comply.

I can’t stress this enough: when you see that one set of browsers does one thing on a given point and another set does something else on the same point, you shouldn’t assume that these two sets of browsers will retain differing behaviors forever.

When you see one set of browsers doing one thing and another doing another, UA sniffing (LABjs doesn’t sniff the UA string but it sniffs “Gecko” from the presence of MozAppearance) is exactly the wrong thing to do.

I addressed this point in the previous post, but let me reiterate: it’s not my fault the browsers have carelessly left these doors wide open. I was not content to sit on the sidelines and wait for years while web performance optimization remained out of reach. So I jumped into the fray and solved the use case based on what we have now.
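For reference, the inference called out in that quote works roughly like this (a minimal sketch, not LABjs’s verbatim source):

// Feature-based Gecko inference: rather than parsing navigator.userAgent,
// test for the Gecko-only "MozAppearance" CSS property.
var isGecko = (typeof document !== "undefined") &&
  ("MozAppearance" in document.documentElement.style);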

I’m not so naive as to believe the landscape of browsers and features will never change. That’s not the point of my complaint. The whole point of my complaint with Mozilla (and with the ill-conceived/short-sighted standards you’re falling back on) is that if you’re going to change behavior, you have to engage the community actively, especially the community that’s most affected.

LABjs is, I think it’s fair to say, pretty well known at this point, and it’s not unreasonable to think that Mozilla could have done some Google searches to see if there were any well-known or widely used tools that might be affected by the change. Certainly LABjs would have come up somewhere in the first page or two of some of those searches.

I fully expect browsers to change and get better, but what must also happen is that the browsers must engage the community to see what the needs are. Mozilla has a GREAT TRACK RECORD of such community engagement in the past. But the handling of this particular issue, I think, was not up to that same “standard”.

What I would have hoped for, and what I hope for still, is that Mozilla will proactively try to address the valid and already-existent use cases when they make a change to behavior. If you can’t support the behavior any more, give us a way to definitively detect the change (so we can still support backward compatibility across your browser family) and also give us some alternative that still serves the use case. Callously claiming that the use case is not valid (or even that it “may not be”) is not a way to endear yourself to the members of the community.

The right thing to do is to do something that works without browser sniffing in all current browsers. For cross-origin script library loads, this means designing the script libraries in such a way that the act of evaluating a given library doesn’t cause the library to call into another library, but only makes the API of the script available, such that only calling the API potentially causes a cross-library call. (Yes, I’m saying that in the current environment, LABjs probably shouldn’t have tried to provide cross-origin ordered loading of external scripts, given that it couldn’t be provided without either UA sniffing or making dangerous assumptions about the caching relationship of loads initiated by object/img elements and loads initiated by script elements.)
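For clarity, the library design being described looks roughly like this (my sketch, with hypothetical names; not anyone’s actual library code):

// Load-time cross-library dependency (what the quote says to avoid):
// evaluating this file immediately touches jQuery, so it throws if
// jQuery hasn't executed yet:
//   jQuery.fn.myPlugin = function () { /* ... */ };

// Call-time dependency (what the quote suggests): evaluating the file
// only defines an API; nothing cross-library runs until page code calls
// it after all the libraries have loaded:
var MyPlugin = {
  install: function (jq) {
    jq.fn.myPlugin = function () { /* ... */ };
  }
};
// ...once everything has loaded:
//   MyPlugin.install(jQuery);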

That’s an absurd assertion to make. I don’t consider it a valid option to just say LABjs should never have happened because the browsers and standards didn’t have a solution. Would you say the same thing to Steve Souders’ face: “Steve… stop trying to improve the web performance optimization of script loading, because it’s not possible to do so yet”? That’s ludicrous.

LABjs is a self-admitted and proclaimed gap technology. It’s designed to meet an interim (and backwards-compatible) need — the need to drastically improve web performance by loading scripts in parallel. And moreover, the need to normalize that behavior across ALL web browsers, not just the newest generation while ignoring web performance in the older ones.

My hope always has been, and continues to be, that LABjs is someday made moot and unnecessary because all the browsers figure out a common way to handle things that serves all these use cases and needs in a performance-savvy way. And I have spent countless hours of effort advocating for exactly that. It’s obviously frustrating when, whether directly on purpose or just by accident, one browser vendor throws all that effort into jeopardy. That’s why I’m reacting with the candor that I am.

I continue to hope an amicable solution can be found. I hope that Mozilla is committed to the same. But chastising me about “right” and “wrong” like a little schoolboy, and then offering no valid alternative, is quite disingenuous.

The plans have been on display in the cellar of the planning office for at least nine months.

I’ve read this spec section many times. Just now, reading it again, I found the one phrase whose interpretation I’d missed on dozens of previous readings: “The element must be added to the set of scripts that will execute as soon as possible.”

I want you to notice something about that phrase. It’s a little bit ambiguous. At least from my perspective. It says that once a script loads, it should “execute as soon as possible”. But it doesn’t explicitly say that “as soon as possible” means to the exclusion of other previous dependencies.

If the interpretation of that phrase is that “as soon as possible” should be ignorant of things like previous dynamic scripts (insertion-order wise), then I submit to you that the standard is wrong. It’s wrong because doing so completely ignores the use case of dynamically loading in parallel but preserving order — a use case that’s overwhelmingly self-evident for anyone who’s built web pages with more than one script file (like jQuery and a single jQuery plugin, for instance). We can have order preservation if the script tags appear in the HTML, but we can’t have order preservation if they are injected? That’s beyond silly.
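To make the problem concrete (a sketch; the file names are illustrative):

// Both scripts begin fetching in parallel:
var head = document.getElementsByTagName("head")[0];
var s1 = document.createElement("script");
s1.src = "jquery.js";
var s2 = document.createElement("script");
s2.src = "jquery.plugin.js"; // expects jQuery to exist when it runs
head.appendChild(s1);
head.appendChild(s2);
// Old Gecko/Opera: s1 always executes before s2, so this is safe.
// IE/WebKit (and the spec's "as soon as possible"): whichever response
// arrives first executes first, so the plugin can run before jQuery
// exists and throw a ReferenceError.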

I’m glad I now see exactly what this bad spec writing/interpretation is based on. Prior to now, I’ve been unable to see anything in the spec that remotely resembles a specification of proper behavior among multiple dynamic scripts. But I strongly assert that the spec is wrong (or incomplete) in that respect. And implementing a part of the spec that has an ambiguous interpretation at best, and doing so in a way that completely ignores other important and valid use cases and needs, is, in my opinion, somewhat irresponsible.

So I’d say this: you have the right (and responsibility) to implement spec. But what if spec is wrong? What if the spec fails to address the needs of the community? Should you still implement it and just look the other way? I say, no. I say there needs to be more community comment and involvement. I’m sorry I wasn’t around and part of the prior discussions when that phrase was added and agreed to, but nevertheless, the spec is going to cause some seriously negative waves for a lot of us if it’s implemented only as currently interpreted.

…..jQuery…..

So I was doing what the standard says in order to fix site breakage. That’s about as righteous as browser engine changes can be.

Agreed. I’m glad you implement spec. No one is criticizing that. But implementing things which have other major negative side effects, or even just things which are open to possible other interpretations or further discussion, without engaging the community you will directly affect, is not what I’d call wise and prudent behavior.

As for not engaging you, there were three reasons for not engaging you: First, I was unaware of you. (The Web is pretty big!) Second, I was in a hurry. Third, the change I was making made Gecko behave like WebKit and IE (well, not exactly apparently for the non-script type thing), so it was relatively safe to assume that sites wouldn’t break since sites on the Web usually are already taking IE-compatibility and WebKit-compatibility into account. Of course, such reasoning only works for estimating the breakage risk for sites that run the same code path in all browsers. If you UA-sniff, your code may break, so please, please, don’t UA sniff.

“Unaware”. OK, fair enough. LABjs isn’t as popular as jQuery, nor do I work for Mozilla like John Resig does. But LABjs is not an obscure, unknown project either. It’s on the radar of some pretty big sites, and dozens of other big sites have expressed interest in experimenting with it. It’s on a bunch of blogs, it’s in a bunch of JavaScript conference talks I’ve given… blah blah blah. I don’t think it’s unreasonable to assume that LABjs has enough presence that due diligence in the area of script loading and order preservation should have surfaced it on the radar.

“In a hurry”. Well, we all make our mistakes, right? Everything about LABjs has been the opposite of being in a hurry. The topic in general is so ridiculously complex, due to all the quirks, that it’s prudent to do anything but be in a hurry about it.

“Gecko like IE/WebKit”. What about Opera? Opera has behaved like you (preserving script order) for as long as I’ve been paying attention. Did you think that maybe there was a reason why Opera did it that way too? Did you consider what abandoning that behavior, leaving Opera as the odd one out, would do to the script-loader part of the community? You yourself admit that in one area you chose to adhere to IE/WebKit, but in other adjacent areas (however right or wrong) you didn’t. This is a common truth — doing part of the work can sometimes be worse than doing none of the work.

New Twitter works fine in Minefield. Vimeo seems to work on Mac. (There are problems on 64-bit Linux, but I’m guessing those are related to out-of-process plug-ins.) Zappos isn’t obviously broken, but I didn’t try to order anything from there.

In fact, so far, there’s been no reports of actual site breakage arising from the Gecko script loader change that’d have reached me.

Surface-testing the sites I mentioned, observing no breakage, and asserting that as evidence is a pretty weak “inference” about whether a site is or will be affected by the change. Why? Because I know that, specifically in the case of those three sites, they are not yet using the “preloading” (they have that feature flag turned off). But they are all intending to use it in the near future. Again, as I’ve asserted before, LABjs is only useful when considered as a performance script loader for dynamic parallel loading and ordered execution.

When you turn off “preloading”, LABjs becomes a pretty dumb and useless script (in fact, sometimes less performant than the manual <script> tags alternative). The only reason any of those sites heard about or care about LABjs is for its potential performance accelerations. Plain serial on-demand script loading is pretty trivial, and has been done much better by others long before I came around.
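To be concrete, “dumb” serial loading amounts to something like this (a sketch, not LABjs code): each request waits for the previous script to finish, so load times add up instead of overlapping.

function loadSerially(urls, done) {
  if (!urls.length) { if (done) { done(); } return; }
  var head = document.getElementsByTagName("head")[0];
  var script = document.createElement("script");
  script.onload = script.onreadystatechange = function () {
    // handle both the standard load event and old IE's readyState
    if (!this.readyState || /loaded|complete/.test(this.readyState)) {
      script.onload = script.onreadystatechange = null;
      loadSerially(urls.slice(1), done); // only now request the next one
    }
  };
  script.src = urls[0];
  head.appendChild(script);
}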

There are quite valid and defensible reasons why these major sites are rolling out their LABjs support and feature usage little by little. Twitter and Vimeo each have their own reasons for doing things as they are now. But I know and regularly communicate with their developer teams, and I’m actively working on helping them get to the next (and more ideal) level.

I can tell you unequivocally that if any of those sites were to enable “preloading”, they would immediately break in Minefield. What this means, if we can’t figure out a resolution, is that those sites may never be able to turn on “preloading”, which means they’ll probably be well advised to just abandon LABjs going forward.

Moreover, if you want actual sites that use LABjs that break in Minefield, take this blog site and also flensed as two examples. Notice that if you load either of them in Minefield, some things break, like the code that does the Twitter feeds, etc. There are lots of other sites using LABjs with “preloading”, and 100% of them will break in Minefield. So if real-world examples are needed, I’ll be happy to point you to more besides the two just mentioned.

Since reports of concrete site brokenness are absent, could it be that sites aren’t actually relying on the in-order property that LABjs tries to provide?

Do authors actually need to? In practice that is. Aren’t libraries that one might want to load already designed so that they don’t have loading-time inter-dependencies if you wait until all the libs have loaded before calling the APIs they provide?

No; again, unequivocally, the opposite is true in the real world. It’s very rare to see sites (even production sites) that bundle all of jQuery, all of jQuery-UI, and all of the dozen associated plugins for each into one file. What usually happens is they load jQuery and jQuery-UI separately from the Google CDN, and then they load one or two local scripts from their own server that contain all the plugins and their own site logic. This is far and away the more common pattern across the majority of the internet. Only the highest-traffic sites self-host such resources and package them all into a single file.
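In other words, the typical page looks something like this (URLs and file names are illustrative):

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/jquery-ui.min.js"></script>
<script src="/js/plugins.js"></script>
<script src="/js/site-logic.js"></script>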

And no, it’s not reasonable to assume that all the scripts out there that DO currently have load-order dependencies must all be simultaneously rewritten because of one change made by one browser vendor. We have to be pragmatic and continue to support the way the web currently works (and has worked for a long time, and will for a long time).

Am I inferring correctly that you wouldn’t want script-inserted scripts to block parser-inserted scripts?

Do you really want script-inserted external scripts to block script-inserted inline scripts and parser-inserted scripts? Or do you just want script-inserted external scripts to maintain order among themselves?

Here’s what I don’t want: I don’t want an inserted script node to have any dependencies on anything else in the document, except other inserted script nodes.

I understand that jQuery’s use case for “global Eval” uses inserted “inline” scripts and that the blocking of such logic is undesired. I think there should be some sort of flag settable on such scripts to inform the browser of the need for immediate execution.

But I have a completely different use case, one that is extremely common across the majority of the web: people oftentimes have what I call “script tag soup”, which is several script tags (external src references), then an inline script tag with some logic, then possibly more external script tags and more inline script tags, and so on. They have these tags strewn all over the HTML source (for various reasons, including CMSes, templating limitations, etc.).

Steve Souders coined a term for the idea of replacing an inline script block with something that executes in the proper order after a dynamic script has loaded: “coupling”. LABjs’ entire stated goal is to replace “script tag soup” with dynamically loaded (and yes, if need be, properly “coupled”) script loading alternatives. For instance, a blog post I did late last year on using LABjs to deal with inline script blocks alongside dynamically loaded external resources shows such a common usage pattern:

<script src="framework.js"></script>
<script src="myscript.js"></script>
<script>
  myscript.init();
</script>
<script src="anotherscript.js"></script>
<script>
  another.init();
  framework.init();
  framework.doSomething();
</script>

To use LABjs to deal with that “script tag soup” (both the external dynamic loadings and the “coupled” inline scripts), you do this:

<script>
  $LAB
  .script("framework.js")
  .script("myscript.js")
  .wait(function(){
     myscript.init();
  })
  .script("anotherscript.js")
  .wait(function(){
     another.init();
     framework.init();
     framework.doSomething();
  });
</script>

So, I’d say, yes, “coupling” of inline scripts to execute in the proper order along with externally loaded scripts is very important.

HOWEVER, this issue has gotten quite confused, because LABjs does not actually use injected inline scripts to make this “coupling” work. Because a function is passed to .wait(), that function can simply be executed directly, with no need for an inserted inline script node.

ON THE OTHER HAND, LABjs does in fact do some “inline script insertions” in some browsers (not Mozilla). For local scripts that are preloaded in IE/Chrome/Safari using local XHR, those scripts are executed at the proper time in the chain by creating an inline script node and injecting the loaded source code into it via the .text property.
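Roughly, that trick looks like this (a sketch with illustrative helper names, not LABjs’s actual source):

// Fetch (but don't execute) a same-origin script via XHR, holding on to
// the source text:
function preloadViaXHR(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) { callback(xhr.responseText); }
  };
  xhr.send(null);
}

// Later, at the proper point in the execution chain, inject the fetched
// source as an inline script node via .text so it executes immediately:
function executeSource(sourceText) {
  var script = document.createElement("script");
  script.text = sourceText;
  document.getElementsByTagName("head")[0].appendChild(script);
}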

So, for THAT use case in IE/Chrome/Safari, I absolutely do hope there’s still some way to make sure the injected code can execute right away when asked to, but I do not need to be able to inject that script node ahead of time and have the dynamic external scripts pay attention to it order-wise. As I said above, having an “execute immediately” flag (or behavior, as it currently seems) similar to jQuery’s “global Eval” use case would be more than sufficient.

To answer your question directly, I only want dynamically inserted script nodes to respect insertion order among themselves — nothing more. For Mozilla, I only use dynamically inserted external scripts, so for you guys, I don’t need the behavior of also preserving order with “inserted inline scripts”. But I do need the “immediate” execution of inline scripts for the other browsers’ “preloading” tricks (for local files only) to keep working.

This is exactly what’s happening. (Well, except the standard says “If the user agent does not support the scripting language given by the script block’s type for this script element, then the user agent must abort these steps at this point.”, so your bogus type trick doesn’t work according to HTML5.)

I now see that you posted to the WHATWG’s “help” mailing list in March. The WHATWG was already publishing a spec that covered the area and you didn’t ask for any changes, and you didn’t follow up to Hixie’s follow-up question, so the thread went nowhere.

To engage with browser vendors on this topic, I encourage you to join the W3C HTML WG. (There’s a better chance of engaging Microsoft there than on any of the WHATWG lists.)

As I asserted earlier, I have been tirelessly exploring, for over a year, how I can best advocate for practical standards to be written and implemented that are informed and aware of the various performance use cases at play. In my attempts on the WHATWG mailing list, I was shot down immediately. I did follow up with several of those people in direct one-on-one emails, and overwhelmingly the response seemed to be “we don’t need standards for this”. They made no reference to the standard that was already there; they said no standard was needed. Obviously we do need it, so I considered that a brick wall. I only went to the WHATWG in the first place because someone suggested it’d be a better place to address this topic than the W3C.

When the WHATWG failed to be receptive to my call for a discussion, I pursued another avenue. I have had several email exchanges with Brendan Eich and various others he CC’d (including some people from Yahoo, I believe) to try to get the discussion rolling. While the group seemed receptive to the need for such discussions, the discussions themselves have yet to go anywhere productive. It wasn’t a brick wall like the WHATWG, but it has yet to produce any useful results.

I even tried (a month ago) to join the new W3C Web Performance WG, again hoping that they would be a good platform to start pushing this discussion. I’ve yet to hear back from them on my application to join.

So, in the meantime, I’ve been continuing to advocate (via Twitter, blog posts, and conference talks) for more awareness and action on this issue. I’ve done everything I can to make LABjs, and the need for performant script loading, as widely known as possible. Steve Souders has endorsed LABjs on several occasions, and I believe he would also agree there’s a need to come up with a standard everyone can agree on, as long as that solution doesn’t ignore the needs of the web performance community.

I have no problem with trying to also join the W3C. But please don’t (wrongly) accuse me of making inaccurate claims when I say that I’ve been pushing for advocacy and awareness on this topic through a lot of different avenues for quite a while. I don’t have a very well-known or influential voice, but I’m every bit as passionate as anyone else (or more so) about having a productive discussion with all the players so that a good solution can be found.

It seems you aren’t too happy now that such standardization and implementation is happening :-(

I’m only unhappy because the interpretation of the spec as currently written seems ignorant of a very important and widespread use case, and because the process by which vendors like Mozilla are implementing that (in my opinion, ill-conceived) spec is not properly engaging the community (like myself) to make sure that important facts aren’t being bulldozed over.

Reverting the Gecko change and pushing for the old Gecko behavior to be standardized. This would re-introduce the problems the change was meant to solve, so I think this isn’t a real option, either. Furthermore, to get to interop via this path, IE and WebKit would have to change substantially.

Because I have always believed that Firefox and Opera were right all along in this behavior, I’d be far more in support of reverting to the previous way and getting the other browsers to agree to follow suit. All the ugliest hacks in LABjs (the “text/cache” junk) exist to deal with how to preload remote scripts when order isn’t already enforced. So in my opinion, the direction of Mozilla (and the spec) on this is the opposite of the right direction. It may be harder to get the others to agree, but I think it’s more right than what is happening now.
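For readers who haven’t seen it, that hack works roughly like this (a sketch; it leans on exactly the cross-element caching assumption criticized earlier):

// Pass 1: a bogus MIME type gets (some) browsers to fetch and cache the
// file without executing it:
var head = document.getElementsByTagName("head")[0];
var preload = document.createElement("script");
preload.type = "text/cache"; // not a real scripting language: fetch only
preload.src = "http://cdn.example.com/lib.js";
head.appendChild(preload);

// Pass 2: when it's this script's turn in the chain, re-request the same
// URL with a real type; if the cache cooperated, it executes immediately
// from cache, in the order the loader controls:
var run = document.createElement("script");
run.type = "text/javascript";
run.src = "http://cdn.example.com/lib.js";
head.appendChild(run);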

Making Gecko enforce insertion order execution of script-inserted external scripts that don’t have the async attribute among themselves and standardizing that. This solution looks attractive on the surface, but this isn’t a good solution. A library like LABjs wouldn’t be able to capability sniff whether scripts will execute in insertion order (even if you tried to load some scripts and saw in-order execution, you couldn’t be sure it wasn’t by chance). A solution that relied on UA sniffing wouldn’t make JS libraries put future IE or WebKit on the new code path when IE or WebKit started enforcing insertion order execution.

I agree this isn’t ideal or a long-term fix, but doing so for now (basically, going back to the way it was) might be a decent stopgap until we all have time to figure out the better long-term solution that suits everyone.

Adding a DOM-only boolean property called “ordered” to script element nodes. This property would default to false. JS libraries could capability sniff for the presence of the DOM property to see if it is supported, and set it to true to request that script-inserted scripts that have the property set to true execute in insertion order among themselves (but not blocking other scripts). If we address your use case at all, this is the best solution I’ve come up with so far. Note that this would give IE and WebKit a capability-sniffing upgrade path to the code path that doesn’t require the bogus type preload.

I recognize this as a possible (and perhaps valid and viable) solution. I think more needs to be discussed on it or other similar approaches. I have some concerns, but nothing that would probably be a deal-breaker on that path.
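If such a property existed, the capability sniff would presumably be as simple as this (entirely hypothetical; the “ordered” name comes from the proposal quoted above and exists in no browser today):

var script = document.createElement("script");
if ("ordered" in script) {
  // new opt-in path: request insertion-order execution among
  // script-inserted scripts
  script.ordered = true;
  script.src = "somescript.js";
  document.getElementsByTagName("head")[0].appendChild(script);
} else {
  // fall back to the existing per-browser loading tricks
}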

(I realize that it looks offensive to say “If we address your use case at all”, but doubting the need to address a use case is a healthy standard operating procedure. After all, so far actual breakage of concrete existing sites hasn’t been shown.)

It’s fine to doubt and to discuss. It’s not ok to doubt and not discuss, which is what it felt like was happening before and why I reacted so strongly. If Mozilla will commit to opening up a public forum discussion and helping to get all the other browsers involved in that discussion so we can all agree on something that works for everyone, that to me is the ideal long-term resolution.

In the short term, though, we need to figure out what’s going to happen with FF4, which as I understand it is scheduled to release in just a few weeks.

Summary

I appreciate the Mozilla developer(s) beginning to engage in this discussion. I hope the end result is a productive dialog with all of us on equal footing and all important use cases addressed. That’s the only thing I’m calling for right now. Well, that, and “what do we do about the short-term FF4 change?” I have an enormous amount of respect for Mozilla, but I hope the same courtesy can be extended to others like me who aren’t so “in the loop” but have valid concerns nonetheless.


8 Responses to “Mozilla & LABjs… part 2”

  • I’ve read this spec section many times. Just now, reading it again, I found the one phrase whose interpretation I’d missed on dozens of previous readings: “The element must be added to the set of scripts that will execute as soon as possible.”

    I want you to notice something about that phrase. It’s a little bit ambiguous. At least from my perspective. It says that once a script loads, it should “execute as soon as possible”. But it doesn’t explicitly say that “as soon as possible” means to the exclusion of other previous dependencies.

    The name of the list isn’t what determines how it affects processing. The bit you are looking for is: “The task that the networking task source places on the task queue once the fetching algorithm has completed must execute the script block and then remove the element from the set of scripts that will execute as soon as possible.”

    If the interpretation of that phrase is that “as soon as possible” should be ignorant of things like previous dynamic scripts (insertion-order wise), then I submit to you that the standard is wrong.

    I disagree. The spec aligns with existing IE/WebKit behavior, which is the behavior that provides the weaker guarantee of the two behaviors the spec could have aligned with (the other being the old Gecko/Opera behavior). I fully agree the default behavior doesn’t address your use case, but it doesn’t follow that changing the default to provide a stronger guarantee than the weakest existing guarantee is good spec writing. I think the spec does the right thing here, and your use case should be opt-in.

    It’s wrong because doing so completely ignores the use case of dynamically loading in parallel but preserving order — a use case that’s overwhelmingly self-evident for anyone who’s built web pages with more than one script file (like jQuery and a single jQuery plugin, for instance). We can have order preservation if the script tags appear in the HTML, but we can’t have order preservation if they are injected?

    It would be reasonable to provide that as an opt-in feature that is capability-sniffable, so that libraries like LABjs can prepare for it; that way, once e.g. IE supports the new opt-in, LABjs would start using the better code path for it without having to revise the UA inference in LABjs.

    So I’d say this: you have the right (and responsibility) to implement spec. But what if spec is wrong? What if the spec fails to address the needs of the community? Should you still implement it and just look the other way? I say, no.

    The usual procedure is to implement the spec, put the implementation in nightly builds or maybe even betas and see if anything breaks. If something breaks, analyze what needs to be different for things not to break and report back to the standards group. That’s what’s happening here.

    I don’t think it’s unreasonable to assume that LABjs has enough presence that due diligence in the area of script loading and order preservation should have surfaced it on the radar.

    Fair enough. But consider the flip side: instead of advocating stuff at JS conferences, advocating it in the W3C HTML WG or the WHATWG would have put your concern on the radar of the people who might be doing the implementation in browsers. Of course, I’m not famous outside the HTML WG / WHATWG circles, so I think it would be unreasonable to expect that you should have contacted me ahead of time. But can you see how this works both ways? Now that we are aware of each other, let’s talk.

    Did you think that maybe there was a reason why Opera did it that way too?

    I expected they had arrived at the behavior either by applying logic instead of cloning IE or by cloning Gecko.

    Did you consider what abandoning that behavior, leaving Opera as the odd one out, would do to the script-loader part of the community? You yourself admit that in one area you chose to adhere to IE/WebKit,

    I made sure that Opera employees were the first to know that Opera was now alone with spec-incompliant behavior.

    but in other adjacent areas (however right or wrong) you didn’t. This is a common truth — doing part of the work can sometimes be worse than doing none of the work.

    Yeah, it’s not unusual for sites to assume that certain browser traits occur together. Usually, this is revealed by implementing something, putting it in a nightly, and seeing how stuff breaks. In fact, the reason why the script running in Gecko was changed in a hurry was that we ran into a co-dependence situation of IE-like document.write and IE-like script ordering.

    Surface-testing the sites I mentioned, observing no breakage, and asserting that as evidence is a pretty weak “inference” about whether a site is or will be affected by the change. Why? Because I know that, specifically in the case of those three sites, they are not yet using the “preloading” (they have that feature flag turned off). But they are all intending to use it in the near future.

    Earlier, you name-dropped New Twitter and Vimeo but didn’t mention they weren’t yet using the feature being discussed, so of course I concluded that you might be exaggerating the breakage when I didn’t see New Twitter breakage in my own use.

    So if real-world examples are needed, I’ll be happy to point you to more besides the two just mentioned.

    OK. I’m now convinced that there is a problem with compatibility with existing content. Thanks.

    I understand that jQuery’s use case for “global Eval” uses inserted “inline” scripts and that the blocking of such logic is undesired. I think there should be some sort of flag settable on such scripts to inform the browser of the need for immediate execution.

    I disagree about which behavior should be the default and which one should require the script author to opt in, but I concede it would be reasonable to provide both behaviors.

    For Mozilla, I only use dynamically inserted external scripts, so for you guys, I don’t need the behavior of also preserving order with “inserted inline scripts”. But I do need the “immediate” execution of inline scripts for the other browsers’ “preloading” tricks (for local files only) to keep working.

    Thanks for the clarification.

    In my attempts on the WHATWG mailing list, I was shot down immediately.

    Which attempt? The one I find is this one: http://lists.whatwg.org/htdig.cgi/help-whatwg.org/2010-March/000435.html. I can see how Hixie’s reply might have seemed hopelessly clueless to you. However, think about this from the implementor or standardizer perspective. All sorts of people show up with ideas. We can’t say “yes” to everything. Therefore, the usual initial answer is something to the effect of “Oh, yeah? Really?” That’s a feature, not a bug. One needs a rather thick skin on the WHATWG list. One needs an even thicker skin in the HTML WG, but in the HTML WG you have the chance of engaging Microsoft, too.

    But please don’t (wrongly) accuse me of making inaccurate claims when I say that I’ve been pushing for advocacy and awareness on this topic through a lot of different avenues for quite a while.

    Maybe my list archive searching skills are weak. I didn’t find other emails from “getify” or “Kyle Simpson” at the WHATWG or the HTML WG.

    I have no problem with trying to also join the W3C.

    Great. I posted to public-html about this. To post to the mailing list, you have to be a participant in the HTML WG.

    If Mozilla will commit to opening up a public forum discussion and helping to get all the other browsers involved in that discussion so we can all agree on something that works for everyone, that to me is the ideal long-term resolution.

    The forum is the W3C HTML WG. Please see the WG page for how to join.

  • getify says:

    @Henri-
    I understand that I was just as much unaware of you and the Mozilla processes (and the spec interpretation in question) as you were of me and LABjs. Not meant to assign blame in either direction. I don’t think you personally acted improperly. I just think it’s unfortunate that the side effects of the change couldn’t have been explored before the change. But it happened, and now we just have to be productive about discussing how to address it. I think we’re both on the same page now.

    I disagree about which behavior should be the default and which one should require the script author to opt in, but I concede it would be reasonable to provide both behaviors.

    That’s a fair assessment. My concern is not actually which one is default, but that the spec seems to assume the other one is wholly sufficient and leaves out the use case we’re discussing entirely. I’m not particularly worried if I have to patch LABjs to opt into some behavior, as long as the behavior itself is still accessible in some reasonable way that satisfies the use case.

    Obviously, how we do that opt-in needs to be carefully considered because it needs to be something that’s reasonably arguable as something that should eventually be standardized on. I definitely don’t want a short-term monkey patch that eventually creates more fragmentation or problems in this area. I only want us to converge on behavior which adequately and performantly addresses the various use cases.

    Which attempt?
    ….
    Maybe my list archive searching skills are weak. I didn’t find other emails from “getify” or “Kyle Simpson” at the WHATWG or the HTML WG.

    As I said, there was an initial post to the WHATWG which was met with some pretty obvious resistance. But I didn’t stop there. There were several direct emails exchanged with more than one person on that list trying to better inform on the issue, and despite those efforts and further explanations, the message I was left with, almost verbatim, is “I don’t think we need standards for this.”

    In retrospect, it’s strange to me that that was the message, since you say they already had (or were working on) standards for it. Why didn’t they just point me to the existing effort? My fault was that I was ignorant, and as I said before, in my reading of the spec I’d not seen anything that clearly jumped out at me as indicating things like script ordering or dependency blocking. I’ve been (incorrectly) operating for 6+ months under the impression that no such standard yet existed, and was trying to advocate in various ways for that to occur.

    In addition to the WHATWG exchanges, and the aforementioned email thread with Brendan and other Mozilla folks, I have also filed numerous bugs in the WebKit, Mozilla, Chrome, etc. bug trackers about quirky/wrong behavior adjacent to this topic — things like how scripts are cached when dynamically loaded, etc. For the most part, those bugs have sat dormant, not really addressed. But those again are part of my attempt to raise awareness around the problems in the space of dynamic script loading and how we need to figure something out.

    In any case, I understand I didn’t do enough. I didn’t look under the right rocks. I knocked on the wrong doors. I whispered when I should have yelled. Got it. All I can say is that it was an honest failing on my part — not for not trying but for ignorance of where “the loop” was to get in on it.

    Now that you’ve clearly indicated W3C is the right place to discuss, I’ll certainly turn my attention there. And I’ll continue to devote as much energy and attention as I can, given this is a night/weekend open-source side-project.

    Thank you for engaging in the discussion and helping get me on a better path. I look forward to discussions bringing us all closer to a better solution.

  • Boris says:

    Kyle, Brendan has no idea about anything that’s going on with the HTML5 specs; he’s not involved in that stuff…

    I’d appreciate it if you’d either cc me on the bugs you filed (put :bz in the cc field) or mention those bugs here. Searching bugzilla for bugs reported by the user that commented at https://bugzilla.mozilla.org/show_bug.cgi?id=591981#c67 comes up with nothing.

  • In retrospect, it’s strange to me that that was the message, since you say they already had (or were working on) standards for it. Why didn’t they just point me to the existing effort?

    I’ll leave it to Hixie to explain why he replied the way he did.

    In addition to the WHATWG exchanges, and the aforementioned email thread with Brendan and other Mozilla folks, I have also filed numerous bugs in the WebKit, Mozilla, Chrome, etc. bug trackers about quirky/wrong behavior adjacent to this topic — things like how scripts are cached when dynamically loaded, etc. For the most part, those bugs have sat dormant, not really addressed.

    Yeah, this phenomenon is unfortunate. Bugs tend to sit dormant in some bugzilla.mozilla.org components unless the right people see them (where who the “right people” are depends on the bug). It happens even to cross-team bugs filed by people who work on some part of Gecko or Firefox. I’m sorry this happened to you. I wish I could promise that it’ll get better, but unfortunately, I can’t promise that. :-(

  • Oh, and there’s now a Bugzilla item about the short-term fix.

  • getify says:

    @Henri-
    +1 on the W3C discussion list item and on the Mozilla/FF short-term fix. Thanks a bunch for those.

    FYI: still trying to join the W3C… unclear as to how to proceed, because I still have a pending application for the “Web Performance” WG and it only lets me select one group at a time. But I’ll keep on it.

    @Boris-
    I’ll try to collect up links to the various bugs (most of them in Chrome and Webkit) that I’ve filed which are yet to be resolved. I’m frankly forgetting at this point which ones were to Mozilla and not.

  • Ian Hickson says:

    Blog comments really aren’t a good way for me to track feedback; if there’s something to discuss here I would really encourage you to send feedback to one of the places listed at the top of the spec (e.g. whatwgatwhatwgdotorg, or the W3C Bugzilla, the latter of which you can file bugs in using the form at the bottom right of http://whatwg.org/html5). It’s important to actually respond to responses, though. I’m not psychic! :-)

    I guarantee that I will reply to all substantive feedback sent to whatwgatwhatwgdotorg.

  • AnonCow says:

    Hi,

    Stumbled upon this series of entries, and read up a bit.
    Apparently the issue’s been solved peacefully, with the async attribute being added to the specification.
    Would you mind following up on the issue?
