/* UPDATE: */

Found this old (from late 2005) post from Douglas Crockford himself on comments in JSON, which I think validates the thinking I have presented here in this blog post. He says:

A JSON encoder MUST NOT output comments. A JSON decoder MAY accept and ignore comments.

This is exactly what I am advocating with this post. And JSON.minify() is simply one decent way to allow the JavaScript implementation of JSON.parse() to do the “MAY accept and ignore comments” thing.

/* UPDATE #2: */

From 2012, Crockford suggests putting normal JS-style comments in your JSON config files, and then, before parsing them, passing them through a minifier. Huzzah! That’s exactly what JSON.minify() is all about, and exactly how I suggest handling comments in my JSON. We agree, for once! He says:

Suppose you are using JSON to keep configuration files, which you would like to annotate. Go ahead and insert all the comments you like. Then pipe it through JSMin before handing it to your JSON parser.


{

Yesterday, I posted JSON.minify() as a little mini open-source code snippet. The idea behind JSON.minify() is to be able to take a JSON-like document (that is, strict JSON + some other “stuff” which I’ll explain in a moment) and strip out/minify the document into something that is strictly valid, parseable JSON. This may seem like a crazy or misdirected idea on the surface, but I have my reasons, which I’ll also explain in a moment.

Anyway, I started a firestorm on twitter when I posted it. What ensued was a day-long barrage of tweets back and forth with many different people, most of whom seemed to vehemently oppose the idea of there being any benefit to what I was trying to do. The mix seemed to be about 10% in favor, 90% opposed. Although, as the day and the tweet fest wore on, a few of the original “haters” did come to see some method to my madness.

I’ve said many times before that it’s kind of flattering if someone “out there” cares enough about what you say to take the time to vocally disagree with you. Days like yesterday, however, make me rethink that position a little bit. In retrospect, I think the problem with this assertion in the age of twitter is that the barrier, the amount of energy it takes to respond with “You’re wrong,” is far less than it used to be even a couple of years ago.

Even blog commenting usually involves entering your name, email, and website URL, or logging in, so it takes slightly more effort than clicking the reply icon and firing away.

What “stuff”?

To boil it all down, JSON.minify() is designed to strip out comments (single line // and multi-line /* … */) from a JSON-like document. Oh, and it also takes out any unnecessary whitespace. The whitespace is not all that important to me, nor does it matter to the official JSON.parse() parser. But the comments… those are a different story.
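To make that concrete, here’s a quick sketch of the intended usage (assuming the JSON.minify() snippet from the github repo is loaded; the config contents are made up for illustration):

var jsonWithComments =
    '{\n' +
    '    /* how many worker processes to spawn */\n' +
    '    "workers": 4, // tune per CPU\n' +
    '    "logLevel": "warn"\n' +
    '}';

// strip the comments (and extra whitespace), then hand the
// result to the standard, strict parser
var config = JSON.parse( JSON.minify( jsonWithComments ) );

config.workers; // 4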

You may wonder to yourself, “why would I want comments in my JSON? that only makes the JSON more bloated when it gets transferred.” Hold onto that question, because I’ll get to it in a moment. But for now, lemme say: I completely agree, retaining comments in TRANSMITTED JSON messages is a ridiculous idea. As a performance optimization nerd, it shouldn’t come as any surprise that I would feel that way. But strangely, many people who know me well seemed to forget that I would have a performance-minded opinion on this topic.

Crockford’s “comments”

Somewhat late in the day, @tobie (via @kangax) pointed me at the JSON saga talk by Mr. JSON himself, Douglas Crockford. I invite you to take a few minutes and read through the transcript (or take the 47 minutes to watch the video); there’s some interesting stuff there.

About 1/3 of the way through the transcript, you’ll see Doug explain several reasons for why comments originally were allowed in JSON but were later removed. He says, in short, “people were using comments wrongly, so I removed them. Also, handling comments made the parser harder to implement, so I removed them. Also, comments didn’t exist in YAML and I wanted JSON to look like YAML, so I removed them.” If you go to the VERY end of the transcript, and read the last couple of sentences, you’ll see him boil it all down:

The main reason I took comments out was that I saw people who were trying to control what the parser would do based on what was in the comments, and that totally broke interoperability. There’s no way I could control the way they were using comments, so the most effective fix was to take the comments out.

I’ll just be honest with you. Those seem like crap reasons to me. Why do I say that? His claim that he couldn’t control what people did with the parser is a spurious argument, because he was in fact in control of the JSON “standard,” and if it was ever going to take off, it was going to be the way he wanted it to be. Somehow he magically got people not to implement JSON.parse() with extensions to the standard, so I’m not sure at all why he couldn’t keep them (by way of stern scolding, of course!) from using comments inappropriately. He had enough influence to keep JSON parsers to his standard; all he would have had to do was add “And comments shall only be ignored and not parsed.” to his standard, and that would have been that.

I know that’s easy to say as a “Monday-morning quarterback” (like a decade later!) and I’m sure it was valid and made sense to him (and maybe others) at the time. But in my opinion, it was a wrong decision. Particularly frustrating is that Doug asserts in this talk that JSON is never going to change again, because he intentionally didn’t version the format. So when he decided it was finished, he declared it sealed, and that’s just how it is. Imagine if WebKit got to decide “ok, <canvas> is now final, no more changes!”

So, if we ever wanted to add to JSON, like putting comments back in, we can’t. He in fact specifically says that someday JSON will be replaced by something better, but that JSON itself will always remain as it is now. That’s a bold and unwavering statement… but knowing the personality of Crockford (who also, by the way, claims that HTML5 is crap and should be scrapped), I am inclined to believe that he’ll go to his deathbed someday defending the rigidness of JSON never changing. But I do hope the eventual outcome of my ramblings is not to suggest that we need JSON+ or JSONX to succeed Crockford’s JSON.

It’s true that the JSON standard itself has been quite stable for a long time. But it’s also interesting to note from the transcript that he admits that the parser “technology” has evolved on several occasions. Namely, he progressively discovered various different “security holes” in what the parser would allow through, and so he added different regular expressions as filters in JSON.parse() to make the parsing safer.

Comments don’t need to be part of the spec

Now’s a good time for me to make my first assertion. Adding a simple set of regular-expression and state machine logic to strip out comments would not necessarily change the JSON spec. Granted, this proposal is not strictly the same as the regular expression filters that Doug put in for security holes, but it’s in a similar vein. It’s also quite like the parts of the parsing which ignore whitespace.

I’m not suggesting there be anything added to the JSON spec in terms of what is functionally allowed to be declared, or how properties and values are interpreted syntactically. In fact, I’m saying the opposite… what is taken as strict JSON should not change at all. In the same way that the whitespace is ignored during parsing, I’m suggesting comments should be too.

Comments are, in my opinion, a special beast. They should not be considered to have the same weight at all as other parts of the grammar/syntax. Comments are almost universally ignored by compilers/interpreters. We all know the primary use of a comment is for developer-friendly maintenance.

Look through the whole spec for JSON on json.org and you’ll only find one tiny little statement at the bottom about whitespace: “Whitespace can be inserted between any pair of tokens.” He goes into quite a bit of detail with 3 different types of specifications for each part of the grammar/syntax, a few pages’ worth of text and diagrams, and then has one little off-handed sentence about how whitespace is basically not an important part of the grammar and is thus ignored.

In fact, there’s nowhere in any of JSON that whitespace has any meaning at all (well, whitespace is preserved inside of string literals… but it still has no meaning to the language).

Comments ~= Whitespace

I’d argue the same is true in practicality, and could officially be true if Crockford were so inclined, of comments. In my mind a comment is no more important or affecting of the “language” than is whitespace. If we can be tolerant of, ignore, and/or remove whitespace from a JSON string before parsing, why can’t we do the same of comments? All Doug would have to say is, “parsers should ignore comments just like they ignore whitespace.” It’s no more complicated than that.

In fact, most parsers/compilers have a pre-processing step where they go through a source document and remove all whitespace AND comments before they apply any syntactic meaning to the tokens found. That’s because in almost every other language on the planet, comments are found to be useful to developers but irrelevant to machines and thus can be safely ignored.

Can you imagine if Crockford had declared: “whitespace isn’t necessary in JSON, so the parser won’t handle/ignore it at all”?

Whitespace serves no purpose at all in JSON other than readability (if you hand-author your JSON documents… more on that in a little bit). The same is true of comments… they are for readability only. That is the heart of my argument.

So, all this is to say… I think JSON parsing could easily be extended to support stripping of comments without affecting in any way shape or form the spirit of the JSON spec. Would implementations have to change? Yes. But they had to change several times before when Crockford found security holes, so I don’t see this as much different.

I’ve also proven the logic to strip comments is pretty straightforward… it’s a few hundred bytes and is implementable in pretty much any language I’m familiar with. Even Doug admits in that talk he was never quite sure why implementors had trouble dealing with comments. Hint: bad programmers.
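To illustrate just how straightforward, here’s a minimal sketch of that kind of logic in JavaScript. This is not the actual JSON.minify() source (the real snippet handles a few more edge cases), just a demonstration of the small single-pass state machine involved:

// strip JS-style comments (and insignificant whitespace) from a
// JSON-like string, while leaving string literals untouched
function stripJSONComments(str) {
    var out = "";
    var inString = false;
    var i = 0;

    while (i < str.length) {
        var ch = str[i];

        if (inString) {
            out += ch;
            if (ch === "\\") {      // copy escaped character verbatim
                out += str[i + 1];
                i++;
            }
            else if (ch === '"') {  // closing quote ends the string
                inString = false;
            }
            i++;
        }
        else if (ch === '"') {      // opening quote starts a string
            inString = true;
            out += ch;
            i++;
        }
        else if (ch === "/" && str[i + 1] === "/") {
            // single-line comment: skip to end of line
            while (i < str.length && str[i] !== "\n") i++;
        }
        else if (ch === "/" && str[i + 1] === "*") {
            // multi-line comment: skip past the closing */
            i += 2;
            while (i < str.length && !(str[i] === "*" && str[i + 1] === "/")) i++;
            i += 2;
        }
        else if (/\s/.test(ch)) {   // insignificant whitespace: drop it
            i++;
        }
        else {
            out += ch;              // any other token character: keep it
            i++;
        }
    }
    return out;
}

// stripJSONComments('{ "a": 1 /* note */ }')  -->  '{"a":1}'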

Still not convinced?

It’s ok if you (and Crockford) still disagree with me that comments should be able to be ignored by JSON.parse(). That doesn’t hurt my feelings at all. I still think they should, but let’s just set that issue aside and agree to disagree.

That battle notwithstanding, in my opinion, comments in a JSON-like document still have some value in some cases.

JSON: file, document, message, database, etc?

Let’s take a step back and broaden our view of what JSON can really be used for. Clearly, it’s primarily used for interstitial messages — that is, short little bursts of data interchange between different entities (like a server and browser). In fact, it could rightly be argued that this is BY FAR the overwhelming usage of JSON.

Just like with XML, which has similar (but even less) flexibility in usage, JSON can be used in ways other than data “transmission” in the traditional sense. For instance, JSON can be used as a database, a persistence layer for real data. There’s a whole slew of “no-SQL” type key-value pair databases that have cropped up recently to latch onto this usage of JSON. In this case, JSON is never really “transmitted” but rather just parsed/filtered/transformed as the data needs to be accessed or mutated.

Though these valid use cases do exist, it seems that most people who were outraged at my little JSON.minify() simply don’t, at the heart of things, agree with using JSON in a document/file context. They believe JSON should only be used for message transmission, and in that particular use case, clearly comments have no place. It’s only if you broaden your viewpoint that you’ll see the other uses for JSON which I’m asserting can benefit from comments.

JSON for “data”

JSON is, according to Doug, just intended to be a clean way to express key-value pairs in a structured and useful way that is universal to every language. He continues to say that the primary (although I don’t think even he would assert only) use-case is for transmitting that data.

He goes to great lengths to stress that JSON was, by stroke of sheer genius or alignment of the stars, an outward expression of an almost “natural law” of the universe (i.e., computing/information science). He asserts that independently, and at the same time, 3 different languages (JavaScript, Python, and NewtonScript) all arrived at exactly the same syntax for expressing a nestable data structure made up of keys and values.

The interesting thing about “data” is that it can take different forms as its uses vary. For instance, pure data is usually stored in a pure “database”. But this other animal, “meta-data”, which is usually used by programs to affect how they behave… it’s not so clear-cut how that data gets stored. But when those meta-data stores end up in files, it starts to make the usage of them a little bit more akin to a “document” than just a pure data store.

A perfectly valid and long-held usage for key-value pair setting is configuration files. Almost every major web application framework in existence today, and an enormous amount of the web-related software (web, DNS, etc) out there, have taken their cues from the earliest days of Unix/Linux and used simple key-value pair configuration files.

Consider PHP.ini or Apache’s httpd.conf

These files have been around forever, and have for the most part always been about exposing configuration variables and letting administrators tinker with their values. Of course, every different piece of software defines its key-value pair syntax slightly differently. Some use =, others use :, some use quotes, some don’t, etc.

But you know what the other almost universal characteristic of these file formats is, besides the key-value pair syntax? That’s right, you guessed it. Comments!

Why on earth would we do such a silly thing as to put comments in our configuration files? I heard various arguments to this effect on twitter yesterday, amounting to “the variable name should be descriptive enough to explain its possible values” or “if documentation for variables/values is needed, it belongs in a separate document called a manual.”

Well, I can’t really explain why like 99% of all web software went with flat configuration files (with comments!), but they did. And I’ll tell you, as one who regularly maintains my own web servers and thus web server software, I definitely appreciate having comments in my configuration files. I can’t imagine how painful editing PHP.ini or httpd.conf would be if I didn’t have comments right there inline explaining the various valid values and the implications of each.

Configuration files and JavaScript

If we now step back and look at the world of server-side JavaScript, we’ll see the revolution of JavaScript taking over in a lot of web software roles. But the fundamental paradigm of needing to configure this software still exists. We can configure it with command-line arguments when we start up the software, but most people who manage web software prefer to store commonly used configuration parameters in, gasp, a configuration file.

Does it make sense for the server-side JavaScript world to go out and define an entirely new and proprietary format for such flat, key-value pair configuration files? Umm, plainly, no. We should use JSON, right? That just makes sense to me! I figured it would to others too, but apparently I was wrong.

I had many people point me to examples of SSJS configuration files which were .js files. The curious thing to me when I looked at those .js files was that they almost all had a very interesting characteristic. They looked exactly like a JSON document, except for a little bit of extra “stuff”. Was that “stuff” complex JavaScript looping or operators? Nope. Was that “stuff” if-statements with conditional logic? Nope.

That “stuff” was comments. Huzzah! Oh, and yeah, that “stuff” was also something like an assignment of the JSON literal to a global JavaScript variable, or in some cases passing the JSON literal to a global function JSON-P style.

But in spirit, this was a JSON document.

It was designed to store meta-data, like configuration variables, in a structured key-value pair way. It had comments to help the developer understand/remember/maintain the document when necessary. The variable assignment (or function call parameter passing) really had almost nothing to do with the spirit of this document. It was more like a concession made to satisfy a pragmatic concern: If I parse a .js file as JavaScript, and all that file has in it is a JSON literal, the .js file will parse and “execute” validly, but that JSON object in it won’t be referenced anywhere in memory that I can use.

You see, the paradigm of needing to put the JSON into some variable or function parameter is the only real reason this file is a .js file and not a .json file. They could just as easily, and I would argue more cleanly, put just the JSON into the file, with no variable assignments or parameter/function calling, and then open that file directly and read the contents into a string. At this point, with the JSON data in a string variable ready to be parsed, it’s no different than if we’d done an XHR call to grab that JSON into a string from the response text.

Regardless how it gets into the string, then they could pass that string to JSON.parse(), which is built into pretty much all server-side JS environments/engines, and the return value would be their created JSON object that they could assign to any appropriate local variable or pass to any other function.

This would be cleaner, in my opinion, because it would not require any global variables or functions to make it work.
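In server-side JavaScript terms, that cleaner approach is only a couple of lines. A Node.js-style sketch, with a hypothetical file name:

var fs = require("fs");

// read the raw JSON text from disk, parse it, and assign the result
// wherever it's needed -- no globals, no wrapper function
var config = JSON.parse( fs.readFileSync("config.json", "utf8") );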

Oh yeah, it was also nice that the JavaScript file parsing on their configuration files allowed for the files to have comments.

Sounds familiar

This is exactly the scenario that I found myself in with some recent server-side JavaScript. Could I have chosen to configure my SSJS application with .js files? Sure. Could I have chosen to pull the configuration from a database/datastore instead? Sure. Could I have chosen to just inline my configuration right into the code? Sure.

But all of these seemed like less than ideal to me. It seems so much cleaner to just have a simple JSON configuration file with key-value pairs to control my application’s behavior. And I foresaw that as my application grew in complexity, and as the possible valid values for those configuration parameters grew, I’d want to be able to rely on some inline comments to help keep the file more maintainable.

Not only that, but if someone else took and used my application some day, I’d want them to be able to configure it with relative ease. I certainly wouldn’t want them to have to be editing what looked like a code file (unless they too were a developer).

So, “config.json” file it was! Problem. If I put comments in, now I can’t parse the file. What do I do? Make a huge concession in the proper and clean design of my system and change to some other method of storing my configuration? Or do I take the lessened usability of my application and say that all “documentation” will just have to reside elsewhere, far away from the file it applies to?

No, I decided, very simply, that adding comments to what was otherwise in every way imaginable a JSON document was the right approach.

Call me crazy. String me up and lynch me. Let’s have a Salem witch trial. Let’s brand me for heresy.

And the simplest solution to my problem of comments being “invalid” in JSON was to create a simple filter, a “minifier” if you will, that could take otherwise valid JSON-like content, and strip it of comments so it was in fact valid pure JSON. This seemed like a decent and fairly graceful approach to my problem.

What followed on, and was the main point of the first half of this post, is that I truly believe that comments should be allowed in pure JSON documents. But since I’ll probably never win that argument with Crockford (or anyone else), the next best thing was that I defined this other thing which is INCREDIBLY SIMILAR TO JSON… namely JSON+Comments. I’m not much for silly overused acronyms, so I won’t go so far as to call it JSON+C. But you get the idea.

And here’s my very strict, and easy to enforce as a standard, definition:

JSON+Comments: valid JSON + valid JS comments.

That’s it. Nothing more. Go smoke on that pipe for a while.
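For example, a complete “JSON+Comments” document might look like the following (the contents are hypothetical). Run it through JSON.minify() and what remains is strictly valid JSON:

{
    // acceptable values: "debug", "warn", "error"
    "logLevel": "warn",

    /* anything above 1024 avoids needing root privileges */
    "port": 8080
}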

I’m not trying to turn JavaScript into JSON. I’m trying to enhance JSON ever so slightly with comments. Oh, and I’m claiming that it’s so close to real JSON that this is a silly distinction/argument to make. But nonetheless, there you have it: JSON+Comments. And JSON.minify() is a handy tool to help “convert” your bastardized “JSON+C” into real JSON. Yay.

JSON-P

Let’s go back to JSON messages for just a moment. There’s actually more than one kind of JSON message. There’s strict JSON, which in practicality could only be transferred between server and browser through a direct Ajax call like with XMLHttpRequest().

But then there’s another “standard” that has emerged, which we label JSON-P. JSON-P defines itself as JSON-with-padding. The “padding” is a bit of a misnomer, in that it’s not “padding” (like, gasp, whitespace) but a function call that passes the JSON literal as a parameter. In fact, I’d argue it should have been JSON-W (JSON+wrapping), because the function call Wraps the JSON literal.

JSON-P is not parseable by JSON.parse() or by any parser I know of except the JavaScript engine. And yet most of us definitely still relate it more to JSON than to JavaScript.

When JSON-P came out, I’m sure some strict purists cringed and said (in a crotchety old voice) “That’s not JSON at all… it’s just JavaScript”. And I guess there are those who still feel that way. But in all practicality, because of same-origin security issues primarily, JSON-P has actually emerged as a de-facto superset standard of JSON, but still a much restricted subset of true, full-blown JavaScript. It’s proven its usefulness beyond argument as a very valid and commonly used solution to cross-domain “Ajax”.

CAN you do full JavaScript in a JSON-P message? Sure. But then that’s not really the intended spirit of JSON-P. Doing so would clearly put you in the realm of just plain ol’ JavaScript. No, I’d argue there is a strictly definable JSON-P (though it has lacked a formal definition, just the de-facto patterns of use) which is this:

JSON-P: valid JSON wrapped in a single function call.

There doesn’t have to be tolerance for JavaScript operations, boolean logic, try/catches, loops, or any of that other JavaScript goodness. JSON-P is a strict superset standard of JSON, and there’s no reason that’s a bad thing. It’s JSON + some other “stuff” that is helpful… in this case, for bridging the cross-domain gap.
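Under that definition, a complete and conforming JSON-P message is nothing more than this (the callback name is hypothetical, whatever the requesting page registered):

handleResponse({
    "user": "getify",
    "posts": 42
});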

Just because browsers (and parsers) don’t officially have, at this time, some actual JSON-P grammar/standard to apply to parsing JSON-P doesn’t mean it couldn’t be that way if we wanted to.

For instance, we could quite easily standardize JSON-P like I’ve suggested, give it a website like “json-p.org” with fancy diagrams, and then take Doug’s JSON parser and add a few extra rules on top of it to allow for only the wrapping function call. The JSONP.parse() function could just check for the validity of the wrapping function call (“blah(….);”) and then pass what’s in between the ( ) to the JSON.parse() function. We could even get some help from browser vendors to define a MIME/Type like “application/json-p”.

If a <script> tag is found with that type, the contents (or the remote source file’s contents) could be checked against my simple JSON-P definition: look for a valid conforming function call, and take the parameter contents and validate it as real JSON.
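To sketch how simple that hypothetical JSONP.parse() could be (this is not an existing API, just an illustration): validate the single wrapping function call, then defer to the real JSON.parse() for the payload:

var JSONP = {
    parse: function(str) {
        // exactly one identifier, one (...) wrapper, optional trailing ";"
        var match = /^\s*([A-Za-z_$][\w$]*)\s*\(([\s\S]*)\)\s*;?\s*$/.exec(str);
        if (!match) {
            throw new SyntaxError("Not a valid JSON-P message");
        }
        return {
            callback: match[1],
            data: JSON.parse(match[2]) // throws if the payload isn't strict JSON
        };
    }
};

// JSONP.parse('handleResponse({"posts":42});').data.posts  -->  42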

/* UPDATE: */

Exactly such an effort, to define a safer specification for JSON-P enforceable by browsers, is now underway. Please go and have a look. :)

Wait a second

That’s kind of crazy talk, huh? That actually makes JSON-P sound a lot like my outlandish JSON+Comments. It’s valid JSON, “decorated” by a very small amount of other “stuff” that’s there for simple but valid reasons. For JSON-P, the P is there for crossing the same-origin gap. For “JSON+C”, the comments are there for cases where commenting a file inline helps its readability/maintainability.

}

I’ve rambled on for quite a bit. Let me try to close this post down in a sensible way.

My idea to allow comments in what is otherwise a perfectly valid, parseable JSON file has absolutely no bearing on my feeling that such bloated JSON should never be transmitted. I’m not in any way suggesting that the JSON messages you send between systems should have comments in them.

I’m only saying that JSON files (like, configuration files) which reside in a file system and are generally and primarily only opened/read and nothing else, can benefit from having comments in them, just like almost all other configuration files do.

If for now (or even forever) the tradeoff is that this file must be slightly sanitized before being sent through Crockford.parse(), that’s something I’m willing to accept. I think the slight payoff, for my use case, is higher than the slight cost involved in the minification.

Also, this doesn’t preclude using JSON.minify() in build processes to take developer-friendly “JSON+C” files and build them into real JSON files for use by the actual production system.

Lastly, I would say that JSON.minify() is good at removing whitespace from JSON “documents” too. For instance, if you have a templating system that builds up your JSON documents before they are sent out, that JSON document may end up with extra whitespace in it. Calling JSON.minify() on it before sending it over-the-wire is going to make the file smaller and transmit more efficiently. This is a good thing, right?

(yeah, I know, why would I build a JSON document from a text-based templating engine when I could have all my data in a data object variable and just call json_encode()…? You people just like to contradict everything. It’s possible even this crazy idea might have some merit, too, even if you wouldn’t dream of it.)

Now, I’m just wishing I’d called .minify() on this long post before publishing. Probably a lot of my “comments” here could/should have been stripped out. :)

/* the end. duh. */

This entry was written by getify, posted on Wednesday, June 23, 2010 at 12:06 pm, filed under JavaScript and Misc.

32 Responses to “JSON+Comments”

  • Mike McNally says:

    Uhh … YAML does support comments. I use YAML for configuration stuff a lot, and I’ve also used JSON for the same purpose. Of course you can fake JSON comments with bogus “_” properties (that is, add a dummy property and put the comment in the string value for it), but that’s obviously lame.

  • getify says:

    Interesting. I have no experience with it, but my guess is, back then Doug was right, it didn’t have comments… interesting that YAML evolved to support comments, but JSON never did.

  • Doug suggests precisely what McNally calls lame. Mike, it is not lame.

It is in fact a perfect way of treating meta-data as 1st class instead of mere rubbish taken out with the trash. It is, in fact, aligned with the Python and Javadoc way – of including annotative, comment-ish strings right along with the data in most stages, if not all stages, of the data’s lifecycle.

    I strongly favor the “lame” approach.

    Monty

  • getify says:

@Monty — why would you want to include comments in the actual parsed data if those comments had no benefit other than to the hand-written, developer-maintained original document? That seems like it’ll just produce more properties in memory that are wasted.

    If the special comments-as-properties *do* something functional, then I’d submit they are not comments at all. The difference with Javadoc style comments is that those comments only *do* something when interpreted by an external process, not by the application itself.

  • Great post.

    For some of the Firefox add-ons I’m building, I’m creating server side configuration files that are transmitted to the client and then used to instantiate the contents of the add-on. (Personas Interactive is one example of this.)

    I found that JSON was the perfect thing to define my add-ons and I desperately wanted to have comments when the files are defined on the server to explain what they did. I certainly wasn’t going to transmit the comments.

    JSON+C makes perfect sense.

    I guess I could have used XML, but that seems like a lot of work for very little gain…

  • getify says:

    Awesome, glad to hear of another valid use case for this technique. Hope you find JSON.minify() to be of use! :)

  • Ben Bucksch says:

    I guess I could have used XML, but that seems like a lot of work for very little gain…

E4X! It allows you to access XML almost the same way as you’d use JS objects / parsed JSON.

  • getify says:

    @Ben
a) E4X is not “widely supported” enough to be a viable option.
b) XML is much more verbose than JSON in general, so this will lead to slower transmission.
c) I just think JSON is far better than XML for data that will be consumed by JavaScript, because JSON is already JavaScript.

  • Paul McGuire says:

    This question came up at work today, and I looked back at a JSON parser article I wrote for Python magazine 2 years ago. In that parser (you can access it online at http://pyparsing.wikispaces.com/file/view/jsonParser.py), you’ll see that I added support for comments:

    jsonComment = cppStyleComment
    jsonObject.ignore( jsonComment )

    I’m sure I wrote this to the state of the RFC at the time, but on looking this up now, I find no mention of comment support at json.org. Still, if you are working in Python, feel free to use this pyparsing-based parser, with full comment support.

  • Hi Kyle,

After some more thought and practice, I concede. I do believe we *need* JSON comments behind the concat/compressor – especially for large datasets in which open braces span multiple screens, because unlike XML, closing braces in JSON do not specify what they are closing.

    Monty

  • Camilo Martin says:

    OMG you should try writing shorter posts.

    I tried reading it! :)

  • PandaWood says:

    You might want to mention what your code is written in – did you write your “open-source code snippet” in javascript, java, ruby, C … ? I went to the GitHub page, read the entire readme and I STILL don’t know.

  • getify says:

    @PandaWood-

    If you look at the github repo, you’ll see that currently the snippet is ported to JavaScript and PHP, by virtue of the “.js” and “.php” file extensions on the “json.minify.*” files. The reason this article, and the git README, don’t specify a language is because the intent is for this snippet to be ported to all possible languages. But I started with JavaScript, and then the next one was PHP. And that’s as far as I’ve gotten.

    I apologize for the confusion that caused you.

  • Steve Hollasch says:

I think you’re both right and wrong. I strongly agree with you that precluding comments out of spite (lest people use them to communicate machine-targeted metadata) is just silly. So, force them to put comments (or metadata) in specially-named variables with arbitrary prefixes — how does that solve the “problem”?

    However, a thought that’s been rolling around in my mind in the past month has been the original wisdom of CPP + C. At first, the C preprocessor seemed like an odd bag to hang on the side of the new C language. Indeed, when it was originally released, people realized that C didn’t depend on CPP at all, and there were several preprocessor variants used instead (anyone remember m4?). In a similar vein, who hasn’t wished for a nice, simple macro language when messing with large CSS files?

    I think it’s quite possible that we should begin to think of languages as layered abstractions, rather than a single monolithic chunk. For example, what is now char-encoding>UTF-8>JSON>ObjectFoo could easily be char-encoding>UTF-8>JPreProcessor>JSON>ObjectFoo. How is it that the C community has avoided this debate? Programmers see C/C++ and CPP as a single unit, and yet they are simultaneously comfortable with the idea that they’re unique and individual components. Perhaps JSON needs the same concept — surely Crockford can’t control THAT.

    BTW, consider a CPP+JSON stack. You’d have not just comments, but conditional code, macros, token pasting, and more. It wouldn’t be your grandfather’s JSON any more. :)

  • Jon Spencer says:

    We are using JSON in an environment where data must be transmitted between two executables booted on a small device to enable recovery of the device, where the recovery app must be as small as possible. The size of the JSON doc itself is pretty irrelevant, since there is no transmission (data is passed though NAND flash). What IS important is the size of the parser in the recovery executable. JSON is a good choice for this.

As a first time user of JSON, I was disappointed that comments were not allowed. These comments are very important for the data file for operational reasons (the reasons are irrelevant for this discussion; what is relevant is that comments are considered not only valuable, but mandatory). Before reading this post, as a long time Unix bigot (40+ years, since 1970), doing something like JSON-C, or CPP as mentioned by Steve Hollasch, was almost reflexive. The code to strip out the comments prior to parsing the doc is trivial to the point of irrelevance.

While I do understand the motivation to not allow comments in the context of trying to keep JSON from becoming another instantiation of XML, the attitude that comments have no place in JSON is, well, silly. If you think you don’t need comments, then by all means don’t use them. And if comments are specifically defined in the JSON syntax as white space, as recommended by getify, then they must be ignored by a conforming parser. The FSA to handle comments is, again, trivial. It may be more work to pull down .minify() than it would be to write it. (OK, perhaps some hyperbole, but it makes the point.)

We just need a normative definition of what a comment looks like. I assume both C and C++ comment styles, but even here there can be issues, so I assume comments are defined by the C and C++ standards in effect as of March 10, 2011. OK? :-)

  • YAML supports comments, and is a near-superset of JSON. If you think you need commented JSON, use YAML. If you’re writing JSON, follow the spec and omit comments. End of story.

  • getify says:

    Marnen- Sorry, but that’s a terribly short-sighted comment that misses most of the point of this blog post.

a) YAML is not a universal data-exchange format like JSON is. There are not YAML encoders/decoders built into every single language on the planet, like there are for JSON.
b) YAML is not a subset of JavaScript (as JSON is), which means that it’s not natively parsed or understood in JavaScript environments (browser or server), which means it’s less performant in those environments because it requires conversion before parsing.
    c) if you are already using JSON (which most people, especially readers of this blog, are), then it’s ignorant to suggest that the best way for them to get comments in their “data” files is to switch to an entirely new format.

    Think about whitespace for a moment. Pretend that the JSON parser was intolerant of whitespace. And then let’s say that a bunch of programmers said “doh, writing JSON without the benefit of ignored whitespace is too hard, too cryptic, too error prone, too hard to maintain. Can’t we please have the parser ignore whitespace, so our files can be more pleasant looking without affecting the meaning/interpretation?” Would *you* tell all those developers “Nope, sorry, JSON doesn’t ignore whitespace, find another format if you want something like that.”? That’s absurd.

    I am making the case that ignorable comments is the same thing as ignorable whitespace. They are things which, on the first pass, the parser throws away. They mean nothing to the computer. But they help the file/data be more readable and maintainable. Many of the same reasons why comments in code are useful apply to comments in (certain types of) JSON files.

    Not all JSON files need comments. But, there are clearly some use cases where JSON files could benefit from comments. One such example is configuration files (as mentioned in the post). There should be no reason we dogmatically say “JSON will ignore whitespace but it won’t ignore comments”. It’s just a patently absurd and indefensible position to take.

  • The case that proves that comments should be allowed in JSON is configuration files done in JSON that have comments in them. It’s often handy and good to comment a json config file (and any config file, for that matter).

    Any JSON validator should be able to validate a JSON file that has comments in it and pass the file.

    If you agree, here is a petition:

    http://www.ipetitions.com/petition/allow-comments-in-json/

  • Kyle: I didn’t miss your point, I just don’t agree with it. First of all, there is no excuse for writing a parser that doesn’t follow the language spec, particularly if (as here) the point in which it departs from the spec is likely to break parsers that *do* adhere to the spec.

    To your comments:

    > there are not YAML encoders/decoders built into every single language on the planet, like there are for JSON

    http://yaml.org lists libraries for 12 languages (including OCaml and Haskell as well as the major ones!), and a quick Web search lists parsers for many more, including Clojure, C# and Lua. I will agree that there are some odd languages that have JSON libraries but not YAML libraries, but I don’t think that’s relevant in the majority of cases, especially if you’re using JSON files rather than streams.

    > YAML is not a subset of JavaScript (as JSON is), which means that it’s not natively parsed or understood in JavaScript environments (browser or server)

    Neither was JSON, really (until recent browsers started including safe JSON parsers). Crockford specifically recommends against just using plain JavaScript eval() to parse JSON, because that’s insanely dangerous. See all the sanitization that goes on in https://github.com/douglascrockford/JSON-js/blob/master/json2.js .

    Besides, so what? If there’s a good reason to use a language, I don’t much care whether it’s natively parsed. What difference does it make, practically speaking?

    > if you are already using JSON (which most people, especially readers of this blog, are), then it’s ignorant to suggest that the best way for them to get comments in their “data” files is to switch to an entirely new format.

    Nope. If your current format isn’t doing what you need it to, find one that does. You won’t even need to change your existing JSON files immediately — virtually every instance of valid JSON (barring a few edge cases) is valid YAML. What’s ignorant here is trying to use a format for something it’s really not meant to work for, then trying to “extend” it beyond the spec when it would just be simpler to use a different format in the first place.

    > Would *you* tell all those developers “Nope, sorry, JSON doesn’t ignore whitespace, find another format if you want something like that.”?

    Depends on the exact case. There are too many hypotheticals in your question to give a meaningful answer.

    > Many of the same reasons why comments in code are useful apply to comments in (certain types of) JSON files.

    And that’s another reason not to have them. JSON is best for data interchange over the wire, not files. I hate to see features being added for a use case that’s so clearly wrong for the format.

    (Actually, the more I program, the less useful I think comments in code are. If your code needs comments, it probably isn’t clear enough.)

    Christopher: Configuration files should be written in YAML, not JSON. YAML supports comments and has a more file-oriented syntax than JSON. JSON was meant for data streams, and that’s where it should stay. JSON for streams, YAML for files.

  • getify says:

    Thanks for the thoughtful response (certainly more reasoned than your first snide comment).

    First of all, there is no excuse for writing a parser that doesn’t follow the language spec, particularly if (as here) the point in which it departs from the spec is likely to break parsers that *do* adhere to the spec.

Apparently you aren’t aware that most JSON parsers are already tolerant of comments, ostensibly because the writers of those parsers felt like I did, that ignoring something like a comment is akin to ignoring whitespace. To suggest anything different is to blow a bunch of hot air.

    Also, I don’t care if every parser on the planet doesn’t adopt “ignore comments” as a feature. All I care about (as a dev) is if the parser I’m using is able to do so. I would prefer it be able to natively do so. If not, JSON.minify is a simple pre-processor that solves the problem. There’s literally no parser on the planet that can’t (by virtue of native support or JSON.minify, after porting) ignore comments. So interop is a non-issue.

    Why? Because I’m not suggesting that JSON+comments would ever be transmitted. That’s silly. I insist the opposite, actually, that you should minify your JSON (removing any whitespace AND comments) before transmitting. If your JSON only has whitespace, you should still JSON.minify() it before transmission. Or, pass it through a more traditional minifier. Which, by the way, would remove any comments, because it would treat the file like JavaScript, not some limited subset.

    Summary: JSON having whitespace is the same as JSON having comments. It’s fine when the file is latent and read only in local file I/O, and if it’s going to be transmitted, it should be minified first anyway, whether or not comments are present. And minification will pretty much always remove comments. So if you’re transporting your JSON+comments elsewhere, and you minify it first, what the heck are you so worried about as far as interop?

    > YAML is not a subset of JavaScript (as JSON is), which means that it’s not natively parsed or understood in JavaScript environments (browser or server)

    Neither was JSON, really (until recent browsers started including safe JSON parsers).

    Patently false. JSON was ALWAYS a valid subset of JavaScript. That’s how it was defined by Crockford, and is an invariant. Suggesting otherwise is absurd.

    What you mean is that you couldn’t natively parse a JSON string (like loading it from somewhere else). But that’s a misargument for two reasons:

1. having something as a string and wanting to parse it is NOT the same as claiming that it’s not a valid subset of the language. The actual string itself is a valid subset, and the contents of the string are a valid subset, but getting from point A to point B is an orthogonal issue (that’s a conversion issue, not a parsing issue, actually). The same would be true if you had a string of XML, or a string of YAML. The difference is, to use the YAML or the XML, your conversion would actually be parsing, which isn’t native. For JSON, the “conversion” doesn’t require parsing beyond that which the JS engine is already capable of.

    2. JSON-P (and by that, I mean passing around object-literals that “look” like JSON, to functions or assigning them to variables) has been around since the beginning of JS. You could load JSON from a remote location, using a <script> tag, and in that file, just assign the “JSON”-like object to a variable (or pass to a function), and immediately use the object value, without any further parsing/conversion, because… JSON is a subset of JavaScript. You couldn’t do that with YAML, or XML (save E4X which never caught on), or any other data representation format. You couldn’t load a YAML file and have the JavaScript parser natively interpret it. You can with JSON.
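A tiny sketch of that pattern, with hypothetical names. A remote file, say data.js, loaded via <script src="http://other-site.com/data.js"></script>, contains only:

var remoteData = { "user": "getify", "posts": 42 };

…and remoteData is immediately usable by the page, with no parsing/conversion step.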

    > Would *you* tell all those developers “Nope, sorry, JSON doesn’t ignore whitespace, find another format if you want something like that.”?

    Depends on the exact case. There are too many hypotheticals in your question to give a meaningful answer.

    I call B.S. That’s just a cop-out. Plain and simple, if you had a data representation format (call it “Foobar”) and your developers loved it, but it didn’t support ignored-whitespace, and they wanted to add whitespace, would you tell them to find a different format? I doubt it.

It’d be ludicrous to suggest a file format which couldn’t tolerate whitespace. That would be seen as a defect or shortcoming of the format — one which should be resolved, and in this case is easily resolved. Comments are nothing more than a fancy form of whitespace. They should be ignored by parsers and compilers, too.

    Almost universally that’s already true (except for a few exceptions where comments are given semantic meaning — all of which I completely disagree with) — comments get ignored. They’re for humans, not computers. Computers should ignore them.

    JSON was meant for data streams, and that’s where it should stay. JSON for streams, YAML for files.

    You can insist this all you want, doesn’t make it true. It makes it your opinion. And even if pure transmission was all Crockford ever intended for it, a lot of developers have since found other expanded uses for JSON besides transmission. So the best you can say is you don’t like that trend because you’re a Crockfordian Purist™.

    But you can’t claim a factual standing that JSON as a file format is invalid or wrong. I know that’s precisely what you are trying to claim, but you’re supporting that with nothing but your own subjective opinion.

And that opinion isn’t going to change the minds of all the node.js developers who are already using JSON as a file format (package.json, etc). Whether you like it or not, JSON is being used by some people as a file format, not a stream protocol. When used as a file format, whitespace AND comments should be tolerated and ignored, and the parsers which read JSON from a file should be smart enough to do that. In fact, they’re already half-way there (whitespace). End of story.

  • getify says:

    Furthermore, I should point out that what I’m suggesting is NOT that the JSON format should include comments, but that JSON parsers should ignore comments. These are very different suggestions.

  • You wrote:

    > Apparently you aren’t aware that most JSON parsers are already tolerant of comments

    No, I wasn’t, largely because it would never have occurred to me to try to put comments in JSON. If that’s so, then most JSON parsers are broken according to the spec.

    >Patently false. JSON was ALWAYS a valid subset of JavaScript.

    Yes, agreed there. What I was saying was untrue was that JSON was natively parsed in JavaScript. Sorry about the confusion.

    > I call B.S. That’s just a cop-out.

    No, it really isn’t. I find it very hard to answer technical questions in isolation. Lots depends on the use case. That’s a fact.

    > Plain and simple, if you had a data representation format (call it “Foobar”) and your developers loved it, but it didn’t support ignored-whitespace, and they wanted to add whitespace, would you tell them to find a different format?

    If I thought whitespace was incompatible with the philosophy of the format, perhaps. Otherwise, I’d release a new version, and make clear which version a stream or document was in.

    Note: I’d release a new version. I wouldn’t encourage parser developers to ignore the spec.

    > And even if pure transmission was all Crockford ever intended for it, a lot of developers have since found other expanded uses for JSON besides transmission.

    And all of those uses that I have come across would be better handled by other formats. JSON does one thing — data serialization for transmission or (perhaps) DB storage — and does it very, very well. There is no valid use case for using it in any other context. That is my considered opinion.

    > Furthermore, I should point out that what I’m suggesting is NOT that the JSON format should include comments, but that JSON parsers should ignore comments. These are very different suggestions.

    OK, then you’re suggesting a de facto change to the format. That’s the wrong thing. Instead, fight for comments to be included in the next version of the spec, or call your format JSON-C or something. Don’t pretend that you’ve written a compliant JSON parser. You haven’t.

  • getify says:

    If that’s so, then most JSON parsers are broken according to the spec.

    From Doug Crockford’s own words, in 2005 (and quoted above): “A JSON decoder MAY accept and ignore comments.”

    From that reading, I believe it to be entirely “legal” (that is, not invalid) for a JSON parser to ignore comments.

    The spec does not say “Must throw an error if comments are encountered.” The spec is just silent on the topic of comments, and then Doug added that additional guidance several years later.

    So, based on that, why is it invalid for a JSON parser to do as Doug (and I) have suggested, which is to accept and ignore the comments? What in the spec requires a failure on such? Omission is not the same as commission.

    incompatible with the philosophy of the format

By whose definition of the philosophy are we going? Yours? Or Doug’s? Because as I just quoted, Doug seemed to be OK with parsers ignoring comments. What he really didn’t want (his clearly stated philosophy) was for parsers to assign any semantic/functional meaning to the comments.

    And all of those uses that I have come across would be better handled by other formats.

    Does JavaScript (I don’t care about any other environment right now) have a native YAML parser built into it, so that I can grab some YAML from a string (a file, an Ajax request, whatever) and directly/natively parse it?

No. It doesn’t. I have to load an additional library to parse YAML. So let’s be clear here: you’re not just telling people “go change your existing file format because I don’t think you’re doing it right”, thereby creating incompatibilities that must be handled for existing users in an upgrade, but you’re also creating an additional non-native dependency on a third-party library.

    When Doug did that, I think he probably assumed (and rightly so) that there was a clear path to having JSON.parse built into the JS environment (eventually). It’s pretty unclear that YAML is on the same path in JS.

    So this question comes down to, should we pre-process JSON that has comments in it, or should we convert to an entirely new format and install a new dependency?

    I think it’s pretty clear that former, for this use-case, has far less friction and risk as compared to the latter. So I again assert that your use of “better” has a rather dubious definition at this point. “Better” in that it “better fits your opinion” or that it is “better for JS implementors” (maybe). But I would define “better” as “lesser evil, less risk, less dependency, more native support, etc”.

    ——-

    Furthermore, let’s postulate for a moment about the far-off future… imagine that my JSON.minify() pre-processing concept for JSON-with-comments catches on in the SSJS world, and lots of people are using it because lots of SSJS people are storing their config in .json files, and they see the value-add (even if you argue it’s slight) in having inline comments in those files.

    If we were at that point, would you still argue that all those thousands of people have been “doing it wrong” all along, and that it’s invalid to desire to “pave the cowpath” and make JavaScript’s JSON parser similarly lenient of comments in the same way that many other JSON parsers are?

    What do you think Doug would say (given his quote above)?

    I realize we’re not there yet. My idea is rather new and not-well adopted. But I’m trying to make the case for that movement to start, and I’m trying to make it easy for people to do it now without thinking about it too much. **Someday** I hope that pragmatic wisdom wins out over staunch language legalism/elitism.

    OK, then you’re suggesting a de facto change to the format.

    Have you read the JSON spec in detail? Nowhere in the spec does it say how parsers should handle whitespace. Nowhere does it say “whitespace must be accepted” or “whitespace must be ignored”. What it does say is, “Whitespace can be inserted between any pair of tokens”.

    This is my opinion, but that does NOT sound like whitespace handling is part of the spec, or more to your point, the “format”. It sounds much more like whitespace is a tertiary thing that parsers should allow, but ignore. By contrast, for instance, Python is quite clear about a semantic meaning assigned to whitespace (indentation). JSON’s mention of whitespace is far less important, and is, at most, a side-note.

    So, if we add more tertiary things (namely, JS-style comments) that parsers should allow but ignore, IMHO that is NOT a change to the format. That’s all I’m suggesting. Nothing more.

    fight for comments to be included in the next version of the spec

    Doug, in his infinite wisdom, has declared that he intentionally did not version the JSON spec. This means two things (according to him):

    1. the currently adopted spec is final
    2. there will never be other versions of JSON, only something to entirely replace JSON

I am not as ambitious as you, to try and convince the entire SSJS community that JSON is out, and YAML is the new hotness. I’m not trying to replace or supersede JSON with the “next thing”. I’m simply trying to make it slightly more useful to a small subset of the use-cases.

    or call your format JSON-C or something

    Did you actually read the blog post you’re commenting on? I quote myself from above (in the article):

    “But since I’ll probably never win that argument with Crockford (or anyone else), the next best thing was that I defined this other thing which is INCREDIBLY SIMILAR TO JSON… namely JSON+Comments. I’m not much for silly overused acronyms, so I won’t go so far as to call it JSON+C. But you get the idea.

    And here’s my very strict, and easy to enforce as a standard, definition:

    JSON+Comments: valid JSON + valid JS comments.

    JSON.minify() is a handy tool to help “convert” your bastardized “JSON+C” into real JSON.”

    I wrote that to appease some of the strict police, who kept pointing out that JSON+anything != JSON. Whatever. That seems like such a pointless argument point, but I *did* in fact give the concession that it was OK to say that what I’m suggesting is (at least for now) this other thing that’s really close to JSON, can label it something arbitrary like JSON+C.

    Don’t pretend that you’ve written a compliant JSON parser.

    Umm…. where did I claim that?

    What I did suggest, which I think is a valid claim you should consider, is that Doug’s quote says (to me, and I think most people) that any parser which accepts and ignores comments is still a valid (aka “compliant”) parser.

    But I didn’t write a parser. What I wrote was a helper pre-processor since JS’s parser is not (yet) quite as kind as some of the other languages.

  • James Babcock says:

    I came across this when looking for a workaround for the fact that browsers’ JSON.parse method won’t accept comments. That was a major WTF moment. This shouldn’t be a philosophical debate; support for comments is completely mandatory for a large fraction of the applications in which JSON will appear, and for a parser to reject JSON input with comments is a *bug* and a putting a preprocessor in front of the parser is a *workaround*, regardless of what the spec says.

  • Glenn Widener says:

    I totally agree with James Babcock. JSON is extremely useful for static configuration files – if it supports comments. The spec should change to REQUIRE that JSON.parse() accept and discard comments.

  • Todd Eddy says:

    I have a perfectly reasonable use for comments in JSON, and it’s why I found this post while trying to find out how comments are handled. I am making a simple service that sends status messages back and forth over UDP (I chose UDP because I don’t care if I lose a couple, and it’s faster than TCP). Problem is, I want to be able to know that the data I received wasn’t tampered with, and that no one is trying to inject code. My plan is to create a signature of the JSON payload, like so:

    {“blah”: “whatever”}/* dk2hsj2ldhgkjdlwh */

    I then strip off the comment, generate an HMAC signature of the JSON code, and compare that with the signature. The only way I’d get the same signature is if the same key was used on both sender and receiver. I can’t do something like add a “_signature”: “sdfklslal” property to the JSON itself, because adding that property changes the signature. Since I’m writing a custom server, in my case I can accept whatever I want. Kinda disappointing to see the official answer is NO.
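    Something like this minimal Node.js sketch of the scheme (the function names, key, and regex are mine, purely illustrative):

        var crypto = require('crypto');
        var KEY = 'shared-secret'; // same key on sender and receiver

        // append the HMAC of the payload as a trailing comment
        function sign(json) {
          var sig = crypto.createHmac('sha256', KEY).update(json).digest('hex');
          return json + '/* ' + sig + ' */';
        }

        // strip the comment back off, re-compute, and compare
        // (a real implementation should use a constant-time comparison)
        function verify(payload) {
          var m = payload.match(/^([\s\S]*?)\s*\/\*\s*([0-9a-f]+)\s*\*\/$/);
          if (!m) return null; // no signature comment found
          var sig = crypto.createHmac('sha256', KEY).update(m[1]).digest('hex');
          return sig === m[2] ? JSON.parse(m[1]) : null; // null = tampered
        }

    Used that way, verify(sign('{"blah": "whatever"}')) round-trips cleanly, and any altered byte in the payload makes verify() return null.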

  • Greg says:

    (Actually, the more I program, the less useful I think comments in code are. If your code needs comments, it probably isn’t clear enough.)

    This is what it boils down to: programmers who dislike comments. In any data exchange you’re going to run into enumerations – comments are a perfect way of communicating the possible enumerable values, in context and in a way that makes perfect sense. And if the person transmitting/generating the document isn’t concerned about the size added by those comments, who is to say they’re not in the best position to make that determination?

  • Bart M says:

    I’m currently writing a C++ app that relies heavily on (a lot of) config files, for which standard config-file formats are too limited.

    Almost all of these config files should be created/modified by hand by the users of my program, so I ruled out XML pretty quickly – I hate writing it myself.

    I looked around and stumbled upon JSON, which was easily parsable in C++ using boost::ptree – but at first sight it didn’t support comments, which would be a deal-breaker for config files. Luckily, the JSON parser in boost does seem to ignore comments, although I wrote my own filter to strip the comments before I found that out.

    I fully agree with the article. Strictly for transmission, adding comments would be stupid – but nowadays JSON is used for a lot more than just sending data back and forth.

  • Dennie de Lange says:

    Great post, exactly what I’m thinking (three years later :)).
    I’m also stripping out comments before parsing the actual JSON. Using JSON for configuration files just makes sense nowadays.

    Also, when using JSON for configuration files, preserving the order in which the entries are defined makes sense. Luckily there are parsers which support this feature (simple-json).

  • David Fregoli says:

    So, 12 (TWELVE) years after JSON was conceived, nobody has ever felt the need to change it, yet you call it a bad decision to make it final?

  • getify says:

    @David-
    Your comment sounds fairly ignorant on several fronts:

    1. Your dates/facts are wrong.
    – JSON was not standardized 12 years ago; it was standardized in RFC 4627 in 2006. True, it existed for quite a while before that, but it was hardly standardized. In the early days, it was widely variable. It didn’t settle for a long time.

    – This article wasn’t written 3 days ago, it was written over 3 years ago. That means it was, at worst, 4 years after “the JSON standard”. Certainly not 12 years after the fact.

    2. I admit in this blog post that “changing the standard” (which is not even what I’m suggesting) is probably impossible. But since I’m not suggesting that, it’s a strawman to argue against it.

    What I’m suggesting is not a change to the spec. Doug never deals with whitespace in the official grammar spec. What he does is mention, as an implementation detail, that whitespace should be accepted and ignored. So what I’m suggesting is an implementation detail, too: that the same should be true of comments.

    Thinking that JSON parsers should treat comments ANY differently than whitespace (both should be accepted, and both should be ignored) is just not rational. You can religiously resist such a change, but distinguishing between the two in this regard is NOT based on any defensible logic.

    They are BOTH merely author-added readability improvements to human-maintained files. As such, they BOTH are for humans only, and computers should just ignore BOTH of them.

    3. Many languages’ built-in JSON parsers have already long-since done what I’m suggesting here, which is that they already **accept and ignore** comments in JSON strings they parse. That’s been true for nearly a decade. So, no, I’m not coming along 12 years later. I’m trying to get JS to catch up to everyone else.

    What I suggest here is hardly “new”. I’m suggesting, at most, that the JSON.parse() in JS which wasn’t even officially “standardized” until a couple of years ago, can do the same thing that many other official language JSON parsers do.

    To not do so doesn’t make JS’s version better; it makes it more antiquated and less flexible/useful compared to other languages’ implementations.

    Moreover, the FACT that many languages’ parsers still accept and ignore comments, “12 years after the standard”, is a testament to the fact that comment-ignoring isn’t a spec question, it’s an implementation detail. The smarter implementations do it; the dumber ones (like the one in JS) do not.

    4. As noted in both updates at the top of the post, I’m not even suggesting something that’s substantially contrary to Doug’s own mindset. He asserted that the *real* problem “back then” was that parsers were NOT ignoring comments like they should have been, NOT that authors were including comments to be ignored. His unfortunate conclusion was that the only way to get parsers to ignore comments was to outlaw them. In retrospect, this WAS a terrible and lame decision.

    You cannot legitimately ignore the fact that Doug clearly has indicated on more than one occasion that a JSON parser which ignores comments is not only NOT breaking the standard, but it’s in line with the spirit of what he always intended.

  • Darek Pryczek says:

    I also strongly agree with the article.
    I came across it after “a WTF moment” (as somebody here called it) caused by the realization that JSON does not natively support comments. Such a big flaw in an otherwise nice little text data format…

    All text data/code formats should support comments.
    One good reason not mentioned above is simply experimentation: comment out some part of the configuration file to temporarily change a setting, and after a while you have an easy way back (just remove the new line and un-comment the old one).
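    For example (a made-up config; the old value stays one un-comment away):

        {
          // "timeout": 30,  <- original setting, kept around for easy rollback
          "timeout": 5,      // temporary value, just experimenting
          "retries": 3
        }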

    Comments are one of the main reasons I think XML is useless.
    The comment syntax in that language is completely absurd: you cannot place comments inside an element (for example, to comment out an attribute), and adding/removing comments is such a pain.

    And I’m sorry to say, but YAML is bloated and overblown, with a much, much too complicated syntax and a complete lack of truly portable, freely-licensed parser code. Maybe some day it will be a useful alternative, but not today, in 2013, not if you are going multiplatform.

    So I suggest we just use JSON with comments, add small preprocess steps and be happy with it :)
