Scope creep, and the failure of experts

Does W3C-blaming risk bringing about what it deplores?

There were big debates last week about “blaming the W3C for a proprietary web” (Slashdot, Paul Ellis, Molly Holzschlag, Alex Russell, Shelley Powers, OSNews, John Carroll, Steven Clark, more). There was a subthread of “Don’t blame the W3C; blame the browser vendors instead”, but that still assumes there is blame to be doled out for something.

My bottom line: When people ask “Should we blame the W3C for being so incredibly slow on RIA specifications (or video specifications or two-way communication specifications or whatever) that proprietary corporations end up doing things on their scurrilous own?” I hear a tower of assumptions which could end up damaging the consortium’s ability to do the things it actually needs to do.

Here are some of the underlying, unquestioned assumptions I read in comments to the essays above:

  • Assumption #1: Real life can be divided into “The Open Web” and “The Proprietary Unweb”.

    To understand the question “Is the World Wide Web Consortium to blame for the rise of The Proprietary Unweb?”, you’d first have to understand real life as composed of that same duality, then agree that the two sides are pitted against each other, and then conclude that action is required to save one by attacking the other.

    When people toss around “The Open Web vs The Proprietary Unweb”, they assume that everyone else understands life, and technology’s social effects, in the same way. Such labeling tends to encourage arguments rather than resolve them.

    Asserting an assumption does not make it so. It does make the rest of us suspect that you believe it is so, but it’s not really the basis for good communication.

    There is no need for such a kill-or-be-killed, oppositional style of technology development. Assumptions to the contrary would first need to be substantiated.

  • Assumption #2: The W3C is capable, and is solely capable, of bringing about whatever we decide our eventual networked technology choices will be.

    We humans finally achieved personal electronics just a generation ago, and we’ve had only a decade for freely networking them together. We’re still striving to move this network to the pocket, so that the computer goes with you instead of you going to the computer. I don’t even have a pair of WiFi projective sunglasses yet, for goshsake. We don’t know much yet. It’s early.

    The World Wide Web and its eventual consortium definitely weren’t created to predict all new uses of the network. They might be able to help in many of these goals, but the World Wide Web is only one part of the larger Internet: “A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web.” The Net is bigger than the Web.

    The work of the World Wide Web Consortium is not easy work. We’ve had a hard time getting newer specifications for HTML/XHTML/HTML5 and CSS, or agreement on things like JavaScript 2. If it’s hard to standardize even these simpler, well-known things, then it’s potentially destructive to attempt to standardize everything at once, particularly in advance of any practical real-world knowledge. Asking too much of the W3C risks getting not much of anything at all.

    We can see this already in just HTML itself. What was once “so simple that anyone could learn to publish in an afternoon” has become a hulking monster of complexity, where you are judged by how little you are out of compliance. HTML creation has become much less accessible to real people with each increase in the specification’s complexity and scope. VRML and SVG are two other processes which folded under the weight of the increased expectations placed upon them. It’s hard to be simple.

    Assuming W3C primacy over all new Internet uses assumes too much of the group, its mandate, and its abilities. Assuming the consortium is “too slow to solve the video problem” or “too slow to solve the mobile problem” overburdens it, and distracts it from the task of solving the hypertext problem.

  • Assumption #3: All decisions must be centralized. The proper experts should tell us what to do.

    This is more a general lifestyle assumption, but one which seems to be the basis of the two assumptions above. You hear a lot of the word “should” in these conversations.

    Committees of experts are good at noticing what works, slowly grinding away differences, making a predictable recipe for future success. The top-down process is efficient at distillation, but the bottom-up process tends to work better for discovery. Corporate marketing didn’t create a YouTube, a Flickr, a Twitter, a Techmeme. It’s small groups which usually discover new niches, and large groups which later colonize. The Five Year Plan doesn’t uncover novelty. Centralism doesn’t do as well in unknown territory.

    The liangyi is made up of both Yin and Yang, moon and sun, order and chaos, centralism and decentralism. Each grows from the other; the two are interdependent. The standardization process works better for negotiating the known than for discovering the unknown.

    I think we need both centralism and decentralism to succeed. The edges are nowhere without the center, and vice versa.

I don’t see the need for setting up oppositions in internet technology. We should look at all options objectively, not characterize technology by who developed it. Nor do I think the World Wide Web Consortium can realistically be expected to solve all the new problems of The Internet.

I fear that asking more of the W3C than it is capable of accomplishing will make its actual work more difficult, endangering the basic browser technologies upon which we’re slowly continuing to agree. Such unwarranted assumptions add risk to our future hyperlinking abilities, to our future networking abilities, to our future personal publishing abilities.

The question in this discussion is itself part of the problem. It assumes the W3C must change course to slay giants. It is not a neutral question, and asking it often enough makes it harder for the W3C to slowly accomplish what it must do.

Simple standards work best. If we agree on voluntary standards, we want everyone to be able to meet them. If specifications are too complex to implement or follow, they won’t attract as much voluntary compliance. The simpler the rules, the better.

Disclaimers:

  • This is me talking. I work at Adobe, but it’s not the group speaking. Bloggers who clip a quote prefaced with “Adobe says” are hereby requested in advance to donate that page’s ad revenue to charity.

  • Sorry this is so lengthy. I’ve written four previous drafts, distilling and shortening each time, but it’s still too long. Next month I’ll probably be able to say it better. ;-)

  • Comments are very welcome. But this discussion breeds tangents. I’d like to focus on the core question of “Does assuming too much of the W3C risk damaging it?” If you’d like to tell me how stupid I am for not understanding how evil and proprietary things are, then one hundred words maximum, please, or a quick link/summary to your longer essay elsewhere. (And, as always, anonymous comments are less persuasive.)