Archive for September, 2005

Immediate Gratification and E-Books

I feel about e-Books the way I felt about digital photography a decade ago: devices (whether dedicated readers, PDAs, or notebooks) are not yet nearly good enough to displace the analog experience, but it’s now clearly just a matter of time. Datapoints from early adopters are multiplying, such as yesterday’s E-Books to the Rescue (via Daily Book Report):

Two minutes after making payment, I had the text of my “book” right in front of me … My stress born from a lack of information subsided as I started to read … as I scrolled its pages upward, one after the other, I realized that I was reading it more rapidly than I would a physical book … When my eyes grew fatigued, I zoomed in on the text using the Adobe Reader; my fatigue dissipates. When I need to “highlight” something in the electronic text, I use the highlight function in the comment section of the Adobe program; no need for Highlighter-Brand markers unless I wanted to get a noxious-fume buzz…. The next day, I looked at my shelf of paperback books and realized I was looking at the written-equivalent of a large, 1980s-era CD collection. The books were taking up a lot of physical square-footage, when they, like modern music or video downloads, would more efficiently serve me as electronic files. And, the books showed wear and tear, I couldn’t enlarge their text, and I needed markers to highlight the segments that were important to me, and I couldn’t cut and paste their text to a word document. In sum, my physical books seemed archaic, like art-pieces that should be displayed, but hardly employed as modern resources of information.

Wow! I admit I’m not personally “beyond paper” in my own reading preferences, and, per Paul Saffo, we shouldn’t “mistake a clear view for a short distance”. But as with digital cameras, I’m convinced that once electronic displays offer good-enough reading quality, the tipping point will come much faster than most of us imagine. I’m excited about the prospect of helping to accelerate universal access to the world’s knowledge. Indeed, our local Seattle Public Library recently began offering 24/7 access to e-Books and audio books. Our new Koolhaas-designed building is impressive, but it’s only open 50 hours a week. And I probably couldn’t get in wearing pajamas.

Another College Try for E-Textbooks

Last week the Star-Ledger (Newark, NJ) reported that ten colleges are experimenting with selling textbooks in e-Book format. I don’t think that 33% off for an e-Book that can’t be printed or transferred to other users or machines, and that stops working after 12 months, is a compelling value proposition. Sony’s Librie in Japan certainly didn’t get much take-up with a similar self-destructing DRM scheme. And the economics here are even worse: given the high and readily obtained residual value of printed textbooks, buyers of the e-Book format would in effect be paying more for less. But it’s an interesting datapoint that the experiment is being tried, and another sign that, despite the bursting of the e-Book hype bubble in 2002, the embers are still burning.

Over-Egging the XForms Pudding (Not)

Mark Birbeck seemed to feel I had accused him of over-egging the XForms pudding and posted a thoughtful treatise on XForms programming in reply. As usual Mark makes a lot of good points in support of his thesis that:
“… a programming model based around events, declarative mark-up, a spreadsheet-like dependency engine, automatic reflection of model state to the UI, automatic validation using standards, easy encapsulation of data submission, and so on, is easier than scripting, AJAX, C++, VB, Java, and yes, even Ruby on Rails.”
I was most definitely not accusing Mark of over-egging any puddings: as an uncouth Yank, I didn’t previously have the phrase in my lexicon. And clearly Mark’s got a broader perspective than XForms alone. Others involved in XForms have been somewhat vocal about the demerits of script-based web applications and the inherent superiority of XForms’ declarative approach (e.g. for promoting accessibility and device-independence). Mark, by contrast, has taken a pragmatic, use-case-driven approach to looking at how best to utilize XForms capabilities in real-world rich-client application scenarios. Indeed, Mark’s intricate sample applications cross the line into “programming XForms”.
And that’s exactly my point. It’s not a question of whether XForms is an improvement, but rather whether it’s realistic to expect programmers to adopt it all-in as a displacing paradigm. Innovations that require people to change behaviors often have a hard time getting adopted. Millions of developers build rich-client user interfaces by combining declarative XML markup for presentational artifacts with JavaScript for client-side interactivity and business logic. This approach is common across AJAX, XUL, Laszlo, Dashboard, Konfabulator, and Flex, and Microsoft is paying its typical fast-follower “flattery” by adopting it for its forthcoming WPF/E.
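To make the pattern concrete, here’s a minimal sketch of the markup-plus-script style these frameworks share (the element IDs and URL are made up for illustration; this is generic browser JavaScript, not any one framework’s API):

    // Declarative markup describes the presentational artifacts, e.g.:
    //
    //   <button id="refresh">Refresh</button>  <div id="results"/>
    //
    // ...while the interactivity and business logic live in script:
    document.getElementById("refresh").onclick = function () {
      var req = new XMLHttpRequest();
      req.onreadystatechange = function () {
        if (req.readyState == 4 && req.status == 200) {
          document.getElementById("results").innerHTML = req.responseText;
        }
      };
      req.open("GET", "/latest-data", true);  // illustrative URL
      req.send(null);
    };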
If these RIAs exhibit an MVC architecture at all, it is typically in classic OO programming style via rich domain objects (i.e. the data are encapsulated in programmatic objects, with method-based access). XForms, by contrast, requires that the presentation/controller layer of the application deal with an XML data model. This RESTful (or, in SOAP terms, “doc/literal”) approach can also be supported in classic script-based RIA architectures, so in some sense XForms gives programmers fewer capabilities. Less may be more in terms of the overall application lifecycle, and I happen to believe that the looser coupling of the XForms approach is a promising paradigm in an SOA world of remixed web applications. Nevertheless it’s hard to envision many developers giving up the OO approach that has been drummed into them for decades as best practice.
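For illustration, here’s roughly what the two styles look like side by side (the Order object is invented for this sketch, not drawn from any real framework):

    // Classic rich-domain-object style: data encapsulated behind methods.
    function Order() {
      this.items = [];
    }
    Order.prototype.addItem = function (sku, qty) {
      this.items.push({ sku: sku, qty: qty });
    };
    Order.prototype.itemCount = function () {
      var n = 0;
      for (var i = 0; i < this.items.length; i++) {
        n += this.items[i].qty;
      }
      return n;
    };

    // XForms-style doc/literal alternative: the UI binds directly to a bare
    // XML instance such as
    //
    //   <order><item sku="X100" qty="2"/></order>
    //
    // with derived values like the item count computed by declarative
    // bind/calculate rules over the document rather than by object methods.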
And Mark’s comments about the potential to extend the XForms actions mechanism with looping and branching constructs like “if” and “while” set off further alarm bells for me. If XForms actions are destined to become a full-fledged, Turing-complete programming language – and a procedural one at that, rather than a functional-programming system like XSLT – then it’s simply in head-on competition with JavaScript/Java/C#/Ruby/et al. Given the extremely cumbersome XML syntax and the lack of a user base, what would be the motivation for developers to adopt it? Once you have looping and branching, the static analysis required of a developer tool processing XForms action sequences wouldn’t seem fundamentally different from processing blocks of JavaScript. XML syntax makes things much more complicated for the human programmer yet is only a minor convenience for tooling – whether the source is JavaScript or XML, parsing is a minor part of the tooling equation. Some folks from MIT have tried to build a programming environment, Water, around simplified XML, but it appears to have gained no traction in the community.
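To see why the alarm bells ring, compare a looping action sequence (the while/delete elements below are hypothetical illustrations, not constructs XForms 1.0 actually defines) with the same logic in plain JavaScript:

    // Hypothetical XML action syntax:
    //
    //   <xf:action ev:event="DOMActivate">
    //     <xf:while condition="count(instance('cart')/item[@flagged]) &gt; 0">
    //       <xf:delete nodeset="instance('cart')/item[@flagged][1]"/>
    //     </xf:while>
    //   </xf:action>
    //
    // The equivalent expressed directly in JavaScript:
    function removeFlaggedItems(cart) {
      for (var i = cart.items.length - 1; i >= 0; i--) {
        if (cart.items[i].flagged) {
          cart.items.splice(i, 1);  // delete in place, iterating backwards
        }
      }
    }

Same semantics, but the XML version asks the programmer to wade through angle brackets and escaped operators to express a simple loop.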
I’m not sure what the right answer is, but Clayton Christensen makes compelling arguments that the best way to get disruptive innovation adopted is to compete against nonconsumption. To me this comes back to focusing on tooling that enables new workflows (ultimately the Holy Grail of applications assembled by visual designers and non-technical knowledge workers), rather than expecting to persuade OO developers to use sequences of XML actions to do the same job as ECMAScript, or to give up their domain objects (luckily, E4X promises to bridge the XML-ECMAScript gap; see the sketch below). I also agree with Mark that XForms’ promise is as a “component of a much wider solution” and that other aspects, like the universal document, still need working out. My instinct is that, for solutions aimed at developers, ECMAScript will still play a central role. In that regard I might suggest to Mark and other XForms advocates that coming up with hybrid examples that usefully blend script and declarative XForms constructs, rather than striving to use only XForms markup even at the cost of abstruseness, might be a more constructive form of evangelism for XForms as (to developers) a “sustaining innovation” rather than a “disruptive innovation”.
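For the curious, here’s a taste of what E4X (ECMA-357) enables: XML as a native ECMAScript datatype, so script can manipulate an XForms-style instance document without DOM ceremony (the order document is invented for this sketch):

    // E4X: XML literals and XPath-like navigation built into the language.
    var order = <order>
                  <item sku="X100" qty="2"/>
                </order>;
    order.item.@qty = 3;                           // attribute access via @
    order.appendChild(<item sku="Y200" qty="1"/>); // grow the document
    var skus = order.item.@sku;                    // list of all item skus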

Universal Access to All Knowledge (and other ambitions)

I got a chance to say hello again to Brewster Kahle a couple of days ago. The last time we met was in the early 1990s, during the Cambrian Explosion phase of the Web. This was when alternative protocols like Gopher and his WAIS still roamed the Internet.
I’ve used and appreciated the Internet Archive’s Wayback Machine but hadn’t fully grokked the breadth of his vision around providing access not just to the Web per se but also to books and other texts. This is a worthy project that truly merits the community’s assistance.
Brewster is also pushing the envelope on the legal front of copyright doctrine. As I see it, even though DRM is now common in the audio world, and iTunes is a $500M+ business for Apple, this licensed-content model couldn’t have gained a mass market until “open” MP3 audio had become widely utilized. Most law-abiding users’ iPods hold many more MP3s ripped from their CDs (fair use) than FairPlay AAC files bought on iTunes. IMO we need to (metaphorically speaking) establish the MP3 for e-Books. Open access to a body of uncopyrighted texts is a key piece of that puzzle.
My touchpoint for the legal issues in enabling mainstream adoption of e-Books, including parallels and non-parallels with other media types, is Clifford Lynch’s seminal paper The Battle to Define the Future of the Book in the Digital World. Long, but definitely recommended reading.