Archive for August, 2005

Adobe and eBooks

David Rothman of TeleRead posted some pointed criticism of Adobe’s role to date in the evolution of eBooks. He dings PDF for “draconian” and user-unfriendly DRM, “bloated” files, and for being a format that “does not play well on PDAs, cell phones and other gizmos with small screens”. Ouch! At least David admitted “Adobe is hardly the only villain” in the Tower of eBabel.
While we might not express it in quite the same phrases, David and I largely see eye to eye regarding the industry’s initial wave of eBook products and services. David also shares my perspective that Adobe still has the potential to play a critical role in the ultimate success of eBooks and eReading. With a thick skin as part of my job description, I look forward to further ideas from the community on how we can get there from here.

RESTful And Social Internet Applications

Mark Birbeck is always thought-provoking, as a thinker and prototyper on the frontier of next-generation Internet Applications. I particularly appreciate his perspectives on metadata and microformats.
Mark recently posted an example of a Link Manager Written in XForms. This and some of Mark’s earlier examples, such as his XForms-based RSS Reader, really show off the potential power of XForms, as well as the appeal of the sidebar approach to extending the browser as a platform. To me, however, these sample applications also show that declarative, model-based application construction “without programming” is still a bit of a Holy Grail. A rich application may be buildable script-free, and I’m a bit astonished at what Mark has shown can be accomplished within XForms, but at some point the complexity of the XML markup arguably exceeds what a script-based approach would have produced. If you are really going to be implementing programmatic algorithms, having to use cumbersome XML markup with a fixed set of atomic actions, and only work-arounds to enable looping, amounts to programming with boxing gloves on. And while XForms doesn’t prevent scripting, if you are going to have a major script component in your RIA, would XForms really be buying you much in the first place? A big part of the appeal of emerging frameworks like Ruby on Rails is the elimination of XML configuration spaghetti in favor of simpler, more streamlined code. As a coder this makes a lot of sense to me. And with introspection and other techniques, tools can do a lot of analysis on programmatic code – further diluting the supposed advantages to programmers of declarative approaches.
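To make the contrast concrete, here is roughly where declarative XForms shines – a spreadsheet-style dependency expressed as a bind (the element and attributes are per XForms 1.0; the instance structure is made up for illustration):

```xml
<!-- A derived value: total recalculates automatically whenever
     price or quantity changes. No script, no event wiring. -->
<xforms:bind nodeset="/order/total"
             calculate="/order/price * /order/quantity"/>
```

Achieving the same recalculation in script means writing explicit event handlers. But the trouble runs the other way too: XForms 1.0 has no loop construct, so genuinely iterative logic means dispatching events at yourself – and that is where the boxing gloves go on.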
It seems to me that if declarative model-based approaches to application construction are going to take off, advocates need to change the rules, and stop trying to oversell programmers and scripters on non-programmatic approaches. Instead we should create new classes of tools that really are for non-programmers, that do things like hide the complexities of XML Schema and XPath. When the day comes that non-programming users are really building production-level RIAs, I’ll wager long odds that they won’t be doing so by editing XML XPath node-set expressions in Notepad.

WebOS In A Nutshell

Jason Kottke just published GoogleOS? YahooOS? MozillaOS? WebOS?. Great post, and a must-read for those interested in the whole “Web 2.0” concept. I do have a few minor bones to pick:
– Today’s Web is more than HTML. I think it’s fair to consider the de facto proliferated Web client platform to also comprise SWF, PDF, XML data formats (e.g. RSS), and to a lesser extent Java. It would be great if these client capabilities were all first-class and well-integrated with each other – more on that some other time.
– On a similar “let’s not be too HTML-centric” note, arguably the least important thing in making offline apps is serving up the presentation. To support client-side data and business logic, WebOS as posited seems to need a local app server more than a local web server. With the right browser caching strategy, a headless app server might be all that’s needed for occasionally-connected workflows like offline Gmail – especially since, in both AJAX and SWF-based RIAs, an entire app’s user experience may be embodied in a single page/URI.
– A number of software solutions have used local web servers, notably Userland Radio and Perforce. Why none of these approaches took off probably deserves some analysis.
– The list of companies who could be thinking in these terms was shy a few names. ‘Nuff said.

Atom an IETF Proposed Standard

Last week the Atom specification advanced to proposed standard status. Less than two years from initial community thought experiment to official standard is practically a demonstration of faster-than-light travel.
I’m a huge believer that XML-based microformats are a transformational part of Web 2.0, and that the advances in Atom are going to enable entirely new classes of services. Of course the unfortunate “syndication format wars” between flavors of RSS and now Atom are still raging, with RSS still far in the lead in terms of adoption. Perhaps Richard MacManus is right and it’s just all going to be called feeds – while I get queasy when Microsoft tries to rebrand open technologies, if calling it all feeds helps bridge the RSS schisms, I’m all for it.
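For readers who haven’t peeked under the hood, a minimal Atom 1.0 feed looks like this (`id`, `title`, and `updated` are the elements the spec requires on both feed and entry; the identifiers and dates below are placeholders):

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <id>urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6</id>
  <updated>2005-08-01T12:00:00Z</updated>
  <entry>
    <title>Atom Advances to Proposed Standard</title>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
    <updated>2005-08-01T12:00:00Z</updated>
  </entry>
</feed>
```

The mandatory globally-unique `id` and well-defined timestamps are exactly the kind of rigor that should enable new classes of services beyond simple feed reading.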

Openness – Necessary, Not Sufficient

In comments on my post on proprietary architectures being obsolete, stevex pointed out that other hard-drive-based MP3 players (e.g. Archos) predated the iPod and didn’t do nearly as well. Being “more open” clearly wasn’t sufficient for Apple’s competitors. But my claim isn’t that differentiated openness drove the iPod’s success, only that openness was a necessary element of the successful solution. Apple is still hardly the first company one might free-associate with “standards-based open architecture”. But I believe it is clear that today’s Apple (Unix, TCP/IP, USB, VGA, Intel) is not the Apple of old (MacOS, Newton, AppleTalk, idiosyncratic hardware & peripherals).
I share stevex’s perspective that critical success factors for iPod included great industrial design, brand/marketing, and the integrated user experience of the iTunes/iPod combination. The point really is that had Apple in 2001 behaved like the Apple of 1991 they would have built around a proprietary audio format and specialized mass storage, and all the above success factors wouldn’t have made iPod a hit. Steve & team clearly get the need for appropriately building on open architectures in order to innovate elsewhere. I even see signs that Sony, like Apple well-known for favoring proprietary architectures, is changing its spots. More on Sony in a future post.
At the end of the day, all the openness in the world won’t make a hit product. For that matter, neither will focusing on incremental price/feature wars in an existing market space. Apple largely ignored existing entrants like Archos, creating and capturing new demand with the iPod. I believe Apple’s determined Blue Ocean mindset is ultimately their core strength.

eBooks: When, Not If

eBooks have been a bust so far: anemic consumer adoption and a minuscule total market. But recent signs that a second wave may be coming have reached the mainstream press. News that an Arizona school is eliminating textbooks in favor of iBooks hit ink this week (naturally, the story made Engadget over a month ago). Personally I think that the overall trend away from traditional reading and towards electronic reading is already clear (how many people read blogs on paper?), and that eBooks’ emergence as a significant market is a “when”, not an “if”. When is it likely to happen, and what should Adobe do to help?
First, let me be frank: eBooks are a bit of a sore subject at Adobe right now. We and other SW vendors promoted and made substantial investments in eBook infrastructure during the initial hype wave. Although we succeeded in establishing PDF as a leading eBook format, when a forecast $3B market for downloadable books by 2004 turned out to be less than 1% of that number, discouragement was perhaps not surprising. Meanwhile general adoption of PDF and Acrobat is going great guns, and we are seeing significant traction in the enterprise space. So why the heck would I be bullish on consumer eBooks?
Besides congenital optimism, one explanation is naïveté: I didn’t participate in the first wave of eBooks, so I don’t have the associated scar tissue. But I’d like to think there are some substantive reasons to believe that eReading is growing, and that eBooks as a significant business will soon follow.
Most importantly, I believe that the initial failure of eBooks had much in common with the initial failure of PDAs (Go, Newton, et al.): devices simply hadn’t provided a “good enough” user experience. One breakthrough device – the Palm Pilot – singlehandedly rehabilitated what had come to be considered a no-go segment. I believe that a breakthrough device is needed to catalyze electronic reading. Despite the name, iBooks are not, by a long shot, to reading what iPods have become to listening. I much prefer to print long documents rather than read them on my notebook computer. The breakthrough device could be a new class of PC – something like the much-rumored Apple Tablet – that busts out starting with college dorms and frequent-flying executives. It could come via reading on mobile phones, where some innovative solutions are starting to pop up. But my access to a Sony Librie has tipped me to the opinion that the breakthrough in adoption may well be sparked by a dedicated reading device. The Librie, while clearly a first-generation effort, delivers a reading experience like no other electronic screen I’ve ever encountered. My spouse is a hardened techno-skeptic a few clicks away from being Waldorfian. She not only read whole books on it, but wanted to bring it to her “women’s book group” gathering (particularly notable since this group seems to do far more wine drinking than book reading).
The other reason is sheer economics. Paper-based book distribution is costly and a drag on the environment (rumor has it the Chinese are pursuing eBooks because they’ve calculated that there aren’t enough trees in the world to supply adequate textbooks for every Chinese student). More importantly, the economic costs and constraints prevent businesses from adequately capturing the long tail. Does anyone really think that consumers, who are poised to get online access to effectively every movie and music track ever made, will remain content with the limited book selection at their local mall’s chain bookstore? Some of this long tail can be satisfied via print-on-demand, which may grease the wheels for truly electronic distribution, just as suppliers of film processing for digital pictures arguably catalyzed the transition underway to completely digital photography.
So how soon will all this happen, and what should Adobe do to promote eReading and eBooks? Hard questions. It’s obvious that there are things that we need to continue to work on: striving to make Reader a great end-user experience for immersive reading; increased support for electronic workflows and technologies like XML in our publishing tools. But I’m pretty sure we need to do more, and I welcome your suggestions.

“Integrated Proprietary Architectures”: Obsolete in IT?

Clayton Christensen has many Adobe fans, with good reason. He has truly pioneered in analyzing the underlying economic models that drive innovation. However, Christensen is bent on establishing overarching general principles, which are necessarily somewhat oversimplified in the context of any particular industry. In particular, I believe his model of a cyclical relationship between integrated, proprietary architectures and open modular architectures needs to be viewed in a larger context in the software sphere: these days, even the most closed and proprietary-seeming innovations are substantially built on open architectures. It’s even arguable that there’s no longer any major role in our Web-based ecosystem for integrated proprietary architectures.
Apple is an illustrative example. Christensen writes of the iPod that “Apple’s success illustrates how integrated, proprietary architectures allow companies to improve along dimensions that are not yet good enough to meet customers’ needs … when those dimensions become more than good enough … dis-integrated, modular architectures will thrive.” This and other examples have led some to conclude that Apple’s “core competency” is innovating in early markets by being closed and proprietary.
Yet the iPod’s success was clearly in large part driven by its adoption of two open modular architectures: the open MP3 format and commodity hard-drive technology. The iPod took off because it gave people on-the-go access to all the music they had already ripped (and ripped off). Even the vaunted tight connectivity between iTunes and iPod is built on the open USB standard. iPod/iTunes has only a small dose of proprietary architecture in its FairPlay DRM – everything else is essentially open and modular. In fact it’s arguable that Apple’s core competency of late has been its ability to integrate open standards in a user-centered manner with great “fit and finish”, with the minimum necessary proprietary secret sauce. OS X, as compared to Windows, can be viewed as a veritable alphabet soup of open source and open standards. Including PDF, of course.
Another example is Dashboard widgets, widely viewed as one of the most prominent innovations in Tiger. Apple took some heat for exhibiting NIH by not simply adopting Konfabulator, yet Apple’s main innovation was a major move towards openness and standards: Dashboard widgets are, unlike Konfabulator’s proprietary markup, fundamentally just standard HTML.
My real point here is that, contrary to perception, Apple has (at least in recent years) not been egoful, nor tried to reinvent the wheel where standards already exist – and that restraint has enabled them to innovate elsewhere. The lesson is that in today’s software industry, leveraging standards and building solutions on open modular architectures is a fact of life. We can innovate most efficiently when we also keep in mind that most Innovation Happens Elsewhere.
Now this situation could be seen as just a moment in the cycle, and that things will roll around to proprietary architectures dominating in software again. I tend to think that it’s sticky. Once DNA-based life forms took off, evolution on that “open architecture” was so rapid that there was no room for alternative life forms to emerge. Similarly, I believe open source and open standards and the Web ecosystem are driving so much innovation (elsewhere) that they are here to stay as foundational building-blocks. What do you think?

Rules and Tools for Accessing & Remixing the Deep Web

Michael Bazeley wrote today, in the print Merc News and online, about a startup scouring the deep web. They are also doing some HousingMaps-style remixing to display job listings geographically.
Deep linking and remixing are clearly a major trend. One reason the Web has so rapidly evolved to become the world’s dominant communication backbone is that its compositional architecture enables these kinds of unintended combinations. And the separation of information feeds and presentation which encourages novel applications is a primary hallmark of Web 2.0.
But when remixing apps start using robots that simulate human users accessing content databases I believe there are some ethical and perhaps legal issues that deserve more attention. Unlike an Amazon RSS feed, job site A was not necessarily expecting to enable job site B to access its postings, and its business model may or may not support such access. Bloggers can get pretty bent out of shape at RSS “feed stealing”. And some Ivy League applicants got in serious trouble for what seemed to be pretty much a manual version of what the Merc News is glorifying as “Mining the Deep Web” but which was (IMO mis-)portrayed as “hacking”.
I believe we need to think through the legal & ethical rules for accessing & remixing the deep web, as well as providing infrastructure support to adequately enforce these rules in the Web 2.0 platform. Otherwise, service providers may find themselves incented to stick with Web 1.0 solutions where presentation and data remain tangled up. And surely the right answer is not just more captchas.
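Some of that infrastructure support need not be invented from scratch: robots.txt is a crude but established way for a site to declare access rules, and well-behaved remixing robots could at least start by honoring it. A minimal sketch using Python’s standard urllib.robotparser (the site, user agent, and policy here are all hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical policy; in practice this would be fetched from the
# target site's /robots.txt rather than hard-coded.
policy_lines = [
    "User-agent: *",
    "Disallow: /jobs/",
]

robots = RobotFileParser()
robots.parse(policy_lines)

# A polite remixing robot checks each deep-web URL before fetching it.
print(robots.can_fetch("RemixBot/1.0",
                       "http://jobsite-a.example/jobs/12345"))  # False
print(robots.can_fetch("RemixBot/1.0",
                       "http://jobsite-a.example/faq"))         # True
```

Of course robots.txt expresses only a blanket allow/deny – it says nothing about rate limits, attribution, or business terms – which is exactly why I think the Web 2.0 platform needs something richer.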
A related issue is the personal-level remixing that’s going on via tools like Greasemonkey. As a user who wants control, I applaud Greasemonkey. And while these scripts may be somewhat parasitic, parasites can also accelerate evolution (some ancestor’s bacterial stowaways became my mitochondria). But as an app developer, it is not a good thing if my internal JavaScript APIs become effectively frozen because some popular 3rd-party script uses them – or, worse yet, get misused for a purpose counter to my business goals.

Lifting the Veil

It is exciting to be part of the initial wave of corporate bloggers here at Adobe. OK, OK, I’ll say it: it’s about time! Actually, many of us have been personal bloggers, Adobe hosts user-to-user forums, and Adobe employees also participate in many external forums and lists. So fostering corporate blogs is seen as an incremental step in communication, not a radical change. Yet I know that to some, Adobe has appeared somewhat opaque in its internal workings, if not out and out secretive. I’m hopeful that through blogging we will expose more of us to more of you in the community (and vice versa) and help change that perception.
I’m responsible for platform product management at Adobe, which includes product strategy & requirements for our desktop and mobile Reader and PDF technology. While I’m always interested in how we can improve the end user experience, I have a particular interest in developer concerns and I expect this space to cover a relatively broad set of topics ranging from open source to web standards to rich-client applications. If you’re looking for Reader power user tips or PDF format arcana this will probably not be the Adobe blog of choice.