Archive for September, 2006

The End of Literacy

In a comment on my post Does Reading Really Matter?, bowerbird challenged me to name names on who’s arguing the position that “the end of literacy is nigh, and that’s OK”.
Well, William Crossman, for one, is practically exuberant about post-literacy:

By 2020, electronically-developed countries will be well on their way to becoming oral cultures… Reading, writing, spelling, alphabets, pictographic written languages, written grammar rules, and all other written notational systems will be rapidly exiting the scene

Crossman’s carrying the torch for speech recognition; another group of techno-enthusiasts down with post-literacy can be found among those pushing to institutionalize the concept of multimedia literacy. Straight from Wikipedia:

The concept of Literacy emerged as a measure of the ability to read and write. In modern context, the word means reading and writing at a level adequate for written communication. A more fundamental meaning is now needed to cope with the numerous media in use, perhaps meaning a level that enables one to successfully function at certain levels of a society.

Some related skepticism about the written word is found in constructivist learning theory. The increasingly trendy Waldorf education philosophy delays teaching reading until 3rd grade or later. On a broader plane, the debate about the relative merits of the spoken and written word is as old as Socrates and Plato.
Personally, I envision the future of content as a cornucopia of entertainment and learning options. Multimedia training beats “read the instruction manual” nearly every time. I also see merit in the experiential Waldorf model. Certainly speech recognition will drastically reshape our interactions with computers and devices, as digital photography, podcasting, VOIP, Second Life, and YouTube are already transforming how we communicate and interact over distances with each other. But I still believe that in the future mix, the written word will have continued centrality. Symbolic text is by far the most effective means of recording and transmitting complex ideas and information that humans have come up with. Indeed, the most powerful aspect of “Publishing 2.0”, in terms of delivering incremental value, may be to give text – liberated from paper – the ability to freely combine with other forms of media and interactivity. The traditional schoolbook may ultimately evolve into an entirely new kind of Illustrated Primer – but I still believe it will have words at its core, and that traditional publishers will have a path from the present to this future that leverages the value of their words, and the continued importance of reading and literacy to an informed and empowered citizenry.

Does Reading Really Matter?

Scary hypothetical from Michael Rogers’ Practical Futurist column on MSNBC.com:

December 25, 2025 — Educational doomsayers are again up in arms at a new adult literacy study showing that less than 5 percent of college graduates can read a complex book and extrapolate from it.
The obsessive measurement of long-form literacy is once more being used to flail an education trend that is in fact going in just the right direction. Today’s young people are not able to read and understand long stretches of text simply because in most cases they won’t ever need to do so.
It’s time to acknowledge that in a truly multimedia environment of 2025, most Americans don’t need to understand more than a hundred or so words at a time, and certainly will never read anything approaching the length of an old-fashioned book…

It’s clear that short-form consumption of digital texts is rapidly displacing “old-fashioned” book-length publications for reference-oriented material and other contexts where packaged assemblies of content were an artifact of the economics of paper-based distribution. And, as consumers now have many more portable entertainment options to choose from, it seems logical that novel reading is likely to occupy a smaller slice of the average leisure-time budget. I don’t view these as bad things, even though the result may well be a net decrease in long-form reading (although given the rise of Harry Potter et al. it’s not 100% clear that there’s actually a decrease yet, even among younger demographics).
But this doesn’t mean that literacy is worthless. I believe Michael was probably being tongue-in-cheek, but there are people seriously arguing this position. At its core it’s an elitist argument, as highlighted later in his column:

A broad written vocabulary and strong compositional skills are also powerful ways to organize and plan large enterprises, whether that means launching a new product, making a movie or creating legislation. But for the vast number of the workers who actually carry out those plans, the same skills are far less crucial. The nation’s leaders must be able to read; for those who follow, the ability should be strictly optional.

I’m passionate about enabling digital publications, rather than, say, music, video, or games (even though those are a lot sexier areas of digital content right now). This comes in part from a core belief that the above outcome is 180 degrees from where humanity needs to be headed. People need to be empowered to access and create information in all its forms. Rich media is great and will be increasingly prevalent, but it is not sufficient to convey all types of information. And books are not just for those fortunate enough to live near a Barnes and Noble or where Amazon.com will deliver overnight.

Digital Editions Need Standards, Content Availability – and Experience!

Spot-on Tim O’Reilly post, mis-titled Standardized Hardware for eBook Readers. Tim does start by referencing a good Makezine post on the Sony Reader that touches on that topic. But the real meat comes when he riffs into a concise call to action:
“In short, we need standardized hardware, standardized software, and standardized document formats. And then we need publishers to get their books into those formats”
Yes, indeed!
Tim also reminds us of the different jobs books do. As we think about a “digital edition” we shouldn’t be thinking about a monolithic single-function publication, much less a “digital replica” of paper. We need to be thinking about the flexibility to distribute and recombine content assets in a wide variety of ways, shapes, and forms. Clearly “short form” article- and snippet-length content bites will become more popular in the digital on-demand medium. As will collaborative community-based authoring.
Yet it seems obvious that “curated” assemblages of content – what I think of as a “digital edition” – will still have a strong role to play, both for long-form learning and entertainment reading, and because to our tribal human brains the gestalt of an assembled whole will often remain ineffably greater than the sum of even its seemingly independent parts. An issue of Make: is a great example of this. The idea that text-based content of the future won’t be any longer than a page on MySpace or an article on Wikipedia is as fundamentally misguided as the notion that YouTube clips will completely replace Hollywood movies.
I think there’s one key issue leading some to the false conclusion that long-form text is doomed: we don’t yet enjoy compelling user experiences for immersive reading in the digital world. Movies on PCs and music on iPods deliver essentially the same consumption experience as their analog predecessors. But many of us have a hard time imagining reading a novel-length work on a PC, whether in HTML in Firefox or PDF in Adobe Reader. The user experience is far different from, and far inferior to, paper. We all read snippets on screens, so some people leap to the radical notion that long-form texts don’t make sense at all in the digital world. Of course a number of the jobs best done on paper in long-form assemblages, such as reference works, are more effectively accomplished by searching out short-form snippets in the digital medium. But there’s no reason to expect novels and other long-form content to vanish; we just need to make immersive reading a great experience.
For long-form content to really take off in the digital world we definitely need standardized hardware, software, and formats. But we need to ensure that these technologies also deliver compelling user experiences for consuming that content. We’re getting closer, but to make “Publishing 2.0” happen we clearly have some more work to do.

“Google” Books And The Entitlement of Possession

In a recent column in Make: magazine [subscriber access only - see below], Cory Doctorow unloads on museums that attempt to control reproductions of out-of-copyright works. His line of argument is interesting to consider in light of Google’s recent distribution of PDFs of scans of public domain books.
Cory visits the Greenwich Observatory museum and is told “no photos”. When asked, the curator admits, “It’s not really copyrighted per se, but we want to be the exclusive purveyors of photos and picture-postcards and so forth”. He finds that a similar restriction applies to access to Michelangelo’s David. Cory writes:

There is no nice way to say this: a museum curator who takes this attitude to the exhibits in her charge is a traitor to history, to heritage, and to science. The point of a museum is to spread culture, not restrict it in order to run a penny-ante postcard racket… The sciences and the arts are built on copying, on observing, on measuring, on the public disclosure of facts and discoveries… No curator of human knowledge has the moral right to restrict the recording of our shared history… at a museum – generally funded by your own tax dollars

If you replace “museum” with “library” and “exhibits” with “books” then you approximate my perspective on the recent Google Books scanning project, with respect to works in the public domain. Google has embedded a pseudo-license into their scans of public domain books – books whose archival in public university libraries was funded by our own tax dollars, not by Google. This pseudo-license attempts to limit access to “personal, non-commercial use” and to require you to retain their per-page advertisements. When asked, Google says in effect the same thing as the museum curator Cory encountered: “well no, it’s not really copyrighted per se, but we want to be the exclusive purveyors of commercial this and that around these scans”.
Well, I’m grateful to Google for funding scanning of public domain works just as I’m grateful to museum curators. But there’s a principle here. That principle is open, unfettered access to knowledge that’s in the public domain. Just because you’ve got a huge pile of cash and were first in line with a cozy no-bid deal to do this scanning – a deal that cannot even be repeated given the wear and tear on collection items – doesn’t create a special exemption to this principle. Just because you possess something – whether the work itself or a digital reproduction thereof – doesn’t entitle you to a new copyright or license when the work is in the public domain. Indeed this is established legal precedent in the United States (Feist v. Rural Telephone Service, and Bridgeman v. Corel).
It’s no more reasonable for Google to expect perpetual ads on every page of these works than it would be for Benetton to expect every reproduction of David to display clothing ads, just because Benetton had provided funding to the Galleria dell’Accademia. Of course corporations will periodically attempt to subvert the public domain, whether of intellectual property or of the environment: that’s a natural consequence of the profit motive. But it’s up to the community to call “bullshit” on them when they try. So, Google, don’t be evil.
P.S. I did grimace at the irony of finding that an article condemning restricted access to content was itself access-restricted on the Web. Cory Doctorow, a pioneer of the emerging digital publishing industry, is not against authors making money, just exploring the fluid boundary between open and paid forms of expression. But it still seemed somehow a bit off, given his EFF politics and the subject matter. Yet Make: magazine just rocks so much that I can’t feel too bad – at our house an incoming issue instantly becomes a fought-over item. So subscribe already, you won’t regret it.

Publishing 2.0 and the Architecture of Participation

In a Publishers Weekly piece the indefatigable David Rothman writes about Razing The Tower Of e-Babel. I agree with his main points – we have to make digital reading simple and compelling for end users, and the plethora of proprietary eBook formats has created consumer confusion. Where I disagree is that I believe the answer is already at hand.
First, for paginated final-form “ePaper” there is no Tower of e-Babel – PDF is the answer, it’s game over. PDF is widely deployed, and its ecosystem extends far beyond Adobe, even to Adobe’s competitors. PDF is now very open, a full ISO standard. Microsoft is pushing their alternative XPS, but (trying to put my Adobe hat aside) there is just no rational reason for the industry to reinvent PDF, much less to buy in to letting the monopolists in Redmond do it. So David’s attempts to beat up on PDF are way off the mark: when pages are what you’ve got, PDF – open, widely adopted – is just the ticket. There are many kinds of digital editions of print content for which this kind of ePaper representation is, at least for now, the best that can be done: highly composed technical books, textbooks, and children’s books, for example. Or cases where you have scanned a physical book or are coming from a print-based workflow and it’s simply not economic to do human-assisted OCR to get full text. David argues that eBooks aren’t taking off; the fact that Pragmatic Programmers is making close to 40% of their revenue from selling PDF-based technical eBooks shows that in some segments, that’s simply not true.
But as David is wont to point out, final-form paginated content is not always a great solution for digital editions. Paperbacks are not just shrunken versions of a hardback’s pages, and when reading on a small screen, or when a larger font size is desired, users deserve pages formatted to the viewable area, not a “pan and zoom” experience. Many kinds of books are better represented in a reflow-centric representation, which doesn’t preserve a particular composed page set but instead supports dynamic pagination. PDF can support reflow, but that’s not its strong suit. In the reflow-centric domain lies the Tower of e-Babel, and it’s an acute problem – you have Microsoft .LIT, eReader.com PalmDoc, Mobipocket PDB, Sony BBEB, ZIP-packaged web pages, etc. It’s uneconomic for content publishers to distribute in all these formats, and confusing as heck for consumers.
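To make the fixed-form vs. reflow distinction concrete, here’s a minimal sketch of what dynamic pagination means – illustrative Python only, not any vendor’s actual pagination code: the same text yields a different page set for each combination of line width and page height, rather than one frozen set of pages.

```python
import textwrap

# Stand-in content; a real reading system would work from styled markup.
BOOK_TEXT = (
    "Symbolic text is by far the most effective means of recording "
    "and transmitting complex ideas and information that humans have "
    "come up with. "
) * 20

def paginate(text, chars_per_line, lines_per_page):
    """Reflow text to the given line width, then slice the result into pages."""
    lines = textwrap.wrap(text, width=chars_per_line)
    return [lines[i : i + lines_per_page]
            for i in range(0, len(lines), lines_per_page)]

# The same content produces different page sets for different "devices":
small_screen = paginate(BOOK_TEXT, chars_per_line=30, lines_per_page=12)
large_print = paginate(BOOK_TEXT, chars_per_line=50, lines_per_page=8)
print(f"small screen: {len(small_screen)} pages")
print(f"large print:  {len(large_print)} pages")
```

A reflow-centric format essentially ships the text plus styling and lets the reading system run this kind of computation at display time; a final-form format like PDF ships the frozen output of one such computation.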
Yet this problem is now being rapidly and effectively addressed by the IDPF. Publishers, vendors (including Adobe), and library and educational groups have come together in IDPF to create the open standards that will end the Tower of eBabel.
David Rothman’s main criticism of the IDPF work seems to be his desire that publishers stay out of this process and adopt a “Cargo Cult” mentality, in which some nebulous set of “techies” (presumably David and his OpenReader cohorts) will do all the technical work specific to publishing requirements (later to be rubber-stamped by an IT-generalist group like OASIS), whilst publishers stay focused on “trade association matters” and just wait for the solutions to arrive.
Well, I think this is elitist, egocentric nonsense that hasn’t a prayer of working. No doubt it’s messier to have publishers, competing vendors, and nonprofits all coming together. True, not every publisher has industry-leading technologists on staff. But many do, and this kind of participative process is part of building successful (by which I mean broadly adopted) solutions. RSS, for example, with its many dialects, is technically a mess – I wish there had been a few more techies involved in its creation – but it was created by and for the content publishing community and has gotten rapid adoption therein. OASIS is great as a general IT standards group (Adobe is a member), but its wheels grind slowly, and with a Board composed of the likes of GM, Oracle, SAP, and IBM, OASIS can’t be expected to put much focus and attention on catalyzing “Publishing 2.0”.
I believe that we’re all a lot better off with the publishing industry working together in an architecture of participation, and IDPF is where this is happening now. The results, including the IDPF OCF container format standard, are already bearing fruit, and the Tower of e-Babel that plagues reflow-centric eBook formats will start falling down over the coming months. There’s always room for more contributors in the IDPF.
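For the curious, here’s a rough sketch of what an OCF-style container amounts to in practice: a plain ZIP archive with a well-known entry that tells reading systems where to find the publication. This is illustrative only – the entry names, paths, and media-type values below are my stand-ins, so consult the OCF specification itself for the normative details.

```python
import zipfile

# Illustrative container.xml; OCF borrows this pattern from OpenDocument.
CONTAINER_XML = """<?xml version="1.0"?>
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
"""

with zipfile.ZipFile("publication.ocf", "w") as zf:
    # A "mimetype" entry is stored first and uncompressed so the
    # container's type can be sniffed without unpacking the archive.
    # (The media-type value here is a placeholder; the spec defines it.)
    zf.writestr("mimetype", "application/oebps-package+xml",
                compress_type=zipfile.ZIP_STORED)
    # META-INF/container.xml points reading systems at the root file(s).
    zf.writestr("META-INF/container.xml", CONTAINER_XML,
                compress_type=zipfile.ZIP_DEFLATED)
    # The publication content itself lives alongside:
    zf.writestr("OEBPS/content.opf",
                "<!-- OEBPS package document would go here -->",
                compress_type=zipfile.ZIP_DEFLATED)
```

The design point worth noting is that the container is deliberately dumb: any stock ZIP tool can pack or unpack it, and all the publishing-specific intelligence lives in the files inside.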
Meantime, when anyone tells you that their self-anointed “techies” are going to give you the answer ex cathedra, in the form of some new and incompatible eBook format of their own devising, ask them why – if they really want to help tear down the Tower of e-Babel – they seem to be headed down a path likely to just make it worse. I just don’t get it, but then again I don’t get why Linux is shooting itself in the foot with incompatible windowing toolkits. Perhaps – as with most cathedrals – it all comes down to egos. As for me, I hope to see you in the IDPF bazaar.