Archive for June, 2008


Two significant projects hit the web this week. Both were played pretty low-key, but I suspect their long-term significance will be greater.

Earlier this week the United Kingdom’s BBC service launched the next generation of their web-based audio/video services. The BBC is a regional service, and so most of us in the world can’t see the content directly. But consider the evidence….

The first paragraph from the press release summarizes the change: “The BBC today unveils a new-look BBC iPlayer which fully integrates radio and TV in one interface, as the service records over 100 million requests to view programmes in the six months since its launch.”

And here’s how The Register starts off: “The BBC has launched a raft of updates to its iPlayer on demand service, including the long-awaited ability to rewind music radio stations online. The new version is set to go live in beta on Thursday and will run alongside the current site for a few weeks. The most significant structural change is the full integration of the BBC’s popular on-demand radio player, which until now had merely been superficially rebranded to fit in with iPlayer’s video site. The new on-demand radio streams will be 128Kbit/s MP3s, replacing the current Real and Windows Media ‘Listen Again’ offerings. Integration of live radio streaming into the iPlayer site is in the pipeline.”

There’s a longer description at The Guardian… the BBC Internet Blog is very lively, almost a firehose, but Anthony Rose has a dramatic behind-the-scenes perspective:

The existing iPlayer website works really well, and has proven hugely successful. However, in Internet Land nothing stays still for long and the iPlayer site that you see now is based on a somewhat inflexible static-page-rendering platform that’s now over a year old.

That technology platform has proven robust and reliable, but we’ve pushed it to the limit in terms of features that we can add using the existing site architecture. It’s now time to move on to an all-new dynamic-page-rendering architecture which will give us a platform that can provide a personalised TV and radio experience, can adapt itself to different display sizes – and a whole lot more.

Matthew James Poole, from the BBC team, mentions use of Flash Media Server and: “Interestingly, the source for the EMP is all one highly configurable SWF of around 200k that configures and scales itself for all TV and Radio format possibilities (which is a lot, take it from me). It’s currently still implemented in AS2, mainly so we can support the Wii in Flash Player 7. However, starting development soon will be an FP9 AS3 version that will be maintained alongside the AS2 one until further notice. Watch this space.”

More: Erik Huggers of BBC describes future work and has the public video presentation… a press release last October describes some of the relationship with Adobe… comments at The Reg get pretty dire towards the end… info from last March on the staggering technical demands of their video workflow.

My takeaways: They’re integrating audio, video, and web. They’re doing so with near-realtime streams, in multiple formats, to a wide range of devices and services. They’re mandated to be accessible to their audience. They’re moving away from a page-refresh model, to more of an application model. Their traffic has gone up dramatically since removing installation barriers for computers.

They’re making it work, offering fully integrated audio and video to their entire audience. It’s an important project.

Another high-profile project this week: Major League Baseball’s Gameday has added realtime data collection and visualization. The Gameday development blog has background info:

“Rather than seeing each pitch from a fixed viewpoint, you can use the controls in the 3D interface to zoom in or out, rotate around the action, or tilt up and down. There are a set of pre-defined positions (behind the pitcher, behind the batter or ‘high home’) available if you choose to stick to those, or you can take over the controls at any time for a better look at the pitches from any point in the game: Use the multi-directional arrow button to rotate the camera around the field, or tilt it up or down; Use the white slide bar to zoom the view in or out; Click the View button to move between pre-selected positions, which auto-adjust based on the batter’s side and/or pitcher’s throwing hand; Mouseover the circle at the end of any pitch trail to highlight that specific pitch.”

The cool thing? They’re doing realtime capture of the actual path of the ball, and sending that to each browser so that you can choose your own view of the pitch. The information is fully abstracted from its presentation: a ball is thrown in some ballpark, cameras and computers turn the ball path to data, numbers are sent across the internet, and your local client reconstructs that data, on demand, from any angle, any inning of the game, any archived game on any day.
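To make that abstraction concrete: a pitch can travel the wire as just a handful of initial conditions, and the client replays it from any angle. Here’s a minimal sketch, assuming the feed carries release position, velocity, and per-axis acceleration, as the PITCHf/x-style tracking systems of the era did. The field names are my own illustration, not MLB’s actual schema.

```javascript
// Reconstruct a pitch's path client-side from a handful of numbers,
// rather than shipping video of it. Constant-acceleration model:
// p(t) = p0 + v0*t + 0.5*a*t^2, per axis. Field names are illustrative.
function pitchPosition(pitch, t) {
  return {
    x: pitch.x0 + pitch.vx0 * t + 0.5 * pitch.ax * t * t,
    y: pitch.y0 + pitch.vy0 * t + 0.5 * pitch.ay * t * t,
    z: pitch.z0 + pitch.vz0 * t + 0.5 * pitch.az * t * t
  };
}

// Sample the whole trajectory so any camera angle can render the same data.
function pitchTrail(pitch, duration, steps) {
  var trail = [];
  for (var i = 0; i <= steps; i++) {
    trail.push(pitchPosition(pitch, duration * i / steps));
  }
  return trail;
}
```

Once the full trail is computed locally, the camera position is purely a rendering choice over the same numbers, which is exactly what makes the rotate/tilt/zoom controls cheap.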

We’ve seen realtime data represented on a screen before… Nasdaq’s AIR app lets you visualize realtime data in various ways… but the MLB Gameday project also turns analog capture into digital data at runtime. It turns the real world into bits, and back again.

Beyond that, there are additional implications. Take a look at the early comments at the development blog… people are asking “How come I can’t see the batter’s stance? How come I can’t see the fielders’ positioning, the runner’s lead?” People already accept the usefulness of realtime visualization, and already desire more. From a development perspective it makes sense to start with the hard problem of pitch visualization… once that is solved, adding datastreams for other players’ positions would be comparatively straightforward. People not only accept and understand the new ability, they demand more. That’s a great indicator for any project.

There’s a deeper implication. Consider what it means that they’re building a database of every pitch’s location, every game. Players and coaches already study such records, whether through a scout’s scorecard or a park’s video archive. Having live searchable data available will likely change how players prepare for a game. There are multiple pressures to continue in this vein of work.

Even if you don’t like American baseball, please do check out an archived game. What they’re doing in realtime data-collection and interactive data-representation is really very innovative.

Two projects, both very high-profile, both reaching new beachheads in digital interactivity. Neither really plays up the details of their publishing process… the Adobe Flash Player is sort of a taken-for-granted background capability on the web today. But both show why the general public supports such rich-media publishing capability, contributing to the success of all other Flash-based work.

This week’s releases from the BBC and MLB internet teams may not have produced a furor, but I believe they’ll produce a change. Significant projects.

Development vs deployment

Sorry to be banging this drum so consistently, but I’m not sure why this new “Apple will rule JavaScript!” meme is so big.

Ryan Carson has an essay on how different methods of creating JavaScript will somehow transcend the browser they run in. Here’s a sample snippet from up top:

Right now, people are generally building web apps with CSS, HTML, a sprinkling of AJAX and their framework of choice (Rails, Django, Symfony, etc). The basic client-server model still dominates.

Objective-J and SproutCore change all that. They allow you to create true desktop-like apps right inside the browser. They don’t rely on a continuous web connection and they are as quick as desktop apps. In fact, if you run them inside a site specific browser like Fluid, you probably would think they were real native desktop apps.

Everyone already generally agrees that we’ll see a melding of the desktop and the browser, but Objective-J and SproutCore are the first solid step in this direction. They’ve abstracted away all the basics so developers don’t have to re-invent the wheel for every web app they build.

But… you still end up with JavaScript, running in the various browsers. I agree with many of the comments on Dion Almaer’s cover article, which ask what’s new about this. Being able to develop with Objective-C-like constructs doesn’t change the final runtime experience at all.

It’s reminiscent of many of the Silverlight discussions a year or so ago, where the focus was on porting someone’s habits with Visual Studio, while silently ignoring the increased enduser participation costs.

You first calculate deployment costs to figure whether a project’s worth doing. If so, then you look at development costs and figure the best way to build it. You don’t start out by just focusing on what you can build and then doing it… you look at what needs doing, then how you might best do it. (That “lookit what icando!” approach leads to skip-intro-itis, “looks best in IE” and other problems.)

I keep re-reading these pieces, in hopes of finding some nugget that will suddenly turn the whole discussion sensible. The closest I’ve got so far is that this whole SproutCore discussion promises existing Mac-only developers a way to get to cross-OS delivery, similar to how Silverlight offered investors in .NET a way to reach the wider world. Maybe this whole topic pits Mac-desktop devs against JavaScript devs. That’s a weak hypothesis, but it’s the most plausible I’ve seen so far.

There are some basic realities, some very simple concepts which many of the lengthy essays ignore:

  • A content transaction is based on a creator and audience agreeing with each other. It’s not just what the creator wants to do, and (contra PCF) not just what each audience member might want to do either. It’s what both can agree on, together. Ignoring enduser costs and only trumpeting development costs is silly.
  • Defining the format and then soliciting implementations (the HTML path, eg) is a useful strategy. But it’s not the only strategy the ecology needs. Providing a predictable capability is also useful, particularly if that predictable capability can be universally deployed. It’s easier to test atop one engine than eight.
  • Similarly, one runtime engine will grow faster and more consistently than will the set of engines attempting to implement a predefined standard. 8-bit alpha support in images is a good example… depending on your audience’s use of IE6, you may still not be able to rely on it, even a decade after the first web implementations.
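To make that alpha example concrete: IE6 couldn’t composite 8-bit PNG alpha natively, so pages carried a shim using Microsoft’s proprietary AlphaImageLoader filter. A hedged sketch: the helper and its calling convention are my own invention, though the filter string itself is the real IE workaround.

```javascript
// IE6 can't composite 8-bit alpha in PNGs natively; pages work around it
// with Microsoft's proprietary AlphaImageLoader filter. This helper builds
// the style properties a page would apply. The image path is hypothetical.
function alphaPngStyle(src, needsFilterWorkaround) {
  if (!needsFilterWorkaround) {
    // Every other engine: just use the image directly.
    return { backgroundImage: "url(" + src + ")", filter: "" };
  }
  // IE6 path: blank out the real background and let the filter paint it.
  return {
    backgroundImage: "none",
    filter: "progid:DXImageTransform.Microsoft.AlphaImageLoader" +
            "(src='" + src + "', sizingMethod='crop')"
  };
}
```

One image, two code paths, per engine, per version: that branching is exactly the testing cost the bullet above is describing.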

Google Gears has a little bit of hype attached to it… it has potential, but that potential will become clearer once we see an actual deployment path. The HTML 5 VIDEO tag has much hype attached to it, because of the continued avoidance of the question of codecs and production workflows (there has been a little acknowledgment recently, thankfully). But this recent spate of Apple-aligned commentary seems to beat them all.

It’s still just JavaScript, folks. You’ve got to look at what it can do, realistically. No slamming of it, because I believe JavaScript is useful technology. But the sooner we cut down on the hype, the sooner we can get some real communication going.

What SproutCore might represent

This is just me playing amateur psychiatrist, tossing an idea out for dissection. Don’t read too much into it.

Google News turned up an article tonight… after reading it I realized I had read it already. But I still had to rethink it before I recognized it.

Here’s how it starts: “APPLE is not content to dominate the post-PC era of mobile devices alone. It also wants to remake the internet in its own shiny, user-friendly image. While Microsoft, Google and Yahoo are preoccupied with one another, Apple has been laying the foundations for the next phase of the web.” We’ll come back to this later.

Here’s the part that triggered the Google News hit: “SproutCore is shaping as a challenger to Adobe’s Flash format, the de facto standard for visually rich interactive applications on the web. Flash has been widely adopted, but as a closed, proprietary format it is controlled by Adobe, and developers must rely on Adobe to maintain and support it. Flash support on the Mac and iPhone, for instance, has been lacklustre.”

If I had someone say that to my face, I’d bite my tongue, then try to get them to narrow down to their main concern first, rather than chase all the little red herrings strewn about the path. This might also be possible in a mailing list. But I don’t see a real way to talk with the original author in weblogs, to learn what their actual concerns are. When someone goes around asserting a half-dozen things that others might not agree with, the lack of two-way openness foils communication.

(Some of the red herrings: A JavaScript framework’s capabilities are limited to the intersection of those browsers in which you might be running; “closed, proprietary” is the new dismissive pejorative but I don’t know whether he’s even heard of Open Screen Project; the browser runtimes are just as controlled by Others as the media runtime, and you’d be bound by the *set* of other-controlled browsers your audience chooses; it’s browsers which have had the actual problems with adequate support the past decade; when people say “Flash sucks on a Mac” I have to refrain from pointing out that all the browsers on Macs seem to trail those on Windows; and the statement “Flash support on iPhone has been lacklustre” just makes me throw up my hands. Those are used as evidentiary assertions in the original, but I suspect they’re not really the main objection, so let’s let them slide.)

What’s my own position? If Apple is blessing a JavaScript framework, that’s great. The more tools the better. But you need to rationally compare them, not just go on with the branding.

Frameworks promise developmental ease, and hope to reduce testing time. But they don’t add capabilities to the audience’s machines. Runtimes do that. Targeting the intersection of various runtimes (as with HTML/JS/CSS) costs more than targeting a single universal runtime (Flash), whether in testing, support, or (critically) range of supported functionality. Calling a framework a “challenger” to a runtime is strange, because they’re at different logical levels. The narrative falls apart.

Charles Jolley and his work seem valuable to me, but it’s the storytelling around it that makes me uneasy.

Tim Anderson had a similar observation last week, in “The RIA dilemma: open vs predictable”: “It is easier to get away with a requirement for, say, Flash 9, than to insist that users choose a particular browser or operating system.” (And Player provides capabilities beyond browsers, as well.)

So why might this whole storyline have gained so much attention? Take a look at that front paragraph again: “APPLE is not content to dominate the post-PC era of mobile devices alone. It also wants to remake the internet in its own shiny, user-friendly image. While Microsoft, Google and Yahoo are preoccupied with one another, Apple has been laying the foundations for the next phase of the web.”

There’s probably someone in some boardroom somewhere who really worries about domination. There’s a wider number of people within any company who want to be successful, who want to reach their quarterly sales goals and get a new car. But the base of geek culture is built upon doing something useful, that you and others find rewarding. Domination and re-makes are a foreign thought. When I read prose like that, I go “wha…huh?”

But ideas of domination are useful to those who wrap themselves up in a brand, and who feel their self-worth erode as “Microsoft, Google and Yahoo” advance their work on the net. The article may serve emotional needs in its readership.

Test this hypothesis out… change around a few nouns and pronouns, but keep the sentence structure the same: “You are not content to dominate your environment alone. You also want to remake the world in your own image. While people who frustrate you in daily life are preoccupied with trivial conflicts, you have secretly been laying the foundations for the new world order.” That’s the *form* of narrative being spun — that’s the emotional basis of the storyline. Psychiatrists call this “projection”: seeing in others what we actually see in ourselves.

That’s the hypothesis: “The SproutCore story was more about some readers’ inner needs than about objective reality.” I don’t know whether I believe this or not, but it’s a viable hypothesis, and explains much of the unreality. How does such a way of looking at it seem to you? Can you refine the hypothesis or its tests further…?

‘Cause It Ain’t Got Flash

Techmeme is clustering on a survey of 402 people in Japan, which found that 91% were not immediately planning on entering The iPhone Experience. A couple of different takeaways on that:

  • Anthony Ha had a telling line this afternoon: “iPhone is the tech blogosphere equivalent of Brad and Angelina.” Same thing with the mention in the Adobe Analyst Call… didn’t warrant the attention… same with the discussions about a new JavaScript framework. This is audience-driven news (“Would you like to supersize that?”), with a life of its own.
  • The native-English techblog elite are not the whole of technology today. Silicon Valley prompts a lot of conversation, but most people who use technology are not native English speakers. Yes, other people may make different choices. It’s important to understand why they do.
  • Mobile adoption is based on what your friends are doing. Apple has an exceptionally strong brand in Japan, but a new type of choice in pocket devices has a strong social aspect. We can expect regional differences in how devices and applications are received, and must learn from them.
  • California has been backwards and weirdly retro in mobile phones for a long, long time. Apple’s iPhone helped this impoverished culture embrace advanced pocket functionality. Techmeme and similar sites are very California-centric. It’s a distorted worldview.
  • Japan has had a very strong mobile culture for a very long time. A new entrant would have to prove they could support how people already use their phones. And manufacturers attuned to the local markets have not been standing still….
  • Considering that Flash Lite has been ubiquitous in Japan, Korea, and other mobile-savvy societies, you could say “People in Japan don’t want an iPhone, ’cause it ain’t got Flash” but that would be too facile. I just put that in the title to attract some trolls…. 😉

Large computers may not be used all that differently across cultures, but personal devices sure are driven by our surroundings, as are the social applications built atop them. When we see a shocking difference like this iSHARE survey, it’s good to take some time, and see what the difference is all about.

Slashdot on SproutCore

Slashdot got onto today’s story about “One particular JavaScript Framework can Rule The World — *if* it’s from Apple.” I’ve been tracking this story because of the “Flash Killer” moniker. I liked how some of the Slashdot crowd dragged the Techmeme pumpers back down to earth, and so annotated a few comments which caught my eye….

An early comment describes how the different runtimes offer different capabilities, then: “More on-topic: This ScriptCore looks like Yet Another Javascript Framework (YAJF?). Some choices seem particularly odd, such as choosing to reimplement buttons through javascript code instead of using native browser widgets.” That was the big disconnect for me in today’s conversation too… whatever you build atop HTML/JS/CSS is still bound by the HTML/JS/CSS runtimes of your particular audience. Chad Udell had the same perspective mid-day. (The “native chrome vs neutral chrome vs branded chrome” issue is one I stay out of. 😉

This seems a clear case of confusing the sizzle with the steak, the map with the territory: “SproutCore is pretty impressive for building real JS web applications, although the story doesn’t really end there. There’s a convergence of other improvements, such as HTML5, CSS, and SVG, that are filling a lot of the multimedia roles previously the domain of flash. For example, WebKit already supports CSS transforms, gradients, client-side database storage, animation, HTML5 media, downloadable fonts, masks, reflections, etc.” Just because the paper map is flat does not mean the territory has no hills. Your labels are just a model, and it’s more useful to look at what you can actually do, with which near-term and long-term costs. SVG theory and practice differ.

There were some references to traffic/revenue strategies from commercial bloggers, and charges of sockpuppetry. There’s some (apparent) engagement by a principal, but no resolution… more ad hominem than ad rem.

I got foiled by an Ajax text editor stripping out linebreaks (maybe it wanted paragraph tags?). Text should be simpler…. 😉 [Later: I got modded down to 0 and marked as “Troll”!]

A strange but common line: “You’re missing the point… flash and silverlight require plugins to work in a web browser. Not only is this an extra install for the end user, it also means not all platforms and browsers will be supported (A great example being no flash/silverlight on the iPhone…) The nice thing about “SproutCore” is that it is 100% based on web standards (HTML, XML, JavaScript, etc) and will work on any platform and in any browser that follows those standards out of the box, no plugins needed!” We all already know that the world does, overwhelmingly, choose to install a certain small predictable media runtime. People out there do adopt Flash as standard capability. (Power to the People, baby. 😉 But he misses that advances in HTML/JS/CSS require bigger browser installs, sometimes browser switches, sometimes OS switches! If your browser runtime doesn’t display their HTML or run their JS as they expect, then it’s you that’s at fault for “not choosing a standards-compliant browser”. Dude, NONE of them are “fully standards compliant”! We’re all just trying to get better, improve things month by month.

Some reality: “I’m with you all the way as far as preferring standards over proprietary stuff. However, the iPhone seems like a bad example to me. It’s a proprietary platform, controlled by Apple.”

A common slam: “Still think Flash isn’t all that proprietary? Try selling a competing editor or changing the spec.” If the speaker would look around, the speaker would see that lots of people sell tooling atop the Flash platform, servers atop the Flash platform. Same with the PDF platform. I would agree that the governance of SWF is still within Adobe, rather than released to ISO as PDF recently was, but it’s a lot more approachable than the governance of iPhone. I’d prefer a face-to-face meeting with such a speaker, before guessing whether they’re aware of the mismatch between their speech and reality.

Another commenter brings the discussion back to ground: “From TFA, SproutCore is basically a rich set of JavaScript libraries. Flame/mod away, but it’s true. Flash/Silverlight don’t only contain the same app struts for you to build upon, but they are also incredibly powerful application hosting frameworks with rich graphics and multimedia libraries to go beyond what HTML can render. Comparing SproutCore to Flash and especially Silverlight is nonsense. Saying it’s a Flash/Silverlight killer is delusional.” (He seems a bit MS-delusional himself, because you can deliver with Flash while Silverlight remains academic, but he’s accurate about the different layers of capability.)

I haven’t personally confirmed this report, but it shows why every Ajax app has a responsibility to disclose on which specific runtimes they test, which of those runtimes they recommend, and on which runtimes they’re just plain inaccessible: “The photo gallery demo fails to work on Opera – the right photo pane not even rendering. Although Opera isn’t widely used, with its exceptional standards-compliance it’s a great barometer for how compatible something may ultimately be.” Followup: “It doesn’t seem to work quite properly on Camino, either…”
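Such a disclosure needn’t be elaborate, either. A sketch of what an app could publish alongside itself, with invented browser names and invented support tiers:

```javascript
// A hypothetical support matrix an Ajax app could publish: which runtimes
// it actually tests on, which it recommends, and which it knows are broken.
// The browser names and tiers here are invented for the sketch.
var SUPPORT_MATRIX = {
  recommended: ["Firefox 2", "Internet Explorer 7", "Safari 3"],
  tested:      ["Internet Explorer 6"],
  broken:      ["Opera 9", "Camino 1"]
};

// Look up which tier a given runtime falls into.
function supportTier(matrix, runtime) {
  for (var tier in matrix) {
    for (var i = 0; i < matrix[tier].length; i++) {
      if (matrix[tier][i] === runtime) { return tier; }
    }
  }
  return "untested"; // honest default: no claim either way
}
```

The “untested” default is the honest part: saying nothing about a runtime is different from claiming it works.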

I think this may be a little harsh: “Web 2.0 exists because you don’t have to code your apps for each and every device separately. This is not the case with iPhone – anything not specifically built for iPhone is just awkward to use.” (A lot of browser apps are awkward, on a lot of devices… it’s still early days, we’ve got to improve design portability. OS-specific coding is a separate issue.)

A good example of the reaction to learning new conventions for a limited audience: “I [worked a lot of languages and environments but] never bothered to get deep into Objective C, because while it’s theoretically transferrable, it is really just used to write for the Apple Carbon/Cocoa/Core/Whatever/Don’tNitPickItsJustAnExample* stack. Same went for DirectX on Windows when I still wrote software for Windows. I would like to make apps that do whizzy things with Core Animation or whatever, but I just can’t make myself get excited at the prospect of learning yet another vendor-lockin technology.”

Oddly, Google has few relevant hits on the front page for “webobjects coldfusion”… much of the SproutCore evangelism today seemed like it would have worked for WebObjects as well.

Olympics video news

Max Bloom has a great article discussing aspects of video delivery for the Beijing Olympics this summer. I’ve been tracking this project because of all the prior publicity about a “Flash Killer”. I found this article after reading of Akamai’s delivery of Olympics video (more), and the piece has lots of detail I hadn’t seen before.

Main takeaway? Silverlight is offered as a new option to an existing WMV9 workflow, and Windows Media Player will still work as a client.

(The deal achieved a lot of positive publicity for Microsoft in 2007, but the reception upon release was not as positive, and today the deal doesn’t even seem to appear in Microsoft marketing materials. All through this, MLB Gameday has continued innovating in Flash.)

Regional restrictions enable regional licensing, and so enable funding for the event itself. A YouTube-style model wouldn’t work for such a large, capital-intensive event. Adobe Media Player offers similar capabilities today, but the WMV architecture has been in trials for the past few Olympics. The entire deal makes more sense when seen in this light.

Other details include the hiring of Schematic and the business angles for UI design, how viewing varies across different regions, and this bit on the details of delivery: “As of press time, a number of important technical decisions had yet to be made. A slew of DRC-Stream software and encoder boards from Canada-based Digital Rapids are being deployed in Beijing to populate the encoding farm, but other than committing to VC-1, the broadcaster has yet to confirm encoding bitrates, frame rates, or frame sizes. (Without offering more specifics, Miller says they will be streaming through a managed bitrate solution to optimize the user’s connection, with a target maximum bitrate of 650KB/sec.) Digital Rapids is also supplying software to enable transcoding from other digital media formats into VC-1.”
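A “managed bitrate solution” presumably means matching a rung of the encoding ladder to the viewer’s measured connection. A sketch of the idea (the ladder values are invented; only the 650 ceiling echoes the article, whose units are ambiguous as quoted):

```javascript
// Pick the highest encoded bitrate that fits the measured connection,
// capped at the publisher's stated maximum. The ladder values are invented
// for the sketch; only the ceiling figure echoes the article.
function pickBitrate(ladder, measuredKbps, maxKbps) {
  var best = ladder[0]; // fall back to the lowest rung if nothing fits
  for (var i = 0; i < ladder.length; i++) {
    if (ladder[i] <= measuredKbps && ladder[i] <= maxKbps && ladder[i] > best) {
      best = ladder[i];
    }
  }
  return best;
}
```

That’s the whole trick: the server encodes several rungs once, and each viewer gets the best one their connection supports.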

After reading the article and sleeping on it, I’m left with the impression that this is less “Microsoft buying audience share by subsidizing big events” than it is “existing Windows Media sites not defecting because there’s now an option for in-the-page viewing”. I’m guessing Silverlight will still receive a boost from the Olympics deal, but it doesn’t seem as dramatic as it did before the details became available.

Yes, CS3 is for sale

This is a followup to a problem Steve Flowers encountered… after CS3.3 was announced (with Acrobat 9) for shipment next month, he couldn’t find a link to actually purchasing and receiving CS3.0 today. I tried too, and couldn’t find the link. Turns out it was under various layers of interface mystification down at the bottom of the page… full CS3 Web Premium link is here. My apologies for the snafu-iness, and delay in reply. On the good side, once Steve raised this issue it caused a little firedrill internally, and the website’s interface should change to make things easier to find.

Related issue: I’ve been similarly UI-mystified by comments on this new weblog… administration panel would say that comments were published when they didn’t show on archive pages, showed the correct comment count but I still couldn’t see comments, and weird error messages showed up when I tried to write test comments myself. It turns out that the center/edge proxy server just can’t handle “Publish all comments immediately”. Now I’ve flipped comments back to manual approval, and I think I should have two-way communication again here. The downside is that comment publication needs to wait until I check the administration page. Sorry in advance for any delay (I hate comment-moderation queues myself), but I think I might start resembling a normal human being again here soon….

This weblog

For what it’s worth, I still haven’t been able to figure out why this new blog’s MovableType installation holds comments for an unpredictable amount of time. I’ve set all entries to approve all comments immediately, but it doesn’t… even if I manually “publish” each one they still don’t turn up. My apologies for my seeming closed-ness, and the hassle of commenting and checking out and not seeing what you took the time to write, but it’s the computer’s fault, honest it is. 😉

Twitter’s down a lot, but it’s less hassle, even less noise-factor than weblogs these days. The HTML version of this page has recent tweets included, but if you’re in RSS, then my Twitter account has more links, more timely. I think this blog will end up being the occasional longer entry.

[Aside to Scott Flowers: I’m currently investigating that Adobe Store issue, thanks for the heads-up, and sorry for the system problems.]

Bullseye equations

At eWeek, Larry Seltzer raises some good points in his article “The Big Bullseye on Adobe”… definitely worth reading.

But I think the main reason the bullseye has been growing is that it’s increasingly financially rewarding to attack any widely-distributed code. The growing value of your digital data and digital identity now draws attacks in areas which were previously considered safe.

Browser security practices which seemed acceptable ten years ago now entice exploit research… window requests, “javascript:” requests, cross-domain mashup requests… many of last year’s Player issues were closing off plugin requests that browsers and servers should no longer fulfill.

And even coding practices which seemed acceptable ten years ago now need to be redressed, as April’s null-reference pointer discussion showed. In networking code, domain-pinning is now seen in a very different light than even three years ago.
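For those who haven’t followed the domain-pinning discussion: pinning defends against DNS rebinding, where a hostname first resolves to an attacker’s server and later quietly re-resolves to a victim machine, while scripts keep their same-origin privileges. A minimal sketch of the concept; the resolver here is a caller-supplied stand-in, not any real runtime API.

```javascript
// Sketch of domain-pinning: remember the first IP a hostname resolved to,
// and refuse later connections if the name quietly re-resolves elsewhere
// (the DNS-rebinding attack). The resolve function is supplied by the
// caller -- a stand-in for illustration, not a real runtime API.
function makePinnedConnector(resolve) {
  var pinned = {}; // hostname -> first-seen IP
  return function connect(hostname) {
    var ip = resolve(hostname);
    if (!(hostname in pinned)) {
      pinned[hostname] = ip; // pin on first contact
    }
    if (pinned[hostname] !== ip) {
      throw new Error("rebinding detected for " + hostname);
    }
    return ip; // safe to connect
  };
}
```

The tradeoff is exactly why the topic is contentious: strict pinning breaks legitimate DNS failover, so how long to pin is a judgment call that has shifted over the years.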

Web technology tends to be too accepting of innovations, and we only look for the dark side later. (That’s “imho”, btw. 😉 The people who created email didn’t foresee spam. The people who created TCP/IP didn’t anticipate actual denial-of-service attacks. The people who thought email needed colored fonts, images and JavaScript didn’t handle the subsequent problems of web bugs and beacons. Early blogging hosts didn’t anticipate comment-spam or spamblogs. The holes were there early, but weren’t valuable enough to exploit until later.

The risk of an attack does grow with the “attack surface” (the amount of code, functionality, and entrypoints), but the risk also grows with the “attack incentive” as well. When technology leaves a hole open, it remains ignored only if no one finds it rewarding enough to exploit. Lynx may have a small attack surface, but there’s little financial incentive for attack research as well. Successful technology draws continuous re-examination.

Do things like AIR and Acrobat 9 increase the total attack surface? I’m not sure… adding existing things together doesn’t concern me as much as some of the new abilities, like local file access or invoking third-party code. The team here is pragmatically paranoid about increasing any attack surface, but I think it really requires a few years of realworld probing to test whether a new combination of abilities is immune to exploitation. I’m not sure I can yet agree with Larry’s initial point about SWF-in-PDF directly adding to attack risk.

But I do agree with Larry that Adobe’s clientside runtimes are drawing increased security research, by people with vastly differing motives. Adobe Flash Player lets you reach practically anyone, and criminals will also seek to exploit such realworld accessibility. That’s why the security team here is so important, and Larry included a description from Erick Lee about their approach:

“The Adobe Secure Software Engineering team, which I manage, has industry-leading experience in building secure applications and is a core service provided to all Adobe product teams, independent of any specific business or product line. Our secure software engineering practices include threat modeling, automated code audits, in-house fuzzing, bringing in third parties for external security reviews and more.”

(I also agree with Larry on “people update Flash, but maybe not fast enough”. Last week’s “China exploit” story (which was subsequently retracted by Symantec) may not have been possible without the wide publicity given to the null-reference paper in April… the blogosphere ended up arming attackers without reminding civilians to keep their software current. Giving the public some time to react would be helpful, and appropriately updating articles and accepting comments is, of course, a vital responsibility.)

Larry Seltzer is one of the better security writers out there, and he’s got a valid point here…. Player, Reader, AIR are all under increased examination by hacker gangs. They have great incentive to perform such research. It’s hard for Adobe product teams to push back against developers who want more local-access features, but it’s best to do things slowly, open new doors one at a time, and listen for the actual results. A phrase like “The Big Bullseye On Adobe” is a realistic description of the situation today.

Today’s workflows

Last week CSS guru Dave Shea started a workflow discussion by describing how he starts visually in Photoshop, only moving to code as the visuals are approved, and has discrete “handoff” stages where he passes generations of finished visual assets to interaction designers. I thought the comments were notable for the diversity of workflows that others described.

This week Jason Fried at 37 Signals described their approach, which goes from paper sketch directly to HTML/CSS, and the comments here again show the diversity of workgroups and workflows. Jeff Croft had a good followup essay showing how the workflow is dependent on the workgroup and the types of tasks each group is attempting: “So who’s right? The answer is simple: we both are.”

One angle that Jeff touched on is how the project’s approval process dictates so much of the workflow… a single-person project doesn’t need to gain consensus on ideas the way that a single-company team would, and the situation is even more complex for a multi-company project, such as an agency achieving approval on an implementation from multiple individuals and teams within a client company. Projects have vastly different needs to explain themselves before completion.

People creating content come from different backgrounds, have different mental models, prefer different ways to bring an idea into reality. Then each different team creating content has different needs, different flows of passing work among the group, building pieces up into a structure. And then atop that, different projects have different constituencies that they must satisfy during development. Individual creation, team coordination, stakeholder approval… layers of variables piling up atop each other.

Back in the early part of the decade Macromedia introduced “the designer/developer workflow”, DevNet and the rest, to unite the different disciplines of serverside applications (ColdFusion) and clientside applications (Flash Player). That was a necessary step, but not a sufficient one, as the three conversations above show. Few workgroups have just a designer component and just a developer component. The process is iterative, and usually involves many more stakeholders than one individual creating interaction logic and a second creating business logic.

I was impressed by the range of workflows, skillsets, and approval needs people added in comments to the essays from Dave and Jason. Adobe has been working in this area for a while, but it’s a very difficult problem to solve… or, rather, set of problems to solve. We can’t just satisfy a single workflow, but must work to support the range of different ways that groups work together today, all while remaining accessible (in learning curve, cost, and flexibility) as well as open (so your data remains portable, and other components can be plugged into the workflow).

Hard task, but that’s what we’re trying to accomplish here….