Archive for August, 2008

Firefox video dropouts

Rafe Needleman asks today “Why can’t they fix the Flash/Firefox bug?”, pointing to a lengthy set of Bugzilla comments about intermittent halts of audio/video streaming in some Firefox 2 and 3 installations. The problem is not yet reproducible on demand by others, and so has been difficult to address.

I don’t have the answer, but I do have some context and observations. (Warning: This is long, and the only useful info in it is how to think about problem descriptions. If you’ve got real work to do, then don’t waste time reading my blogpost here…. ;-)

First and most important, the way to confirm that you have addressed an issue is by being able to make the problem occur on demand, and to tell others how they can also make it happen on demand. That way they can test whether they can stop it from occurring.

A “Steps to Reproduce” is not what you must do to see the problem. A “Steps to Reproduce” is what an engineer needs to do to see the problem. You may not be able to instruct others completely, and knowing what you did is certainly a first step, but a steps-to-repro description should be written from the reader’s point of view, not the writer’s.

And intermittent problems are certainly the most difficult and time-consuming to address. We need to be able to make the problem happen on demand, in order to ensure that it has been truly removed.

Check out the comments in the thread — “I read in a forum that someone else had the problem too” — that’s not a useful comment. If a capability is not working on your system, then we all believe you, and want to improve things. There’s no need to prove that other people have it too. What we need is to be able to see it ourselves, so that we can test whether we can make it go away successfully. You don’t need to validate yourself; what we all need to explore is the problem itself. It’s not you, it’s it. Relax, we believe you.

There’s one comment from the original poster (identified solely as “M Z”) that “No, I have not managed to reproduce it in safe mode.” This is potentially a killer bit of info. I’m assuming he means “Windows Safe Mode”, an F8-key start which disables many system customizations. If the problem actually *never* occurs with system customizations turned off, then we know to look more closely at the system customizations. But unfortunately, the description is ambiguous… it might just as well mean that he rarely tests in OS “safe mode” at all. Tantalizing, but still less-than-fully-useful.

Someone asks “What Firefox extensions do you have installed?” and then various lists are produced. It’s more useful to know whether the problem still occurs with a stock Firefox installation. If you ever see the symptom on that system when not running any browser customizations, then we’ll have more info than knowing which brands and versions of extensions you’ve customized the browser with. Key refactoring: “Have you ever seen the problem when all Firefox extensions have been turned off?” Even one such incident would exculpate all extensions.

Someone identifying themself only as “” offers another potentially useful bit of detail: “I encounter it only after FF has been running for a while (>60 min).” The original poster needs to be asked whether he has ever seen the problem immediately at system/browser startup, or whether a significant period of browser use is needed before the problem has ever appeared. If the latter, I would also ask him to look at the Windows Task Manager, to see how much memory his copy of Firefox is currently using. In the past, media dropout has often been associated with low-memory situations. It wouldn’t be too hard to quantify the current reports, to see whether the well-known Firefox memory consumption issues are in play when they lose audio/video.
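One quick way to quantify such reports, sketched here as a hypothetical helper (the process names, sample readings, and the 500 MB threshold are all invented placeholders, not data from the actual bug):

```javascript
// Hypothetical helper to quantify reporters' Task Manager readings:
// sum the memory of Firefox processes and flag low-memory suspects.
// Names, numbers, and the 500 MB threshold are invented placeholders.
function firefoxMemoryMb(readings) {
  return readings
    .filter(([name]) => name.toLowerCase().startsWith("firefox"))
    .reduce((sum, [, mb]) => sum + mb, 0);
}

const sample = [["firefox.exe", 612], ["explorer.exe", 40], ["iexplore.exe", 85]];
const mb = firefoxMemoryMb(sample);
console.log(mb > 500 ? `Firefox at ${mb} MB: low-memory suspect` : `Firefox at ${mb} MB: looks normal`);
```

Even a handful of such before/after numbers from reporters would tell us whether memory consumption correlates with the dropouts.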

Subsequent comments of “I have the same problem” do not help at all. No one doubts the original reporter. But these low-info confirmations just muddy the discussion, making a resolution more difficult to reach.

There are additional contributions such as “the problem is your adobe flash player version”. That’s it, no citation, no reason offered. This is a great example of why public bugbases should be scrubbed for buggy comments. The conversation may be open to this person’s participation, but he is increasing readership costs for everyone else. A group needs a smart mix of inclusion and exclusion in order to function well.

Mike Beltzner, a Mozilla staffer, makes a little progress with this comment: “Can we at least call this Windows only? I haven’t seen any reports of it happening in other OSes.” He’s trying to craft a recipe for reproduction of the problem. If we can be sure it’s Windows-only, then we’d know an engineer shouldn’t bother trying to reproduce it on a Mac. It’s a start.

Let’s switch back to Rafe’s article. It’s got the headline “Why can’t they fix the Flash/Firefox bug?” This sets off warning lights for me, because of his use of the word “the”… it implies that there is only a single problem, and that it’s famous. In reality, it’s not even yet well-defined. There are also semantic issues with the rhetorical “why”, as well as the cognitive issue of knowing what the problem actually is before starting to think about whether it is possible to fix it. When I read a loaded phrase like that, I start wondering how well the writer has thought through what they will be writing about. Not a big flag, but a small warning flag of possible confusion ahead.

“Both Mozilla and Adobe have been aware of the issue since late May, but as yet no solution has been found.” It would be fairer to say that no way to reproduce it on demand has yet been found. The current stumbling block is in the original descriptions, not in any lack of effort by people writing code.

“One workaround solution is to install the Flash 10 player, which is still in beta.” I have no assurance that this changes the problem… in the Bugzilla talk we haven’t seen that anyone who has had the symptom has been able to make it go away by using Astro, and make it return by going back to Player 9.

Matter of fact, after reading Rafe’s article we don’t know whether he has been able to see the problem himself. I sorta suspect he might have, which would explain his interest in this one not-yet-fully-formed Bugzilla entry, but there’s zip info on his own attempts at reproducing the issue.

(Later: Yes! Down at the very bottom he says that he sees the problem too, but that he doesn’t see it in Internet Explorer. Not much more detail, but it’d be useful if he contributed to the solution.)

There’s a quote from Mike Beltzner that implies “ah, if only all Player code were published, then the problem would be easy to solve.” Baloney. [expletives deleted] Mike should know, from Mozilla’s experience with the Tamarin Project, the massive learning curve that even brilliant new engineers need to climb to get up to speed, to understand what is going on. And Tamarin is just one small part of the engineering marvel which is Adobe Flash Player.

“He also took a minute to trumpet Mozilla’s open-source philosophy. Since Firefox’s code is open, Adobe can look at it to try to determine what is going on. But Mozilla’s team can’t look into Flash. Beltzner didn’t blame Adobe for the bug itself, but he did say that Adobe’s traditional closed software architecture is slowing down their investigation. ‘We hit a wall when it’s a closed-source solution,’ he said.”

The truth is that you simply need to distill the public complaint into an actionable item. The problem actually lies in Bugzilla’s conversational style. Right now it’s just “Oh I saw someone on a forum describe a similar thing.” You need to show engineers how they can see it. Playing the “proprietary” card instead is just weak. I’m watching my language here, but….

Comments at Webware are interesting. Too bad they close it off by registration (yeah, like I’m going to open new accounts and track new passwords for each special little site), and too bad some commenters hide their identity when commenting on others. (Tip to indy Silverlight evangelists: Including a verifiable identity will reduce the taint of possible astroturfing.) The comments section is not very useful overall, but there’s some realistic thought in there, which I appreciate, thanks.

I’m with Rafe completely on his penultimate paragraph: “Finger pointing is common in software troubleshooting, and I give both Mozilla and Adobe credit for only generally waving, not pointing, their fingers at each other. Unfortunately, neither team seems to have developers who can reproduce this issue, which just keeps the ping-pong game going.” Making the problem occur on demand is the first necessary step in making sure the problem has really gone away.

But his final paragraph seems like rankest fantasy and fairytale to me: “What I find most interesting is the way the differing philosophies of Mozilla and Adobe are slowing down resolution of this issue. If both companies were open then any developer–at Mozilla, Adobe, or elsewhere–could get into things and start experimenting to find a fix. If both companies had closed philosophies then their engineers could swear each other to secrecy, swap source code, and together fix the issue.”

To solve the problem quickly, focus on what it is.

Summary: From the little I can see in the descriptions, I’d really want to check reporters’ system memory consumption when the problem occurs… not a sure thing, but a quick and easy diagnostic that may zero-in on the cause of the problem. (To put it gently, Firefox is rather famous for its memory issues.)

Bugzilla needs (imho) to tighten down, get rid of the conversational bloat. Doing tech support is an acquired skill, and not everyone can think directly about a problem, but a good bugbase would instruct new contributors on how to help isolate the true problem, how to describe things so that others can usefully attempt to reproduce it. Readers should not have to read through stream-of-consciousness from strangers. Refactor it, make it functional.

And finally, that line “but it’s proprietary” needs to go away. It’s a stand-in for branding battles, not a real argument. Even Apple, the most proprietary, closed, secretive company of them all, reflexively reaches for it when they don’t know what else to say. You and I have near-zero chance to influence the W3C or Mozilla to do something — they are not more “open” in process than Adobe, or even Microsoft. “Open source” code tweakability means more for things which run on your own machines (Linux, Apache) than for things which run on everybody else’s machines, where “predictability” becomes more valuable. I’m tired of conversations getting derailed when someone resorts to this weak “proprietary” tactic. Think. We need you to think. Just be honest and think. Quit the blaming and think.

And check your system memory if video stops. Not a guarantee, but it’s a start.

Update Oct 13: I’ve closed comments on this. I was amazed that some of the early comments just added anecdotal noise to the basic “you have to break it before you can fix it” principle, but more amazed that they kept coming in for months afterward — it seems pretty obvious this post has become a top Google link for “firefox video problem”, and anonymous people just want to talk without listening. They’d do better by reading, and communicating.

“Let’s use Microsoft Runtimes!”

Startling to consider, I know, but… isn’t that what “Standardistas” and “Open Web” people are actually saying, when they say “Only HTML/JS/CSS is acceptable”?

Hear me out before judging. I’m pretty surprised at having such a thought myself, so I’m still looking for ways to invalidate it. If you’ve got a good argument, I’d like to hear it. But it’s a simple thought, and so seems strong.

We do know that “Flash subverting The Web” and such are bandied about. The rap is that you “shouldn’t” rely upon the Adobe runtime because it’s “not HTML, CSS and JavaScript“. When asked why, the most common response is something along the lines of “Because Adobe might do something bad someday.” (At this point I want to ask, “What, like they did with PostScript or PDF?” ;-)

According to the best stats I’ve seen — Google worldwide queries Jan07-Jun08, over a billion browsers — Microsoft Internet Explorer 6 is still used by almost 40% of the people out there. That’s a lot. Beyond that, there’s also about 40% of the world using Microsoft Internet Explorer 7. Another big audience. Beyond that, Firefox? One person out of six… 16%. A meaningful audience, but still, only one person out of every six. Safari has half the remainder, Opera is bigger on mobile, and 1.4% use even rarer browser brands.

Microsoft has overwhelming, crushing marketshare in rendering websites’ HTML. 80% of the time your JavaScript will run in a Microsoft logic engine, against a Microsoft DOM, with Microsoft styling, and it’s 50/50 whether you’ll be running inside IE6 or 7. A TV network is ecstatic to get a 40% marketshare. A political party is completely satisfied with a 51% marketshare. Google dominates search with 60% marketshare.

For running Ajax, Microsoft has an 80% marketshare.
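The arithmetic behind that claim is trivial, but worth making explicit. This sketch just restates the rough share figures quoted above; treat them as illustrative assumptions rather than precise measurements:

```javascript
// Rough browser-share figures quoted above (Google worldwide queries,
// Jan07-Jun08); illustrative assumptions, not exact data.
const browserShare = {
  ie6: 0.40,
  ie7: 0.40,
  firefox: 0.16,
  other: 0.04, // Safari, Opera, and rarer brands combined
};

// Fraction of pageviews rendered by a Microsoft runtime:
const microsoftShare = browserShare.ie6 + browserShare.ie7;

console.log(`Microsoft runtimes render ~${Math.round(microsoftShare * 100)}% of pageviews`);
```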

You can’t choose. Your audience makes their own choice. And 80% of the time they choose a Microsoft runtime to render your HTML, CSS, and JS productions. Microsoft runs your code for you.

When you create an HTML page, 80% of people will view it in a Microsoft runtime. A pure “web standards” site from a CSS guru? Four out of every five people will see it rendered by a Microsoft runtime. A JavaScript application which can retrieve text from a server without refreshing the page? Your scripting will overwhelmingly be interpreted by a Microsoft runtime.

Microsoft Internet Explorer 7, getting close to 50%. Microsoft Internet Explorer 6, dropping down towards 30%. Mozilla Firefox, less than 20%. Safari, Opera, Konqueror, and more which must be supported.

But inevitably rendered 80% of the time in a Microsoft runtime.

(I know, I know, there is the promise that the standards process will someday Shame Microsoft Into Doing The Right Thing, and that Firefox must eventually rule the world, and “Better IE than SL!”, but please bear with me, I was born a skeptical fella…. ;-)

If you’re objecting to Adobe runtimes “because they’re proprietary”, then why would it be preferable to run nearly-all-the-time in Microsoft runtimes instead?

Such a simple question, seems like it should have a simple answer….

Clipboard pollution

Just saw a Friday article in The Register titled “Mystery web attack hijacks your clipboard”. The symptom was that someone was surfing and something started perpetually writing to his clipboard. Dan Goodin referenced “sandi” at a blog (sorry for not quoting your last name, Sandi, but you don’t make it obvious and I didn’t remember it), which in turn referenced a number of forum threads which were said to describe the issue.

This forum thread seems to have the most descriptions (possibly of multiple issues), but the screenshots and partial descriptions don’t seem to mention any particular SWF. As in previous Flash warnings through this venue, it’s hard to summarize the main evidence, drawn from various disconnected forum posts. Dan Goodin said Sandi mentioned Flash, but I didn’t see where she did (other than in her weblog template about “flash malvertizements”). There’s not yet a succinct case.

It’s plausible that some webpage has some rogue SWF which acts obnoxiously with the clipboard. Might be a JavaScript thing too. But let’s say that there’s indeed some rogue browser element which just yak-yak-yaks into your clipboard.

Two questions:
1) How did you get to be executing some logic which acts so obnoxiously?
2) If you’re using a browser to surf the web, should strangers have so much power?

(The answers are already here, but let’s run it fresh again anyway…. ;-)

How’d some rogue interactivity get into your browser? Probably because of a trustworthy webpage with untrustworthy third-party content. Ad networks are big vectors for third-party resources. Web-based services are another way to introduce third-party scripting into a composite webpage. Even a third-party GIF can no longer be completely trusted. Sandi’s page is pretty secure, but even it executes scripts from three domains… the article at The Register executes scripts from six different domains.

As Nat Torkington described, if you’re republishing third-party JavaScript, even trustworthy sources may prove untrustworthy. And if you’re accepting interactivity through an ad network, the networks don’t seem to have formal processes to vet the people whose content they forward to you for republishing.

If you use Firefox and AdBlock Plus, or have another way of inspecting third-party content on webpages, take a look at just how many domains are involved in creating the page you’re viewing. Each HTTP request for a GIF or a JavaScript or an RSS or even a ping is registered on a server log at those unanticipated third-party sites, and for interactivity (.SWF, .JS, whatever), your browser will be accepting instructions from parties other than the site you’re visiting. Modern sites like TechCrunch invoke dozens of scripts and ping even more domains whenever you visit.
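That kind of inspection can be approximated from a page’s request log. This is a hypothetical sketch with invented example URLs, not measurements from any real site:

```javascript
// Hypothetical sketch: given the request URLs a page triggers (the list
// AdBlock Plus or a proxy log would show), tally the third-party domains.
// The URLs below are made-up examples, not real measurements.
function thirdPartyDomains(pageHost, requestUrls) {
  const domains = new Set();
  for (const u of requestUrls) {
    const host = new URL(u).hostname;
    if (host !== pageHost) domains.add(host); // anything off the visited host
  }
  return domains;
}

const requests = [
  "https://example-news.com/article.css",
  "https://ads.example-network.com/banner.js",
  "https://stats.example-metrics.com/ping.gif",
  "https://widgets.example-social.com/share.js",
];
const outsiders = thirdPartyDomains("example-news.com", requests);
console.log(`${outsiders.size} third-party domains contacted`);
```

Each entry in that set is a stranger whose server logs now record your visit, and whose scripts your browser will obediently run.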

Should webpages have so much power, as to be able to copy to the clipboard? Probably not, because you can’t trust everyone we allow on the network. Early email architects didn’t imagine spam, but spam is what we got. If we want to safely click from link to hypertext link on the World Wide Web, the most stable solution is to give the browser experience few privileges.
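For the record, the scripting power in question was concrete: Internet Explorer of that era exposed window.clipboardData.setData("Text", …) to page scripts, and Flash offered the analogous System.setClipboard(). Here is a minimal sketch of that write path (my reconstruction, not code from the actual attack), modeled against a stub object so the logic runs outside a browser:

```javascript
// Sketch of the era's clipboard-writing hole: in IE6/7 a page script
// could call window.clipboardData.setData("Text", ...) to write the
// system clipboard; Flash's System.setClipboard() was similar.
// My reconstruction, not the actual attack code.
function pollute(clipboardData, payload) {
  if (clipboardData && typeof clipboardData.setData === "function") {
    return clipboardData.setData("Text", payload); // silent write in IE of the day
  }
  return false; // browsers without the API gave page scripts no clipboard access
}

// Stub standing in for IE's window.clipboardData:
const fakeClipboard = {
  stored: null,
  setData(format, value) { this.stored = value; return true; },
};

pollute(fakeClipboard, "http://malicious.example/"); // a timer could repeat this forever
```

Wrap that call in a setInterval and you have exactly the “yak-yak-yaks into your clipboard” symptom described above.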

(The alternative (which failed for Microsoft in the 1990s, and which Google is reviving in a different way with their search warnings) is the concept of giving some groups of publishers greater trust than others, which leads into an additional class of permission-raising exploits, spoofing, and so on, as well as all the subsequent social opposition from the less-privileged classes. In these days, when even your local domain-name server can’t always be trusted, favoritism doesn’t scale at all well.)

Web browsers need to be able to safely visit any hypertext link, safely execute any instructions they may contain. To gain greater privileges, it seems smarter to use a separate codebase with a more generous sandbox, than it is to set up permission schemes. This is the fundamental reason that I believe the various brands of WWW browsers won’t be able to act very much like desktop apps… the needs of visiting any strange site safely conflict directly with the needs to be trusted and powerful parts of your daily environment. Theoretically possible; pragmatically fragile.

Anyway, on this story at The Register, I haven’t yet been able to identify the exact situation from the descriptions. Clipboard-spamming does seem a possibility. And the trend toward composite webpages with third-party content makes it increasingly difficult for in-browser apps to act like desktop apps.

Summary: This report needs further investigation.

aftereffects

Some early notes, after reading ‘way too much all week…. ;-)

Biggest takeaway: People like rich video experiences. The big sit-back screen is still first choice… broadcast served far more viewing than the Web did. Pundits who argue “Web vs TV” are missing that it’s “Web *and* TV”. But when people can experience a “Video RIA”, they like it. Good validation.

But when people are excluded, they don’t like it. Microsoft was heard as saying “we’re bringing the Olympics to the world”, and only later did people realize this was a US-only deal. Linux users were cut out, as were Mac/PPC owners. Then 10% of US broadband folks were cut out atop that. Microsoft would have drawn less criticism had they been a little more realistic in setting expectations.

What are the numbers for Silverlight? Hard to say… still seem contradictory. Nielsen Online says in an Aug 13 press release that the video section of received 2,030,000 unique visitors on Mon Aug 11. Microsoft is saying they got eight million “downloads” one day. When you combine geo-restriction, platform-restriction, and failed installations, the NBC site may have prompted a million successful installations one day. Looks teeny.

Whatever the actual numbers turn out to be, it doesn’t seem to mean much for making Silverlight deployment to the general public any more practical… a site would still have to eat those support costs. I risk turning into a gaming target by mentioning it, but Saturday morning still shows less than 2% Silverlight 2 support. The DNC doesn’t seem like it will change this either. The numbers are still fuzzy, but it seems pretty clear Silverlight’s silver bullet shot blanks.

Still unclear to me is the mobile angle. Some US-oriented quotes seem to show this at 25% of the desktop browser video viewing. Considering there are probably device restrictions, atop the OS restrictions and geo-restrictions, this could be a big deal. Needs more detail.

Also unclear so far is the overall global picture, and how people worldwide actually used web video this Olympics. China has a bigger internet audience than the US, and much more interest in the games themselves… news services uniformly use Flash video these days… regional licensees seemed to mostly deliver in non-beta software their audiences could actually view… there was massive peer-to-peer delivery this time as well.

It will take awhile for the world to really understand this worldwide video event. Signs look good that it changed expectations in a positive and useful way. We humans do like smarter video. Good sign.

Two other bits this week, Microsoft-related, but not Olympics:

ECMAScript fell down and went boom. The best numbers I’ve seen show IE6 at 40% marketshare, IE7 at 40%, and Opera/Firefox/Safari/etc at 20%. That’s the real world. For the specification process, ECMAScript has been working on its next version for almost a decade. It’s been clear for a year Microsoft won’t implement it, and so the world won’t support it. End of story. HTML, CSS and JavaScript continue to evolve relatively slowly. Makes the whole VIDEO/RIA/Aurora predictions seem even more unrealistic. [nb: I rewrote this paragraph an hour after initial post.]

ISO fell down and went boom, too. Microsoft pushed through the OOXML proposal. Doesn’t matter that no one can implement it, and perhaps no one might even want to implement it… Microsoft Office is no longer barred from governmental purchase because it’s not a politically-mandated “open standard”. Circus all around on that one.

Put those two items together and it gets really silly… Microsoft saying “ooh ES4 is too hard for us to implement” (despite it being already deployed to over 90% of consumer machines today!), then pushing through “an open standard” that even they can’t implement. Just business, not personal.

Anyway, for in-the-browser delivery, it’s still “Flash Just Works”. I can understand that committed .NET developers might want to believe otherwise, and those heavily invested in cross-browser JavaScript 1.x frameworks might want to believe otherwise, but no amount of bloviation changes the basics. Adobe Flash Player provides universal publishing capability, and truly rapid evolution atop that. The Adobe Integrated Runtime is bringing this beyond-the-browser, to trusted Internet apps. Flash Just Works.

And people do indeed like live video communications. The trend’s our friend.

BBC video move

If you’re ever deciding between On2 VP6 and H.264, then here is info on how the BBC went about it.

I micro-blogged this earlier today on Twitter, but want to call out some main topics in the weblog.

An intro to video delivery choices:

The video you see in BBC iPlayer today is encoded using the On2 VP6 codec, at a bitrate of 500Kbps. The On2 codec (a video compression technology from a company called On2) is pretty much the standard for video delivery over the internet today. It’s optimised for moderately low data rates (300Kbps to 700Kbps, rather than the 2Mbps to 4Mbps needed for HD content), and low CPU usage, allowing it to work reasonably well on older computers. In short, On2 VP6 is the video workhorse of the internet.

… Compared to On2 VP6, H.264 delivers sharper video quality at a lower data rate, but requires more CPU power to decode, particularly on older machines, and the user needs to have the latest version of Flash installed.

Back in December of last year, relatively few people had installed the Flash player needed to play H.264 content; now almost 80% of BBC iPlayer users have it. More machines now have graphics cards with H.264 hardware acceleration. Additionally, Level3, a content distribution network (CDN), is now able to stream H.264 content to ISPs in the UK, and the content encoding workflows that we use (Anystream and Telestream) are now able to support H.264.

… The good news for those looking for video quality improvements in BBC iPlayer is that, starting this week, we’re going to be encoding our content in H.264 format at 800Kbps. Additionally, our media player now supports hardware acceleration in full-screen mode, giving a greatly improved image at lower CPU usage than before.

So they’ve got the clientside runtime technology already installed (Adobe Flash Player), and the production workflow almost migrated (changing to MainConcept encoders), and their content distribution network is about ready to go H.264 too.

Final element? User experience. You can’t yank people’s habits and expectations out from under them. That’s why the release will be in stages. First stage is offering parallel VP6 and H.264 streams, with VP6 as default and H.264 available via a “Play high quality” button. Once this is realworld-tested, the next stage is to turn on automatic bitrate detection, meaning that H.264 will become the default on good connections. The stage after that would be analyzing bandwidth changes and audience desire. They’re getting their feedback a little at a time, not asking the viewing audience to change too much, too quickly, without recourse.
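The staging described above amounts to a simple selection rule. What follows is my own guess at its shape, not the BBC’s actual code, and the 1000Kbps threshold is an invented placeholder:

```javascript
// My own guess at the staged-rollout selection logic; not the BBC's
// actual code. The 1000 Kbps threshold is an invented placeholder.
function chooseStream({ hasH264Flash, autoDetect, measuredKbps, userForcedHQ }) {
  const goodConnection = autoDetect && measuredKbps >= 1000;
  if (hasH264Flash && (userForcedHQ || goodConnection)) {
    return { codec: "H.264", kbps: 800 }; // the new, sharper stream
  }
  return { codec: "VP6", kbps: 500 }; // the established workhorse default
}

// Stage one: H.264 only when the viewer presses "Play high quality":
console.log(chooseStream({ hasH264Flash: true, userForcedHQ: true }));
// Stage two: bitrate detection makes H.264 the default on good connections:
console.log(chooseStream({ hasH264Flash: true, autoDetect: true, measuredKbps: 2000 }));
```

Note how the VP6 branch remains the fallback at every stage: viewers without the newest Flash Player, or on weak connections, never lose what they have today.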

Also see Erik Huggers, who gives the larger picture about the move.

In comments at Anthony Rose’s technical discussion: “Is this new codec going to be compatable with the Nintendo Wii?” This is a tough question… but it’s a valid question. iPhone and PlayStation owners ask the same thing. Nokia Internet Tablet, iRiver, and many other devices achieve standard capability via Adobe Flash Player. But it did take awhile before office printers standardized on Adobe PostScript… there will always be devices which don’t include standard capabilities, especially during the early days.

Innovative file-format types do tend to be commodified over time… bitmap formats work better across devices now, and text is easier than in the early years. Mozilla will be adding the On2 VP3 codec next year, as has Opera. But I imagine it would be expensive for realworld video production workflows to distribute an additional older format of compressed video for a minority audience… desirable, sure, but expensive. See how it goes.

You’ve got to get all four legs of the stool solid: the production workflow, the distribution process, the clientside capability, and then the user experience. The BBC is a good example of how a video production group actually goes about this testing. I’m glad the BBC is so open about how they’re bringing about this work.

I Like Aurora

Folks at Adaptive Path put together a concept video, “Aurora”, of how we might improve computing in the future… see the series here… commissioned for the new Mozilla Labs.

Lots of commentary the past few days focused on the details, but I think it’s more the overall shape that’s important. Wouldn’t you want seamless synching among devices, and wall displays, and integrated telecommunications, and more satisfying interface customization, and easy data capture/transformation, and strong location-awareness? Those seem like good things. I’d like to see them happen.

Whether a particular comp’s interface is “busy”, or how its context menus are designed, isn’t as important to me… practically, multiple implementations of interfaces would eventually handle these different audience needs. I’m looking at the overall direction, and I definitely like it. There’s other stuff to accomplish too, true, but what I see in the video are good directions in which to strive.

You and I can see ways to accomplish lots of this lifestyle today… I had fun watching the video and thinking how it might have been produced. ;-) But it’s not yet a widespread and easy way of using digital devices. If the Aurora videos can bring more people into believing that these are important goals, then that’s to all our benefit.

(I’m not sure of the video’s focus on “Web and Browsers” instead of the larger “Net and Clients”. We need an ability to visit any published page in the world without fear. Doing that with the same codebase as extreme personalization seems trickier than the alternatives. I see future computing as more of an Internet thing than just a Web Browser thing. But that’s a separate issue, as is the video compression.)

Check out the series of four videos, if you get the chance this week. There’s some good stuff in there, and I think this campaign will be successful in getting more people anticipating these evolutions.

Blast from the past: Kevin Lynch, 2003, device cooperation.

Factors affecting realworld adoption rates

Alex Russell has a good essay on ways to improve browser adoption rates, which I picked up through a recommendation by Dion Almaer. I wrote a comment there, but am not sure if there’s a comment-moderation queue or if it got lost. Considering that I was wondering whether to make a blogpost of the comment beforehand, I’ll just paste a copy here so I don’t lose it…. ;-)

Update: Fixed two typos about Player 9 release dates… originally read “2006”, should have been “2007”.


Software Impersonation

At ZDNet, Ryan Naraine of security firm Kaspersky Lab advises double-checking the links you click in Twitter or weblogs: “A Twitter profile has started sending links with lures to a pornographic video of Brazilian pop star Kelly Key… If you click on the link, you get a window that shows the progress of an automatic download of a so-called new version of Adobe Flash which is supposedly required to watch the video. You end up with a file labeled Adobe Flash (it’s a fake) on your machine. In reality, this is a Trojan downloader that proceeds to download 10 bankers [password-theft malware] onto the infected machine, all of which are disguised as MP3 files.”

Bottom line: Clicking on links in social media is like not washing your hands after being out in public — you just can’t know what you will pick up.

The part that worries me the most is the “says it’s Adobe Flash” part. We’ve seen such impersonation before with files (e.g., “Naked Wife”). But to actually impersonate a very well-known runtime? I’m not sure how that will play out. Some people will fall for it, and I feel for them, but most would see through it. Still, some real people will be hurt.

David Lenoe, from Adobe’s Security Team, had a blogpost up about it today. I don’t think that the people who need that reminder would ever see it though. I’m still concerned.

Adobe is not directly involved, but the infection relies upon using the existing goodwill towards the overall Adobe Flash ecology… without all those sites which made Flash a standard, this social exploitation would not work. (And Ryan’s article doesn’t clearly state whether the link is to an .HTM, .EXE, or other file, so it’s unclear to me yet whether URL-shortening services are currently enabling the exploit.)

A bigger bottom line: Someone out there in the world is going to get their bank accounts stolen because they saw a dialog that said “Adobe Flash” and they said “Okay”. I don’t feel right about that.

Do you have thoughts, advice, observations on this? I’m seeking different ways to look at this problem, different approaches we might take. Open to anything, thanks in advance.

Closed above, closed below…

… and a wee little bit of “The Open Web” sandwiched in the middle? There’s an interview with Novell’s Miguel de Icaza about their Mono and Moonlight emulations of Microsoft runtimes for Linux. Miguel also points out the convenient blindspots of those who argue against technology solely on the grounds of “It’s not The Open Web”:

I mean, how many people outside of the technology world really know about Linux at the moment? And even the Mozilla guys – the keynote we had here was done on a Mac, every single Mozilla developer uses a Mac. And it’s funny, they constantly attack Silverlight, they constantly attack Flash and then all of them use proprietary operating systems, they don’t seem to have a problem doing it. And then they had the Guinness record thing for Firefox 3 and you went to the website and it had a Flash map to show where people are downloading – so there definitely is a double standard here. And that’s after all their claiming that you can do everything in AJAX – so they definitely don’t ‘walk the walk’.

If evangelists try to say that practical realworld web technologies can be tossed aside because of alleged philosophical impurity, then why aren’t these proselytizers using some type of Linux box, instead of the super-secret tightly-controlled Apple hardware?

And it goes up a level too — if you’re really concerned about open use of the World Wide Web, and are against proprietary secrecy, then wouldn’t you avoid accepting primary funding from Google, who has the biggest databases tracking consumer behavior on the Web, and who refuses to allow people to access the files Google holds on them? (If you’re not up-to-speed in this area of cross-site tracking via third-party content, then try EFF, Wikipedia, or me.)

When your mortgage is ultimately paid by selling consumers’ attention, it’s a little disingenuous to throw rocks at others who just sell software.

We accept “proprietary hardware” and “proprietary OS”, and run through “proprietary service providers” to bulk up “proprietary ad networks” and “proprietary social services”, all to build “proprietary behavioral databases” for a sugardaddy, but dadgum we can’t be using no “proprietary plugins”, nosir (unless’n they’re our “proprietary plugins” that is)!

It’s like seeing a supermarket ad for “all natural ingredients”… nice enough at first listening, but just what does it mean? And if you met someone who insisted on eating only “all natural ingredients”, but couldn’t describe what they were, then that could get more than a little weird too.

I think it makes a lot more sense to just neutrally weigh the benefits and potential risks of various choices, and not to dismiss any choice out-of-hand for religious reasons. But if I were to argue that certain choices may not be tolerated, then I’d at least aim for some reasonable consistency in that intolerant stance. Why feed Apple below and Google above, if you insist “Flash is subverting ‘The Open Web’”…!?