Adobe’s RTMFP Profile for Flash Communication Released

Adobe's Secure Real-Time Media Flow Protocol (RTMFP) is a general-purpose data transport protocol designed for real-time and peer-to-peer (P2P) communication, and it is the foundation of Flash's P2P capabilities. RTMFP is documented in RFC 7016, published in November 2013 by the Internet Engineering Task Force (IETF).

We have now submitted a companion specification, "Adobe's RTMFP Profile for Flash Communication," to the IETF Internet-Drafts repository.

This new specification shows how developers can create applications that interoperate with Adobe Flash and Adobe Media Server in client-server and direct client-to-client modes using RTMFP. We believe RTMFP also has applicability beyond Flash, and this specification can help developers understand how to use RTMFP in their own innovations for next-generation real-time and P2P applications.

Adobe continues to develop technologies using RTMFP within Adobe Flash and Adobe Media Server, including features like Multicast and Groups, which our customers use today to deliver high-quality video experiences across public and corporate networks.

We are excited to continue making contributions to standards organizations such as the IETF that further Internet technologies for developers and users. As a technology leader, Adobe collaborates with stakeholders from industry, academia and government to develop, drive and support standards in existing and emerging technologies, policy areas, and markets, in order to improve our customers’ experience.

We welcome comments and feedback to help us improve the quality, clarity, and accuracy of this specification, and we are excited to see what the Internet community creates with it.

Michael Thornburgh
Senior Computer Scientist

The Business of Standards, Part 1

Andrew S. Tanenbaum, professor of computer science at the Vrije Universiteit in Amsterdam, once said: "The nice thing about standards is that you have so many to choose from."

I like this quote because, like so many trite sayings, it glosses over a rather more complex issue that most in the Information and Communications Technology (ICT) arena prefer to ignore. The issue is, simply, why are there so many standards? Beyond this, where do these standards come from, and who pays for them?

The answers are simple: standards don't appear magically, and are often created by the very industries that criticize their proliferation. Industry members invest a lot of time, resources, and energy in creating standards. Case in point: Andy Updegrove, a lawyer who helps create consortia, lists 887 ICT consortia in his tracking guide. All of these consortia are funded by companies and individuals who are busily engaged in writing standards.

So why do these companies support such a vast standards industry? Because the act of standardization, if properly managed, can confer competitive advantage.  Basic to this idea is that a standard is a change agent – its only function is to change the market in some way or another.

Most often, standards are described as being used to “level the playing field.” This is true only in a commodity arena, such as standard wheat or standard cotton. Nearly everything in the ICT industry that is “standardized” has associated differentiators (from performance to speed to cost) that are vital for market share retention and growth.

However, occasionally, a company or other entity may find it difficult to create a differentiator for the current standard for business reasons such as IPR (intellectual property rights) payments, a lack of technical expertise, or even ownership of significant competing technology. In this case, the organization can try to create a competing product that incorporates the (newer/better/more open/other) technology. All the organization needs are enough allies and/or market share to support this competing offering. If it wants to do this more openly, it can create an organization to help.

This scenario has been played out at least 887 times. Every time it is repeated, at least one new Standards Setting Organization (SSO) is created, which in turn sets about creating standards.

Companies find it to their benefit to claim that their product conforms to a standard – it reassures buyers, builds confidence, and allows markets to be opened. However, this also creates a morass of conflicting standards and standards organizations, thereby limiting the value of all standards – both the good and the bad.

One question is: what is the legal basis for this proliferation of SSOs? Well, it turns out that the doctrine of "unanticipated consequences" is to blame.

The next post will examine the roots for this proliferation and how the business of standards started.

Carl Cargill
Principal Scientist

Update: Alignment of Adobe-Approved Trust List (AATL) and EU Trust List (EUTL)

As mentioned in our previous post, Alignment of Adobe-Approved Trust List (AATL) and EU Trust List (EUTL), we have been busy working on the integration of the EU Trust List into Adobe Acrobat and Reader software. Our January 14, 2014 release of Adobe Reader and Acrobat 11.0.06 takes another significant step towards that ultimate goal. In this version of the product, you will notice new UI to manage the EUTL download. For instance, we've added new controls in the Trust Manager Preferences, as shown below.

[Screenshot: EUTL download controls in the Trust Manager Preferences]

While we continue with the beta testing phase of this process, general users will not be able to download the EUTL. But as soon as the beta is complete, we'll move the EUTL into production, where everyone will have access.

Steve Gottwals
Senior Engineering Manager, Information Security

John Jolliffe
Senior Manager, European Government Affairs

The W3C Updates Process for More Agile Standards Development

For much of my 40-year career, the development of standards, both IT standards and those of other fields, proceeded in much the same way. A group, often under the guidance of a national or international standards organization, would work out a proposal, pretty much in private, and submit it to a series of reviews by other people interested in standards. The process under which this happened was quite formalized, and participation was rather limited. Though the process worked, it was originally designed for a time when communication was by mail, and it often took a long time to conclude.

The Internet, e-mail and other social technologies have changed how we communicate. Experience with software development has changed how we organize and realize projects. Communication is immediate (and often continuous) and software is constructed using agile methodologies that break the work into relevant chunks that are realized in succession (rather than trying to do the whole project at once and then test and deliver – i.e. the “waterfall model”). It is time that standards development, at least for IT standards, exploits these cultural changes to make the standards development process more effective.

The World Wide Web Consortium (W3C) has undertaken updating its process to allow more agile standards development. Over the last five years, the W3C has opened up much of its standards work to public participation on a daily (if desired) basis. But, there are aspects of the current W3C Process that act as barriers to more agile standards development. One of these is the assumption that all parts of a potential standard will progress at the same rate; that is, all parts of a standard will be accepted and reach deployment at the same time.

In the past, a specification would go through a number of public Working Drafts, become a Candidate Recommendation and finally a W3C Recommendation. When the relevant Working Group believed that they had completed their work on the specification they would issue a Last Call. This Last Call was a request for confirmation that the work was complete by people outside the Working Group. All too frequently, this Last Call was both too late (to make changes to the overall structure of the standard) and too early because many relevant detailed comments were submitted and needed to be processed. This led to multiple “Last Calls.” When these were done, the next step, Candidate Recommendation, was a Call for Implementations. But, for features that were of high interest, experimental implementations began much earlier in the development cycle. It was not implementations, but the lack of a way of showing that the implementations were interoperable that held up progression to a Recommendation.

So, the W3C is proposing that (where possible) standards be developed in smaller, more manageable units, "modules." Each module either introduces a new set of features or extends an existing standard. Its size makes it easier to review, to implement, and to specify completely. But even the parts of a module mature at different rates. This means that reviewers need to be notified, with each Working Draft, which parts are ready for serious review. This makes a formal Last Call optional. It is up to the Working Group to show that they have achieved Wide Review of their work product. This involves meeting a set of reasonable criteria rather than a single specific hurdle. With this change, Candidate Recommendation becomes the announcement that the specification is both complete and widely reviewed. This is the time at which the final IPR reviews are done and the Membership can assess whether the specification is appropriate to issue as a Recommendation. It is also the time at which the existence of interoperable implementations is demonstrated.

With these changes, it becomes much easier to develop all the aspects of a standard – solid specification, wide review, implementation experience and interoperability demonstrations – in parallel. This will help shorten the time from conception to reliable deployment.

Stephen Zilles
Sr. Computer Scientist

CSS Shapes in Last Call

(reposted from the Web Platform Blog)

The CSS Working Group published a Last Call Working Draft of CSS Shapes Module Level 1 yesterday. This specification describes how to assign an arbitrary shape, such as a circle or polygon, to a float and to have inline content wrap around the shape's contour rather than the boring old float rectangle.
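
To give a concrete sense of what the draft enables, here is a minimal sketch. The class name and dimensions are made up for illustration, and at the time of writing support is still experimental (some engines may require vendor prefixes), so treat this as an outline of the syntax rather than production-ready code.

```css
/* Float an element and let surrounding inline content wrap around a
   circular contour instead of the float's rectangular box. */
.portrait {
  float: left;
  width: 200px;
  height: 200px;
  border-radius: 50%;          /* visual rounding only; does not affect wrapping */
  shape-outside: circle(50%);  /* CSS Shapes Level 1: defines the wrap contour */
  shape-margin: 1em;           /* optional gap between the contour and the text */
}
```

Text that follows such a floated element flows along the circle; browsers without support simply ignore the shape properties and fall back to the rectangular wrap mentioned below.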

A Last Call draft is the signal that the working group thinks the specification is nearly done and ready for wider review. If the review (which has a deadline of January 7th, 2014) goes well, then the spec goes on to the next step. At that point, the W3C invites implementations in order to further validate the specification. We at Adobe have been implementing CSS Shapes in WebKit and Blink for a while, but this milestone opens the door for more browsers to take up the feature.

This means that 2014 will be the year you’ll see CSS Shapes show up in cutting-edge browsers. The effort then moves to making sure these implementations all match, and that we have a good set of tests available to show interoperability. My hope is that by the end of next year, you’ll be able to use shapes on floats in many browsers (with a fallback to the normal rectangle when needed).

Alan Stearns
Computer Scientist
Web Platform & Authoring

SVG-in-OpenType Enables Color and Animation in Fonts

After two years of discussion and development, the W3C community group (CG) SVG Glyphs for OpenType recently finalized its report for an extension to the OpenType font format. This extension allows glyphs to be defined in the font as SVG (Scalable Vector Graphics). Such OpenType fonts can display colors, gradients, and even animation right “out of the box,” that is, when rendered by a font engine that supports this extension.

While the initial use case, emoji, is described in our Typblography article “SVG in OpenType: Genesis,” this extension to OpenType allows for a broad range of applications: colored illuminated initial capitals, creative display titling, logo and icon fonts, and educational fonts that show kanji stroke ordering and direction. These use cases are central to Adobe’s roots and continuing exploration of how best to thrill customers with rich visual expressiveness, in this case through typographic innovation.

The bulk of the specification was crafted by Adobe and Mozilla. I teamed up with Cameron McCormack and Edwin Flores of Mozilla to edit the report. Chris Lilley of the W3C chaired the CG. Mozilla's implementation in Firefox was the first such implementation (the Typblography article above contains instructions on how to see SVG-in-OT in action in Firefox). We will be presenting the CG's final report to ISO MPEG's Open Font Format (OFF) committee in January 2014 for formal inclusion in their specification. MPEG approval of the specification would mean automatic acceptance into the OpenType specification.

Adobe has contributed extensively to the OFF and OpenType specifications since their inception, starting of course with inclusion of our Compact Font Format (CFF) into OpenType in the mid-nineties. In subsequent years, we proposed several additional tables, table extensions, and features to improve text layout and to accompany advances in Unicode (variation sequences, for example) – extensions which have been implemented by OpenType font engines across the board, including those of Apple, Microsoft, and Google.

Sairus Patel
Senior Computer Scientist
Core Type Technologies


Forking Standards and Document Licensing

There's been quite a bit of controversy in the web standards community over the copyright licensing terms of standards specifications, and whether those terms should allow "forking": allowing anyone to create their own specification, using the words of the original, without notice or explicit permission. Luis Villa, David Baron, and Robin Berjon have written eloquently about this topic.

While a variety of arguments in favor of open licensing of documents have been made, what seems to be missing is a clear separation of the goals and methods of accomplishing those goals.

Developing a Policy on Forking

While some kinds of "allowing forking" are healthy, others are harmful. The "right to fork" may indeed constitute a safeguard against standards groups going awry, just as it does for open source software. The case for using the market to decide rather than arguing in committee is strong. Forking to define something new and better or different is tolerable, because the market can decide between competing standards. However, there are two primary negative consequences of forking that we need to guard against:

  1. Unnecessary proliferation of standards ("The nice thing about standards is there are so many to choose from"). That is, when someone is designing a system and there are several ways to implement something, the question becomes which one to use. If different component or subsystem designers choose different standards, then it's harder to put together new systems that combine them. (For example, it is a problem that Russia's train tracks are a different gauge than European train tracks.) Admittedly, it is hard to decide which forks are "necessary".
  2. Confusion over which fork of the standard is intended. Forking where the new specification is called the same thing and/or uses the same code extension points without differentiation is harmful, because it increases the risk of incompatibility. A “standard” provides a standard definition of a term, and when there is a fork which doesn’t rename or recode, there can be two or more competing definitions for the same term. This situation comes with more difficulties, because the designers of one subsystem might have started with one standard and the designers of another subsystem with another, and think the two subsystems will be compatible, when in fact they are just called the same thing.

The arguments in favor of forking concentrate on asserting that allowing for (1) is a necessary evil, and that the market will correct by choosing one standard over another. However, little has been done to address (2). There are two kinds of confusion:

  1. Humans: when acquiring or building a module to work with others, people use standards as the description of the interfaces that module needs. If there are two versions of the same specification, they might not know which one was meant.
  2. Automation: many interfaces use look-up tables and extension points. If an interface is forked, the same identifier cannot safely be used to indicate two different protocols.

The property of "standard" is not inheritable; any derivative work of a standard must go through the standardization process itself to be called a Standard.

Encouraging wide citation of forking policy

The extended discussions over copyright and document licensing in the W3C seem somewhat misdirected. Copyright by itself is a weak tool for preventing unwanted behavior, especially since standards groups are rarely in a position to enforce copyright claims.

While some might consider trademark and patent rights as other means of discouraging (harmful) forking, these “rights” mechanisms were not designed to solve the forking problem for standards.  More practically, “enforcement” of appropriate behavior will depend primarily on community action to accept or reject implementors who don’t play nice according to expected norms. At the same time, we need to make sure the trailblazers are not at risk.

Copyright can be used to help establish expected norms

To make this work, it is important to work toward a community consensus on what constitutes acceptable and unacceptable forking, and publish it; for example, a W3C Recommendation “Forking W3C Specifications” might include some of the points raised above. Even when standards specifications are made available with a license that allows forking (e.g. the Creative Commons CC-by license), the license statement could also be accompanied by a notice that pointed to the policy on forking.

Of course this wouldn't legally prevent individuals and groups from making forks, but hopefully it would discourage harmful misuse, while still encouraging innovation.

Dave McAllister, Director of Open Source
Larry Masinter, Principal Scientist

Updating HTTP

Much of the excitement about advancing the Web has been around HTML5 (the fifth version of the HyperText Markup Language) and its associated specifications; these describe the appearance and interactive behavior of the Web.

The HyperText Transfer Protocol (HTTP) is the network protocol used to manage the transfer of HTML and other content, as well as the data of the many applications that use HTTP. There has been significant progress in updating the HTTP standard.

The third edition of HTTP/1.1 is nearing completion by the HTTPbis working group of the IETF. This edition is a major rewrite of the specification to fix errors, clarify ambiguities, document compatibility requirements, and prepare for future standards. It represents years of editing and consensus building by Adobe Senior Principal Scientist Roy T. Fielding, along with fellow editors Julian Reschke, Yves Lafon, and Mark Nottingham. The six proposed RFCs define the protocol's major orthogonal features in separate documents, thereby improving its readability and focus for specific implementations and forming the basis for the next step in HTTP evolution.

Now with HTTP/1.1 almost behind us, the Web community has started work on HTTP/2.0 in earnest, with a focus on improved performance, interactivity, and use of network resources.

Progress on HTTP/2.0 has been rapid; a recent HTTPbis meeting at Adobe offices in Hamburg made significant advances on the compression method and on interoperability testing of early implementations. For more details, I've written on why HTTP/2.0 is needed, and I have also sounded some HTTP/2.0 concerns.

Larry Masinter
Principal Scientist

The Internet, Standards, and Intellectual Property

The Internet Society recently issued a paper on "Intellectual Property on the Internet," written by Konstantinos Komaitis, a policy advisor at the Internet Society. As the title indicates, the paper focuses on only one policy issue: the need to reshape the role and position of intellectual property. The central thesis of the paper is that "industry-based initiatives focusing on the enforcement of intellectual property rights should be subjected to periodic independent reviews as related to their efficiency and adherence to due process and the rule of law."

The author cites the August 2012 announcement of "The Modern Paradigm for Standards Development," which recognizes that "the economics of global markets, fueled by technological advancements, drive global deployment of standards regardless of their formal status. In this paradigm, standards support interoperability, foster global competition, are developed through an open participatory process, and are voluntarily adopted globally." These "OpenStand" principles were posited by the Institute of Electrical and Electronics Engineers (IEEE), the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Society, and the World Wide Web Consortium (W3C).

Komaitis conveniently overlooks the nearly 700 other organizations (formal and otherwise) that develop standards. And that nearly all industries depend upon standards. And that governments are aware of the power of standards to create economic policy and drive and sustain economic growth. Instead, the author focuses on one small aspect of standards – intellectual property.

Another issue conveniently overlooked is how to fund standards development. Komaitis asserts that "…industry-based initiatives … should be subjected to periodic independent reviews…" He misses the fact that industry funds nearly all of the standards organizations in existence. Absent industry funding for participants, charging for standards, and acceptance of standards in product creation, the entire standardization arena would become extinct.

The author seems to be arguing for a revision of intellectual property rights (IPR) rules in standardization, when in fact there is no real demand for this from the industry as a whole. Komaitis is really asking for an "intellectual property rights carve-out" for standards related to the Internet. Looking at the big picture, the plea that it is necessary to rejigger worldwide IPR rules to prevent putting the State or courts "in the awkward position of having to prioritize intellectual property rights over the Internet's technical operation…" seems trite and self-serving.

There is a claim that "the Internet Society will continue to advocate for open, multi-participatory and transparent discussions and will be working with all stakeholders in advancing these minimum standards in all intellectual property fora." Perhaps the Internet Society could look at what already exists in the International Organization for Standardization (ISO), the World Trade Organization (WTO), or perhaps even the International Telecommunication Union (ITU) to see how a majority of the "stakeholders" worldwide already deal with these issues, and then maybe get back to actually solving the technical issues at which the IETF excels.

Carl Cargill
Principal Scientist