
Open@Adobe Anniversary

This month marks the fourth anniversary of the Open@Adobe initiative, a site presenting a definitive view of the openness efforts at Adobe. Over the past few years, Adobe has released over 100 pieces of technology under open source licenses and contributed to many notable open source projects. Adobe has also contributed to the community through active membership and chair/co-chair roles in numerous standards bodies such as the IETF, Ecma, and ISO, and through authorship of W3C standards like CSS, WCAG, and ARIA.

Learn more about Adobe’s role in driving innovation and making the Web open in the Open@Adobe Fourth Anniversary blog post.

The Business of Standards, Part Two: The Catalyst for Change

The proliferation of Standards Setting Organizations (SSOs) began in the mid-1980s as a response to the perceived threat that Japan’s Fifth Generation Computer Systems (FGCS) project posed to the U.S. semiconductor industry. Several major U.S. chip and computer makers decided that a joint research initiative would help them meet the threat to U.S. chip-making dominance. Unfortunately, under U.S. antitrust law, joint activities of this type were considered anti-competitive. The remedy was enabling legislation, the National Cooperative Research Act of 1984 (NCRA), Pub. L. No. 98-462, which allowed the creation of consortia for joint research and development. Soon afterward, the Microelectronics and Computer Technology Corporation (MCC) was created to engage in joint research in multiple areas of computer and chip design.

It should be noted that – at this time – the formal standards organizations, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), had significant standardization activities underway, especially in the Open Systems Interconnection (OSI) arena. Recognizing that this environment was becoming more important to business, the leaders of the ISO and IEC IT standards bodies proposed a merger of the ISO and IEC committees, creating the first (and so far, only) Joint Technical Committee (ISO/IEC JTC 1) to make standardization easier.

However, the formalists failed to provide adequate testing and verification of the very complex Open Systems Interconnection standards, and this need was quickly met by the Corporation for Open Systems (COS), a consortium of suppliers and users created in 1986 to ensure that OSI implementations were interoperable. Although short lived (in standards years), COS showed how the NCRA could be used “differently”: it demonstrated that a private organization (a consortium) could accomplish quickly what the formal standards organizations couldn’t – and do it with a highly focused approach that didn’t need all the “international” approvals and compromises.

The late 1980s and early 1990s saw an explosion of similar organizations – all of which were created by companies to “expedite” time to market (as well as, it was hoped, the creation of the market itself). The most successful of these was the Object Management Group (OMG), founded in 1989 to create a heterogeneous distributed object standard. The Manufacturing Automation Protocol and Technical and Office Protocol (MAP/TOP) efforts, championed by GM and Boeing respectively, came to life during this time, as did the User Alliance for Open Systems. There were also consortia created to push a particular provider’s technology (88open Consortium and SPARC International come to mind).

Of course, these groups began to strain the limits of the 1984 cooperative R&D legislation, so Congress modified the law in 1993, passing the National Cooperative Production Amendments of 1993, Pub. L. No. 103-42, which amended the National Cooperative Research Act of 1984, Pub. L. No. 98-462, and renamed it the National Cooperative Research and Production Act of 1993 (NCRPA).

And it is this Act that most consortia use to legitimize their existence. It provides limited immunity from antitrust liability, some cover for otherwise anti-competitive behavior, and a basis for an organizational framework upon which to build your own consortium. However, this is not the whole story behind the “the nice thing about standards is that you have so many to choose from” syndrome. While the tools and mechanisms for creating a consortium were now in place, actually creating one takes a little more effort.

The next post will look at how the “business of standards” has grown in the 20 years since the NCRPA was passed – and how consortia have changed standardization in the Information and Communications Technology (ICT) world.

Carl Cargill
Principal Scientist

Adobe’s RTMFP Profile for Flash Communication Released

Adobe’s Secure Real-Time Media Flow Protocol (RTMFP) is a general-purpose data transport protocol designed for real-time and peer-to-peer (P2P) communication, and it is the foundation of Flash’s P2P capabilities. RTMFP is documented in RFC 7016, published in November 2013 by the Internet Engineering Task Force (IETF).

We have now submitted a companion specification, “Adobe’s RTMFP Profile for Flash Communication,” to the IETF Internet-Drafts repository.

This new specification shows how developers can create applications that interoperate with Adobe Flash and Adobe Media Server in client-server and direct client-to-client modes using RTMFP. We believe RTMFP also has applicability beyond Flash, and this specification can help developers understand how to use RTMFP in their own innovations for next-generation real-time and P2P applications.

Adobe continues to develop technologies that use RTMFP within Adobe Flash and Adobe Media Server, including features like Multicast and Groups, which our customers use today to deliver high-quality video experiences across public and corporate networks.

We are excited to continue making contributions to standards organizations such as the IETF that further Internet technologies for developers and users. As a technology leader, Adobe collaborates with stakeholders from industry, academia and government to develop, drive and support standards in existing and emerging technologies, policy areas, and markets, in order to improve our customers’ experience.

We welcome comments and feedback to help us improve the quality, clarity, and accuracy of this specification and we are excited to see what the Internet community creates with it.

Michael Thornburgh
Senior Computer Scientist

The Business of Standards, Part 1

Andrew S. Tanenbaum, professor of computer science at the Vrije Universiteit in Amsterdam, once said: “The nice thing about standards is that you have so many to choose from.”

I like this quote because, like so many trite sayings, it covers a rather more complex issue that most in the Information and Communications Technology (ICT) arena prefer to ignore. The issue is, simply: why are there so many standards? And beyond that, where do these standards come from, and who pays for them?

The answers are simple: standards don’t appear magically, and they are often created by the very industries that criticize their proliferation. Industry members invest a lot of time, resources, and energy in creating standards. Case in point: Andy Updegrove – a lawyer who helps create consortia – lists 887 ICT consortia in his tracking guide. All of these consortia are funded by companies and individuals who are busily engaged in writing standards.

So why do these companies support such a vast standards industry? Because the act of standardization, if properly managed, can confer competitive advantage. Basic to this idea is that a standard is a change agent – its only function is to change the market in one way or another.

Most often, standards are described as being used to “level the playing field.” This is true only in a commodity arena, such as standard wheat or standard cotton. Nearly everything in the ICT industry that is “standardized” has associated differentiators (from performance to speed to cost) that are vital for market share retention and growth.

However, a company or other entity may occasionally find it difficult to create a differentiator to the current standard for business reasons such as IPR (intellectual property rights) payments, a lack of technical expertise, or even ownership of significant competing technology. In that case, the organization can try to create a competing product that incorporates the (newer/better/more open/other) technology. All the organization needs are enough allies and/or enough market share to support and embrace this competing offering. If it wants to do this more openly, it can create an organization to help.

This scenario has been played out at least 887 times. Every time it is repeated, at least one new Standards Setting Organization (SSO) is created, which in turn sets about creating standards.

Companies find it to their benefit to claim that their product conforms to a standard – it reassures buyers, builds confidence, and allows markets to be opened. However, this also creates a morass of conflicting standards and standards organizations, thereby limiting the value of all standards – both the good and the bad.

One question is: what is the legal basis for this proliferation of standards setting organizations (SSOs)? Well, it turns out that the doctrine of “unanticipated consequences” is to blame.

The next post will examine the roots for this proliferation and how the business of standards started.

Carl Cargill
Principal Scientist

Update: Alignment of Adobe-Approved Trust List (AATL) and EU Trust List (EUTL)

As mentioned in our previous post, Alignment of Adobe-Approved Trust List (AATL) and EU Trust List (EUTL), we have been busy working on the integration of the EU Trust List into Adobe Acrobat and Reader software. Our January 14, 2014 release of Adobe Reader and Acrobat 11.0.06 takes another significant step toward that ultimate goal. In this version of the product, you will notice a new UI for managing the EUTL download. For instance, we’ve added new controls in the Trust Manager Preferences, as shown below.

[Screenshot: the new EUTL download controls in the Trust Manager Preferences]
While we continue with the beta testing phase of this process, general users will not be able to download the EUTL. But as soon as the beta is complete, we’ll move the EUTL into production, where everyone will have access.

Steve Gottwals
Senior Engineering Manager, Information Security

John Jolliffe
Senior Manager, European Government Affairs

The W3C Updates Process for More Agile Standards Development

For much of my 40-year career, the development of standards, both IT standards and those of other fields, proceeded in much the same way. A group, often under the guidance of a national or international standards organization, would work out a proposal, pretty much in private, and submit it to a series of reviews by other people interested in standards. The process under which this happened was quite formalized, and participation was rather limited. Though the process worked, it was originally designed for a time when communication was by mail, and it often took a long time to conclude.

The Internet, e-mail, and other social technologies have changed how we communicate. Experience with software development has changed how we organize and realize projects. Communication is immediate (and often continuous), and software is constructed using agile methodologies that break the work into relevant chunks realized in succession (rather than trying to do the whole project at once and then test and deliver – i.e., the “waterfall model”). It is time for standards development, at least for IT standards, to exploit these cultural changes to make the standards development process more effective.

The World Wide Web Consortium (W3C) has undertaken an update of its process to allow more agile standards development. Over the last five years, the W3C has opened up much of its standards work to public participation on a daily (if desired) basis. But there are aspects of the current W3C Process that act as barriers to more agile standards development. One of these is the assumption that all parts of a potential standard will progress at the same rate; that is, that all parts of a standard will be accepted and reach deployment at the same time.

In the past, a specification would go through a number of public Working Drafts, become a Candidate Recommendation, and finally become a W3C Recommendation. When the relevant Working Group believed that it had completed its work on the specification, it would issue a Last Call. This Last Call was a request to people outside the Working Group for confirmation that the work was complete. All too frequently, this Last Call was both too late (to make changes to the overall structure of the standard) and too early (because many relevant detailed comments were submitted and needed to be processed). This led to multiple “Last Calls.” When these were done, the next step, Candidate Recommendation, was a Call for Implementations. But, for features that were of high interest, experimental implementations began much earlier in the development cycle. It was not implementations, but the lack of a way of showing that the implementations were interoperable, that held up progression to a Recommendation.

So, the W3C is proposing that (where possible) standards be developed in smaller, more manageable units, “modules.” Each module either introduces a new set of features or extends an existing standard. Its size makes it easier to review, to implement, and to specify completely. But even the parts of a module mature at different rates. This means that reviewers need to be told, with each Working Draft, which parts are ready for serious review. This makes a Last Call optional: it is up to the Working Group to show that it has achieved Wide Review of its work product, which means meeting a set of reasonable criteria rather than clearing a single specific hurdle. With this change, Candidate Recommendation becomes the announcement that the specification is both complete and widely reviewed. This is the point at which the final IPR reviews are done and the Membership can assess whether the specification is appropriate to issue as a Recommendation. It is also the point at which the existence of interoperable implementations is demonstrated.

With these changes, it becomes much easier to develop all the aspects of a standard – solid specification, wide review, implementation experience and interoperability demonstrations – in parallel. This will help shorten the time from conception to reliable deployment.

Stephen Zilles
Sr. Computer Scientist

CSS Shapes in Last Call

(reposted from the Web Platform Blog)

The CSS Working Group published a Last Call Working Draft of CSS Shapes Module Level 1 yesterday. This specification describes how to assign an arbitrary shape, such as a circle or polygon, to a float and have inline content wrap around the shape’s contour rather than the boring old float rectangle.
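To give a flavor of what the draft enables, here is a minimal sketch (the class name and dimensions are invented for illustration and are not taken from the specification):

```css
/* Float an element and wrap the surrounding inline content around a
   circular contour instead of the float's rectangular box. */
.callout {
  float: left;
  width: 200px;
  height: 200px;
  shape-outside: circle(50%);  /* the contour inline content wraps around */
  shape-margin: 1em;           /* extra space kept outside the shape */
}
```

In a browser without support, the shape declarations are simply ignored and text wraps around the usual rectangle, which is the fallback behavior mentioned below.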

A Last Call draft is the signal that the working group thinks the specification is nearly done and ready for wider review. If the review (which has a deadline of January 7th, 2014) goes well, then the spec goes on to the next step. At that point, the W3C invites implementations in order to further validate the specification. We at Adobe have been implementing CSS Shapes in WebKit and Blink for a while, but this milestone opens the door for more browsers to take up the feature.

This means that 2014 will be the year you’ll see CSS Shapes show up in cutting-edge browsers. The effort then moves to making sure these implementations all match, and that we have a good set of tests available to show interoperability. My hope is that by the end of next year, you’ll be able to use shapes on floats in many browsers (with a fallback to the normal rectangle when needed).

Alan Stearns
Computer Scientist
Web Platform & Authoring

SVG-in-OpenType Enables Color and Animation in Fonts

After two years of discussion and development, the W3C community group (CG) SVG Glyphs for OpenType recently finalized its report for an extension to the OpenType font format. This extension allows glyphs to be defined in the font as SVG (Scalable Vector Graphics). Such OpenType fonts can display colors, gradients, and even animation right “out of the box,” that is, when rendered by a font engine that supports this extension.
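As a rough sketch of the mechanism (the glyph ID, artwork, and coordinates below are hypothetical, and a real font carries such documents inside its ‘SVG ’ table rather than as standalone files): each SVG document in the font ties its artwork to a glyph ID through an id of the form “glyphNNN”, and a supporting font engine renders that artwork in place of the monochrome outline.

```xml
<!-- Hypothetical SVG document of the kind carried in a font's 'SVG ' table.
     The id "glyph42" binds this artwork to glyph ID 42; the y values are
     negative because the glyph origin (on the baseline) is the SVG origin
     and the SVG y-axis points downward. -->
<svg xmlns="http://www.w3.org/2000/svg">
  <g id="glyph42">
    <!-- a simple color glyph: gold disc with black eyes and a smile -->
    <circle cx="500" cy="-350" r="300" fill="gold"/>
    <circle cx="400" cy="-430" r="35" fill="black"/>
    <circle cx="600" cy="-430" r="35" fill="black"/>
    <path d="M 370,-280 Q 500,-170 630,-280"
          fill="none" stroke="black" stroke-width="30"/>
  </g>
</svg>
```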

While the initial use case, emoji, is described in our Typblography article “SVG in OpenType: Genesis,” this extension to OpenType allows for a broad range of applications: colored illuminated initial capitals, creative display titling, logo and icon fonts, and educational fonts that show kanji stroke ordering and direction. These use cases are central to Adobe’s roots and continuing exploration of how best to thrill customers with rich visual expressiveness, in this case through typographic innovation.

The bulk of the specification was crafted by Adobe and Mozilla. I teamed up with Cameron McCormack and Edwin Flores of Mozilla to edit the report, and Chris Lilley of the W3C chaired the CG. Mozilla’s implementation in Firefox was the first such implementation (the Typblography article above contains instructions on how to see SVG-in-OT in action in Firefox). We will be presenting the CG’s final report to ISO MPEG’s Open Font Format (OFF) committee in January 2014 for formal inclusion in its specification. MPEG approval of the specification would mean automatic acceptance into the OpenType specification.

Adobe has contributed extensively to the OFF and OpenType specifications since their inception, starting of course with inclusion of our Compact Font Format (CFF) into OpenType in the mid-nineties. In subsequent years, we proposed several additional tables, table extensions, and features to improve text layout and to accompany advances in Unicode (variation sequences, for example) – extensions which have been implemented by OpenType font engines across the board, including those of Apple, Microsoft, and Google.

Sairus Patel
Senior Computer Scientist
Core Type Technologies


Forking Standards and Document Licensing

There’s been quite a bit of controversy in the web standards community over the copyright licensing terms of standards specifications and whether those terms should allow “forking”: letting anyone create their own specification, using the words of the original, without notice or explicit permission. Luis Villa, David Baron, and Robin Berjon have written eloquently about this topic.

While a variety of arguments in favor of open licensing of documents have been made, what seems to be missing is a clear separation of the goals and methods of accomplishing those goals.

Developing a Policy on Forking

While some kinds of “allowing forking” are healthy, others are harmful. The “right to fork” may indeed constitute a safeguard against standards groups going awry, just as it does for open source software, and the case for letting the market decide rather than arguing in committee is strong. Forking to define something new and better or different is tolerable, because the market can decide between competing standards. However, there are two primary negative consequences of forking that we need to guard against:

  1. Unnecessary proliferation of standards (“The nice thing about standards is there are so many to choose from”). That is, when someone is designing a system and there are several ways to implement something, the question becomes which one to use. If different component or subsystem designers choose different standards, it is harder to put together new systems that combine them. (For example, it is a problem that Russia’s train tracks are a different gauge than European train tracks.) Admittedly, it is hard to decide which forks are “necessary.”
  2. Confusion over which fork of the standard is intended. Forking where the new specification is called the same thing and/or uses the same code extension points without differentiation is harmful, because it increases the risk of incompatibility. A “standard” provides a standard definition of a term, and when a fork doesn’t rename or recode, there can be two or more competing definitions for the same term. This creates real difficulties: the designers of one subsystem might have started with one standard and the designers of another subsystem with the other, and they may think the two subsystems will be compatible when in fact they merely share the same name.

The arguments in favor of forking concentrate on asserting that allowing for (1) is a necessary evil, and that the market will correct by choosing one standard over another. However, little has been done to address (2). There are two kinds of confusion:

  1. Humans: when acquiring or building a module to work with others, people use standards as the description of the interfaces the module needs. If there are two versions of the same specification, they might not know which one was meant.
  2. Automation: many interfaces use look-up tables and extension points. If an interface is forked, the same identifier can’t be used to indicate two different protocols.

The property of being a “standard” is not inheritable; any derivative work of a standard must itself go through the process of standardization to be called a standard.

Encouraging wide citation of forking policy

The extended discussions over copyright and document licensing in the W3C seem somewhat misdirected. Copyright by itself is a weak tool for preventing unwanted behavior, especially since standards groups are rarely in a position to enforce copyright claims.

While some might consider trademark and patent rights as other means of discouraging (harmful) forking, these “rights” mechanisms were not designed to solve the forking problem for standards.  More practically, “enforcement” of appropriate behavior will depend primarily on community action to accept or reject implementors who don’t play nice according to expected norms. At the same time, we need to make sure the trailblazers are not at risk.

Copyright can be used to help establish expected norms

To make this work, it is important to work toward a community consensus on what constitutes acceptable and unacceptable forking, and to publish it; for example, a W3C Recommendation, “Forking W3C Specifications,” might include some of the points raised above. Even when standards specifications are made available under a license that allows forking (e.g., the Creative Commons CC-BY license), the license statement could be accompanied by a notice that points to the policy on forking.

Of course, this wouldn’t legally prevent individuals and groups from making forks, but it would hopefully discourage harmful misuse while still encouraging innovation.

Dave McAllister, Director of Open Source
Larry Masinter, Principal Scientist