Posts tagged "W3C"

Open@Adobe Anniversary

This month marks the fourth anniversary of the Open@Adobe initiative, a site presenting a definitive view of Adobe's openness efforts. Over the past few years, Adobe has released over 100 pieces of technology under open source licenses and contributed to many notable open source projects. Adobe has also contributed to the community through active membership and chair/co-chair roles in numerous standards bodies such as the IETF, ECMA, and ISO, and through authorship of W3C standards like CSS, WCAG, and ARIA.

Learn more about Adobe’s role in driving innovation and making the Web open in the Open@Adobe Fourth Anniversary blog post.

The W3C Updates Process for More Agile Standards Development

For much of my 40-year career, the development of standards, both IT standards and those of other fields, proceeded in much the same way. A group, often under the guidance of a national or international standards organization, would work out a proposal, largely in private, and submit it to a series of reviews by other people interested in standards. The process under which this happened was quite formalized, and participation was rather limited. Though the process worked, it was originally designed for a time when communication was by mail, and it often took a long time to conclude.

The Internet, e-mail, and other social technologies have changed how we communicate. Experience with software development has changed how we organize and realize projects. Communication is immediate (and often continuous), and software is constructed using agile methodologies that break the work into manageable chunks realized in succession, rather than building the whole project at once and then testing and delivering it (the "waterfall model"). It is time for standards development, at least for IT standards, to exploit these cultural changes and make the standards development process more effective.

The World Wide Web Consortium (W3C) has undertaken an update of its Process to allow more agile standards development. Over the last five years, the W3C has opened up much of its standards work to public participation on a daily (if desired) basis. But there are aspects of the current W3C Process that act as barriers to more agile standards development. One of these is the assumption that all parts of a potential standard will progress at the same rate; that is, that all parts of a standard will be accepted and reach deployment at the same time.

In the past, a specification would go through a number of public Working Drafts, become a Candidate Recommendation, and finally a W3C Recommendation. When the relevant Working Group believed it had completed its work on the specification, it would issue a Last Call: a request that people outside the Working Group confirm the work was complete. All too frequently, this Last Call came both too late (to make changes to the overall structure of the standard) and too early, because many relevant detailed comments were submitted and needed to be processed. This led to multiple "Last Calls." When these were done, the next step, Candidate Recommendation, was a Call for Implementations. But for features of high interest, experimental implementations began much earlier in the development cycle. It was not the lack of implementations, but the lack of a way of showing that the implementations were interoperable, that held up progression to a Recommendation.

So, the W3C is proposing that (where possible) standards be developed in smaller, more manageable units: "modules." Each module either introduces a new set of features or extends an existing standard. Its size makes it easier to review, to implement, and to specify completely. But even the parts of a module mature at different rates. This means that reviewers need to be told, with each Working Draft, which parts are ready for serious review, and it makes a formal Last Call optional. It is up to the Working Group to show that it has achieved Wide Review of its work product, which involves meeting a set of reasonable criteria rather than clearing a single specific hurdle. With this change, Candidate Recommendation becomes the announcement that the specification is both complete and widely reviewed. This is the point at which the final IPR reviews are done and the Membership can assess whether the specification is appropriate to issue as a Recommendation. It is also the point at which the existence of interoperable implementations is demonstrated.

With these changes, it becomes much easier to develop all the aspects of a standard – solid specification, wide review, implementation experience and interoperability demonstrations – in parallel. This will help shorten the time from conception to reliable deployment.

Stephen Zilles
Sr. Computer Scientist

CSS Shapes in Last Call

(reposted from the Web Platform Blog)
The CSS Working Group published a Last Call Working Draft of CSS Shapes Module Level 1 yesterday. This specification describes how to assign an arbitrary shape such as a circle or polygon to a float and to have inline content wrap around the shape’s contour, rather than the boring old float rectangle.
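To make the feature concrete, here is a minimal sketch of the kind of rule the specification enables (the class name and values are illustrative, and experimental builds at the time required vendor prefixes):

```css
/* Float an image and wrap surrounding inline content around
   a circular contour instead of the float's rectangle. */
.portrait {
  float: left;
  width: 200px;
  height: 200px;
  shape-outside: circle(50%);  /* the wrap contour */
  shape-margin: 1em;           /* keep text slightly off the contour */
}

/* A polygon contour works as well, e.g.:
   shape-outside: polygon(0 0, 100% 0, 100% 100%); */
```

A browser that doesn't implement the property simply ignores it and wraps text around the ordinary float rectangle, which is the fallback behavior mentioned below.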

A Last Call draft is the signal that the working group thinks the specification is nearly done and ready for wider review. If the review (which has a deadline of January 7th, 2014) goes well, then the spec goes on to the next step. At that point, the W3C invites implementations in order to further validate the specification. We at Adobe have been implementing CSS Shapes in WebKit and Blink for a while, but this milestone opens the door for more browsers to take up the feature.

This means that 2014 will be the year you’ll see CSS Shapes show up in cutting-edge browsers. The effort then moves to making sure these implementations all match, and that we have a good set of tests available to show interoperability. My hope is that by the end of next year, you’ll be able to use shapes on floats in many browsers (with a fallback to the normal rectangle when needed).

Alan Stearns
Computer Scientist
Web Platform & Authoring

Forking Standards and Document Licensing

There’s been quite a bit of controversy in the web standards community over the copyright licensing terms of standards specifications, and whether those terms should allow “forking”: letting anyone create their own specification, using the words of the original, without notice or explicit permission. Luis Villa, David Baron, and Robin Berjon have written eloquently about this topic.

While a variety of arguments in favor of open licensing of documents have been made, what seems to be missing is a clear separation of the goals from the methods of accomplishing them.

Developing a Policy on Forking

Some kinds of forking are healthy; some are harmful. The “right to fork” may indeed constitute a safeguard against standards groups going awry, just as it does for open source software, and the case for letting the market decide rather than arguing in committee is strong. Forking to define something new, better, or simply different is tolerable, because the market can decide between competing standards. However, there are two primary negative consequences of forking that we need to guard against:

  1. Unnecessary proliferation of standards (“The nice thing about standards is there are so many to choose from”). That is, when someone is designing a system and there are several standard ways to implement something, the question becomes which one to use. If different component or subsystem designers choose different standards, it is harder to put together new systems that combine them. (For example, it is a problem that Russia’s train tracks are a different gauge than European train tracks.) Admittedly, it is hard to decide which forks are “necessary”.
  2. Confusion over which fork of the standard is intended. Forking where the new specification keeps the same name and/or uses the same code extension points without differentiation is harmful, because it increases the risk of incompatibility. A “standard” provides a standard definition of a term; when a fork doesn’t rename or recode, there can be two or more competing definitions for the same term. This creates real difficulties: the designers of one subsystem might have started with one standard and the designers of another subsystem with the other, and think the two subsystems will be compatible, when in fact they merely share a name.

The arguments in favor of forking concentrate on asserting that allowing for (1) is a necessary evil, and that the market will correct by choosing one standard over another. However, little has been done to address (2). There are two kinds of confusion:

  1. Humans: when acquiring or building a module to work with others, people use standards as the description of the interfaces that module needs. If there are two versions of the same specification, they might not know which one was meant.
  2. Automation: many interfaces use look-up tables and extension points. If an interface is forked without recoding, the same identifier can end up indicating two different protocols.

The property of being a “standard” is not inheritable; any derivative work of a standard must itself go through the standardization process to be called a standard.

Encouraging wide citation of forking policy

The extended discussions over copyright and document licensing in the W3C seem somewhat misdirected. Copyright by itself is a weak tool for preventing unwanted behavior, especially since standards groups are rarely in a position to enforce copyright claims.

While some might consider trademark and patent rights as other means of discouraging (harmful) forking, these “rights” mechanisms were not designed to solve the forking problem for standards. More practically, “enforcement” of appropriate behavior will depend primarily on community action to accept or reject implementors who don’t play nice according to expected norms. At the same time, we need to make sure the trailblazers are not at risk.

Copyright can be used to help establish expected norms

To make this work, it is important to build community consensus on what constitutes acceptable and unacceptable forking, and to publish it; for example, a W3C Recommendation “Forking W3C Specifications” might include some of the points raised above. Even when standards specifications are made available under a license that allows forking (e.g., the Creative Commons CC-BY license), the license statement could be accompanied by a notice pointing to the policy on forking.

Of course this wouldn’t legally prevent individuals and groups from making forks, but hopefully would discourage harmful misuse, while still encouraging innovation.
Dave McAllister, Director of Open Source
Larry Masinter, Principal Scientist

The Internet, Standards, and Intellectual Property

The Internet Society recently issued a paper on “Intellectual Property on the Internet,” written by Konstantinos Komaitis, a policy advisor at the Internet Society. As the title indicates, the paper focuses on only one policy issue: the need to reshape the role and position of intellectual property. The central thesis of the paper is that “industry-based initiatives focusing on the enforcement of intellectual property rights should be subjected to periodic independent reviews as related to their efficiency and adherence to due process and the rule of law.”

The author cites the August 2012 announcement of “The Modern Paradigm for Standards Development,” which recognizes that “the economics of global markets, fueled by technological advancements, drive global deployment of standards regardless of their formal status. In this paradigm, standards support interoperability, foster global competition, are developed through an open participatory process, and are voluntarily adopted globally.” These “OpenStand” principles were posited by the Institute of Electrical and Electronics Engineers (IEEE), the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Society, and the World Wide Web Consortium (W3C).

Komaitis conveniently overlooks the nearly 700 other organizations (formal and otherwise) that develop standards; that nearly all industries depend upon standards; and that governments are aware of the power of standards to shape economic policy and to drive and sustain economic growth. Instead, the author focuses on one small aspect of standards: intellectual property.

Another issue conveniently overlooked is how to fund standards development. Komaitis asserts that “…industry-based initiatives…should be subjected to periodic independent reviews…” He misses the fact that industry funds nearly all of the standards organizations in existence. Absent industry funding for participants, charging for standards, and acceptance of standards in product creation, the entire standardization arena would become extinct.

The author seems to be arguing for a revision of intellectual property rights (IPR) rules in standardization – when, in fact, there is no real demand from the industry as a whole. Komaitis is really asking for an “intellectual property rights carve out” for standards related to the Internet. Looking at the big picture, the plea that it is necessary to rejigger world-wide IPR rules to prevent putting the State or courts “in the awkward position of having to prioritize intellectual property rights over the Internet’s technical operation…” seems trite and self-serving.

There is a claim that “the Internet Society will continue to advocate for open, multi-participatory and transparent discussions and will be working with all stakeholders in advancing these minimum standards in all intellectual property fora.” Perhaps the Internet Society could look at what already exists in the International Organization for Standardization (ISO), the World Trade Organization (WTO), or perhaps even the International Telecommunication Union (ITU) to see how a majority of the “stakeholders” worldwide already deal with these issues – and then maybe get back to actually solving the technical issues at which the IETF excels.

Carl Cargill
Principal Scientist


The Role of PDF and Open Data

The open data movement is pushing for organizations, in particular government agencies, to make the raw data they collect openly available to everyone for the common good. Open data has been characterized as the “new oil” driving the digital economy. Gartner claims: “Open data strategies support outside-in business practices that generate growth and innovation.”

What promises to be a very interesting workshop on the topic “Open Data on the Web” is being sponsored by the W3C in London on April 23-24, 2013. I will be attending and will present a talk entitled “The Role of PDF and Open Data,” which explores how PDF (Portable Document Format, ISO standard ISO 32000-1) can be effectively used to deliver raw data.

There is a widespread belief that once data has been rendered into PDF, any hope of accessing or using that data for purposes other than the original presentation is lost. The PDF/raw-data question arises because raw data is usually best represented as comma-separated values (CSV) or in a specific (well documented) XML language.

PDF is arguably the most widely used file format for representing information in a portable and universally deliverable manner. The ability to capture the exact appearance of output from nearly any computer application has made it invaluable for the presentation of author-controlled content.

The challenge has been to find ways to have your cake and eat it too: to have a highly controlled and crafted final presentation and yet keep the ability to reshape the same content into some other form. We know of no perfect solution/format for this problem but there are several ways in which PDF can contribute to solutions, which I have explored in previous blog posts and will expand on in my presentation at the workshop. I hope to see you there.

James C. King
Senior Principal Scientist


Who’s Making the Rules About the Internet?

Governance – who makes the policies and rules (but NOT the technology) on how the Internet runs – is one of the “invisible infrastructure” activities that just happen and keep the Internet from failing. With the explosive growth of the Web and its importance to the world economy (coupled with the fact that the Internet is global and doesn’t recognize national borders), the last decade has seen governments and policy makers look more closely at who actually makes the rules that run the Internet and wonder whether there isn’t a better way. Taxation, censorship, privacy, intellectual property, libel, economic stability, and financial dealings are all aspects of Internet governance the world is coming to recognize. And governments are loath to grant U.S.-based Non-Governmental Organizations (NGOs) (such as ICANN, ISOC, and the IETF[1]) the right to make the fundamental rules that impact these sovereign rights.

Part of the reason for this is the importance of the Internet to the world’s economic well-being. The global (and globalized) economy depends in large part on the instantaneous communications afforded by the Internet, which are now reaching ever-broader audiences.

However, the major impact of the Internet is on the governance of nations – not just of the individuals on the Internet. The “Arab Spring” movement showed the power of the Internet to galvanize public reaction, which can be a worrisome thing to a government trying to maintain economic or political stability. Wikileaks likewise illustrated the power of the Internet to disseminate information unfavorable to governments, with consequences for foreign policy, and malware (e.g., Stuxnet) has become a tool for both industrial and military espionage and sabotage.

But in the international geopolitical arena, governance has expanded to mean more: It also means that all nations have an equal say and stake in the creation of the rules and deployment of the Internet. Note here that the term is “nations” – not individuals – because how the Internet is governed can have a tremendous impact on a nation’s ability to pursue a national growth and governance strategy.

One of the countries most concerned about this area is China, which has noted that the non-regulated nature of the Internet, and what could be viewed as favoritism toward developed countries, poses a large, long-term problem for developing countries. On September 18, 2012, the Chinese government hosted an “emerging nations Internet roundtable,” where issues of Internet governance, content management, and cyber security were discussed. The governments of Russia, Brazil, India, and South Africa all participated, looking together at Internet governance so that the needs of developing nations are taken into consideration.

Following the meeting, a statement was released saying that the participating governments would continue to meet on a regular basis and that consensus had been reached on the following four major topics:

1. Internet governance must become a governmental mandate, and the impact of social networks on society (good, bad, and otherwise) is of particular concern.

2. Content filtering needs increased regulation and legislation to both protect and promote the interests of developing and emerging nations.

3. The whole area of cyber-security needs increased transnational cooperation and coordination – but this must be balanced with the economic needs of emerging states.

4. Emerging and developing nations offer the greatest opportunity for Internet growth, and these nations must take responsibility for managing this growth.

This conference clearly delineates the debate between those who seek an unfettered Internet (open and based in information technology) and those who would like a more regulated model (in the style of the regulated telecommunications industry).

The issue of who controls a tremendously powerful communications force is, of course, a matter of high interest to nearly everyone. But the essential point is that this policy and governance debate is being fought in the standards arena – that is, over who has the right and duty to make the rules about the standards that will drive the next generation of innovation in the massively connected world. Currently, the International Telecommunication Union (ITU)[2] is proposing to assume an increased role in making rules for the Internet, with support from many of the G-30 nations. ISOC, the IETF, the W3C, and the IEEE are responding with the OpenStand (http://open-stand.org) initiative. Europe is moving to recognize consortia specifications – and the national standards bodies (with the implicit support of their governments) are trying to slow and limit this change. We will see the same type of standards-based activity in policy decisions on privacy, security, accessibility, and more. As the world becomes more and more highly interconnected, the need for and control of who creates and who mandates standards – their creation, implementation, testing, and IPR status – will become major issues in national and international policy. And this is the lesson being learned from the Internet governance discussions.

[1] Internet Corporation for Assigned Names and Numbers (ICANN); Internet Society (ISOC); Internet Engineering Task Force (IETF)

[2] The ITU is a specialized U.N. treaty organization responsible for information and communications technology. It creates standards to regulate telecommunications.

W3C Web Platforms Docs: A Standards Perspective

Recently, Adobe, along with many others in the community, helped initiate a major effort to build a common suite of developer-oriented documentation for the Open Web Platform, sponsored by the World Wide Web Consortium (W3C).

One of the problems with standards is that, generally, they are meant more for implementors and less for users of the standards (often by design). Those who actually write standards and work on the committees that create them know that standards are fragile interface descriptions – and this fragility is what demands care in their crafting.

Standards specifications are necessarily quite detailed in order to really promote interoperability and ensure things work the same. And this is where things get sticky. Implementations are based on the standard or the specification, and all standards (well, nearly all) are written in natural language by people who are usually specialists in the technology, as are many of the people who write the developer-oriented documentation.

What’s exciting here is that the Web Platform Docs (WPD) effort is really targeted at the user community to help document the standards in a way that is useful to that community.

But a standard really only gains value when it is implemented and widely deployed. And this is why the WPD is so innovative. WPD is about use and deployment of the standard. It has tutorials on how to use a feature; it has examples of uses. This is the kind of material that the W3C working group does not have time to create, but values. It is what the vendors provide to get their implementations used.

The importance of the site, from a standards point of view, is that it helps build an informed user base. Not at all a simple task.

The Web is evolving – and in its evolution, it is forcing others to change as well. Ten years ago, this type of common activity, open to all (for both contributions and information), would have been, if not unthinkable, at least foreign. With this announcement, the contributors and the W3C have (hopefully) begun to change the way standards are seen – toward an easier and kinder environment. And this is a good thing.

For an Adobe developer’s view, see: What’s one of the biggest things missing from the Web?

Leading the Web Forward: Adobe’s “Create the Web” Event and Open Standards

I recently attended Adobe’s “Create the Web” event in San Francisco on September 24, 2012. One of the things that struck me was the role standards are playing in the tools and technologies announced at that event. Adobe is increasingly delivering standards-based tools to simplify the creation of imaginative content for the Web, as well as contributing technology to the continuing development of the Open Web Platform standards within the World Wide Web Consortium (W3C).
 
Adobe has for many years been one of the primary vendors of tools for creating visual content. Our customers look to us to help them create innovative and effective presentations and content: graphically, textually and interactively. Originally, these tools involved display vehicles created by Adobe, but increasingly, the tools Adobe is providing are moving to standards-based platforms such as the Open Web Platform. For example, the recently announced Edge Animate tool makes the creation of animations using HTML5, CSS and JavaScript much more natural; a user interacts with a graphical display of the objects being animated, and the tool helps the user write the “code” for inclusion of the objects on the user’s Web page.
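Edge Animate writes this kind of code for you, but to give a flavor of the standards involved, here is a hand-written CSS sketch of a simple animation (the selector and keyframe names are illustrative; the tool’s actual generated output differs and is largely JavaScript-driven):

```css
/* Slide a headline in from the left while fading it up -
   a standard CSS animation, no plugin required. */
@keyframes slide-in {
  from { transform: translateX(-100%); opacity: 0; }
  to   { transform: translateX(0);     opacity: 1; }
}

.headline {
  animation: slide-in 1.2s ease-out;
}
```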
 
As the Web platform standards have become available on mobile as well as desktop devices, creating presentations that scale across these devices has become more challenging. The Edge Reflow tool helps create presentations that shift the way the same content is displayed on devices of different sizes using a Cascading Style Sheet (CSS) feature called media queries. PhoneGap Build then allows an author to take a Web platform-based application and package it as a native application that can run on a number of mobile device operating systems.
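As a rough illustration of the media-queries mechanism Edge Reflow builds on (the breakpoint and selectors here are invented for this example, not tool output):

```css
/* Default layout for wide (desktop) viewports. */
article {
  width: 60%;
  margin: 0 auto;
}

/* On narrow (phone-sized) viewports, let the same
   content reflow to fill the full width. */
@media (max-width: 480px) {
  article {
    width: 100%;
    margin: 0;
  }
}
```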
 
But, the Web of today still lacks many of the features Adobe customers have grown to appreciate and use. For that reason, Adobe is very active in extending the Web standards to include those features. In the area of presentation layout, Adobe has submitted proposals to allow a presentation to be constructed from multiple flows of material and to have objects on the page exclude other objects or text to achieve layout effects commonly seen in magazines. In the area of graphics, Adobe is helping to standardize the technologies used to create filters that add pizazz to presentations and to allow various elements to be overlaid, transparently. These efforts are accompanied by open-source demonstration implementations that help vendors supporting the Open Web Platform understand the value of and possibilities around the features being contributed. Adobe is in active partnerships developing these features to lead the Web forward.
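For a flavor of these proposals, here is a sketch using syntax from the working drafts of the time (selectors are illustrative, the drafts later evolved, and experimental implementations shipped only behind vendor prefixes):

```css
/* CSS Regions (as then drafted): pour one named flow of
   content through a chain of region boxes on the page. */
#story      { flow-into: article-flow; }
.region     { flow-from: article-flow; }

/* CSS Exclusions (as then drafted): an element pushes
   surrounding inline content out of its area, magazine-style. */
.pull-quote { wrap-flow: both; }

/* CSS filter effects: visual polish on any element. */
.photo      { filter: grayscale(50%) blur(2px); }
```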
 
Adobe is making a strong statement in support of the Open Web Platform standards. We are developing tools that make it easier to produce content for the Open Web, and we are working to extend that standard to better meet the needs of Adobe customers. Thus, standards are significant in the ways Adobe helps creative professionals, publishers, developers and businesses create, publish, promote and monetize their content anywhere.