Updating HTTP

Much of the excitement about advancing the Web has been around HTML5 (the fifth version of the HyperText Markup Language) and its associated specifications, which describe the appearance and interactive behavior of Web content.

The HyperText Transfer Protocol (HTTP) is the network protocol used to transfer HTML and other content, and it also underpins many applications beyond the browser. There has been significant progress in updating the HTTP standard as well.

The third edition of HTTP/1.1 is nearing completion by the HTTPbis working group of the IETF. This edition is a major rewrite of the specification that fixes errors, clarifies ambiguities, documents compatibility requirements, and prepares the ground for future standards. It represents years of editing and consensus building by Adobe Senior Principal Scientist Roy T. Fielding, along with fellow editors Julian Reschke, Yves Lafon, and Mark Nottingham. The six proposed RFCs define the protocol’s major orthogonal features in separate documents, improving readability, giving implementers focused specifications, and forming the basis for the next step in HTTP’s evolution.

Now with HTTP/1.1 almost behind us, the Web community has started work on HTTP/2.0 in earnest, with a focus on improved performance, interactivity, and use of network resources.

Progress on HTTP/2.0 has been rapid; a recent HTTPbis meeting at Adobe’s offices in Hamburg made significant progress on the header-compression method and on interoperability testing of early implementations. For more detail, I’ve written elsewhere on why HTTP/2.0 is needed, as well as on some HTTP/2.0 concerns.

Larry Masinter
Principal Scientist

The Internet, Standards, and Intellectual Property

The Internet Society recently issued a paper on “Intellectual Property on the Internet,” written by Konstantinos Komaitis, a policy advisor at the Internet Society. As the title indicates, the paper focuses on only one policy issue – the need to reshape the role and position of intellectual property. The central thesis of the paper is that “industry-based initiatives focusing on the enforcement of intellectual property rights should be subjected to periodic independent reviews as related to their efficiency and adherence to due process and the rule of law.”

The author cites the August 2012 announcement of “The Modern Paradigm for Standards Development,” which recognizes that “the economics of global markets, fueled by technological advancements, drive global deployment of standards regardless of their formal status. In this paradigm, standards support interoperability, foster global competition, are developed through an open participatory process, and are voluntarily adopted globally.” These “OpenStand” principles were posited by the Institute of Electrical and Electronics Engineers (IEEE), the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Society, and the World Wide Web Consortium (W3C).

Komaitis conveniently overlooks the nearly 700 other organizations (formal and otherwise) that develop standards, the fact that nearly all industries depend upon standards, and the fact that governments are aware of the power of standards to shape economic policy and to drive and sustain economic growth. Instead, the author focuses on one small aspect of standards – intellectual property.

Another issue conveniently overlooked is how to fund standards development. Komaitis asserts that “industry-based initiatives … should be subjected to periodic independent reviews…” He misses the fact that industry funds nearly all of the standards organizations in existence. Absent industry funding of participants, payment for standards, and acceptance of standards in product creation, the entire standardization arena would become extinct.

The author seems to be arguing for a revision of intellectual property rights (IPR) rules in standardization – when, in fact, there is no real demand from the industry as a whole. Komaitis is really asking for an “intellectual property rights carve out” for standards related to the Internet. Looking at the big picture, the plea that it is necessary to rejigger world-wide IPR rules to prevent putting the State or courts “in the awkward position of having to prioritize intellectual property rights over the Internet’s technical operation…” seems trite and self-serving.

There is a claim that “the Internet Society will continue to advocate for open, multi-participatory and transparent discussions and will be working with all stakeholders in advancing these minimum standards in all intellectual property fora.” Perhaps the Internet Society could look at what already exists in the International Organization for Standardization (ISO), the World Trade Organization (WTO), or perhaps even the International Telecommunication Union (ITU) to see how a majority of the “stakeholders” worldwide already deal with these issues – and then maybe get back to actually solving the technical issues at which the IETF excels.

Carl Cargill
Principal Scientist

 

Linking and the Law

This article contains some thoughts based on “Publishing and Linking on the Web,” which we co-authored with Jeni Tennison and Dan Appelquist for the W3C TAG.

If you type a Web address into your browser, you will most likely be taken to a Web page consisting of text and images. Sometimes you may be taken to a game where you can pretend to be a race car driver or throw stones at pigs, but in most cases you will get a Web page. From the information on the page you may be able to access related material simply by clicking. This capability is what makes the Web the Web.

If you are creating a Web page, you can use material from other sources in different ways. You can provide a link to the material, or you can embed it – include or transclude it – within your own material. These two ways of using material that you did not author are quite different, and they are treated differently by courts.

Here is a page from Wikipedia that includes the picture of a whale from another web site:
[Image: photograph of a blue whale, embedded from another site into the Wikipedia article]

The page above is from http://en.wikipedia.org/wiki/Blue_whale; if you click on the image in Wikipedia, it tells you where the image came from and that it is in the public domain “because it contains materials that originally came from the U.S. National Oceanic and Atmospheric Administration, taken or made as part of an employee’s official duties.”

With embedding, you see the embedded content directly on the page. Linking, on the other hand, requires a user action: the user must click the link to be taken to another Web page. There are advantages to inclusion over linking. If you include material, that material is not going to change out from under you, whereas material at the end of a link may change; in the worst case, it could be replaced by malware.
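To make the distinction concrete, here is a minimal sketch, in plain DOM scripting, of the two approaches; the URLs are hypothetical and simply stand in for the external material.

    // Linking: nothing from the other site is fetched until the user clicks.
    var link = document.createElement('a');
    link.href = 'http://example.org/whale.html';   // hypothetical external page
    link.textContent = 'Picture of a blue whale';
    document.body.appendChild(link);

    // Embedding (transclusion): the browser fetches and displays the remote
    // image immediately, as part of this page.
    var image = document.createElement('img');
    image.src = 'http://example.org/whale.jpg';    // hypothetical external image
    image.alt = 'Blue whale';
    document.body.appendChild(image);

In both cases the whale image stays on the other site’s server; the difference is whether your page merely points at it or presents it as part of your own content.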

In recent years there has been a rash of legal cases relating to linking and embedding. There was, for example, the case of a student residing in the UK who faced possible extradition to the United States for posting, on a Web site that is neither US-based nor primarily intended for US users, links to material that the US considers to be copyrighted. This case also raises the question of jurisdiction (more on that later).

The general principle at play here is the notion of agency. If you link to something, you’re less responsible for it being available than if you include it; and if you transclude something, you’re less responsible than if you include it (that is, than if you copy it into your own material). Most of the questions concern whether you’re responsible for making available information that people don’t want shared (bomb-making instructions, pornography, copyright-infringing material). If you do decide to embed, the material should be attributed and, unless it is a brief quote, requires permission; otherwise, you may be held responsible for copyright violation.

Linking is generally allowed; the argument has been made in several places that restricting linking is like interfering with free speech. The idea is that a hyperlink is nothing more than a reference or footnote, and that the ability to refer to a document is a fundamental element of free speech. A few cases in the U.S. have implied that the act of linking without permission is legally actionable, but these have been overturned.

Still, you need to be careful. The words accompanying a link can express an opinion; for example, the HTML code

    Joe's Bar has <a href="http://joes.bar/menu.html">great food</a>

links “great food” to the bar’s menu, but some opinions may be construed as defamatory or libellous.

 

And then again, the material you link to may be so inflammatory that even minor responsibility might be risky; it’s best not to link to Nazi propaganda, child pornography or “How to Make a Bomb”. Web media has been very effective in political campaigns, but if you link to political material it may be judged to be seditious by some governments, and you may be held responsible.

 

Restricting Links

Even though linking, in general, does not violate copyright, some sites may want to restrict linking to all or part of their content.

This Digital Reader article ridicules an Irish newspaper for trying to charge for merely giving directions on where to find information, but the request for payment is understandable. If you are a newspaper that invests in creating original content, you would like to monetize your investment. The New York Times now allows non-subscribers to read only a certain number of linked articles per month. The Wall Street Journal requires you to subscribe. Other news media have similar policies. So a link may tell you where to find a book, but the library may charge a fee or be accessible only to members.

Incidentally, the links to The Digital Reader article where the Irish newspaper’s policy was reported have since ceased to work. Thus, while a link may not violate copyright, publishers have the right to restrict linking to their content and may impose a number of conditions, such as pay barriers or age verification, that must be satisfied before a link can be followed.

 

Restricting Deep Linking

Many web sites restrict deep linking, i.e., links to pages other than the top page, because deep links can bypass advertising or the site’s legal Terms and Conditions, or because a deep link may leave the source of the material unclear. Legal Terms and Conditions are often used to restrict deep linking, but not only are such terms difficult to enforce, there are also simple technical mechanisms that are more effective, as sketched below.
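One common mechanism is for the server to check the HTTP Referer header and turn away deep requests that do not originate from the site’s own pages. The sketch below uses Node.js with the Express framework purely for illustration (neither is named above, and the host name is hypothetical); note that the check is only a deterrent, since clients may omit or forge the header.

    var express = require('express');
    var app = express();

    // Send deep requests back to the top page unless they came from our own site.
    app.use(function (req, res, next) {
      var referer = req.get('Referer') || '';
      var cameFromUs = referer.indexOf('http://www.example.com/') === 0; // hypothetical host
      if (req.path !== '/' && !cameFromUs) {
        return res.redirect('/');   // deep link bypassed: show the top page instead
      }
      next();
    });

    app.get('/', function (req, res) { res.send('Top page with advertising and T&Cs'); });
    app.get('/articles/42', function (req, res) { res.send('A deep page'); });

    app.listen(3000);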

 

Jurisdiction

The World Wide Web is truly an international phenomenon, and, as we have discussed, linking has been compared to freedom of speech. But there are limits to freedom of speech and, as we discuss above, some uses of external material may lead to legal action. If I live in the U.S. and host a web site in a Scandinavian country that has links to offensive material, where could I be prosecuted? If I host a website in a country that does not have a bilateral copyright agreement with the U.S. and the website includes swaths of U.S. copyrighted material, can I be prosecuted? If so, where? For certain kinds of international disputes, there are agreements that such disputes will be settled by mediation or arbitration. Perhaps we need to formalize a similar capability for the Web.

 

Linking to material that did not originate with you is an essential feature of the Web and one that gives it much of its power. In general, linking to other material, as opposed to inclusion or transclusion, is safe and carries little risk but, as we explain above, it still pays to be careful.

Ashok Malhotra
Standards Professional, Oracle

Larry Masinter
Principal Scientist, Adobe

 

Alignment of Adobe-Approved Trust List (AATL) and EU Trust List (EUTL)

Adobe has long recognized the value of digital signatures as a tool for driving secure transactions in Europe. As a continuation of our previous investments in qualified signature technology, we see the integration of the EU Trust List into Adobe Acrobat and Reader software as the next logical step. Though this sounds like a relatively simple problem, in reality it has taken some time, requiring agreement among a number of stakeholders outside of Adobe. ETSI’s June 19 announcement of TS 119 612 v1.1.1: Electronic Signatures and Infrastructures (ESI); Trusted Lists is the culmination of many months’ work by interested stakeholders, and the first step in creating a solution.

Over the past few years, our commitment to advancements in digital signatures has made Acrobat and Reader one of the most readily available means for EU citizens to receive signed electronic documents based on qualified certificates. Some of our most significant milestones include:

  • Developing the “Adobe-Approved Trust List” (AATL) to ensure that qualified certificates issued by valid Certification Service Providers could be recognized by our products.
  • Working with the European Telecommunications Standards Institute (ETSI) to develop the technical specification for PDF Advanced Electronic Signature (PAdES), incorporated into the Adobe Acrobat PDF Reader product in 2009.
  • Enabling, in Acrobat 9 and later, the manual import of qualified certificates into the trust list within Acrobat or Reader, so that qualified signatures can be validated.

Our approach has had some limitations. Currently, only certificates imported by the user or included in the AATL are “trusted,” and therefore recognized as valid by Adobe software. Other qualified certificates – including those on the national trust lists – are not recognized by Adobe software as legitimate sources. As a result, users and Certification Service Providers are asking Adobe to do more to recognize national trust lists within Adobe software.

ETSI’s announcement of TS 119 612 v1.1.1: Electronic Signatures and Infrastructures (ESI); Trusted Lists is the culmination of many months of work by interested stakeholders, including Adobe, and at last provides a stable means of streamlining the recognition of trust lists within software applications. A key concern has been to ensure that there is a stable standard describing how proprietary trust lists (such as the AATL) interact with national trust lists. This involves a number of separate issues, including:

  • The national trust list description needs to be consistent so that certificates can be read by software applications; otherwise, certificates from certain countries will not be readable.
  • Trust lists are built into a number of software applications, most notably web browsers. A standard is needed to ensure that software applications all react in a consistent way when reconciling certificates that are in both the proprietary trust list and the national trust list.

A stable specification is a significant milestone, as it will allow software manufacturers and vendors, including Adobe, to implement the new features into future versions of their software. From an Adobe perspective we are working through a number of technical considerations. Many of these are unique to Adobe, including:

  • Updates – With hundreds of millions of instances of Acrobat/Reader in the world that could potentially encounter a digital signature that needs validation, sending updates is a non-trivial matter from an engineering and bandwidth perspective.
  • User experience – The same functional version is shipped globally. Since not all users will want or require the EUTL functionality, we are investigating the best way to make this option available, and the frequency with which updates will be offered.

It is not our policy to comment publicly on the roadmap for any of our software; however, we consider these issues entirely solvable and are working hard to find good solutions. More details of specific implementation plans will be made available in due course. In the meantime, we look forward to the adoption of the standard by the EC within the planned new Trust Services Regulation, which will replace the current e-Signatures Directive.

Steve Gottwals
Group Product Manager, Acrobat

John Jolliffe
Senior Manager, European Government Affairs

Adobe Support for Encrypted Media Extensions

Adobe is actively supporting the development of the Encrypted Media Extensions (EME) to the HTML5 standard. We are working on implementations of EME and its companion specification, Media Source Extensions (MSE), and have been regular participants in the task force’s working sessions and email discussions.

HTML has grown to include many capabilities that were previously provided only by browser plugins such as Adobe Flash. As a result, more developers are choosing to build applications using Open Web technologies. However, there are still applications that cannot be built today without extending the browser’s capabilities. The inclusion of the <video> tag in particular has been a huge step forward, but that capability is limited to playing unprotected videos. To enable the playing of protected videos such as feature-length Hollywood films, developers are forced to rely on plugins or non-standard browser extensions. As Adobe supports Open Web development more and more, we need to find a way to provide this capability to developers. I believe the EME specification will help us provide this capability for customers using our Adobe Primetime products.
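As a rough sketch of what this looks like to a Web developer, the EME API lets script attach a content-protection key system to an ordinary <video> element and relay license messages between the browser’s content decryption module and a license server. The key system name and license-server URL below are hypothetical, and the method names reflect the shape the API eventually settled on rather than the exact working draft under discussion here.

    var video = document.querySelector('video');

    navigator.requestMediaKeySystemAccess('org.example.drm', [{      // hypothetical key system
      initDataTypes: ['cenc'],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
    }]).then(function (keySystemAccess) {
      return keySystemAccess.createMediaKeys();
    }).then(function (mediaKeys) {
      return video.setMediaKeys(mediaKeys);
    }).then(function () {
      // The media data signals that it is encrypted; ask the CDM for a license request.
      video.addEventListener('encrypted', function (event) {
        var session = video.mediaKeys.createSession();
        session.addEventListener('message', function (msg) {
          // Relay the CDM's request to a (hypothetical) license server and apply the response.
          var xhr = new XMLHttpRequest();
          xhr.open('POST', 'https://license.example.com/');
          xhr.responseType = 'arraybuffer';
          xhr.onload = function () { session.update(new Uint8Array(xhr.response)); };
          xhr.send(new Uint8Array(msg.message));
        });
        session.generateRequest(event.initDataType, event.initData);
      });
    });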

The EME specification provides benefits to multiple parties. Content providers will benefit from more standardization of the formats used for delivering protected audio and video, lowering their cost of delivering the content. Developers will benefit from easier and faster cross-platform development by leveraging the common Web stack along with the EME APIs. End users will benefit from being able to stay within the familiar browser environment instead of being forced out to standalone proprietary applications. End users may also benefit from increased content options, due to the lower costs to content providers mentioned above. Everyone will benefit from the reduced API surface area (as compared to existing plugin-based solutions) exposed to malicious code on the web.

The EME working group has published its First Public Working Draft. We are working with the group to address the issues that have been raised so far, and constructive comments are welcome. Adobe is working on our own implementations of EME and, once they are ready, we will make them as widely available as possible. Adobe’s goal is to enable more content to flow to more people on more platforms. I believe strongly that this effort will help us toward achieving that goal.

Joe Steele
Sr. Computer Scientist
Runtime Engineering

OpenCL Enables More Compelling and Efficient Applications

We live in a world with an embarrassingly wonderful variety of choices when it comes to processors, monitors, video cards, and other components that make up the computers and devices we use every day. As a user, this variety is great because it encourages vendors to innovate and compete with one another, creating progressively more awesome hardware. As a programmer, this same variety is challenging because each vendor’s components use their own proprietary programming models and interfaces. Writing applications that perform correctly and efficiently on all that hardware would be overwhelmingly difficult without industry standard APIs. For example, just about every graphics accelerator vendor has a different way to program their chips or talk to their accelerators, which is why just about every video card vendor implements OpenGL, an industry standard graphics API. This then allows software developers to write applications that use OpenGL to take advantage of the wide variety of graphic accelerators found on users’ machines without having to write special code for each and every video card in the market.

Over the last ten years, hardware and software vendors have worked to figure out how to do for parallel programming and parallel processors what OpenGL did for graphics programming and graphics accelerator cards. CPU vendors now routinely introduce processors with multiple cores, hyperthreading, and other features that allow software to perform many operations at once. GPU vendors similarly introduce new chips designed to allow their massively parallel compute cores to perform general-purpose programming tasks, in addition to the extreme graphics performance for which they were originally conceived. For a company like Adobe, whose applications need to squeeze every bit of performance and efficiency from every user’s machine, writing programs that run well on each vendor’s architecture and that take advantage of each vendor’s innovations would be inordinately difficult without an open, vendor-neutral API for parallel programming designed to address the variety of processors in the market today and yet to come. Thus it was that, five years ago, Khronos, the same industry consortium that governs OpenGL, convened a new working group to standardize just such an API: the Open Computing Language, more commonly known as OpenCL.

I have the honor and pleasure of being Adobe’s voice and vote in the OpenCL working group. Because OpenCL, like its graphics cousin, OpenGL, is fundamentally implemented by hardware vendors, the OpenCL working group primarily consists of hardware companies. Consequently, I am often asked why Adobe is actively involved in creating an open standard for computing hardware. The answer is deceptively simple:  a good, open standard for parallel computing helps us deliver better software, and merely hoping that OpenCL will be that standard without our input is wishful thinking.

We want to deliver software that is fun to use, runs fast, and consumes as little power as possible, something that is increasingly challenging in today’s fragmented world. We write desktop software that runs on Mac OS and Windows, mobile software that runs on iOS, Android, and Windows 8, and server software that runs on Windows and Linux. Our software uses chips built by different vendors, including NVIDIA, AMD, Intel, and ARM. We also have more than a decade’s worth of experience designing and developing parallel computing systems, Pixel Bender and Halide being two well-known strategic investments Adobe made in this space. Through those experiences, we gained a hard-won perspective: we can be even more successful by joining forces with the community at large, and using an open standard makes it a lot easier to write great software that runs on different operating systems, form factors, and chip sets, allowing us to spend more time building software that runs quickly and efficiently on your computers and devices.

Returning to the original question, we know that open standards are good for the industry and enable us to write great software that runs well on the widest variety of hardware. Even though Adobe does not build hardware and, therefore, does not implement OpenCL, we use OpenCL, and we know what features we need, from OpenCL and from OpenCL hardware, to build even more compelling and efficient applications. So we openly participate in the discussion to develop the standard, adding our experience, perspective, and vision for creating great software running on great hardware, with the goal of shaping and influencing OpenCL to evolve in a way that is advantageous to Adobe’s customers. With our continued participation and influence, and with the cooperation and support of our vendor partners, Adobe has high hopes for the power of the computers and devices in your future, and for the capability, efficiency, and experience of the software you’ll run on them.

Eric Berdahl
Sr. Engineering Manager

 

The Role of PDF and Open Data

The open data movement is pushing for organizations, in particular government agencies, to make the raw data they collect openly available to everyone for the common good. Open data has been characterized as the “new oil” driving the digital economy. Gartner claims: “Open data strategies support outside-in business practices that generate growth and innovation.”

What promises to be a very interesting workshop on the topic “Open Data on the Web” is being sponsored by the W3C in London on April 23-24, 2013. I will be attending and will present a talk entitled “The Role of PDF and Open Data,” which explores how PDF (Portable Document Format, ISO standard ISO 32000-1) can be effectively used to deliver raw data.

There is a widespread belief that once data has been rendered into PDF, any hope of accessing or using that data for purposes other than the original presentation is lost. The PDF/raw-data question arises because raw data is usually best represented as comma-separated values (CSV) or in a specific, well-documented XML language.

PDF is arguably the most widely used file format for representing information in a portable and universally deliverable manner. The ability to capture the exact appearance of output from nearly any computer application has made it invaluable for the presentation of author-controlled content.

The challenge has been to find ways to have your cake and eat it too: to have a highly controlled and crafted final presentation and yet keep the ability to reshape the same content into some other form. We know of no perfect solution/format for this problem but there are several ways in which PDF can contribute to solutions, which I have explored in previous blog posts and will expand on in my presentation at the workshop. I hope to see you there.

James C. King
Senior Principal Scientist

 

Adobe Helps Welcome the ECMAScript Internationalization API

The ECMAScript Internationalization API, an extension to the ECMAScript Language Specification, was recently approved. It provides a much-needed API that helps developers create world-ready applications. The new API has been standardized as ECMA-402.

The new API provides developers with the ability to create language- and region-sensitive objects for the following needs:

  • collation (sorting text)
  • number formatting
  • date and time formatting
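A brief sketch of what the new objects look like in script (the outputs shown in comments are what the German and British English locales produce):

    // Collation: language-sensitive sorting.
    var collator = new Intl.Collator('de', { sensitivity: 'base' });
    ['ä', 'z', 'a'].sort(collator.compare);                  // ['a', 'ä', 'z']

    // Number formatting: locale-aware grouping, decimals, and currency.
    var euros = new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' });
    euros.format(1234.5);                                     // '1.234,50 €'

    // Date and time formatting.
    var longDate = new Intl.DateTimeFormat('en-GB',
        { year: 'numeric', month: 'long', day: 'numeric' });
    longDate.format(new Date(2013, 6, 25));                   // '25 July 2013'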

Many standards groups and companies worked together to create the specification. Although these three sets of internationalization functionality do not represent all the needs of a fully global application, the working group succeeded in creating an ECMAScript language extension that organizations across the software industry could agree on.

Although not all browsers yet support the new API, several have already begun to implement the specification. Companies that have committed to supporting the specification and will implement the API in future versions of their browsers include Google and Mozilla.

John O’Conner
Globalization Architect

Adobe Sponsors EclipseCon


On March 25-28, the Eclipse and OSGi Alliance developer communities will meet in Boston at EclipseCon 2013, co-located with OSGi DevCon 2013. Adobe is proud to be a sponsor of this event. Carsten Ziegeler, Sr. Developer, will present the Apache Sling OSGi Installer, which is used to drive installation and update deployments of Adobe Experience Manager (AEM) 5.6. I will talk about the work of the OSGi Alliance to update the HTTP Service Specification.

These Adobe presentations highlight the value Adobe is delivering to the OSGi community. Not only is the specification work made possible by Adobe’s membership in the OSGi Alliance, but the actual implementation work is being done in the Apache Software Foundation, where Carsten and I are both members. We’re happy to be involved.

If you’re attending EclipseCon or OSGi DevCon 2013, we’ll see you there.

Felix Meschberger, Principal Scientist, OSGi Alliance Board of Directors

Adobe’s ECMA TC-39 Involvement

Adobe is supporting, using, and contributing to Web standards, and ECMAScript is no exception. We’re actively building applications and tools on and for Web technology, and we are becoming keenly aware of the strengths and weaknesses of Web technologies for development. With that in mind, Adobe is returning to the ECMA committee TC-39. During the week of March 11, Adobe’s Bernd Mathiske and Avik Chaudhuri attended the technical committee meeting. (Our ECMAScript team includes language design, type theory, compiler, concurrency, and VM experts.)

Ultimately, Adobe’s goal is to improve the Web experience for everyone. Making developers more productive is a key means to achieve this goal. One way we hope to help is by improving the consistency of ECMAScript across its various implementations. While we’re aware that, in practice, implementation differences come more from browser integration, we also plan to work with browser vendors to improve consistency there. This will help us to develop better tools and make it easier for us to create our own Web applications.

On the JavaScript front, Adobe brings expertise from ActionScript and Flash, Java, Lisp, OCaml, and other languages. We’ll use this expertise to contribute to open-source platforms, and we’ll demonstrate ideas built on existing open-source technology. Eventually, we’d like to see some of this work make it into both open-source and proprietary implementations.

My personal favorite feature that hopefully will become part of ECMAScript 6 is modules. The approach under consideration is highly flexible, versatile, and quite easy to use. It will have a huge effect on making ECMAScript scale to much more complex systems.
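For a flavor of the proposal, here is roughly what modules look like; the syntax was still being refined at the time of writing, and the file names are only illustrative.

    // lib/geometry.js — a module exports only the names it wants to share.
    export var PI = 3.141592653589793;
    export function area(radius) {
      return PI * radius * radius;
    }

    // app.js — another module imports exactly what it needs, by name.
    import { area } from './lib/geometry.js';
    console.log(area(2));   // 12.566...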

We’d like to see a more uniform interface to debugging and profiling data from JavaScript engines, and from browsers and virtual machines in general. We have some exciting ideas and a compelling way of demonstrating them and foresee interest among the other TC-39 participants. We’re researching this now and will start with preliminary specification and prototypes soon.

Performance matters a lot to Adobe, and this might be another area of ECMAScript where we can help. It is mostly an implementation issue, so we’ll contribute to open-source implementations as well. That also means ideas like asm.js and River Trail are compelling for us, and we’re willing to help with standardization on those fronts.
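For illustration, asm.js is an annotated subset of JavaScript: a module opts in with a "use asm" pragma and sticks to operations whose types an engine can verify and compile ahead of time, while remaining ordinary JavaScript everywhere else. The sketch below is a minimal example of that style, not code from any Adobe project.

    function FastMath(stdlib) {
      "use asm";
      var sqrt = stdlib.Math.sqrt;
      function magnitude(x, y) {
        x = +x;                       // parameters are annotated as doubles
        y = +y;
        return +sqrt(x * x + y * y);  // result is a double
      }
      return { magnitude: magnitude };
    }

    // In engines without asm.js support this still runs as plain JavaScript.
    var magnitude = FastMath(this).magnitude;
    magnitude(3, 4);   // 5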

I hope this introduction to our ECMA TC-39 participation is helpful and encouraging to TC-39 and the community at large.

John Pampuch
Director, Languages, Compilers and Virtual Machines