Details Of The JohnnyA MediaTemple Hack

Update: MediaTemple directed me to another blog post with additional details. This highlights another problem with this incident. The information has been spread all over the place. While this blog post does give some good details, it still does not provide cleanup instructions. It simply says that all malicious files have been removed. I’m sorry [...]

SOA vs. REST in Flash Player

After reading this I have to rant a bit. The architecture of Flash has been done right and is in alignment with the core principles of SOA, as defined by the OASIS Reference Model for SOA, an abstract model that describes SOA at much the same level of abstraction as Roy Fielding’s doctoral thesis describes REST. Flash is also in alignment with the core principles of REST. For those who are not familiar with REST, I have written an article called “Understanding REST”. Note that “REST” does not mean “HTTP”.

Disclaimers (to disclose any conflicts of interest): I work for Adobe. I worked on the W3C Web Services Architecture Working Group, several other Web Services standards groups and chaired the OASIS SOA Reference Model Technical Committee for over 3 years. I speak only for myself, based on my ignorant views of the way of the world.

REST is in no way dependent upon HTTP, although section 6.3 of Fielding’s thesis does elaborate on how to apply the principles of REST to HTTP, with several notes on where there are departures. On that note, I’ve always found it odd that HTTP is considered an application layer protocol in the OSI stack; however, I can live with this. TCP/IP are the true transport workhorses, but I have always believed HTTP is tasked largely with delivering messages to HTTP servers. This does, however, raise a difference between the REST style and an SOA approach. SOA uses services as an action boundary between a capability and the consumer. A service is deliberately designed to be as opaque as possible, since the consumer should not necessarily know how the capability is being fulfilled (monkeys on typewriters, a Cray supercomputer… it doesn’t matter). Having an application layer semantic like “DELETE” in a transport protocol (yes – I know it is considered an application layer protocol by the OSI crowd) like HTTP seems to break this architectural principle, since it prescribes a method on the capability (the functionality layer which lives below the service), thus making the service less opaque. This brings up an interesting difference in the architectural ways of SOA vs REST.

The SOA-ish way to architect applications is to keep the semantics of data management out of the transportation workhorse where possible. There is no hard set of rules for this, yet many architects I talk to seem convinced this is the best way to build. Every service has an associated data model and action model for processing the service invocation. The data model is the data, and the processing/action models are where the “verb” of the service invocation is expressed. In SOA, there is also a concept of “reachability and visibility,” which is usually manifested in the real world by a transportation protocol sending electronic signals (usually SOAP with the HTTP binding) to a destination denoted by a URI (very similar to REST).
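
To make this concrete, here is a minimal sketch in Python of a service invocation in that style. The endpoint, namespace, and updateCustomer action are all hypothetical; the point is that the verb rides inside the payload while HTTP just delivers the bytes.

```python
# A minimal sketch, assuming a hypothetical CRM endpoint: the "verb"
# (updateCustomer) travels inside the SOAP body, while HTTP is used
# purely as the delivery workhorse.
import http.client

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <updateCustomer xmlns="urn:example:crm">
      <id>42</id>
      <email>jane@example.com</email>
    </updateCustomer>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPSConnection("services.example.com")
# Always POST: the processing semantics (the action model) live in the
# payload, not in the HTTP method.
conn.request("POST", "/crm/endpoint", body=envelope,
             headers={"Content-Type": "application/soap+xml; charset=utf-8"})
resp = conn.getresponse()
print(resp.status)
```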

I personally think that in a perfect world, the layers of an architecture should be independent of each other. In an SOA world, the transport functionality (usually implemented using SOAP) should focus on just delivering the message and its associated payload(s) to the destination(s), optionally enforcing rules of reliability and security, rather than dictating application layer processing instructions to the service endpoint. This is why, in the W3C Web Services working groups, we chose to use only “GET” and “POST” in the SOAP HTTP binding, as denoted in the SOAP work itself in section 7.4 (don’t believe me? Read it here).
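
Seen from the delivery layer, those two are all a message workhorse really needs. A small illustration using Python’s standard http.client, with a hypothetical host and path; stripped of their semantics, GET and POST are both just a request-response exchange of bytes:

```python
# Illustrative only: once you remove the semantics, GET and POST are
# both "some bytes over the wire" in a request-response exchange.
import http.client

conn = http.client.HTTPConnection("www.example.com")

# GET: no body; any parameters ride in the URL.
conn.request("GET", "/resource?id=42")
resp = conn.getresponse()
print(resp.status)
resp.read()  # drain the response so the connection can be reused

# POST: the same exchange pattern, with the bytes moved into the body.
conn.request("POST", "/resource", body="id=42",
             headers={"Content-Type": "application/x-www-form-urlencoded"})
resp = conn.getresponse()
print(resp.status)
conn.close()
```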

In defense of the author, Flash’s HTTP libraries currently support GET and POST. My architectural view is that the HTTP libraries should really only support these and not worry about the others. HTTP POST and GET themselves are very similar if you remove the semantics (both carry some bytes over the wire in a request-response message exchange pattern). When you start comparing POST and PUT, the confusion gets worse. To understand this better, let’s examine what Roy Fielding wrote in section 6.3:

“The Hypertext Transfer Protocol (HTTP) has a special role in the Web architecture as both the primary application-level protocol for communication between Web components and the only protocol designed specifically for the transfer of resource representations. Unlike URI, there were a large number of changes needed in order for HTTP to support the modern Web architecture. The developers of HTTP implementations have been conservative in their adoption of proposed enhancements, and thus extensions needed to be proven and subjected to standards review before they could be deployed. REST was used to identify problems with the existing HTTP implementations, specify an interoperable subset of that protocol as HTTP/1.0 [19], analyze proposed extensions for HTTP/1.1 [42], and provide motivating rationale for deploying HTTP/1.1.

The key problem areas in HTTP that were identified by REST included planning for the deployment of new protocol versions, separating message parsing from HTTP semantics and the underlying transport layer (TCP), distinguishing between authoritative and non-authoritative responses, fine-grained control of caching, and various aspects of the protocol that failed to be self-descriptive. REST has also been used to model the performance of Web applications based on HTTP and anticipate the impact of such extensions as persistent connections and content negotiation. Finally, REST has been used to limit the scope of standardized HTTP extensions to those that fit within the architectural model, rather than allowing the applications that misuse HTTP to equally influence the standard.”

Sounds a lot like the SOA approach, doesn’t it? So let’s analyze why an enterprise architect (regardless of using the REST or SOA style) would want to stay with these two methods and keep the processing semantics in the payload. If you are architecting a client-service architecture where the cardinality is 1:1, it might make sense to start overloading HTTP with things like “UPDATE,” which some people have proposed. But some time ago, architects realized that keeping the layers of an application as independent from each other as possible is a desirable trait. The rationale is that over time, changes can be made to one layer without affecting the others. When cardinality grows to multiple clients per service, you end up with conflicts in state, in marshaling and ordering requests, and many other issues.

Adding in things like “DELETE” exposes methods publicly that may not be available to all consumers of a service. Specifically for DELETE, in many Flash message exchange patterns (MEPs), this is not an option you want to bestow upon non-authenticated users. If you want to expose this for some consumers of a service, my preferred way would be to build a second service exposing it, but keep the back-end processing semantics in the payload (the SOAP body in many cases). For this reason, I have also been skeptical about WS-TX, as it also exposes back-end semantics in the SOAP header. Another way would be to keep the endpoint the same, but have the other permitted operations exposed and declared in the WSDL instance.
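
Here is a rough sketch of what the payload approach can look like on the service side, in Python with JSON for brevity (where the discussion above would use a SOAP body). The operation names and the authentication flag are hypothetical:

```python
# A sketch of payload dispatch: one opaque endpoint, with the operation
# named in the body and authorization applied per operation.
import json

PUBLIC_OPS = {"read", "create", "update"}
PRIVILEGED_OPS = {"delete"}

def handle_request(body: str, authenticated: bool) -> dict:
    message = json.loads(body)
    op = message.get("operation")
    if op in PRIVILEGED_OPS and not authenticated:
        return {"status": 403, "error": "delete requires authentication"}
    if op not in PUBLIC_OPS | PRIVILEGED_OPS:
        return {"status": 400, "error": "unknown operation"}
    # Invoke the underlying capability here; how it is fulfilled stays
    # hidden behind the service boundary.
    return {"status": 200, "result": op + " accepted"}

# Both calls arrive as plain POSTs; the HTTP layer never sees "DELETE".
print(handle_request('{"operation": "delete", "id": 42}', authenticated=False))
print(handle_request('{"operation": "read", "id": 42}', authenticated=False))
```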

Using POST vs PUT is also something that is confusing. Some have equated PUT with an INSERT in SQL terms and POST with a CREATE. While this is sort of true, in the real world the physical bytes of both operations overwrite the existing copy of a resource, hence it is really an UPDATE in CRUD terms. The author of the other article that started this rant asserts that PUT must be supported for REST. In reality, it is not an explicit requirement. In fact, in section 6.3.3.1 of his thesis, Fielding writes:

“HTTP does not support write-back caching. An HTTP cache cannot assume that what gets written through it is the same as what would be retrievable from a subsequent request for that resource, and thus it cannot cache a PUT request body and reuse it for a later GET response. There are two reasons for this rule: 1) metadata might be generated behind-the-scenes, and 2) access control on later GET requests cannot be determined from the PUT request. However, since write actions using the Web are extremely rare, the lack of write-back caching does not have a significant impact on performance.”

Again, this shows a heavy alignment between the SOA approach to opacity and the REST-afarian approach. Both groups have realized that making assumptions about what sits behind a service endpoint is not optimal. The Flash platform architecture adheres to this methodology.
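
To illustrate the earlier POST/PUT point, here is a toy Python model of why, in CRUD terms, both writes amount to an UPDATE: each one overwrites the bytes stored at some location. The in-memory store and the naming scheme are purely illustrative, not real HTTP:

```python
# Toy model: both a PUT-style and a POST-style write overwrite bytes at
# some location; the difference is who names the resource.
store = {}

def put(uri, body):
    # PUT: the client names the resource; repeating the call is idempotent.
    store[uri] = body

def post(collection, body):
    # POST: the server names the resource; repeating the call creates
    # another one.
    uri = "%s/%d" % (collection, len(store) + 1)
    store[uri] = body
    return uri

put("/docs/readme", b"v1")
put("/docs/readme", b"v1")     # still exactly one resource
first = post("/docs", b"v1")   # /docs/2
second = post("/docs", b"v1")  # /docs/3
print(sorted(store), first, second)
```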

Now on to cookies. HTTP requests are stateless by nature: any new request is not dependent upon knowledge of a previous exchange of messages. Again, most architects I work with prefer not to introduce dependencies where they are not necessary. The reason for this is that services (whether SOA or REST style endpoints) are “slaves” to a higher level of logic where things like the state of an overall process or a long-running orchestration are kept. Keeping the state at the individual service level is a bad idea, as conflicts will arise. This becomes demonstrably troublesome as the cardinality of clients per service or services per process grows.
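
As a sketch of what that looks like in practice, here is a hypothetical stateless service in Python, with the long-running state owned by an orchestration layer and passed explicitly on each call (the process token and reserve_inventory service are invented for illustration):

```python
# Sketch: keep state above the service layer; each call is self-contained.

def reserve_inventory(request):
    # The service holds no memory of earlier calls; everything it needs
    # arrives in this one message.
    return {"process_id": request["process_id"],
            "sku": request["sku"],
            "reserved": True}

# The orchestration layer owns the long-running process state and threads
# a correlation token through each stateless invocation.
process_state = {"process_id": "order-1001", "step": "reserve"}
reply = reserve_inventory({"process_id": process_state["process_id"],
                           "sku": "ABC-123"})
process_state["step"] = "charge"  # state advances in the orchestrator, not the service
print(reply, process_state)
```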

Cookies do offer some good functionality, yet here again the REST-afarians have taken an approach similar to SOA’s:

6.3.4.2 Cookies

An example of where an inappropriate extension has been made to the protocol to support features that contradict the desired properties of the generic interface is the introduction of site-wide state information in the form of HTTP cookies [73]. Cookie interaction fails to match REST’s model of application state, often resulting in confusion for the typical browser application.

Cookies also violate REST because they allow data to be passed without sufficiently identifying its semantics, thus becoming a concern for both security and privacy. The combination of cookies with the Referer [sic] header field makes it possible to track a user as they browse between sites.

Says it all, doesn’t it! Cookies are for the browser and belong in the browser. Having the Flash Player able to access cookies would be a mistake, in my own opinion. Any logic that is facilitated by a browser should probably be dealt with at the browser layer before Flash Player is used.

Conclusions:

1. The original post I responded to was actually valuable, as it got me thinking about the real differences between REST and SOA; but the more I look at it, the more the core architectural principles of REST appear aligned with those of SOA. There are some minor differences.

2. Flash’s architecture is staying true with respect to both REST and SOA. The runtime supports the right level of functionality while balancing out features. There is NO explicit requirement to support DELETE and PUT in REST as it is currently written. If you do not believe me, see for yourself here.

3. To be fair to the author of the article, I would invite a discussion about what the end goal was, to see if there is some use case that the Flash Player has missed.

Have a great weekend!

DECE shifts to UltraViolet

Earlier this week, an industry consortium in which Adobe is a founder made some significant announcements. I wanted to help readers of this blog parse the information that was shared and also provide the Adobe/Flash/Flash Access perspective.

The group is known as DECE, or Digital Entertainment Content Ecosystem, and I’ve written about it before. As of this week, we’ve announced a much more user-friendly brand: UltraViolet. From a purely personal perspective, I have to say that this brand is growing on me; the geek in me likes the implied “beyond blue” (read: Blu-ray). There’s even a website, www.uvvu.com, and an associated logo.

You may be asking yourself: so… what does this really mean to me? I think the biggest winner here is the consumer. The model behind UltraViolet, and its main reason for existence, is to create a more seamless experience for purchasing and enjoying premium digital content. In the UVVU universe, a user can buy content from different retailers and have it play on different devices.

This seems like a pretty obvious thing, but today’s electronic content distribution ecosystem, based on silos where a device is “captive” to a given content service, does not reflect it. Imagine if you needed a stack of different DVD/Blu-ray players for content from different studios that you buy from different retailers. That would be crazy, right? Well, that’s the status quo today for electronic content distribution, which UltraViolet hopes to overcome.

What’s in it for the close to 60 member companies from different industries participating in DECE/UVVU? Our shared vision is to create a much bigger pie for electronic content distribution (which today only represents a small percentage of all film/video content sold) by removing some of these artificial barriers. By creating the basic infrastructure, UltraViolet also creates opportunities for innovation in business models by everyone who wants to participate. (You don’t need to be a DECE member in order to offer UltraViolet products or services.)

Is this a done deal? My opinion is that it is still very early days for electronic content distribution in general, and UltraViolet in particular. I’m convinced that in the next several years we will see significant innovation in the content distribution space. In times of significant churn in business models, key players, technologies and consumer expectations, such as the one we live in right now, it is hard to predict what will become the new normal. I believe in the vision of UVVU; now we need to see some actual market adoption and see how well everyone executes to deliver on the vision.

From Adobe’s perspective, we see DECE/UltraViolet as highly complementary to our efforts to help drive rich user experiences around content. For instance, the Open Screen Project is an Adobe-led initiative with close to 80 members (many of them also participating in DECE) working together to help establish a consistent execution runtime across a wide range of devices.

More specifically, DECE’s adoption of Flash Access as an approved content protection solution means that UltraViolet content will be able to flow to Flash-enabled PCs and other devices. Flash Access 2.0 shipped in May of this year, and is supported in Flash Player 10.1 and Adobe AIR 2.0, which shipped in June. Conversely, the ability for people to create interactive experiences around UltraViolet content using the #1 platform for video on the Web means that DECE gets very broad reach right from the start. Everyone wins, especially consumers who will soon be able to purchase premium video without having to worry about which device it will play on. Well, mostly, as some device manufacturers may have their own reasons to not play in this ecosystem.

Florian Pestoni
Principal Product Manager
Adobe Systems

Confirmed: Apple’s “Magic” Trackpad works with AIR 2.0

Yesterday I saw a tweet from Ralph Hauwert, who was wondering whether Apple’s Magic Trackpad would work with AIR 2.0. You probably already know that AIR 2.0 supports multitouch and gestures. The trackpad on a recent MacBook Pro supports gestures, and these work nicely in AIR 2.0. So… My hunch was that the “Magic” Trackpad… [...]

Video-on-Demand over P2P in Flash Player 10.1 with Object Replication

In the previous tutorial, File Sharing over P2P in Flash Player 10.1 with Object Replication, we went through the Object Replication basics. There you can see that the Receiver requests packets one by one. That’s not suitable for a real-world app, but it’s good for testing on a LAN to see the progress. [...]