Archive for November, 2009

Upcoming Beta Program on the way to OSMF 1.0

With nearly eight sprints of development under our belt, it’s time
for an update on the release plan for OSMF 1.0, which is scheduled to land in Q2 2010.  The focus of this post
is to preview the upcoming beta program that starts in late January.

First, a couple of small adjustments to the immediate schedule: to
accommodate the Thanksgiving and end-of-year holidays, we've extended
Sprints 8 and 9.  As a result, Sprint 8 will finish in mid-December, and
Sprint 9 at the end of January.

In fact, the Sprint 9 release at the end of January will represent a major
milestone on the road to version 1.0: Beta 1.  This release will be mostly
feature complete (HTTP streaming support will still be in progress), but more
importantly it will include stable APIs.  After Beta 1, Beta 2 will follow,
then a Release Candidate, and finally 1.0 in Q2.

The goal of providing a set of beta releases is to show the
developer community that the framework is ready for prime time, and
that you can use it to start building real-world media players with a
reasonable expectation that your code will continue to work into the
future.  It's also your last chance to let us know that something needs to
be changed or fixed before we release 1.0.  The beta program is
essentially a dress rehearsal for the APIs we will support going forward.

Between now and the end of January, we're taking a number of steps
to ensure that we have a solid API for Beta 1.  We've been
conducting detailed reviews with the API review board here at Adobe in
addition to several team reviews, and we're also writing
real-world player applications to vet the API.  In addition, we'll be
taking a close look at performance and package/class-level dependencies
with an eye towards their impact on the public API.  In short, Beta 1 will be
the milestone by which we've evaluated and triaged all the feedback
received to date and made the API changes necessary for 1.0.

Unless you tell us something is missing, the Beta 1 API will be what ships
in 1.0.  We encourage you to put OSMF through its paces and use it to build a real-world player for your production website.  If
something doesn't work quite right or your use case isn't enabled,
we want to hear about it!  (You can file bugs and enhancement requests in JIRA.)

More details on the beta program to follow in
the coming months…

For an up-to-date summary of the OSMF feature set and release schedule, check out:
http://opensource.adobe.com/wiki/display/osmf/Features

Happy Thanksgiving!

Cue Point Support in OSMF


Guest post from Charles Newman of Akamai, who designed and implemented OSMF’s new cue point feature.


OSMF v0.7 includes new functionality allowing you to create, inspect, and react to temporal metadata, either embedded in the media at encoding time or added to the media element at run time.

Since cue points are essentially temporal metadata, we decided to provide a generic solution rather than limit ourselves (and you) to cue points for video elements: temporal metadata can be applied to any media element in OSMF, not just cue points on video elements.

Types of Cue Points

Cue points come in three flavors:

- Event: meant to trigger some specific action when the player reaches the cue point, such as displaying a caption or controlling an animation.
- Navigation: allows seeking to a specific point in the media, such as a chapter or a sequence. The encoding software creates a key frame at the position of the cue point.
- ActionScript: external cue points created at run time; the player needs code to watch for these cue points.

Event and Navigation cue points are added at encoding time; ActionScript cue points are added at run time.

Easing Your Pain

This new support for temporal metadata in OSMF solves a few pain points and enables other features, such as closed captioning, to be built on top of this core functionality. Two specific pain points:

1) F4V encodes created with CS4 or earlier products do not fire in-stream cue point events; you need to extract the cue point information from your onMetaData() handler, create a timer to watch for cue points, and then dispatch a custom event.

2) In order for your player to react to ActionScript cue points, as mentioned in 1) above, you have to write some code, which may not be trivial depending on whether you want to optimize the timer, support seeking, etc.

F4V files are H.264 encodes in an FLV wrapper. To react to your embedded Event cue points, you need to read the array of cue points in your onMetaData() handler, create a timer, watch the NetStream time, and dispatch your own event.

For ActionScript cue points, you need to do the same thing, but also make sure the cue points in your internal collection are sorted by time.

The new temporal metadata support in OSMF v0.7 handles all of this for you with a new metadata facet class called TemporalFacet.
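To appreciate what TemporalFacet replaces, here is a rough sketch of the manual approach described above. This is a minimal sketch, assuming a player class (an EventDispatcher) with a netStream member; the 100 ms polling interval and the custom CuePointEvent class are illustrative, not part of any API:

private var cuePoints:Array;
private var nextIndex:int = 0;
private var timer:Timer;

private function onMetaData(info:Object):void
{
    // Embedded cue points arrive, sorted by time, in the metadata object.
    cuePoints = info.cuePoints as Array;
    timer = new Timer(100);  // poll the stream time every 100 ms
    timer.addEventListener(TimerEvent.TIMER, checkCuePoints);
    timer.start();
}

private function checkCuePoints(event:TimerEvent):void
{
    // Fire our own event for each cue point the playhead has passed.
    while (cuePoints != null && nextIndex < cuePoints.length
           && cuePoints[nextIndex].time <= netStream.time)
    {
        dispatchEvent(new CuePointEvent(CuePointEvent.CUE_POINT, cuePoints[nextIndex]));
        nextIndex++;
    }
}

Note that this naive version doesn't handle seeking or optimize the timer interval, which is exactly the non-trivial work called out in 2) above.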

 

If you are unfamiliar with metadata support in OSMF, here is a brief description (for more info you can read the spec at http://opensource.adobe.com/wiki/display/osmf/Metadata+Support):

- Metadata can be added to a media element or a media resource.
- All metadata is organized by namespace, guaranteeing uniqueness and allowing several different types of metadata to be added to a media element or its resource.
- In addition to a namespace, a metadata instance has a facet type.
- The facet type describes what the metadata holds. For example, KeyValueFacet is a concrete class containing a collection of key/value pairs, which lets you easily add key/value pairs as metadata to a media element or a media resource (see the sketch after this list).
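For instance, attaching a simple key/value pair to a video element might look like the following minimal sketch. It assumes the v0.7 KeyValueFacet and ObjectIdentifier signatures; treat the exact calls and the namespace URL as illustrative:

var facet:KeyValueFacet = new KeyValueFacet(new URL("http://example.com/custom"));
facet.addValue(new ObjectIdentifier("title"), "My Video Title");  // one key/value pair
videoElement.metadata.addFacet(facet);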

 

The New Classes

Here are the new classes that implement the temporal metadata support, along with a brief description of each:

org.osmf.metadata.TemporalIdentifier

This is the base class for temporal metadata; it defines time and duration properties. The new CuePoint class extends this class.

org.osmf.metadata.TemporalFacetEvent

The TemporalFacet dispatches this event. There are specific events for “position reached” and “duration reached”.

org.osmf.metadata.TemporalFacet

This class is essentially the temporal metadata manager. It manages temporal metadata of type TemporalIdentifier associated with a media element, and dispatches events of type TemporalFacetEvent when the playhead position of the media element matches any of the time values in its collection of TemporalIdentifier objects. In other words, this is the code you would otherwise need to write yourself to handle F4V Event cue points and ActionScript cue points in your player.

The TemporalFacet class uses an optimized algorithm for adding and watching for the time values in its internal collection of TemporalIdentifier objects (a sketch of the insertion step follows the list below). Some of the ways the algorithm is optimized:

- Uses a binary search to insert items into its collection of TemporalIdentifier objects (sorted by time), rather than calling a sort method on each insert, so inserting items in any order is very fast.
- Stops the timer when the user pauses playback and restarts it when the user resumes.
- Optimizes the timer interval by looking ahead to the next cue point (there is no reason to keep checking every 100 milliseconds, for example, when the next cue point is 15 seconds away).
- Keeps track of the last cue point fired so it doesn't need to search its entire collection of cue points.
- Reliably dispatches the correct TemporalFacetEvent when the user seeks.
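The sorted insertion is plain binary search. Here is a minimal illustrative sketch (not the actual OSMF source) of inserting into a time-sorted collection:

private function insertSorted(items:Vector.<TemporalIdentifier>, item:TemporalIdentifier):void
{
    // Find the first index whose time is >= item.time, then insert there.
    var lo:int = 0;
    var hi:int = items.length;
    while (lo < hi)
    {
        var mid:int = (lo + hi) >> 1;
        if (items[mid].time < item.time)
            lo = mid + 1;
        else
            hi = mid;
    }
    items.splice(lo, 0, item);
}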

 

 

org.osmf.video.CuePoint

This class extends TemporalIdentifier to provide a more standard cue point model for video cue points. It contains the properties name, type, and parameters (the parameters property returns an array of key/value pairs added at encode time or run time).

The New Cue Point Sample Application

The cue point sample app in the OSMF v0.7 release demonstrates the following:

- Loads a video and populates a data grid (in the upper right of the sample app) with the embedded Event and Navigation cue points found in the onMetaData() handler. You can sort this grid by time. The purpose of the grid is to let you navigate using the Navigation cue points and to verify that the TemporalFacet class is working correctly by showing you the events you should be receiving.
- Clicking a Navigation cue point in the grid takes you to that position (key frame) in the video. These could represent chapters or sequences.
- Shows the ActionScript and Event cue points (in the lower left of the sample app) as received by the player code (the events are dispatched by the TemporalFacet class at run time).
- Allows you to add ActionScript cue points at run time (in the lower right of the sample app) and see those events being fired. As you hit the “Add” button you will see the ActionScript cue point added to the data grid (note you may need to click on the Time column to force a sort). If you enter a duplicate, only the last one is retained.

How to Listen for Cue Points

The first step is to listen for metadata facets being added to your media element:

videoElement = new VideoElement(new NetLoader(), new URLResource(new URL(MY_STREAM)));

videoElement.metadata.addEventListener(MetadataEvent.FACET_ADD, onFacetAdd);

When the TemporalFacet is added to your media element, you can start listening for the TemporalFacetEvent.POSITION_REACHED event:

private function onFacetAdd(event:MetadataEvent):void
{
    var facet:TemporalFacet = event.facet as TemporalFacet;
    if (facet)
    {
        facet.addEventListener(TemporalFacetEvent.POSITION_REACHED, onCuePoint);
    }
}
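In the onCuePoint handler you can then inspect the cue point that was reached. A minimal sketch follows; it assumes the TemporalFacetEvent exposes the matched item via a value property (verify the exact property name against the v0.7 ASDoc):

private function onCuePoint(event:TemporalFacetEvent):void
{
    // The "value" property name is an assumption; check the v0.7 API docs.
    var cuePoint:CuePoint = event.value as CuePoint;
    if (cuePoint)
    {
        trace("Cue point reached: " + cuePoint.name + " at " + cuePoint.time + "s");
    }
}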

 

 

How to Add Cue Points at Run-time

Create a new TemporalFacet with your own unique namespace, add the facet to the metadata for the video element, and then add your cue point to the facet:

_temporalFacet = new TemporalFacet(new URL(CUSTOM_NAMESPACE), videoElement);
videoElement.metadata.addFacet(_temporalFacet);

var cuePoint:CuePoint = new CuePoint(CuePointType.ACTIONSCRIPT,
                                     121,  // time in seconds
                                     'my test cue point',
                                     null);
_temporalFacet.addValue(cuePoint);

When you add the facet to the metadata for the media element, you will get the MetadataEvent.FACET_ADD event as shown in the previous example. In that event handler, you can create a listener just for your namespace, or use one event listener for cue points coming from all namespaces.
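For example, to react only to cue points from your own namespace, you can filter in the FACET_ADD handler. A minimal sketch, assuming the facet exposes its namespace as a namespaceURL property of OSMF's URL type:

private function onFacetAdd(event:MetadataEvent):void
{
    var facet:TemporalFacet = event.facet as TemporalFacet;
    // namespaceURL and rawUrl are assumptions here; verify against the v0.7 API.
    if (facet && facet.namespaceURL.rawUrl == CUSTOM_NAMESPACE)
    {
        facet.addEventListener(TemporalFacetEvent.POSITION_REACHED, onMyCuePoint);
    }
}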

 

What’s Next

As I mentioned earlier, this new functionality lays the groundwork for more useful and exciting OSMF features and plugins. Next up: closed captioning.

OSMF v0.7 available

Another month, another OSMF release!  Here are the highlights from the latest drop:

  • Cue Point Support.  OSMF now has support for all types of cue points (event, navigation, and AS).  Cue point support is built on top of our new support for temporal metadata, which can serve as the basis for defining (and responding to) metadata along the media’s timeline.  (We’ll cover this in greater detail in a separate post.)
  • Tracking Download Progress. We’ve introduced a new trait for following the download progress (in terms of bytes) of media.  The relevant properties and events are exposed on the MediaPlayer class, and can be used to represent the portion of the media which is immediately seekable (e.g. the red seek track in YouTube’s player).  (A brief sketch follows this list.)
  • Package Renaming + Refactorings.  You may notice that the package for the OSMF classes is now “org.osmf”.  (Yes, we finally got around to fixing that!)  As part of this renaming, we took the opportunity to do some general cleanup of code and APIs.
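As a rough illustration of how a player might consume the download progress information: the property and event names below (bytesLoaded, bytesTotal, LoadEvent.BYTES_LOADED_CHANGE) are assumptions for this sketch, since they aren't spelled out above; check the ASDoc for the exact v0.7 API.

// Hypothetical sketch: drive a YouTube-style seek track from download progress.
mediaPlayer.addEventListener(LoadEvent.BYTES_LOADED_CHANGE, onBytesLoaded);

private function onBytesLoaded(event:LoadEvent):void
{
    // bytesLoaded/bytesTotal names are assumed; verify against the v0.7 API.
    if (mediaPlayer.bytesTotal > 0)
    {
        seekTrack.width = fullTrackWidth * mediaPlayer.bytesLoaded / mediaPlayer.bytesTotal;
    }
}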

In addition, this drop contains an initial implementation of some content protection features in support of Flash Access 2.0, Adobe’s Digital Rights Management solution for the Flash platform.  These features require a new version of the Flash Player, version 10.1, which is not yet publicly available.  For information about participating in the Flash Access private prerelease program, please contact Kelly Miller (kelmille@adobe.com).

And now to the links:

Simple Stand Alone Video Player

Recently we were asked if we’d ever created an OSMF-based desktop player capable of switching back and forth between windowed and full-screen mode. The answer was “no”, but we figured that by reusing some of the sample code from our recent MAX presentation, building one shouldn’t be too much work – so we set out to create it as a sample.

The resulting very simple ‘proof-of-concept’ AIR-based OSMF Desktop Player is available for download here. The source files are available right here.

Operating Instructions
(please note that the application is not a supported product!)

  • To load a video, double click the player’s chrome.
  • To switch between windowed mode and full-screen, press the ‘f’ key.
  • To quit the player, bring it into focus and press ALT-F4 on Windows, or CMD-Q on OS X.
  • The player doesn’t scrub: the progress bar merely indicates the play-head’s position.

The remainder of this post goes over the sample’s code. We’ll touch on how to bind a UI to a media element using traits, and how to put a viewable media element on the stage by setting its gateway property to a RegionGateway instance.
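As a taste of the gateway mechanism, here is a minimal sketch of putting a viewable element on the stage. It assumes RegionGateway is a display object you can add to the display list, and reuses the MY_STREAM placeholder from the earlier examples:

// A minimal sketch, assuming the v0.7 gateway API described above.
var region:RegionGateway = new RegionGateway();
region.width = 640;
region.height = 480;
addChild(region);  // assumes RegionGateway can go straight on the display list

var videoElement:VideoElement = new VideoElement(new NetLoader(),
                                                 new URLResource(new URL(MY_STREAM)));
videoElement.gateway = region;  // routes the element's view into the region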

Continue reading…