Cue Point Support in OSMF

Guest post from Charles Newman of Akamai, who designed and implemented OSMF’s new cue point feature.


OSMF v0.7 includes new functionality allowing you to create, inspect, and react to temporal metadata, either embedded in the media at encoding time or added to the media element at run time.

Since cue points are essentially temporal metadata, we decided to provide a generic solution for temporal metadata rather than limit ourselves (and you) to cue points for video elements. Temporal metadata can therefore be applied to any media element in OSMF; you are not limited to cue points on video elements.

Types of Cue Points

 

Cue points come in three flavors:

- Event: Meant to trigger some specific action when the player reaches the cue point, such as displaying a caption, controlling an animation, etc.

- Navigation: Allows seeking to a specific point in the media, such as a chapter or a sequence. The encoding software creates a key frame at the position of the cue point.

- ActionScript: External cue points created at run time; your code must watch for these cue points.

 

Event
and Navigation cue points are added at encoding time.  ActionScript cue points are added at
run-time.

 

Easing Your Pain

 

This new support for temporal metadata in OSMF solves a few pain points and enables other features, such as closed captioning, to be built on this core functionality. Two specific pain points:

 

1) F4V encodes created with a CS4 or earlier product do not fire in-stream cue point events. You need to extract the cue point information in your onMetaData() handler, create a timer to watch for cue points, and then dispatch a custom event.

2) For your player to react to ActionScript cue points, as mentioned in 1) above, you have to write some code, which may not be trivial depending on whether you want to optimize the timer, support seeking, and so on.

 

F4V files are H.264 encodes in an FLV wrapper. To react to your embedded event cue points, you need to read the array of cue points in your onMetaData() handler, create a timer, watch the NetStream time, and dispatch your own event.
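For reference, the manual approach looks something like the sketch below. It assumes the class extends EventDispatcher, that `ns` is your connected NetStream, and that CuePointEvent is a hypothetical custom event class you would write yourself:

```actionscript
// Simplified sketch of the manual approach OSMF now handles for you.
// Assumes: this class extends EventDispatcher, "ns" is a connected
// NetStream, and CuePointEvent is your own custom event class.
private var cuePoints:Array;
private var timer:Timer;

public function onMetaData(info:Object):void
{
    // F4V metadata exposes embedded cue points as an array of plain
    // objects with time, name, type, and parameters properties.
    cuePoints = info.cuePoints.concat();
    cuePoints.sortOn("time", Array.NUMERIC);

    timer = new Timer(100);  // poll every 100 ms
    timer.addEventListener(TimerEvent.TIMER, checkCuePoints);
    timer.start();
}

private function checkCuePoints(event:TimerEvent):void
{
    // Fire a custom event for each cue point the playhead has passed.
    while (cuePoints.length > 0 && ns.time >= cuePoints[0].time)
    {
        dispatchEvent(new CuePointEvent(CuePointEvent.CUE_POINT, cuePoints.shift()));
    }
}
```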

 

For ActionScript cue points, you need to do the same thing, but also make sure the cue points are sorted by time in your internal collection.

 

The new temporal metadata support in OSMF v0.7 handles all of this for you with a new metadata facet class called TemporalFacet.

 

If you are unfamiliar with metadata support in OSMF, here is a brief description (for more information, see the spec at http://opensource.adobe.com/wiki/display/osmf/Metadata+Support):

 

- Metadata can be added to a media element or a media resource.

- All metadata is organized by namespaces, guaranteeing uniqueness and allowing several different types of metadata to be added to a media element or its resource.

- In addition to a namespace, a metadata instance has a facet type.

- The facet type describes what the metadata holds. For example, KeyValueFacet is a concrete class containing a collection of key/value pairs; it allows you to easily add key/value pairs as metadata to a media element or a media resource.
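As a quick illustration, attaching a key/value pair to a media element might look like the sketch below. This is based on my reading of the v0.7 API; METADATA_NAMESPACE is an arbitrary unique URL string of your choosing:

```actionscript
// Sketch: attaching key/value metadata to a media element.
// METADATA_NAMESPACE is any unique URL string you choose.
var facet:KeyValueFacet = new KeyValueFacet(new URL(METADATA_NAMESPACE));
facet.addValue(new ObjectIdentifier("title"), "My Video");
videoElement.metadata.addFacet(facet);
```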

 

The New Classes

 

Here
are the new classes that implement the new temporal metadata support along with
a brief description:

 

org.osmf.metadata.TemporalIdentifier

 

This is the base class for temporal metadata; it defines time and duration properties. The new CuePoint class extends this class.

 

org.osmf.metadata.TemporalFacetEvent

 

The TemporalFacet dispatches this event. There are specific event types for “position reached” and “duration reached”.

 

org.osmf.metadata.TemporalFacet

 

This class is essentially the temporal metadata manager. It manages temporal metadata of type TemporalIdentifier associated with a media element and dispatches events of type TemporalFacetEvent when the playhead position of the media element matches any of the time values in its collection of TemporalIdentifier objects. Basically, this is the code you would otherwise need to write to handle F4V event cue points and ActionScript cue points in your player.

 

The TemporalFacet class uses an optimized algorithm for adding and watching for the time values in its internal collection of TemporalIdentifier objects. Here are some of the ways the algorithm is optimized:

- Uses a binary search to insert items into its collection of TemporalIdentifier objects (sorted by time), rather than calling a sort method on each insert, so inserting items in any order is very fast.

- Stops the timer when the user pauses playback and restarts it when the user resumes.

- Optimizes the timer interval by looking ahead to the next cue point (there is no reason to keep checking every 100 milliseconds, for example, when the next cue point is 15 seconds away).

- Keeps track of the last cue point fired so it doesn’t need to look through its entire collection of cue points.

- If the user seeks, it reliably dispatches the correct TemporalFacetEvent.
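The binary-search insert can be sketched like this. It is a simplified illustration of the technique, not the actual TemporalFacet code:

```actionscript
// Simplified illustration: insert a TemporalIdentifier into an array
// kept sorted by time, using a binary search to find the insert point
// instead of re-sorting the whole collection on each add.
private function insertSorted(items:Array, item:TemporalIdentifier):void
{
    var lo:int = 0;
    var hi:int = items.length;
    while (lo < hi)
    {
        var mid:int = (lo + hi) >> 1;
        if (items[mid].time < item.time)
        {
            lo = mid + 1;
        }
        else
        {
            hi = mid;
        }
    }
    // O(log n) search, then a single splice to insert.
    items.splice(lo, 0, item);
}
```

A duplicate time value lands adjacent to the existing entry, so replacing it (as the sample app does for duplicates) is a constant-time check at the insert position.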

 

 

org.osmf.video.CuePoint

 

This class extends TemporalIdentifier to provide a more standard cue point model for video cue points. It contains the properties name, type, and parameters (the parameters property returns an array of key/value pairs added at encode time or run time).

 

The New Cue Point Sample Application

The cue point sample app in the OSMF v0.7 release demonstrates the following:

- Loads a video and populates a data grid (in the upper right of the sample app) with the embedded Event and Navigation cue points found in the onMetaData() handler. You can sort this grid by time. The purpose of the grid is to allow you to navigate using the Navigation cue points and to verify the TemporalFacet class is working correctly by showing you the events you should be receiving.

- Clicking on a Navigation cue point in the grid takes you to that position (key frame) in the video. This represents what could be chapters or sequences.

- Shows the ActionScript and Event cue points (in the lower left of the sample app) as received by the player code (the events are dispatched by the TemporalFacet class at run time).

- Allows you to add ActionScript cue points at run time (in the lower right of the sample app) and see those events being fired. As you hit the “Add” button you will see the ActionScript cue point added to the data grid (note you may need to click on the Time column to force a sort). If you enter a duplicate, only the last one is retained.

How to Listen for Cue Points

 

The first step is to listen for metadata facets being added to your media element:

 

videoElement = new VideoElement(new NetLoader(), new URLResource(new URL(MY_STREAM)));

 

videoElement.metadata.addEventListener(MetadataEvent.FACET_ADD, onFacetAdd);

 

When the TemporalFacet is added to your media element, you can start listening for the TemporalFacetEvent.POSITION_REACHED event:

 

private function onFacetAdd(event:MetadataEvent):void
{
    var facet:TemporalFacet = event.facet as TemporalFacet;
    if (facet)
    {
        facet.addEventListener(TemporalFacetEvent.POSITION_REACHED, onCuePoint);
    }
}
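The onCuePoint handler itself might look like the sketch below; it assumes the TemporalFacetEvent exposes the matched TemporalIdentifier through a value property, which you can cast to CuePoint for video:

```actionscript
// Sketch of a cue point handler; assumes the TemporalFacetEvent
// carries the matched cue point in its "value" property.
private function onCuePoint(event:TemporalFacetEvent):void
{
    var cuePoint:CuePoint = event.value as CuePoint;
    if (cuePoint != null)
    {
        trace("cue point at " + cuePoint.time + "s: " + cuePoint.name);
    }
}
```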

 

 

How to Add Cue Points at Run-time

 

Create a
new TemporalFacet with your own unique namespace, add the facet to the metadata
for the video element, and then add your cue point to the facet:

 

_temporalFacet = new TemporalFacet(new URL(CUSTOM_NAMESPACE), videoElement);
videoElement.metadata.addFacet(_temporalFacet);

var cuePoint:CuePoint = new CuePoint(CuePointType.ACTIONSCRIPT,
                                     121,  // time in seconds
                                     "my test cue point",
                                     null);
_temporalFacet.addValue(cuePoint);

 

When you add the facet to the metadata for the media element, you will get the MetadataEvent.FACET_ADD event, as shown in the previous example. In that event handler, you can create a unique listener for your namespace, or use one event listener for cue points coming from all namespaces.
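To react only to cue points from your own namespace, you can compare the facet’s namespace against yours before wiring up the listener. This is a sketch; it assumes the facet exposes its namespace via a namespaceURL property whose raw string is available as rawUrl:

```actionscript
// Sketch: only listen to the facet created under your custom namespace.
// Assumes the facet exposes its namespace as namespaceURL (a URL whose
// string form is rawUrl).
private function onFacetAdd(event:MetadataEvent):void
{
    var facet:TemporalFacet = event.facet as TemporalFacet;
    if (facet && facet.namespaceURL.rawUrl == CUSTOM_NAMESPACE)
    {
        facet.addEventListener(TemporalFacetEvent.POSITION_REACHED, onMyCuePoint);
    }
}
```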

 

What’s Next

 

As I mentioned earlier, this new functionality lays the groundwork for more useful and exciting OSMF features and plugins. Next up: closed captioning.