Guest post from Charles Newman of Akamai, who designed and implemented OSMF’s new cue point feature.
OSMF v0.7 includes new functionality allowing you to create, inspect, and react to
temporal metadata, either embedded in the media at encoding time or added to the
media element at run time.
Since cue points are essentially temporal metadata, we decided to provide a generic
solution for temporal metadata rather than limit ourselves (and you) to cue
points for video elements. Therefore,
temporal metadata can be applied to any media element in OSMF; you are not
limited to cue points on video elements.
Types of Cue Points
Cue points come in three flavors:
- Event: meant to trigger some specific action when the player reaches the cue
point, such as displaying a caption, controlling an animation, etc.
- Navigation: allows seeking to a specific point in the media, such as a chapter or a
sequence. The encoding software creates a key frame at the position of the cue point.
- ActionScript: external cue points created at run time; code is required to watch for
these cue points.
Event and Navigation cue points are added at encoding time. ActionScript cue points are added at run time.
Easing Your Pain
The new support for temporal metadata in OSMF solves a few pain points and enables
other features to be built on this core functionality, such as closed
captioning. A couple of specific pain points:
1) F4V encodes created with a CS4 or earlier product do not fire in-stream cue
point events; you need to extract the cue point information from your onMetaData()
handler and create a timer to watch for cue points, then dispatch a custom event.
2) In order for your player to react to ActionScript cue points, as mentioned in
1) above, you've got to write some code, which may not be trivial, depending on
whether you want to optimize the timer, support seeking, etc.
F4V files are H.264 encodes with an FLV wrapper. To react to your embedded event
cue points, you need to read the array of cue points in your onMetaData() handler,
create a timer, write some code to watch the NetStream time, and dispatch your
own events. For ActionScript cue points, you need to do the same thing but also make sure the
cue points are sorted by time in your internal collection.
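To make this concrete, the do-it-yourself approach looks roughly like the following sketch (names are illustrative; netStream is assumed to be your playing NetStream, and seeking is not handled):

```actionscript
// Sketch of the do-it-yourself approach: read the embedded cue points
// in onMetaData(), sort them by time, then poll the NetStream time with
// a Timer and dispatch a custom event for each cue point passed.
private var cuePoints:Array;
private var nextIndex:int = 0;
private var timer:Timer = new Timer(100); // poll every 100 ms

public function onMetaData(info:Object):void
{
    cuePoints = info.cuePoints;              // array of {time, name, type, parameters}
    cuePoints.sortOn("time", Array.NUMERIC); // embedded order is not guaranteed
    timer.addEventListener(TimerEvent.TIMER, onTimer);
    timer.start();
}

private function onTimer(event:TimerEvent):void
{
    // Fire every cue point the playhead has passed since the last tick.
    while (nextIndex < cuePoints.length && netStream.time >= cuePoints[nextIndex].time)
    {
        dispatchEvent(new Event("cuePoint")); // your custom cue point event
        nextIndex++;
    }
}
```

Even this simplified version shows why the boilerplate adds up; the TemporalFacet class described below takes care of it for you.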
The new temporal metadata support in OSMF v0.7 handles all of this for you with a
new metadata facet class called TemporalFacet.
If you are unfamiliar with metadata support in OSMF, here is a brief description (for
more info you can read the spec here: http://opensource.adobe.com/wiki/display/osmf/Metadata+Support):
Metadata can be added to a media element or a media resource.
The metadata is organized by namespaces, guaranteeing uniqueness and allowing
several different types of metadata to be added to a media element or its
resource. In addition to a namespace, a metadata instance has a facet type.
The facet type describes what the metadata holds; for example, there is a
KeyValueFacet, a concrete class containing a collection of key/value
pairs. This class allows you to easily add key/value pairs as metadata to
a media element or a media resource.
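For example, attaching a simple key/value pair might look like this sketch (the namespace URL is a made-up example, and the ObjectIdentifier key wrapper follows the metadata spec linked above):

```actionscript
// Sketch: add a key/value pair to a media element's metadata.
// The namespace URL below is arbitrary; use your own unique namespace.
var facet:KeyValueFacet = new KeyValueFacet(new URL("http://www.example.com/myMetadata"));
facet.addValue(new ObjectIdentifier("title"), "My Video");
videoElement.metadata.addFacet(facet);
```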
The New Classes
Here are the new classes that implement the new temporal metadata support, along with
a brief description:
TemporalIdentifier is a base class for temporal metadata; it defines time and duration properties.
The new CuePoint class extends this class.
TemporalFacetEvent is the event the TemporalFacet dispatches. There are specific events for "position
reached" and "duration reached".
The TemporalFacet class is essentially the temporal metadata manager. It manages temporal metadata of type
TemporalIdentifier associated with a media element and dispatches events of
type TemporalFacetEvent when the playhead position of the media element matches
any of the time values in its collection of TemporalIdentifier objects. Basically, this is the code you would otherwise need to
write to handle F4V event cue points and ActionScript cue points in your
player.
The TemporalFacet class has an optimized algorithm for adding and watching for the
time values in its internal collection of TemporalIdentifier objects. Here are some of the ways the algorithm is optimized:
- Uses a binary search to insert items into its collection of TemporalIdentifier
objects (sorted by time), rather than calling a sort method on each insert.
Inserting items in any order is very fast.
- Pauses the timer when the user pauses playback and restarts the timer when the user
resumes playback.
- Adjusts the timer interval by looking ahead to the next cue point (there is no reason
to keep checking every 100 milliseconds, for example, when the next cue point
is 15 seconds away).
- Keeps track of the last cue point fired so it doesn't need to look through its entire
collection of cue points.
- When the user seeks, it reliably dispatches the correct TemporalFacetEvent.
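The binary-search insert mentioned above can be sketched like this (not the actual OSMF source, just an illustration of the idea, keyed on the time property of TemporalIdentifier):

```actionscript
// Sketch: insert into a time-sorted collection with a binary search,
// rather than appending and re-sorting on every insert.
private function insertSorted(items:Vector.<TemporalIdentifier>, item:TemporalIdentifier):void
{
    var lo:int = 0;
    var hi:int = items.length;
    while (lo < hi)
    {
        var mid:int = (lo + hi) >> 1;   // midpoint of the remaining range
        if (items[mid].time < item.time)
        {
            lo = mid + 1;
        }
        else
        {
            hi = mid;
        }
    }
    items.splice(lo, 0, item);          // insert at the sorted position
}
```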
The CuePoint class extends TemporalIdentifier to provide a more standard cue point model for
video cue points. This class contains the properties name, type, and
parameters (the parameters property returns an array of key/value pairs added
at encode time or run time).
The New Cue Point Sample Application
The cue point sample app in the OSMF v0.7 release demonstrates the following:
- Loads a video and populates a data grid (in the upper right of the sample app) with the
embedded Event and Navigation cue points found in the onMetaData() handler. You can sort this grid by time. The purpose of the grid is to allow you to
navigate using the Navigation cue points and to verify that the TemporalFacet class
is working correctly by showing you the events you should be receiving.
- Clicking on a Navigation cue point in the grid takes you to that position (key frame) in
the video. This represents what could be chapters or sequences.
- Shows the ActionScript and Event cue points (in the lower left of the sample app) as received
by the player code (the events are dispatched by the TemporalFacet class at run time).
- Allows you to add ActionScript cue points at run time (in the lower right of the
sample app) and see those events being fired. As you hit the "Add" button, you will see the ActionScript cue point
added to the data grid (note you may need to click on the Time column to force
a sort). If you enter a duplicate, only the last one is retained.
How to Listen for Cue Points
The first step is to listen for metadata facets
being added to your media element:

videoElement = new VideoElement(new NetLoader(), new URLResource(new URL(MY_STREAM)));
videoElement.metadata.addEventListener(MetadataEvent.FACET_ADD, onFacetAdd);
Once a TemporalFacet is added to your media element, you can start listening for the
cue point events:

private function onFacetAdd(event:MetadataEvent):void
{
    var facet:TemporalFacet = event.facet as TemporalFacet;
    facet.addEventListener(TemporalFacetEvent.POSITION_REACHED, onCuePoint);
}
How to Add Cue Points at Run-time
Create a new TemporalFacet with your own unique namespace, add the facet to the metadata
for the video element, and then add your cue point to the facet:

_temporalFacet = new TemporalFacet(new URL(CUSTOM_NAMESPACE), videoElement);
videoElement.metadata.addFacet(_temporalFacet);

var cuePoint:CuePoint = new CuePoint(CuePointType.ACTIONSCRIPT,
                                     121,                  // time
                                     "my test cue point",  // name
                                     null);                // parameters
_temporalFacet.addValue(cuePoint);
When you add the facet to the metadata for the media element, you will get the
MetadataEvent.FACET_ADD event, as shown in the example above. In that event
handler, you can create a unique listener for your namespace, or use one event
listener for cue points coming from all namespaces.
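For instance, to react only to facets from your own namespace, you might filter in the handler (a sketch; CUSTOM_NAMESPACE is the same constant used when creating the facet, and comparing via the facet's namespace URL is an assumption about the v0.7 API):

```actionscript
private function onFacetAdd(event:MetadataEvent):void
{
    var facet:TemporalFacet = event.facet as TemporalFacet;
    // Only listen for cue points added under our own namespace
    // (assumes the facet exposes its namespace URL for comparison).
    if (facet != null && facet.namespaceURL.rawUrl == CUSTOM_NAMESPACE)
    {
        facet.addEventListener(TemporalFacetEvent.POSITION_REACHED, onCuePoint);
    }
}
```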
As mentioned earlier, this new functionality lays the groundwork for more useful
and exciting OSMF features and plugins.
Next up, closed captioning.