Posts in Category "General Video Concepts"

UX Case Study: ESPN3 (Part 2) – The Stuff Outside the Video Box

In the previous chapter, I termed ESPN3 “player-centric” because the compact design focuses on the Video Player, where it rightly should be in this user scenario.

Even with the maximum amount of stuff possible going on, given the options, the focus stays on the player, although in the example below, where I’ve got 4 different events playing back simultaneously, this does start to fall apart somewhat.

espn3_chap2_01.jpg

What remains impressive is that this works at all. I’ve got both live and on-demand sporting events of an extremely wide variety playing back simultaneously (European Soccer, NCAA Lacrosse, International Cricket, and Georgia Bass Fishing — as in the U.S. state Georgia, not the country Georgia, although I’ll bet a Bass Fishing show from the country Georgia would be a frikkin’ gas . . . )

espn3_chap2_02.jpg

OK, maybe not.

The Mosaic view does make sense in full-screen mode (as shown above). It all looks pretty good as well, as long as you’re sporting internet connectivity of 1mbps sustained, or better.

THE STUFF OUTSIDE THE PLAYER

Less is always more with a design of this nature. Anything outside the Video Player better be damn important to the end-user, and consistent with brand.

Below-the-player content is what caught my eye first in this design. My brain thought it would give me more choices of sporting events that are live now. And that is the case with the thumbnails at the bottom (and where there aren’t at least 4 events currently live, you get “recommended” featured on-demand content).

espn3_chap2_03.jpg

Above the thumbnails, there is a widget displaying a live feed of sports scores, which at the moment defaults to MLB games (because it’s baseball season, I guess). If you want scores from other sports, the UI cascades out nicely to show you the available options, all the way down to that most esoteric of selections, ANY kind of sport from Paraguay.

espn3_chap2_04.jpg

Over to the right side of the player, we get some stuff that’s useful, and some other stuff that’s redundant (or that could perhaps be integrated with the scores & thumbnails at the bottom of the UI to save some real estate).

espn3_chap2_05.jpg

The “Featured Events” panel seems to be 100% redundant with the thumbnails at the bottom, which show the same featured events (even though they’re not labeled as such). There’s no reason for this to be here, other than to flesh out the geometry of the design (and that’s a pretty weak reason IMO).

The Stats panel would be useful, if there were any stats for this particular event. And then maybe we could even move this below the player instead, as a tab within the Scores panel (maybe I’m being hasty here — if there were some useful stats in that panel now, I’d be able to form a more informed opinion — let me come back to this one at a later time).

The Chat Panel seems really straightforward, although I’d like a glimpse of how many users are in this particular chat room, and how many chat rooms (with how many users, respectively) are online related to this event. I’ve also not yet tested how it behaves in Mosaic view (i.e., does it flip to the event that currently has focus when you’re viewing 4 events simultaneously? As a random thought, it could be a cool design to have 4 chat pods open simultaneously with the 4 events, to keep 4 chats going at once. A sports geek’s geek-out dream come true!).

Internet TV Technical Basics: Video Compression

Have you ever heard that Texas expression that goes something like this:

“That’s like trying to stuff 10 pounds of manure into a 5-pound bag.”

Let’s consider that expression as an analogy to what one needs to do in order to get a video file delivered from some server in god-knows-where over the public Internet to whatever device you choose to view it on.

So, how exactly DOES one stuff 10lbs of crap into a 5lb bag?

Well, if there are some gaps of air in it, you can compress it somewhat and get more in the bag, right? No, wait, we’re talking about air here, it doesn’t weigh anything. Scratch that.

Well, you physics majors out there realize there’s no way you can get that 10lbs into the 5lb bag. The only thing you can do is leave some of it behind. So what can you leave behind, and still retain the ESSENCE AND INTEGRITY of the dung (OK, this analogy has gone way too far and ends here).

When you watch Internet TV, the pipe that connects your computer to the server where the video is stored is in a state of constant fluctuation. How much data can get through that pipe changes from moment to moment.

Think of the video as your “10 pounds of manure,” and the pipe can only get 5 pounds through at this instant, but maybe in a few seconds it can only take 3, and then a few seconds later it can handle 7. What needs to happen in order for that video to get through that pipe is for it to get Compressed, which means getting rid of some of the data that makes up the digital video file while maintaining its ESSENCE AND INTEGRITY.

In video production, we tend to work with very large video files which are either uncompressed (which maintains 100% integrity, meaning the files contain all of the visual information that the camera captured when the video was recorded), or compressed in a way that discards no information at all, so the video can be reconstructed identically to the original (this is defined as Lossless Compression).

You simply can’t deliver these huge files we use in production over the public Internet. Nobody in the universe has a connection fast enough. So we need to Compress the video to a Lossy Compression format, which will degrade the integrity of the video to some extent.
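
To put some rough numbers on that claim, here’s a back-of-the-envelope calculation (I’m assuming plain 8-bit 4:2:2 standard-definition video here; exact production formats vary, so treat these figures as illustrative):

```python
# Rough math: why uncompressed video can't be streamed over the Internet.
# Assumes 8-bit 4:2:2 standard-definition NTSC video (720x480 at ~30fps);
# real production formats vary, so these numbers are illustrative only.

width, height = 720, 480
bytes_per_pixel = 2          # 4:2:2 chroma subsampling, 8 bits per sample
fps = 29.97

bits_per_second = width * height * bytes_per_pixel * 8 * fps
print(f"Uncompressed: about {bits_per_second / 1e6:.0f} mbps")    # ~166 mbps

delivery_kbps = 700          # a typical Internet TV stream
ratio = bits_per_second / (delivery_kbps * 1000)
print(f"Compression needed for {delivery_kbps}kbps: about {ratio:.0f}:1")
```

In other words, a Lossy encoder has to throw away something on the order of 99% of the data while keeping the ESSENCE AND INTEGRITY intact. And that’s just standard definition; HD is far worse.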

Every single video standard that works for Internet TV involves Lossy Compression.

As time goes on, and technical innovation continues, new and better Compression algorithms become available that let you get better quality at a lower Bitrate.

So what’s a Bitrate?

Basically, your internet connection is like that pipe I was talking about earlier — you can only get so much data through at any given time. So let’s say your “average throughput” is 700kbps (kilobits per second). If the video you are watching was Compressed to a Bitrate of 700kbps, you’d have a very good viewing experience, unless your connection started to drop lower. Then maybe the video would pause or stutter.
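
Since a Bitrate is just data-per-second, it also tells you how big the video file is. A quick sketch of the arithmetic:

```python
# Bitrate x duration = file size.
# Example: a 10-minute video compressed to a Bitrate of 700kbps.

bitrate_kbps = 700
duration_seconds = 10 * 60

total_kilobits = bitrate_kbps * duration_seconds
megabytes = total_kilobits / 8 / 1000   # 8 bits per byte, 1000 kB per MB
print(f"{megabytes:.1f} MB")            # 52.5 MB
```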

Every video you watch on Internet TV was Compressed to a specific Bitrate. On Adobe TV, our videos are generally Compressed to about 600kbps.

Many websites offer a choice of several different Bitrates, so if you have a faster Internet connection, you can view higher quality video. Take this one below as an example:

Once you’ve started playing the video, the button that says “360p” contains selections for 3 different Bitrate versions of the video. Each version is encoded to a different Bitrate, but many sites obscure this by labeling the versions by their Frame Sizes rather than their Bitrates (as Bitrates tend to be a closely-guarded secret for competitive reasons).

If you go to the highest available bitrate, labeled 720p, the video will look much, much better, and the audio will sound better as well (audio is also Compressed, along with the Video, for Internet TV delivery). If I had to venture a guess, I’d say 360p is a bitrate of 300kbps, 480p is 600kbps, and 720p is 1mbps (megabit, or 1000 kilobits, per second). But that’s only a guess.

Some Internet TV channels, like MLB.TV and Hulu, offer Dynamic Bitrate Switching, which means the Video Player will deliver the highest Bitrate that your Internet connection can handle at any given moment, varying the Bitrate you get as your connection speed expands and contracts.
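
Here’s a minimal sketch of the idea (the bitrate rungs and the safety margin are my own illustrative assumptions, not how Hulu or MLB.TV actually implement it):

```python
# Dynamic Bitrate Switching, in miniature: keep picking the highest
# available Bitrate that fits under the measured throughput, with some
# headroom so momentary dips don't cause stuttering. The rungs and the
# headroom factor are illustrative assumptions, not any real player's values.

AVAILABLE_BITRATES_KBPS = [300, 600, 1000]
HEADROOM = 0.8  # only budget 80% of what the connection can handle

def pick_bitrate(measured_throughput_kbps: float) -> int:
    budget = measured_throughput_kbps * HEADROOM
    candidates = [b for b in AVAILABLE_BITRATES_KBPS if b <= budget]
    return max(candidates) if candidates else min(AVAILABLE_BITRATES_KBPS)

# The connection "expands and contracts" from moment to moment:
for throughput in [1500, 900, 500, 1200]:
    print(f"{throughput}kbps available -> {pick_bitrate(throughput)}kbps stream")
```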

To see just how much better Compression algorithms have gotten over the years, let’s look at a video that was uploaded to YouTube 3 years ago (I selected the video randomly; I just wanted it to be a few years old to illustrate my point):

Because of the Compression algorithms available at that time, and also because the video had to be Compressed to a much lower Bitrate back then (the average user’s Internet connection was much slower then than it is today), this looks pretty rough.

Now here’s a random example from Vimeo, which generally uses a higher-Bitrate encoding profile, that was posted and encoded today (go fullscreen to see it in all its splendor):

Amsterdam Osdorp from The QBF on Vimeo.

This isn’t even close to the best quality Internet TV has to offer today, but man isn’t that a huge difference?

More on this subject to come soon; this is an incredibly deep topic…

How To Make Your Own Basic Internet TV Video Player & Webpage

As a follow-on to my previous post, here’s how you can create your own integrated Video Player and Webpage in a few easy steps using Dreamweaver. This works with just about any relatively recent version, but the screencaps below were taken in Dreamweaver CS5.

First, you need a video file encoded to a Flash-compatible format (FLV or F4V) which can be output from just about any Adobe authoring tool, including Premiere Pro and After Effects, or by taking an existing video file and encoding it with the Adobe Media Encoder.

Next, launch Dreamweaver and create a new HTML document. Type any old text you want, just to have some content on your page, and then select Insert > Media > FLV… (important note: the actual menu verbiage may vary slightly depending on which version of DW you’re using, but with a little poking around you’ll find it).

dw_insert_media.jpg

And then you get the dialog. The first menu is the only one that needs explanation, and that explanation warrants a separate post, which will come shortly. For now, all you need to know is to keep it set to its default, which is Progressive Download.

Insert_FLV.jpg

The rest of the dialog is pretty self-explanatory. Browse to your FLV video file, select a player Skin (there are a few stock skins but you can also create your own using the Flash Professional authoring tool), set the size of the player, etc. Then, you’ve got a Webpage in DW with a big grey box on it, representing the placement of your Video Player.

DW_player_inserted.jpg

And lastly, to view your new creation, select File > Preview in Browser.

look_at_my_video.jpg

So at this point, in order to make this live on the Internet, you would upload the HTML file, the FLV file, and the 2 SWF files that were created in the same folder as your HTML file when you inserted the FLV file onto the Webpage in the steps above. The first SWF will be called FLVPlayer_Progressive.swf, and the second (name of the skin you selected).swf. These 2 SWF files are the actual player component and the skinning of the player, respectively. Dreamweaver created these files for you when you inserted the video. The key is to remember to upload them when you publish the video, the Video Player, and the Webpage to your website.

Video Player vs Webpage

Keeping with the tradition of this blog (to keep the info as digestible for as wide a range of skill levels as possible), I want to explain some basic Internet TV concepts, so that if you’re new to this, the posts you’ll be reading in the future make more sense.

When you’re watching video in a web browser, there are 2 main components to that experience: the “Video Player” and the “Webpage”. They are generally developed as separate components that get integrated afterwards. Let’s look at YouTube as an example:

player_vs_webpage.jpg

I dimmed out everything that isn’t the player.

The player is the component that actually plays back the video, and contains the controls that let you navigate and manipulate the video (i.e. move forward and backward in time, enter Fullscreen mode, turn Closed Captioning on, etc.). In the case of most of the websites you watch video on, it is a player built on the Adobe Flash Platform, and is playing video encoded to one of the Flash video specifications.

Everything else is contained within the Webpage. The Webpage is generally built using a mix of different technologies, including HTML and JavaScript.

Ultimately, the video player is “embedded” within the webpage, which is what integrates the 2 together. This is also what happens when you take the “embed” code from a video player and put it on your own webpage (as I have done with one of the films I’ve produced for Adobe TV, directly below this paragraph) or when you “Share” a video on a social networking site like Facebook.

Other things are integrated within the Player, behind the scenes, like snippets of code which report usage back to a metrics & reporting system. In the case of Adobe TV, every time you watch a video (including if you just played that video above this sentence), the player sends information that the video was watched, and also reports how much of it you watched, to our reporting system, Omniture SiteCatalyst. If there is ad-insertion, there is also code that calls out to the ad server to show you advertisements at specified times (generally “pre-roll”, i.e. before the video you actually came to watch plays). These are just a few examples.
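
To make the beacon idea concrete, here’s a minimal sketch (the endpoint and parameter names are entirely made up for illustration; this is not Adobe TV’s or Omniture’s actual API):

```python
# A made-up playback beacon: the player fires a small HTTP request back
# to a metrics system as you watch. The endpoint and parameters are
# hypothetical -- not Adobe TV's or Omniture's real interface.

from urllib.parse import urlencode
from urllib.request import urlopen

METRICS_ENDPOINT = "https://metrics.example.com/track"  # hypothetical

def report_playback(video_id: str, percent_watched: int) -> None:
    params = urlencode({"video": video_id, "watched": percent_watched})
    urlopen(f"{METRICS_ENDPOINT}?{params}")  # fire-and-forget beacon

# A player might call this at 25/50/75/100% playback milestones, e.g.:
# report_playback("my_adobe_tv_film", 25)
```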

Why Hulu’s New Player is the One To Beat

Hulu’s player, IMO, has always been one of the best models of simplicity, functionality, and aesthetic harmony (defined in my book as “not getting in the way of the video”). This remains the case with the new 3.0 player, which recently went live.

In its default “play” state it’s 100% clean, with the exception of the very unobtrusive copyright notice and “more options” tab.

hulu_v3_01d_520.jpg

On the rollover state (i.e. when your cursor moves over the video), a standard set of player controls and a timeline appear overlaid on the video, and a row of options appears flagged off the upper-right-hand corner. I really like the arrangement of these particular player controls, as they tend to get visually detached from the player on other sites.

hulu_v3_02b.jpg

In the above screengrab, I had clicked the “Video Settings” icon, which brought up a dialog allowing me to save a default preference for any of the 3 available bitrates, or for dynamic bitrate switching (which they define as “Auto-select the best quality for my bandwidth”). You need to be logged in to save a default, but since certain Hulu content is behind login anyway (“mature”-themed content, e.g.), this won’t be a barrier for most users, as we tend to be logged in anyway.

This new capability to save a default resolution fixes one of the more annoying aspects of the previous player, which defaulted to delivering the 360p stream and required that you manually click the 480p button at the beginning of the video (something you could only do after all pre-rolls were completed). When you watch as much content on Hulu as I do, this is a very, very nice improvement to the UX.

Another welcome improvement is dynamic thumbnails on the timeline, which they’re calling “Seek Preview”:

hulu_v3_03b.jpg

It is really, really fluid. As you move your cursor left and right over the timeline, it updates the timecode as well as the thumbnail. I’m fond of Netflix On Demand’s timeline scrub feature, as it shows you 5 frames as you scrub through, which can definitely help you locate the scene you’re looking for much quicker, and I think Netflix may still have Hulu beat on this one (although Hulu’s implementation is really darn responsive).

Finally, the all-important Fullscreen state has 2 new features I really like. The first is a clock showing the time of day, positioned in the upper-right corner, which is exactly where the OS clock appears on my Mac’s desktop. This means I don’t need to leave Fullscreen to check the time. Nice.

hulu_v3_04b.jpg

Second, in the upper-center you can see the state of the buffer. Unless I’m semi-passively watching a video (e.g. I’ve got a ballgame on as I’m getting some work done), I like to control the bitrate I’m getting, and pause to buffer when necessary to sustain the quality I want.

There are some other great features in the new player, but these are the ones that stand out to me, in the way I use Hulu.

Howdy Dooit?

If anyone’s seen my camera, please let me know, I seem to have lost it. I’m not joking – it’s a Canon PowerShot that’s been scratched up really badly from getting knocked around in my travels. I turned my office upside down yesterday looking for it, and in the process came across a disc with some photos from the CS3 demo asset shoot with UVPH that we did in NYC late last year.

howdy_01.jpg

So let’s talk a little bit about what’s going on in that photo above. Our actor friend is standing on a treadmill that one of the UVPH guys found on the street. They took the control panel off, and that’s what the guy kneeling on the floor is playing around with. They also painted the treadmill Chromakey Green to match the cyc, which is painted the same color. The whole idea is to key all that stuff out so we wind up with a shot of the actor running in “mid air”.

howdy_02.jpg

A few takes with the right framing were all we needed. We did lots of scenes like this with several actors, all doing various activities. Have a look.

Pretty neat, huh? Here’s how we shot the rock climber:

howdy_03.jpg

Some stuff painted green and some imagination is all you need.

A short note – I was interviewed this week for an NBC TV program called Tech Now. It will air this weekend in the following cities and times: Saturday at 6:30 p.m. on KNTV San Jose/San Francisco; Saturday at 5 p.m. on KNSD San Diego; and throughout the week on WNBC Digital (4.4) New York (which is available on most cable systems there).

You can also watch the podcast here no matter where you live.

I’ll be in a story about the 30th anniversary of Star Wars, talking about how anyone can do “Star Wars type effects” themselves using After Effects.

Instant Dimentionality

Yep, I’m making up words again. That’s jetlag talking. But through the jetlag I’m going to try and show you how to create a 3d model from a photograph using some new integration we’ve done with Photoshop CS3 Extended and After Effects CS3.

A lot of what we do here at the “factory” is try and take things that would take you hours or even days to do and give you ways to do them in a matter of minutes. Sometimes that takes looking within and seeing what bits of this app could be used to help someone working in that app. The “secret sauce” in this case is something called Vanishing Point Exchange (vpe).

You might be familiar with a feature of Photoshop called Vanishing Point, which is typically used when working with still images to define the perspective of a scene or object. What vpe does is let you take the geometry data generated by Vanishing Point and make use of it in other applications. In Creative Suite 3, you can now export the vpe to After Effects where before your very eyes a 3d scene is automatically created, something that would’ve taken huge buckets of time in the past.

I’m going to be starting with a photo I just snapped here in my SF office:

vpe_01.jpg

Thrilling, isn’t it? No, really, we do have a very beautiful office here – it’s just that I wanted to start with something simple for this tutorial – something with good, clear corner perspective.

You need to have Photoshop CS3 Extended to export the vpe, but you can still follow along with the next step, which is to create your planes in Vanishing Point, if you’re using the Standard edition.

With the photo open in Photoshop, select Filter > Vanishing Point. You will start by defining a plane in the photo, and you want to look for the easiest one to define. In my photo, it is the wall on the right side. It’s a matter of clicking on the 4 corners, lining up each edge with the edge of the plane you’re defining, and you’re done. If your plane is red, Photoshop is telling you it can’t get a read on your plane, so try again ‘til you get it (just use the hard edges in your photo as your guide). Once you’ve got a good plane it’ll look like this:

vpe_02.jpg

If you look at my cursor, on the right, you can see I am dragging to the right to extend the plane just past the edge of the photo – that’s about where you want to be. You can adjust the first plane after you’ve drawn it, and do take advantage of that capability, because it is imperative to get this first plane right. If you don’t, the whole rest of this will be messed up.

The second most important thing is to get the second plane right. For this I’ll use the left-hand wall. Create a new plane by holding down Cmd (Mac) / Ctrl (Win) on the left-hand control point of the original plane, and dragging a new plane out to the left (if your second plane goes in a different direction, adjust that instruction accordingly). It is important to add your additional planes in this manner, as the planes need to be connected in order for this to work.

vpe_03.jpg

If the plane doesn’t line up right, you’ll need to rotate it. Hover your cursor over the same control point you were just using, and hold down Opt (Mac) / Alt (Win) – your cursor turns into a little bendy arrow. Use it to adjust the angle of your second plane – a task you can also accomplish with the “Angle” widget at the top of the Vanishing Point UI.

vpe_04.jpg

Continue adding and adjusting planes, repeating those steps, until you’ve got your planes all defined. If I weren’t in such a hurry to write this, I would’ve also refined this by adding planes to those brown columns on the left-hand wall, which would add more realism, but you can go ahead and do that on your own time ;-)

Here’s what I wound up with:

vpe_05.jpg

Now it’s time for that “secret sauce”. Go up to your fly-out menu (that little triangle-in-a-circle that you see in all Adobe apps) and select Export for After Effects CS3 (.vpe)

vpe_06.jpg

Create a new folder somewhere on your hard drive, because Photoshop is going to spit out a bunch of .png image files (one for each plane you drew) and a .vpe file which holds all the geometry data. Go ahead and save. Then close out of Vanishing Point and save your PSD; you’re done there.

Now, switch over to After Effects CS3 and select File > Import > Vanishing Point (.vpe)

vpe_07.jpg

You’ll see a bunch of new stuff in your Project Panel, including a new Composition. Double-click the Composition and you’ll see that AE has built for you a 3D scene based on the vpe. It has arranged all the exported planes (each of them an individual layer in the .png format) in 3d space.

vpe_08.jpg

Select your Orbit Camera tool (letter “C” on your keyboard) and rotate your scene to see the 3d glory. I did a quick animation on my camera and got this:

You can also see that there was a bunch of white space where my Vanishing Point planes extended past the edge of my photo. That’s fixed easily by selecting the layer in the AE Project Panel, then selecting Edit > Edit Original which opens that layer in Photoshop.

vpe_09.jpg

Then it’s generally time to use the Clone Tool, Healing Brush, or whatever tool suits the need. In my case I used the Clone Tool to “fill in the blanks” (here it is “in progress”).

vpe_10.jpg

Here it is, cleaned up a bit (not 100% yet, but with 5 min. in Photoshop I was able to get it 95% of the way there – in 15 more minutes it’ll be perfect).

I want to do a user gallery of this kind of stuff, so please send me comments if you’ve done anything cool with this technique.

Legal Matters

If you started in video after the mid-’90s, there’s a good chance you never used a tape-to-tape, linear, A/B Roll system or a flatbed (I’ve used the former but not the latter, which gives my wife bragging rights in that department). Today, for most people, the definition of “post production hardware” is a computer and maybe some bits and pieces plugged into it, but in the old days you needed a roomful of expensive and complicated gear to get anything done.

Software like Adobe’s DV Rack simulates a lot of the gear you’d find in an old-school edit suite, like a broadcast monitor and waveform/vectorscopes.

legal_matters_01.jpg

The first thing you learned in old-school editing school was how to read those scopes – they’re key to making sure your video is broadcast legal. You’ll also find them in Premiere Pro by opening your Reference Monitor (from the Program Monitor’s flyout menu) and selecting the scope you want from its flyout menu (the flyout menu is the little round button with the triangle inside it that’s in the upper-right corner of every panel in Adobe’s video & audio tools).

legal_matters_02.jpg
The YC Waveform Monitor and the Vectorscope in Premiere Pro.

The basic idea is that TV screens, unlike computer screens, display an image composed of Luma (brightness) and Chroma (color). The three channels that make up a video signal are Y (Luma), Cr (Chroma Red), and Cb (Chroma Blue), also referred to as YUV. Broadcast legal for NTSC video is within the range of 7.5 IRE to 100 IRE on the waveform monitor (IRE stands for “Institute of Radio Engineers” for those of you keeping score).

7.5 IRE is black, and 100 IRE is white, and everything else needs to fall in between in order for video to be “broadcast legal” (the exception is Japan, where they use NTSC with 0 IRE black). You can see the IRE scale on the right-hand side of the YC Waveform Monitor in the image above.

If your video isn’t broadcast legal it will not be aired, and even if it will never be broadcast, illegal levels can cause many TV sets to produce an annoying “buzz” in the audio.

Now you may be thinking “my videos aren’t for broadcast, they’re not even for a TV set, and I don’t need to worry about this.” Well, one reason you should care is that video looks completely different on a computer screen than it does on a TV screen, and if your video is going to be viewed on a computer monitor (e.g. on the web) or a handheld device (e.g. iPod), you should color correct it. Computers display images in terms of RGB, or Red (R), Green (G), and Blue (B), so if you want your video to look its best you’ll need to compensate.

legal_matters_03.jpg

In computer land, 0 RGB is black and 255 RGB is white. The problem is that when you convert video black (7.5 IRE) to computer black, it actually translates to 16 RGB, not 0 RGB where it should be. Likewise, video white (100 IRE) translates to 235 RGB, not 255. So what you wind up with is less contrast, and blacks & whites that aren’t true. Color correcting your video can fix this, and here’s a quick and easy way to do it:

If you’re editing in Premiere Pro, once you’re finished editing, apply the Levels effect to your entire sequence. Start by nesting your sequence in a new sequence: click the New Item button and select Sequence.

legal_matters_04.jpg

Accept the default settings in the New Sequence dialog, then drag your existing sequence from the Project Panel into the Video 1 track in your new sequence (this basically flattens all the layers in your original sequence so you can apply the Levels effect to the whole thing at once).

In the Effects panel, type “Levels” in the Contains field, and drag the Levels effect onto your nested sequence in Video 1. Open the Effect Controls panel and twirl down the controls for Levels.

legal_matters_05.jpg

Since your video has levels of 16 black and 235 white, change the settings for (RGB) Black Input Level to 16 and (RGB) White Input Level to 235. See the difference?

legal_matters_06.jpg

Your blacks & whites are now where they should be, and you’ve regained the full contrast range in your video.
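
If you’re curious about the math, what those Levels settings are doing is a simple linear stretch of the video range out to the full computer range. A quick sketch:

```python
# The Levels fix, numerically: remap the video range [16, 235] onto the
# full computer range [0, 255] with a linear stretch.

def expand_video_levels(value: int) -> int:
    """Map a pixel value from video black/white (16/235) to 0/255."""
    stretched = (value - 16) * 255 / (235 - 16)
    return max(0, min(255, round(stretched)))  # clamp anything out of range

print(expand_video_levels(16))   # 0   -> video black becomes true black
print(expand_video_levels(235))  # 255 -> video white becomes true white
print(expand_video_levels(128))  # 130 -> midtones barely move
```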

In After Effects, the same thing can be done by creating an Adjustment Layer, dragging it to the top of the layer stack in the Timeline, and applying the Levels effect to it (using the same settings above).

Remember, if you have graphics, photos, or other elements in your Premiere Pro edit or After Effects comp, you’ll have to take that into account when applying the Levels effect – but for many cases this is a great way to make video look its best on a non-TV viewing medium.

Seriously . . .

Just returned from a few weeks of filming in several geographically disparate locations (and thus feeding my ever-increasing sense of an airline cabin being my “home away from home”). One of the things I love about my job is that despite the fact that I do Marketing, I still get to produce stuff, and this time I got to shoot with some of the new tools that recently came into the Adobe fold. On October 19, when I was on said shoot, we announced that we’d acquired a software company called Serious Magic (read the full press release here). Their two products of main interest to me, and probably most of you, are DV Rack and Ultra. I haven’t had the chance to use Ultra yet (it’s a keying and virtual set technology), but I did use DV Rack extensively the past few weeks, both on location and in the studio.

dvrack_shoot.jpg
Using DV Rack to monitor camera signal and capture direct to hard disc.

DV Rack has software versions of the scopes & meters that you’d have in a studio (e.g. Waveform and Vectorscopes), and by taking the signal from your camera via FireWire into your computer, you can easily adjust your camera’s iris, white balance, etc. to get the best possible quality by reading the scopes & meters or using a wizard-like calibration tool. This is good stuff, since it helps improve the quality of what you’re shooting.

DV Rack is also a direct-to-disc DVR (Digital Video Recorder) that captures direct from your camera to hard drive, making for an inexpensive and powerful tapeless workflow. It can capture DV, HDV, DVCPro50, and DVCProHD. On the studio shoot in the photo above, I captured DVCProHD live from an HVX200, which I then opened in After Effects to make sure we had a clean chroma-key. When we were on location, I used DV Rack to grab shots using the video tap from our main camera for use in Premiere Pro (no, that wasn’t a typo — a “video tap” is a signal that comes straight off a film or video camera for on-set monitoring, and in this case, simultaneous capture).

And (if you hadn’t already noticed in the photo) I did this running Windows XP on my MacBook Pro. Bleeding edge, yessirree. Tapeless workflow, yeeehaaaa!!! I foresee bricks from videotape manufacturers flying through my office window any day now.

You wanna try? Free trial downloads are here.

Speaking of things flying through windows, I want to share one more nugget from the filming. I’m a huge advocate (and practitioner) of guerrilla filmmaking, but this looked more to me like a suicide mission.

cablecam.jpg

Our friend from the local crew is about to fly down that zipline at an incredible speed, while holding that camera steady. No budget for a helicopter? No problem! No brakes on that thing? No problem! Were we carrying serious insurance coverage? You betcha!

Okay, so continuing on with the “news of significance that I haven’t blogged about until now” tip, the Soundbooth Public Beta went live 2 weeks ago — you can download that for free from Adobe Labs right here. Our thinking behind Soundbooth is that video & Flash pros need to work with audio, but don’t necessarily need a full-featured audio app like Audition (which is indeed full-featured and powerful, but comes with a bit of a learning curve). We wanted to put all the audio tools a video or Flash person would need right at the top level of the interface – tools for doing things like basic editing, music & sound effect creation, level normalization, noise reduction, etc. My next posting will be a detailed one on Soundbooth, but in the meantime you should download the beta, read the “getting started” doc, and get movin’.

And finally, we won an Emmy Award yesterday (like how I put that at the bottom of today’s post to show what a blasé New Yorker I am?). Yep, that’s right, we just won the Emmy Award for Streaming Media Architectures and Components for our Flash Video technology. Now the fight begins over whose desk the statue will live on!

The Ben Kurns Effect

There’s nothing more boring than something just sitting there on a movie or TV screen doing nothing. Think test pattern here – boy, I remember being a 5-year-old, sitting in front of the TV at 6 in the morning, waiting for that test pattern to go away and for Davey & Goliath or New Zoo Revue to come on (if you watched D&G as a kid and haven’t seen Moral Orel on Adult Swim yet, you neeeeeed to go see it right now, don’t ask questions, just do it).

Documentary filmmakers have long known this, because they often have more archival photography available on a subject than film or video footage. They use a technique called “pan & scan” (also known as “pan & zoom”) to do camera moves on still images to make them more interesting to the viewer. You’ll often see this referred to as the “Ken Burns Effect,” as his documentaries (the one on Jazz, in particular) use this technique extensively. But it’s been going on for way longer than Mr. Burns has been around. We used to do this with camera stands, which you still find in the odd studio here & there – basically a flat, well-lit surface where you lay the photo with a video camera mounted on a pole, pointing down to the photo. The signal from the camera runs to a tape deck, and the camera is either panned & zoomed manually or mechanically, depending on the sophistication of the particular camera stand. Some of them are pretty tricked-out, with the ability to control the camera’s position & zoom with precision via remote control.

Hardly anybody uses camera stands anymore – it’s much easier to scan the photo and do the pan & scan in software. You get more precision, can experiment more easily, and you don’t have to purchase & maintain the camera stand itself. Of course, if you’ve got a digital photo then this is the only way to go.

You can pan & scan high-resolution photos in both After Effects & Premiere Pro while maintaining their full resolution. This means you can zoom in on details without having the image get all pixelated and cruddy. If you’re editing a piece that involves using stills, then it’s better to do the pan & scan in Premiere Pro. The steps are basically the same whether you do it there or in AE.

First of all, import your photo using the standard File > Import command. Premiere Pro imports photos at a default duration of 5 seconds, but you can change this.

pan_zoom_1.jpg
The Project Panel tells me that my photo is 2592 x 1944, which will let me zoom in very close at full resolution.

If you’d like your image to run for a longer or shorter duration, right-mouse click on your image file in the Project Panel, select Speed/Duration, and enter your desired duration.

Then, cut the image into your sequence in the same way you would a video clip. Once it’s in your timeline, click it and then open the Effect Controls Panel (usually docked behind the Source Monitor), and click on the triangle to the left of “Motion” to twirl down the Motion properties.

pan_zoom_2.jpg
If you don’t see the Current Time Indicator on the right side of the Effect Controls, click the white button with the 2 left-facing triangles to reveal it.

For this example, we’ll start out zoomed in real close, then zoom out to reveal the entire image. If you want to start zoomed out you can do the next steps in reverse, but before you do anything you need to make sure that the Anchor Point is set on the object you want to zoom out from or zoom in to. When you click on the word “Motion” in the Effect Controls, the Anchor Point (the little circle with the “X” in the middle) appears on the center of your image. To move it over your “object of focus”, click & drag on the Anchor Point values in the Effect Controls until the Anchor Point is centered over your object (or face, or whatever).

Now you’re ready to animate. We’ll begin with a basic camera move, and then you can modify to your taste. Let’s have the image start out still for 1 second, then zoom out over the course of 2 seconds. Move the Current Time Indicator (CTI) in the Effect Controls Panel ahead by 1 second (use the timecode in the lower-left corner as a guide). Then, set initial keyframes for Position, Scale, and Rotation by clicking on the stopwatches to the left of their names. Double-click the Rotation value and set it to -40 degrees, or something similar. This is the starting position of your camera move.

pan_zoom_3.jpg

Next, move the CTI ahead by 2 seconds. Set Rotation back to 0, and click-and-drag on the Position and Scale values to “zoom out” to your final camera position. Premiere Pro automatically adds new keyframes at the current CTI position.

Go ahead and roll back in your timeline to the shot right before your panned & scanned image, and play back to see how your camera move works in the context of the timing & pacing of your edit. You might want to adjust the duration of the camera move, which can be done simply by moving the keyframes, or you might want a more fluid camera motion. By default, Premiere Pro creates a camera move that starts & stops on a dime – in other words, it’s not particularly elegant. Now, if you’re cutting an MTV-style piece with really fast pacing, this might be what you want, but in most cases you’ll want the camera to ease out of its initial position and ease in to a smooth landing. Start by clicking-and-dragging across the initial 3 keyframes to select them all. Then, right-mouse-click on any of them and from the pop-up menu select Temporal Interpolation > Ease Out.

pan_zoom_4.jpg

Repeat for the ending keyframes, this time selecting Ease In. Roll back and play your adjusted camera move.
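
If you’re curious what Ease Out / Ease In actually does to the values, here’s a rough sketch. I’m not vouching for Premiere Pro’s exact interpolation curve, so this uses a standard “smoothstep” ease purely to show how eased keyframes differ from linear ones:

```python
# Linear vs. eased keyframe interpolation for the Scale property.
# The smoothstep curve here is a stand-in, not Premiere Pro's exact math:
# the point is that velocity is zero at both ends, so the "camera"
# eases out of its start and eases in to a smooth landing.

def linear(t: float) -> float:
    return t

def eased(t: float) -> float:
    return t * t * (3 - 2 * t)  # smoothstep: gentle start and stop

start_scale, end_scale = 300.0, 100.0  # zooming out over the move

for step in range(5):
    t = step / 4  # progress through the move, 0.0 -> 1.0
    lin = start_scale + (end_scale - start_scale) * linear(t)
    smo = start_scale + (end_scale - start_scale) * eased(t)
    print(f"t={t:.2f}  linear={lin:6.1f}  eased={smo:6.1f}")
```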

At this point you can treat your image as you would any video clip – e.g. you can add effects and transitions if you wish. With some photos, you might notice a certain degree of “interlace flicker” as the image pans & scans. If that’s the case, increase the amount of the Anti-flicker Filter in the Effect Controls and that should make it look much nicer.