October 10, 2016

Understanding Ingest Settings

In the 2015.3 release, Premiere Pro added a new tab in Project Settings: the Ingest tab. This new tab adds functionality that supports a wide range of workflows. The next few blog posts will break down the different options and explain the workflows they enable. Stay tuned!


Ingest Settings options.

February 2, 2015

Multichannel Audio in Premiere Pro

“You know, it’s a shame that Premiere Pro can’t output more than stereo audio.”

“What?? Are you kidding?? Premiere Pro handles multichannel audio output great. Up to 32 channels, depending on formats. Where’d you get that idea?”

“I’ve tried it. Even if I set my output to multiple channels, I end up with a stereo pair on A1 & A2, and a bunch of blank channels. It doesn’t matter how many tracks I have in my timeline, it all mixes down to Stereo.”

Does this conversation sound familiar?

It’s becoming more and more common for people to want/need to output more than a stereo mix down from a Premiere Pro sequence. Some need multiple languages in a single file. Others want to keep a discrete output of a music track, or a voiceover, so it can be updated later. Yet most people have trouble understanding how this is done in Premiere Pro, and it’s one of the biggest differences between Premiere Pro and FCP7.

It All Starts with the Sequence

Take a look at any existing sequence you may have, and scroll all the way to the bottom of the audio tracks. You’ll see something there called a Master track.

The Master Audio Track

The Master track is the mixed-down version of all the other tracks in the sequence. It doesn’t matter if you have a single audio track or 99 audio tracks – they all are getting mixed down into this Master Track. It’s similar to the way an audio mixing board works – you have a slider for each input track, but you also have a Master slider at the far right of the mixing board. It’s one extra place you can make a final adjustment of the mix volume before going out. For example, if you like the overall mix, but it’s just a little too hot, you can use the master slider to nudge the volume down a bit. This preserves the overall mix, but lowers the final output volume.

Why is this important to multichannel output? Take a look at this icon on the Master track:

This icon indicates a Stereo Master track.

That’s a STEREO master track. Most of the sequence presets that come with Premiere Pro are set by default to use a Stereo Master track. If we want to output multichannel audio, we’ll need to use a multichannel audio track. There’s currently no way to change the Master audio track in a sequence, so it’s best to start with a new sequence, and copy/paste from your existing sequence into the new sequence.

Start by creating a new sequence (File-New-Sequence) and pick out the format that best suits your resolution and frame rate. But before you click OK, go to the Tracks tab at the top of the panel.

Creating a New Sequence with a Multichannel Audio Master track.

In order to output multichannel audio, you need to use a Multichannel Master track. This will enable you to output up to 32 discrete channels of audio. It’s also flexible – the number of output channels can be changed at any time in the sequence or in the Audio Track Mixer.

For those who use a lot of multichannel output, be sure to save a new sequence preset using a Multichannel Master track, and take note of the default pan and channel assignment controls found here in the Tracks tab of the New Sequence box – we’ll reference them later in this tutorial.

Using the Multichannel Master Track

The Multichannel Master track is really flexible – it can be set to 2 channels for stereo output, or any number of channels up to 32. To change it, just click on the number in the track header here:

Click here to change the number of output channels at any time.

You can also adjust the number of output channels in the Audio Track Mixer here:

Select the number of output channels from the Audio Track Mixer.

Notice that when you change the number of output channels, the VU meter to the right of the timeline also changes – it matches the number of output channels. There are Solo buttons at the bottom to listen to individual pairs of channels.

VU Meters with 8-channel output selected.

Let’s take a further look at the Audio Track Mixer, because this is also where you assign tracks to the various output channels. Notice that by default, all of the audio tracks are assigned to output 1+2. This means that currently, the mixer is mixing every track down into output channels 1 and 2. In the mixer, go to Track A2, and change the output assignment to 3+4. Notice that you need to uncheck 1+2 and check 3+4 to make this happen – it’s very easy to duplicate audio out to discrete tracks this way! For example, let’s say you need to cut a promo with mixed music, SFX, and V/O tracks, but also want to keep discrete versions of all 3 tracks:

Audio routed in the Audio Track Mixer.

As this shows, I’ve assigned all the audio into 1+2, so this will be my mixed version. But I’ve also routed copies of each track into 3+4, 5+6, and 7+8, respectively. This way, the output file will still contain “clean” versions of the music, VO, and SFX. Someone else could recut the promo later with a different voiceover, or change out the music, just by using these extra tracks in the finished file. (That assumes, of course, that the playout server knows to ignore these extra channels for play-to-air.)

The Pan knobs in the mixer also affect the output – if I have mono content in a track, and the pan knob is left at the default (center), then it will go to both assigned output channels equally. If I need it to go discretely into a single channel in the output, I need to pan the track either left or right, depending on which channel I want the audio to go to.

For example, if I have a mono V/O in A4, and I want it to ONLY be in A9 of my output file, I would do the following:

Assign A4 to output 9+10
Pan to -100 (left channel)

It should look like this (I labeled the track to Mono VO to avoid confusion):

A4 set to output a single mono channel to output 9.
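To make the routing logic concrete, here’s a small Python sketch – not Premiere Pro code, and the linear pan law is an assumption for illustration (Premiere’s actual pan curve may differ) – that computes which output channels a mono track feeds:

```python
def route_mono_track(output_pair, pan):
    """Compute per-channel gains for a mono track assigned to an
    output pair, using a simplified linear pan law (an assumption
    for illustration only -- not Premiere's actual pan curve).

    output_pair: tuple like (9, 10) -- the assigned output channels
    pan: -100 (full left) .. 0 (center) .. +100 (full right)
    """
    left_ch, right_ch = output_pair
    left_gain = (100 - pan) / 200.0   # pan -100 -> 1.0, pan +100 -> 0.0
    right_gain = (100 + pan) / 200.0  # pan -100 -> 0.0, pan +100 -> 1.0
    # Keep only the channels that actually receive signal
    return {ch: g for ch, g in ((left_ch, left_gain), (right_ch, right_gain)) if g > 0}

# The mono V/O example above: assigned to 9+10, panned hard left
print(route_mono_track((9, 10), -100))  # {9: 1.0} -- audio only on channel 9
print(route_mono_track((9, 10), 0))     # {9: 0.5, 10: 0.5} -- both channels
```

Panned hard left, all the signal lands on the odd channel of the pair – which is why the V/O ends up discretely on channel 9.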

Most people I know who use multichannel audio like to set up the mixer the same way each time – remember that the output channel assignment AND the Pan control can be given default settings in the New Sequence box. Having the mixer output pre-set in a custom sequence preset is the key to using this quickly and efficiently.

Saving a sequence preset with default settings makes things much easier.

Out of the Pan, into the Output Settings

The output from Premiere Pro is also super-flexible – you can pare off unwanted channels for a specific output, add additional blank channels for file compatibility with playout servers, and more. But, with great power comes great responsibility. You’ll want to check to make sure your output settings match what you want, or else your rendered file may be different from what you’re looking for.

Select your sequence, and go to File-Export Media. Then click on the Audio tab.

Audio Tab in Export Settings.

The number of channels available in the Export Settings depends greatly on the file format and the audio codec you choose. In this example, I’m using MXF OP1a as my output format and AVC-Intra 100 as my video codec. This format typically carries uncompressed audio, and that’s what’s shown in the Audio tab.

The number of output channels can sometimes be limited by the video codec as well. Certain camera formats support only specific channel counts. In AVC-Intra 100, I can’t choose 12-channel output. I have to choose 10-channel or 14-channel, or it wouldn’t be a valid AVC-Intra 100 file.

Another example would be IMX-50. This only supports 2, 4, or 8-channel audio.

Export with IMX50 MXF OP1a. Note the limited output options.

For maximum flexibility for audio, QuickTime uncompressed audio is very open, with choices ranging from 1-32 tracks, and special output options for 5.1 sound as well.

QuickTime Export options for uncompressed audio – note all the choices for multichannel audio.

So, what happens if my Master Audio track in my timeline and my Export settings don’t match?

Premiere and AME will use the Export settings, and add/remove tracks as necessary to make them match. For example:

If my Master Audio track is set for 16 channels, and I set the Export for 8 channels, the resulting output file will have the first 8 channels from my sequence. Anything assigned to channels 9-16 will NOT be in the output file.

If my Master Audio track is set for 8 channels, but my playout server needs 16 channels, I can set the Export Settings for 16 channels. The exported file will have the exact audio from my sequence – all 8 channels routed the way I assigned them – in channels 1-8. It will also have blank channels 9-16 so that the file will pass the QC system and play on my broadcast server.
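The add/remove behavior above boils down to a simple list transformation – this Python snippet is purely illustrative (it isn’t anything Premiere actually exposes), padding or truncating a list of sequence channels to match the export channel count:

```python
def conform_channels(sequence_channels, export_count):
    """Pad with blank channels or truncate so the output matches
    the export setting (illustrative sketch, not Premiere code)."""
    if len(sequence_channels) >= export_count:
        # Export has fewer channels: only the first N survive
        return sequence_channels[:export_count]
    # Export has more channels: append blanks for server compatibility
    return sequence_channels + ["(blank)"] * (export_count - len(sequence_channels))

seq16 = [f"ch{i}" for i in range(1, 17)]
print(conform_channels(seq16, 8))       # first 8 channels survive, 9-16 dropped
print(conform_channels(seq16[:8], 16))  # 8 real channels + 8 blank channels
```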

Quick Recap

The Master Audio track in the timeline determines the maximum number of discrete audio tracks that can be in the output file. For example, if the Master Audio track is a Multichannel track, and it’s set for 16 channels, that’s the maximum number of output channels available.

Routing of the tracks in the timeline is done in the Audio Track Mixer. Tracks on the timeline aren’t tied to particular output channels – it’s up to the editor to assign tracks to specific audio output channels.

The Audio tab in the Export Settings box has an additional choice for number of output tracks. The number of tracks depends on the format – some formats are much more rigid in the number of audio tracks. QuickTime AAC, for example, only supports 2-channel. QuickTime Uncompressed audio will support up to 32-channel. If the number of channels in the Export settings is different from the number of channels in the Multichannel Master track in the sequence, Premiere will either add blank tracks or remove tracks. For example, if the Multichannel Master is set for 32 channels, but the export setting is only set to 16 channels, the exported file will only have the first 16 channels. Another example – if the Master track was set to 2 channels, but the export setting was set for 16 channels, the output file would have 2 channels with active audio, and 14 blank channels. (Useful for playout server compatibility.)

December 27, 2014

Understanding Premiere Pro Metadata

This was originally a quick answer to a question on the Moving to Premiere Pro forum on Facebook, which is where you’ll find me most of the time. This blog doesn’t get used too often these days due to Twitter and Facebook, but I still maintain it for more complicated questions like this one.

Someone asked a series of questions about where Premiere Pro stores metadata, and why there are a lot of duplicate fields in the metadata panel, in two separate groups.

The Adobe engineers have found that people use metadata in one of two ways, typically – some use it on a project-specific basis, adding notes, descriptions, and commentary that relate specifically to that one project. In this case, the raw assets shouldn’t hold the metadata, as it could confuse other editors using the material. The other group wants to add metadata to the raw clips, so all editors, present and future, can take advantage of the information added.

Basically, Premiere Pro has two systems in place to accommodate either workflow – it can store metadata in the individual files (or an adjoining sidecar), or it can store metadata as a clip property in the project file itself. Plus, there’s a special way to “link” these two systems together, in cases where you want to auto-populate from project to asset, and vice versa.

The easiest way to see these two systems is to pull up the Metadata panel:


Any metadata entered in the top portion (under Clip) is just local project metadata. It’s tied to the selected clip, but nothing gets modified in the original file (or XMP sidecar). This is great for people who are marking up clips in a way specific to this project. Nothing added will be visible in other project files.

Any metadata entered under the bottom portion (under File) is embedded into the original asset if possible, and placed into an XMP sidecar if it’s not possible. (varies depending on read-access and file format.) This is great when you want to permanently mark up a clip for use with multiple editors, or add metadata that others will use in the future. However, be advised that this is one of the ONLY times that Premiere Pro could be modifying your original assets.

There is an exception to this Clip/File separation – if you look at the screen grab, you’ll notice that the Description field in both areas has a little chain link icon on the right-hand side. This indicates that anything entered in one area will automatically be copied into the other. If you don’t want this information copied from the project into the file and vice versa, click the little chain icon and turn off the link.

BTW – the Project bin has a slightly different view of the Clip properties, and anything entered there will also show up in the Metadata panel. It’s just a columnar, multi-clip view. Here’s part of the Project bin showing the Description field for the clip in the above example:


If you want to modify the column view in the Project bin, use the bin flyout menu, and choose Metadata Display.

Once a property is linked in the Metadata panel, it doesn’t matter where you enter the metadata – for example, if Description is linked, you could enter it directly in the Project bin, and you’d see it show up under Clip metadata and (since it’s linked) in the File metadata.

The Link icon won’t magically copy metadata that’s already entered, so it won’t overwrite anything unless you type something new.

Any questions?


Question 1: What do you mean that Premiere Pro sometimes writes in the original file, and sometimes uses a sidecar?

In the Metadata panel, under the File section, Premiere Pro is capable of adding metadata to the original source media. It will do this if the file format supports metadata in the header, and the read/write privileges are enabled on the file. Common file formats like MOV typically support metadata in the file header, but there are some exceptions. For example, M2T has no place for common metadata. In these cases, or in cases where the files are locked read-only, Premiere Pro will create an XML file in the same location as the original clip, and store the File metadata there.
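The decision logic can be sketched in a few lines of Python – the list of embeddable formats here is a rough illustration I’m assuming for the example, not Premiere’s actual internal table:

```python
import os

# Formats whose headers commonly accept embedded metadata -- a rough,
# illustrative list for this sketch, not Premiere's actual table.
EMBEDDABLE = {".mov", ".mp4", ".wav"}

def metadata_destination(path):
    """Decide where File metadata would land: embedded in the asset's
    header, or written to a sidecar next to it (illustrative only)."""
    ext = os.path.splitext(path)[1].lower()
    writable = os.access(path, os.W_OK)
    if ext in EMBEDDABLE and writable:
        return "embed in file header"
    # Unsupported format (e.g. .m2t) or read-only file: use a sidecar
    return "sidecar next to " + path
```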


Question 2: I want a global way of turning on/off the linking of Clip and File Metadata. Is it possible?

As of Premiere Pro CC 2014.2, there is a setting for this. Look under Preferences -> Media -> Enable Clip and XMP Metadata Linking.


Question 3: You refer to XML files, but the software talks about XMP. What’s the difference?

XMP is a very specific form of XML, used for metadata for photos, images, and now video. It was originally developed by Adobe, but is now an ISO standard. More info here: http://en.wikipedia.org/wiki/Extensible_Metadata_Platform


January 19, 2014

Hong Kong in 4k on a Galaxy Note 3???

I recently visited Hong Kong on a business trip, and actually ended up with a rare free day to do something. Unfortunately, my DSLR is in for repair right now, and the only camera I had with me was the one on my phone – the Samsung Galaxy Note 3.

The Note 3 impressed me on its spec sheet by being able to shoot full motion video at 3840×2160, or UHD resolution. (Sometimes also called 4K) However, I was pretty skeptical whether this tiny camera could really be useful at that resolution. The tiny form factor and the sensor size didn’t seem capable of such resolution. Still, since it was the only camera with me, I figured it was worth a test. I decided to shoot everything in UHD, and then deliver an edit in both UHD and 1920×1080, taking advantage of the higher resolution source material by panning/zooming around.

I shot in a wide variety of lighting conditions, both during the day and at night, taking full advantage of my free day, and a little bit of extra time the day afterwards.

Here’s the cut, posted on YouTube in full UHD resolution:

HK4K: The Galaxy Note 3 Edit

Honestly, I was pleasantly surprised by what I could accomplish with this little camera. It’s probably too noisy and compressed to be of much use to anyone really needing to master in 4k, but I enjoyed the latitude in reframing shots for 1080p. There are a couple of quirks that would keep me from using this in real, paid production work, but it would definitely be a camera that I’d set up if I needed extra coverage of an event.


The camera DESPERATELY needs some kind of stabilization. Rolling shutter is bad on this sensor, and the H.264 recording codec doesn’t like a huge amount of motion. I used 2 tricks to help fix this – first, I found a small cell phone clamp at a local electronics store for less than $10.

Tripod mount and mini-tripod for mobile phones.

This one even came with a small tripod. The second technique I used was to press the phone onto the glass of a window. It worked great on several shots on board the trams. You get whatever the glass is facing, and don’t have any other options, but the image is rock-solid stable.

Auto-iris cannot be turned off on the phone, so I had to fix this a few times in Premiere Pro with animated Brightness/Contrast filters.

There are also moments where horizontal lines will appear in the frame, almost like the codec just couldn’t handle the content. These were rare, but visible in a few shots.

The most annoying problem, which I couldn’t fix, and you’ll see in the final video, is what I call the “focus Pop” or autofocus “snapback.” About every 30 seconds, you’ll see a moment where the entire picture seems to go “pop”. Seems like the lens just gets tired of holding the same focus for too long, and, for lack of a better phrase, it “blinks.” This is a real pain – I’m hoping Samsung has a firmware fix, or some Android developer takes a look at it. The only solution I found for this was to edit around it as much as possible. I left it in a few shots to show people what I’m talking about, and it kind of added to the future feel of the video.

Trying to play the clips back in QuickTime was painful. The MP4 files this generated wouldn’t play back smoothly from the operating system. However, the Media Browser in Premiere Pro performed well with the shots. So did the hover scrub thumbnails in the project bin.

(As an aside – set yourself up a “junk” Premiere Pro project on the desktop, and use it ONLY for media browsing. Makes life so much easier.) 

The footage drops directly into Premiere Pro without any need for transcoding. I found that setting playback at half res was perfect for my 2011 MacBook Pro. Premiere Pro uses a pretty straightforward way of adjusting quality/performance. In the Source and Program monitors, there’s a menu for visible resolution – full, 1/2, 1/4, etc. Set it as high as it’ll go without dropping frames, and you’ve balanced your performance for your hardware.

In Premiere Pro, one thing you’ll notice right away is that the frame rate of the clips doesn’t always conform to 29.97fps. The majority of clips actually came in at 30.02 fps, and some had other frame rates. I tried using the default setting at first, but ran into trouble with some of the speed changes later on. Don’t use the trick of dragging/dropping clips onto the New Sequence Button. The wonky frame rate will create a timeline with a time base of 10.00 frames per second. For best results, I selected all the clips, and used Modify Clip -> Interpret Footage to set all the clips at 29.97fps. This can be done once and forgotten about – it’s not a rendered function. This didn’t affect the visible playback rate of the clips, which makes me think there’s something odd about the default rate in the clips. Maybe Premiere Pro isn’t detecting it properly, or the metadata in the clip is written wrong.

Rather than deal with the nonstandard frame rate, I created a custom sequence preset at 29.97fps at 3840×2160 resolution, and dropped all the clips into that.

The Warp Stabilizer was used on multiple handheld shots, trying to get slow, smooth motion. Don’t even try it for really shaky stuff – the wobble and the blurring are too much even for the Warp Stabilizer. But holding a high shot, and trying to be steady, worked really well with warp stabilizer. You won’t be able to tell those weren’t tripod shots.

I had to get artistic with the sunset – I’m still not happy with it, but I was only able to shoot it once. No second chances. The camera shoots really wide-angle, and the time-lapse I did lacked any focal point for the eye, so I gave up trying to deliver in 4k at that point. I added a cut and zoomed in 200% to get a decent framing. Even at 1080p, that shot is soft and artifacted. I added some noise to balance out the blockiness, but it’s still visible.

I did some minor color work by using the new “Direct Link to SpeedGrade” function. I will say that my 2011 17″ MBP kept up admirably up until this point. SpeedGrade was great, but then bringing the project back to Premiere forced me to render some sections of the timeline before it would play back. The Lumetri Effect that SpeedGrade adds is heavy, and my 3 yr old GPU wasn’t up to the task. (Would love to try this on a Retina MacBook. Hint Hint to my boss! 🙂 )

Oh, one important note on rendering previews – In Premiere Pro CC 7.2 and higher, you can now edit the Sequence Settings! I was able to change the Preview settings to QuickTime, ProRes, 3840×2160, and get full 4k previews for the rendered sections of my sequence.

All in all, this was a fun project to play around with. I hope that someone fixes the lines and the “focus pop” in the camera – here’s hoping that it’s just a firmware issue or a camera app issue. If those two problems are addressed, this will make an excellent pocket “2nd coverage camera” for 1080p shooters, with lots of room to reframe what you get.


April 24, 2013

NAB 2013 Sneak Peeks

Wow. Where to begin? NAB this year was one of the best shows I’ve attended in a long time. Attendance was up, great crowds at the Adobe booth, and the reactions to the sneak peeks were very very positive.

There are a lot of different videos showcasing what we showed at NAB here: http://tv.adobe.com/show/adobe-at-nab-2013/


Premiere Pro is adding so many new features that some of my favorites have been overlooked:

1. It will now be possible for 3rd-party effects to be GPU-accelerated. Yep, for the first time, 3rd party effects can take full advantage of the Mercury Engine’s real-time performance. The engineering group is working with plug-in makers now to show them how it’s done. Can’t wait to see what comes from that.

2. Smart Rendering is now possible for many new formats. ProRes? Yup! DNxHD? Yup! Plus many more – including some added flavors of QuickTime. As soon as I have a full list of formats, I’ll post it. This is going to speed up a lot of renders by as much as 10 times, and will make the “Use Previews” function in final output render quicker too.

Those are just 2 examples of the multitude of new features coming soon – keep your eyes open for more examples coming soon.


December 17, 2012

On 48fps

I got the opportunity to see The Hobbit this week in mini-IMAX, 48fps 3D, and I think I’ve figured out what I both love and hate about this new format.

First, my take is that some of it works beautifully at 48fps. The scenes with Gollum make him look so incredibly real, I half expected him to come out at the end of the screen and answer questions from the audience. He’s actually there in every scene – no hint of being a digital character.

But, some of it doesn’t work well. Too often, there are scenes that pull you completely out of the moment. People have described it as “suddenly watching Masterpiece Theatre from 1978” or “everything looks fake,” but no one can put their finger on why. I think I figured it out.

Camera motion.

I felt most immersed in the 3D and the movie when the camera was static. Simple cuts between shots allowed my brain to process what was going on, and made me feel like I was standing there, watching the action unfold. And it looked beautiful. But as soon as a jib arm or dolly began making my point-of-view float away, something deep in my brain called BS on everything. Suddenly, the magnificent Shire looked like nothing more than a well-crafted set. I suddenly knew I was looking through a camera lens on a jib arm at a set in a sound stage. That’s what people are trying to articulate here – an experience made using film techniques developed for 24p isn’t going to work the same in the hyper-real world of 48p. The format is going to need a language and style all its own, and that’s going to take time.

Just as 3D requires its own language, and even B&W and color require different tools and techniques, this new world of High Frame Rate (HFR) will require a re-learning of filmmaking techniques to make it work. And I think we need to start by looking at how we move the camera. I think HFR necessitates a “reality” perspective on the scene. What we think of as “high production value” at 24p (cranes, jibs, dollies) works against us at 48p. And we associate that look with videotaped shows of the 1970s because that’s where we first saw it.

We may also have to revisit set dressing, making the sets as absolutely “real” as possible. Maybe, maybe not.

BTW – the 3D on this film was flawless. Beautiful. You don’t even think about it.

I want to see the movie in 2D, 24p as well, just to compare and contrast. I’ve seen on Twitter that at 24p, there’s motion blur, and it looks like a “normal” movie.

December 13, 2012

4k presets for Adobe Media Encoder

I was asked the other day when Adobe will offer a 4k output from the H.264 encoder. The answer is simple – it’s there today!
Currently, the Media Encoder (and, by association, Premiere Pro) doesn’t ship with presets for 4k H.264 output, simply because it’s not a common enough format. But it’s reasonably simple to make your own.
I won’t go into all the details of H.264 encoding here, but the key thing to know is that the encoder settings include something called Profile and Level. These settings set limits for what type of file you can create.
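To give a sense of what Level actually constrains, here’s a quick Python check against a few of the H.264 level limits (the frame-size and macroblock-rate numbers come from the spec’s level tables; the function itself is just an illustration, not part of any Adobe tool):

```python
# Max frame size (in 16x16 macroblocks) and max macroblocks/sec
# for a few H.264 levels, per the spec's level-limit tables.
LEVELS = {
    "4.0": (8192, 245760),
    "4.2": (8704, 522240),
    "5.0": (22080, 589824),
    "5.1": (36864, 983040),
}

def fits_level(width, height, fps, level):
    """Check whether a resolution/frame-rate combo fits a given level."""
    max_fs, max_mbps = LEVELS[level]
    mbs = -(-width // 16) * -(-height // 16)  # macroblocks per frame (ceiling)
    return mbs <= max_fs and mbs * fps <= max_mbps

# UHD at 29.97fps needs at least Level 5.1:
print(fits_level(3840, 2160, 29.97, "5.0"))  # False -- frame exceeds the limit
print(fits_level(3840, 2160, 29.97, "5.1"))  # True
```

This is why a 4k preset has to bump the Level up – a 3840×2160 frame is simply too big for the levels that 1080p presets use.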

I’ve created some sample presets for you to play with – these use a really small bit rate, so you may want to tinker with the quality some and make your own presets accordingly, but these will get you started.

Click here to get ’em:
4k Presets

Install them in Adobe Media Encoder by going to the Preset Browser, and clicking the button to Import Presets:

December 9, 2012

Avoiding RAM Starvation: Getting Optimum Performance in Premiere Pro

Something I wanted to share for all you “build-it-yourself” users. Recently, I helped a customer build out a really beefy system – 16 physical cores, plus hyperthreading, 24 GB of RAM, Quadro 5000, etc.

The system wasn’t rendering well at all. Task Manager showed each processor only hitting about 19–20%. My MacBook Pro was actually handling the same tasks MUCH faster.

This was a classic case of Processor RAM Starvation. With Hyperthreading turned on, the system was showing 32 processors, and there wasn’t enough RAM to drive all those processors! Some processors had to wait for RAM to free up, and the processors that finished their calculations had to wait for THOSE processors to catch up. It’s a really bad state to be in. With multiple CPU’s, everything has to happen in parallel, so when some threads take longer to finish, everything comes to a screeching halt.

I turned off hyperthreading, and suddenly the system started to just FLY – all the CPUs were being utilized effectively and roughly equally. Render times were 10–20x faster.

I can’t stress enough the need to ‘balance’ the system to get proper performance. There’s never a danger of having “Too much RAM”, but too many processors is not necessarily a good thing!

You can check this on your system – using the stock effects, when you render previews or render your output files, you should see all the CPU cores being utilized. They won’t exactly be used the same amount, but roughly, they all should be about the same for common tasks.

Also, a BARE MINIMUM amount of RAM I recommend for Premiere Pro is 1GB per core. If your budget can afford it, 2GB per core is pretty optimal for a Premiere Pro system. 3GB per core isn’t required, but isn’t a bad thing. If you are trying to decide between 4 cores, 8 cores, 12 cores, or 16 cores, let the amount of RAM be your guide – look at the cost of 2GB per core, and pick the CPU accordingly.
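The rule of thumb is easy to turn into arithmetic – this tiny sketch just applies the 1GB-minimum / 2GB-recommended per-core figures from this post:

```python
def ram_recommendation(cores):
    """Apply the per-core RAM rule of thumb from this post:
    1 GB/core bare minimum, 2 GB/core recommended."""
    return {"minimum_gb": cores * 1, "recommended_gb": cores * 2}

# The 16-core system from the story above:
print(ram_recommendation(16))  # {'minimum_gb': 16, 'recommended_gb': 32}
```

Note that the problem machine had 24 GB feeding 32 logical processors with hyperthreading on – well under the 2GB-per-core mark, which is exactly the starvation described above.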

UPDATE: Some of the feedback I’m getting on Twitter suggests that this points to Premiere Pro needing extreme amounts of RAM. No – that’s not it at all. RAM needs to be balanced with the number of cores. The days of just “getting the best CPU” are past. Modern processors are actually multiple CPUs on a single chip, and each one needs to allocate its own chunk of RAM to operate at peak efficiency.

On a dual core processor, 4GB of RAM is a reasonable amount of RAM to have, and 6-8 GB would be pushing into that “it ain’t a bad thing” category. A 4-core processor runs great on 8GB of RAM, which is what I have in my MacBook Pro. RAM is really cheap nowadays – I think I just paid about USD$40 for 8 GB on my son’s computer, and 16GB is less than $80 right now for a desktop system. Remember, it’s about balance, people…

SECOND UPDATE: If you’re an old Classic Auto tinkerer, like I used to be, think of it this way – the CPU is like the engine block, and the cores are the number of cylinders. Each cylinder needs fuel and air delivered to it. RAM is like the carburetor – it provides what the cylinders need. But you have to match the right carburetor to the size of the engine. A wimpy carburetor on a V8 engine is a disaster – low horsepower, and because the V8 is heavier, it’ll be outperformed by a properly tuned 4-cylinder engine.

Clear as mud? 🙂

December 2, 2012

Premiere Pro and QuickTime and Nikon, OH MY!

This post is going to get a little techy and geeky – I want to take a minute and explain the relationship between Premiere Pro and QuickTime. I feel it’s important to understand, so that you’ll also understand why it’s sometimes necessary to change the file extensions on some .MOV files in order to get them to play properly in Premiere Pro. This mostly seems to affect Nikon owners, but can be a workaround for certain other types of cameras as well.

Premiere Pro actually has its own built-in system for decoding files, and Adobe works with the camera manufacturers and codec owners to ensure that the majority of cameras and codecs are supported directly.

For certain codecs, like H.264, there are a number of wrappers for the file – an H.264 file can come in a QuickTime .MOV file, an .AVI file, or an .MP4 file.

In the case of a QuickTime .MOV file, Premiere Pro will generally let QuickTime handle the decoding of the file, unless there’s metadata in the file header that suggests otherwise. If there’s nothing in the header, it just hands the file off to QuickTime, and performance is reliant on QuickTime for decode and playback. This is required for a number of codecs, since many QuickTime codecs only exist inside the QuickTime framework (ProRes, for example). And the performance can be very good with QuickTime files. However, that’s not the case with certain codecs. For example, decoding H.264 files with QuickTime can sometimes cause less-than-ideal performance in Premiere Pro. Some of the QuickTime codecs are really optimized more for viewing and playback than for editing.

In the case of Canon DSLR files, there’s something in the file header. Premiere Pro can recognize that the clips came from a Canon camera, and bypass QuickTime. This enables Premiere Pro to have smooth playback of DSLR files, and get better dynamic range from the clips. Premiere will use its own built-in decoder, which is optimized for editing, and respects the extended color used by the Canon cameras.

For this reason, it’s sometimes necessary to force Premiere Pro to bypass QuickTime for a certain set of files. I tend to see this the most with certain types of Nikon DSLR cameras. For whatever reason, Premiere Pro cannot detect what camera these .MOV files come from, and it just hands off the decoding of the files to QuickTime, usually with less-than-stellar results.

So when I see a .MOV file performing badly within Premiere Pro, I first determine the codec used. If it’s some type of MPEG/H.264 derivative, I rename the file extension to .MPG manually in the Finder or Windows Explorer. This forces Premiere Pro to use its built-in MPEG decoders to decode the file, and will usually help playback performance a great deal.

If you run into this problem and deduce it’s from an H.264 file in a .MOV wrapper, you can use Adobe Bridge to batch-rename files very quickly, and without re-encoding the files. All Bridge does is change the three-letter extension on the existing files, so it can plough through hundreds of files in minutes.

In Bridge, select all the files you wish to rename, and go to Tools – Batch Rename. Then, set up the Batch renaming tool something like this:


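If you’d rather script the rename than use Bridge, the same extension swap is easy to do outside of it. Here’s a minimal Python sketch (the function name and folder path are my own placeholders, not part of any Adobe tool) that renames every .MOV in a folder to .MPG – only the extension changes, so nothing is re-encoded:

```python
from pathlib import Path

def retag_mov_to_mpg(folder):
    """Rename every .MOV file in `folder` to .MPG.

    Only the three-letter extension changes - the media data is
    untouched, so no re-encoding happens and the original quality
    is preserved.
    """
    renamed = []
    for f in sorted(Path(folder).iterdir()):
        # Match .mov / .MOV regardless of case; skip everything else.
        if f.suffix.lower() == ".mov":
            target = f.with_suffix(".mpg")
            f.rename(target)
            renamed.append(target.name)
    return renamed
```

As with the Bridge approach, keep a note of which files you renamed in case you ever need to hand the originals back to a QuickTime-only tool.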
October 12, 2012

Remembering Steve Sabol

One of the jobs I had in “a past life” was doing some technology work and training for NFL Films. I recently saw an article on the passing of Steve Sabol, and needed to take a moment and pause, reflect, and celebrate the life of a man I knew briefly in my career, but who left a lasting impact on me. I doubt he would’ve remembered me, but I definitely remembered him.

If you are a fan of the “mutant rugby” that Americans call Football, you need to thank Steve. The impact he had on promoting this game was unprecedented. He and his organization brought the drama, the conflict of Football, to television in a way that hadn’t been seen before. Some of the best camera operators worked at NFL Films, and week after week they’d shoot plays at crazy frame rates – over 120fps – just to capture those creamy slow-motion shots you’re used to seeing. I still crave owning a high-frame-rate camera because of what I grew up watching from Steve.

Steve introduced the concept of metadata to me. Back in the 90’s, NFL Films was sitting on a HUGE library of film, dating back to something like 1947. He called for the creation of a computerized system to catalog, tag, and digitize this library, and his team created a custom system called SABRE. Using SABRE, an editor could search for all the clips of Green Bay Packers linebackers with cold breath at an away game, and SABRE would deliver a list of clips, with low-res proxy files ready to view. All the asset-management systems of today? Steve’s team had them beat back in 1999. I remember asking one of the production staff about my brother-in-law, who played for the Raiders, and within 5 minutes, he handed me a tape with all his highlights cut together.

Steve was also thinking about the future of broadcasting – the division I worked with was NFL Films Online, a team of people hired to create the next generation of NFL Films. A lot of those early attempts involved extending shows that were broadcast on ESPN or the big 4 networks. We would take a show brand, like Edge NFL Matchup on ESPN, and create original online content. Many of the traditional shows were limited to 24 minutes of on-air time and couldn’t cover all the games, so our job was to bring the hosts in, let them talk about the games the same way they did on-air, and produce extended content that fans could watch on the web site. And we had to do it at 1/25 the cost of the on-air version. I have memories of training the production staff how to use a virtual set system we set up in an old film storage room. One of their virtual set designers was a 16-year-old kid who impressed Steve enough to get a job.

One of the last projects I remember working on was a virtual set demo for an NFL Owners’ meeting in Baltimore. I had to tech-direct the demo, showing live internet streaming (with Steve holding a clock as “proof”!) from another part of the building. Steve was presenting his vision for “building the Brand online,” and it worked – he convinced the owners to fund the project.

Steve was a class act, and what you saw in all those on-camera intros was what you got off-camera. He was a genuine, personable guy with a vision of what he wanted.


Copyright © 2016 Adobe Systems Incorporated. All rights reserved.