
April 24, 2013

NAB 2013 Sneak Peeks

Wow. Where to begin? NAB this year was one of the best shows I’ve attended in a long time. Attendance was up, the crowds at the Adobe booth were great, and the reactions to the sneak peeks were very, very positive.

There are a lot of different videos showcasing what we showed at NAB here: http://tv.adobe.com/show/adobe-at-nab-2013/

 

Premiere Pro is adding so many new features that some of my favorites have been overlooked:

1. It will now be possible for 3rd-party effects to be GPU-accelerated. Yep, for the first time, 3rd-party effects can take full advantage of the Mercury Engine’s real-time performance. The engineering group is working with plug-in makers now to show them how it’s done. Can’t wait to see what comes from that.

2. Smart Rendering is now possible for many new formats. ProRes? Yup! DNxHD? Yup! Plus many more – including some added flavors of QuickTime. As soon as I have a full list of formats, I’ll post it. This is going to speed up a lot of renders by as much as 10 times, and will make the “Use Previews” function in final output render quicker too.

Those are just 2 examples of the multitude of new features on the way – keep your eyes open for more.

 

December 9, 2012

Avoiding RAM Starvation: Getting Optimum Performance in Premiere Pro

Something I wanted to share for all you “build-it-yourself” users. Recently, I helped a customer build out a really beefy system – 16 physical cores, plus hyperthreading, 24 GB of RAM, Quadro 5000, etc.

The system wasn’t rendering well at all. Bringing up the Task Manager showed each processor hitting only about 19%–20%. My MacBook Pro was actually handling the same tasks MUCH faster.

This was a classic case of Processor RAM Starvation. With Hyperthreading turned on, the system was showing 32 processors, and there wasn’t enough RAM to drive all those processors! Some processors had to wait for RAM to free up, and the processors that finished their calculations had to wait for THOSE processors to catch up. It’s a really bad state to be in. With multiple CPUs, everything has to happen in parallel, so when some threads take longer to finish, everything comes to a screeching halt.

I turned off hyperthreading, and suddenly, the system started to just FLY – all the CPUs were being utilized effectively and roughly equally. Render times were 10-20x faster.

I can’t stress enough the need to ‘balance’ the system to get proper performance. There’s never a danger of having “Too much RAM”, but too many processors is not necessarily a good thing!

You can check this on your system – using the stock effects, when you render previews or render your output files, you should see all the CPU cores being utilized. They won’t be used exactly the same amount, but for common tasks, they should all be roughly even.

Also, a BARE MINIMUM amount of RAM I recommend for Premiere Pro is 1GB per core. If your budget can afford it, 2GB per core is pretty optimal for a Premiere Pro system. 3GB per core isn’t required, but isn’t a bad thing. If you are trying to decide between 4 cores, 8 cores, 12 cores, or 16 cores, let the amount of RAM be your guide – look at the cost of 2GB per core, and pick the CPU accordingly.

UPDATE: Some of the feedback I’m getting on Twitter suggests people believe this points to Premiere Pro needing extreme amounts of RAM. No – that’s not it at all. RAM needs to be balanced with the number of cores. The days of just “getting the best CPU” are past. Modern processors are actually multiple CPUs on a single chip, and each one needs to allocate its own chunk of RAM to operate at peak efficiency.

On a dual core processor, 4GB of RAM is a reasonable amount of RAM to have, and 6-8 GB would be pushing into that “it ain’t a bad thing” category. A 4-core processor runs great on 8GB of RAM, which is what I have in my MacBook Pro. RAM is really cheap nowadays – I think I just paid about USD$40 for 8 GB on my son’s computer, and 16GB is less than $80 right now for a desktop system. Remember, it’s about balance, people…
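If you want to sanity-check a build before ordering parts, here’s a minimal sketch of the guideline above in Python. The function name, and the choice to count logical (hyperthreaded) cores, are my own reading of the story above – not an official Adobe formula:

def recommended_ram_gb(physical_cores, hyperthreading=False):
    """Rough RAM sizing for a Premiere Pro workstation.

    Rule of thumb from this post: 1 GB per core is the bare
    minimum, 2 GB per core is a good target. Hyperthreading
    doubles the number of threads asking for memory, so this
    sketch counts logical cores.
    """
    logical_cores = physical_cores * (2 if hyperthreading else 1)
    return {"bare_minimum_gb": logical_cores * 1,
            "recommended_gb": logical_cores * 2}

# The 16-core, hyperthreaded system from the story above: 32 logical
# cores want roughly 64 GB, but it only had 24 GB -- RAM starvation.
print(recommended_ram_gb(16, hyperthreading=True))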

SECOND UPDATE: If you’re an old Classic Auto tinkerer, like I used to be, think of it this way – the CPU is like the engine block, and the cores are the number of cylinders. Each cylinder needs fuel and air delivered to it. RAM is like the carburetor – it provides what the cylinders need. But you have to match the carburetor to the size of the engine. A wimpy carburetor on a V8 engine is a disaster – you get low horsepower, and because the V8 is heavier, it’ll be outperformed by a properly tuned 4-cylinder engine.

Clear as mud? :-)

December 2, 2012

Premiere Pro and QuickTime and Nikon, OH MY!

This post is going to get a little techy and geeky – I want to take a minute and explain the relationship between Premiere Pro and QuickTime. I feel it’s important to understand, so that you’ll also understand why it’s sometimes necessary to change the file extensions on some .MOV files in order to get them to play properly in Premiere Pro. This mostly seems to affect Nikon owners, but it can be a workaround for certain other types of cameras as well.

Premiere Pro actually has its own built-in system for decoding files, and Adobe works with the camera manufacturers and codec owners to ensure that the majority of cameras and codecs are supported directly.

For certain codecs, like H.264, there are a number of wrappers for the file – an H.264 file can come in a QuickTime .MOV file, an .AVI file, or an .MP4 file.

In the case of a QuickTime .MOV file, Premiere Pro will generally let QuickTime handle the decoding of the file, unless there’s metadata in the file header that suggests otherwise. If there’s nothing in the header, it just hands the file off to QuickTime, and decode and playback performance is reliant on QuickTime. This is required for a number of codecs, since many QuickTime codecs only exist inside the QuickTime framework. (ProRes, for example.) And the performance can be very good with QuickTime files. However, that’s not the case with certain codecs. For example, decoding H.264 files with QuickTime can sometimes cause less-than-ideal performance in Premiere Pro. Some of the QuickTime codecs are really optimized more for viewing and playback than for editing.

In the case of Canon DSLR files, there’s something in the file header. Premiere Pro can recognize that the clips came from a Canon camera, and bypass QuickTime. This enables Premiere Pro to have smooth playback of DSLR files, and get better dynamic range from the clips. Premiere will use its own built-in decoder, which is optimized for editing, and respects the extended color used by the Canon cameras.

For this reason, it’s sometimes necessary to force Premiere Pro to bypass QuickTime for a certain set of files. I tend to see this the most with certain types of Nikon DSLR cameras. For whatever reason, Premiere Pro cannot detect what camera these .MOV files come from, and it just hands off the decoding of the files to QuickTime, usually with less-than-stellar results.

So, when I see a .MOV file performing badly within Premiere Pro, I first determine the codec used. If it’s some type of MPEG/H.264 derivative, I rename the file extension manually in the Finder or Windows Explorer to .MPG. This forces Premiere Pro to use its built-in MPEG decoders on the file, and it usually helps playback performance a great deal.

If you run into this problem, and deduce it’s from an H.264 file in a .MOV wrapper, you can use Adobe Bridge to batch-rename files very quickly, and without re-encoding them. All Bridge does is change the 3-letter extension of the existing files, so it can plough through hundreds of files in minutes.

In Bridge, select all the files you wish to rename, go to Tools – Batch Rename, and set up the Batch Rename tool to swap the .MOV extension for .MPG.
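If you’d rather script it, here’s a minimal sketch in Python that does the same non-destructive rename. The folder path is a placeholder, and just like Bridge, it only touches the extension – nothing is re-encoded. (Try it on copies first.)

from pathlib import Path

def rename_mov_to_mpg(folder):
    """Rename *.MOV files to *.MPG so Premiere Pro uses its own
    MPEG decoders instead of handing the files to QuickTime.
    Only the extension changes; nothing is re-encoded."""
    for mov in sorted(Path(folder).glob("*.MOV")):   # add "*.mov" on case-sensitive drives
        target = mov.with_suffix(".MPG")
        if not target.exists():          # never clobber an existing file
            mov.rename(target)
            print(mov.name, "->", target.name)

rename_mov_to_mpg("/path/to/nikon/footage")  # hypothetical folder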

 

October 1, 2012

Adobe Anywhere for Video

I’ve been quiet about a new technology coming from Adobe, called Anywhere for Video, not because I didn’t have anything to say. Rather, I’ve been trying to keep the excitement to myself until the time was right. Every time I get to play with the technology, I end up giggling hysterically, since my brain keeps trying to tell me what I’m doing shouldn’t be possible.

If you haven’t heard of Adobe Anywhere yet, start by watching this short-but-informative video: http://tv.adobe.com/watch/adobe-anywhere/introducing-adobe-anywhere-for-video/

For a more detailed technical understanding, read this post by John Montgomery from FXGuide: http://www.fxguide.com/featured/new-tech-adobe-anywhere/

I first got the opportunity to work with a VERY EARLY version of this technology back in Feb/March of this year. Keep in mind that this was an early “proof-of-concept” version, so I need to stress that what I played with may not represent the final product. It didn’t even have a name at that time. But what I got to touch was mind-blowing. I sat down at Premiere Pro, and began to edit. This was XDCAMHD 50 422 footage. JKL playback in the Source monitor was super-smooth. Inserting clips onto the timeline was super-smooth. Adding effects and transitions between clips worked just like I expected they would. The kicker? The footage was on a server over 1000 miles away. Quality during playback was nearly indistinguishable from the original media, and if I paused on a frame and blew it up full-screen, it WAS the original frame.

There are very few technologies that make me cackle maniacally, but Anywhere did it. Many, many times.

At NAB 2012, we did the first public-facing demonstration of this early collaborative editing technology. I edited onstage in Las Vegas, and then handed off what I was working on to Dan, working up in Seattle. The footage we were editing was on a server in San Jose, California. And passing an edit back and forth took only seconds.

I’ve since shown the technology around Asia-Pacific, and it gets the same reaction I had – this is the way remote editing should be, and the way collaboration should be. Anyone who has had to download massive files, or waited around for an overnight delivery, can relate to the power of Adobe Anywhere.

Anywhere also has fun implications across shorter distances. Doing massive numbers of layers in a multicam edit? Anywhere sends a single “stream” to your local machine, eliminating traditional bandwidth concerns across a facility network. Need 30 students working on the same source media? There’s no need to copy it to 30 workstations when Anywhere can serve up the footage without a massive fibre-channel installation.

While Adobe Anywhere has now been officially unveiled, it’s not going to be available until sometime in 2013. The most up-to-date information on pricing or availability will be at the Anywhere site here: http://success.adobe.com/microsites/adobeanywhere.html

It’s gonna be big. :-)

August 4, 2011

A ProRes workflow end-to-end

With the radical change going on right now in the world of Final Cut Pro, I’ve had some FCP7 users ask me about maintaining an end-to-end ProRes workflow in Premiere Pro. There are questions about whether it’s even possible. Well, I’m here to show you it IS possible, and how to make it go.

What do I mean by an “end-to-end ProRes workflow”? This means ingesting ProRes clips, dropping them right to the timeline, rendering previews when necessary to a new ProRes file, and outputting back to a ProRes master. While Premiere Pro works great with a wide variety of native camera formats, there are times when this workflow is a good idea. For example, using an AJA KiPro for capture, shooting with the ARRI Alexa, or working with ProRes media from an FCP timeline.

This particular workflow only works on a Mac system that has the ProRes encoder installed. There are a couple of ways to get this component, but unfortunately, they are not free. Most people using this workflow probably already have Final Cut Pro 6 or 7 installed, so you won’t have to worry. If you’re equipping a new Mac, you can also buy Motion 5 for under US$50 from the App Store. This will also get you the necessary codecs.

For Windows users, unfortunately, there is no ProRes encoder component available. That doesn’t mean you can’t use ProRes files – QuickTime for Windows does include the decoder. It just means that, if you render preview files in the timeline, you’ll need to use another codec, and you won’t be able to output back to ProRes. Until a ProRes encoder is released for Windows, that’s sadly going to be the case. So, technically, it won’t be a “full” ProRes workflow, but you’ll still get great results. On the bright side, Windows users have more options for Nvidia cards, which is a worthwhile investment, since GPU acceleration ELIMINATES the need to render previews in most cases anyway.

What makes this possible is the flexibility of Premiere Pro to input and output in pretty much any format the system has access to. Unfortunately, since Premiere doesn’t ship with ProRes encoding components, this’ll take a bit of time to set up. But once it’s set up, using it is really easy.

Setting Up Timeline Presets:

You’ll need to first set up some timelines that use ProRes as the Preview File format. It’s a good idea to create as many as necessary for the different resolutions and frame rates you’ll be working with. For this tutorial, I’m going to show you how to make a 1080p/24 timeline preset.

Open up Premiere Pro, and set up a “dummy” project. We just need to have a blank project open to access some of the settings in Premiere. In this picture, I’m using a project called “Untitled” that I use for stuff like this.

My universal "Untitled" New Project.



In the New Sequence panel, ignore all the existing presets! Most people assume incorrectly that these presets are the only formats that Premiere Pro can work with. I’m going to take you into the “guts” of how a Premiere Pro timeline is set up. Find the Settings Tab near the top:

 

Find the Settings Tab



 

Custom Sequence Settings panel - where the magic happens...



This is where the real power and flexibility of Premiere Pro lies – Premiere can essentially edit any format or file type that it can decode, and this includes working with QuickTime files.

What you’ll want to do here is start by making a Timeline preset for ProRes 422 at a resolution of 1920×1080, 23.976fps. There are a lot of settings in here, so let me list them:

Editing Mode: Custom

Timebase: 23.976 frames/second

Frame Size: 1920 horizontal, 1080 vertical (should show 16:9 aspect)

Pixel Aspect Ratio: 1.0 (square pixels)

Fields: No Fields (Progressive Scan)

Display Format: 24fps Timecode

Audio: 48000 Hz

Now, up until this point, you’ll notice that nothing is format-specific. All we are doing is setting up the size and frame rate all our media will conform to in the timeline. That’s how Premiere operates – in general, it is format-agnostic, meaning that you can mix and match ANY format on ANY timeline. The main settings for any timeline are just resolution/frame rate settings, period.

The bottom half of the panel is where formats start to play a role:

Video Previews



The Video Previews setting only affects things when you render the timeline. When you are playing back unaltered video clips on the timeline, it has no effect. If you are using GPU-accelerated effects on your clips, again, this preview file format has no effect. But for people using non-accelerated effects, or working on a system without GPU acceleration, you’ll probably want to render the red-bar portions of your timeline.

Set the Preview File Format to QuickTime (Desktop) and set the Codec to Apple ProRes 422. Also, make sure the Width and Height match the other timeline settings.  Now, STOP! BEFORE you hit the OK button, locate the Save Preset button:

Save your new Preset!



 

To make this easy, you’ll want to be as descriptive as possible in saving your preset. I recommend using a naming convention, and WRITING IT DOWN as you make these. That way, all of your ProRes timeline presets will have easy-to-understand, logical names. I’m going to call this one “ProRes 422 1080p24.” If you need some additional descriptive help, make whatever notes you like in the Description field. This information will be visible each time you select the preset.

Once you have saved your preset, Premiere Pro will take you back to the Sequence Presets panel, and you should see your shiny new preset appear at the bottom, in the Custom folder:

Your shiny new ProRes 422 1080p24 preset!



 

Now that you understand the steps to create your first ProRes preset, you’ll want to repeat these steps again for each type of ProRes format, size and resolution you typically work with. Go back to the Settings tab at the top, and modify the settings again to make another preset. Then save and name the second new ProRes preset.

Back to the Settings Tab. Wash, Rinse, Repeat.



You may want ProRes 422 (HQ) presets, 1280×720 presets, or frame rates other than 23.976fps. This is up to you, and totally dependent on what type of ProRes clips you are working with. On my system, these are the presets I’ve created:

Just a sample of potential ProRes presets you can create.



 

Setting Up Output Presets:

Just like the Timeline Presets, we will need to set up some Export Setting Presets for ProRes as well. To do this, we need a timeline with at least one clip in it so that we can access the Export Settings panel.

Go ahead and choose one of your ProRes Timeline presets so that the full Premiere Pro interface opens up. Import a clip, any clip, and drop it onto the timeline. If you have no clips on this system, you can create a Universal Counting Leader file by choosing File-New-Universal Counting Leader. Drop it onto the timeline.

Now, with the timeline selected, go to File-Export-Media.

Export Settings Dialogue Box



 

In the upper right of the panel, choose Format: QuickTime. Then, click on the Preset button, and look at the puny list of QuickTime presets that Premiere Pro ships with. I’ve had several people assume from this list that Premiere Pro can only export DV format QuickTime files! NOT SO!!

Is this all QuickTime can do? OF COURSE NOT.



To access other QuickTime formats and flavors, including ProRes, we need to create additional QuickTime Presets. These are one-time setups – in the future, we can just choose the preset and output without additional setup.

To get started, head down to this part of the Output Settings screen, and click on the Video tab:

Where the Output Magic happens...



We are going to make a matching Output Preset for our earlier ProRes 422 1080p24 Timeline Preset.

Change the Video Codec to Apple ProRes 422.

Change the Width to 1920.

Change the Height to 1080.

Change the Frame Rate to 23.976.

Change the Field Type to Progressive.

Change the Aspect to Square Pixels (1.0).

Now switch to the Audio Tab:

Audio Settings Tab



Change the Sample Type from 16-bit to 24-bit. This will match most source ProRes files, but if you know that your source media uses a different bit depth, use that.

Double-check your settings in the Video Tab one more time, and if everything looks good, save your preset by clicking here:

Click to save your Output Preset



Again, make sure to give your preset a descriptive name. I’m calling mine “ProRes 422 1080p24 (24-bit Stereo).”

Now, when it’s time to output, I can output a ProRes master that matches my source footage, my preview files, and my Timeline Settings.

Oh, one last tip for longtime FCP users – I’ve heard from FCP users that they are used to ProRes outputs taking less time. That’s probably because, by default, FCP uses the preview files, and just copies the frames into the output file. To make Premiere Pro mimic this behavior, you need to check this box:

Check this box to use your ProRes Preview files.



 

Because a lot of native file formats are extremely lossy, Premiere, by default, doesn’t use the previews for final output. It prefers to re-render the effects in the timeline from scratch to get the maximum quality. But, with an end-to-end ProRes workflow, that’s not really necessary. So, using the preview files will speed up the output when going back to the same ProRes format.

You’ll want to make a number of different Output Presets following these steps – one for each format of source material. Again, I’ve created output presets that match the same timeline presets:

My ProRes output Presets



 

Whew! Okay, now the hard part is done! In actual use, now you can open up Premiere Pro any time, choose a ProRes timeline, and start editing. Previews will automatically be in ProRes format, and when you choose to output your timeline, you can output to the same ProRes format by choosing QuickTime, and then choosing the appropriate preset from your list of ProRes presets. End-to-End Workflow!

 

June 28, 2010

Color Subsampling, or What is 4:4:4 or 4:2:2??

In the video space, there’s always a lot of talk about these number ratios – 4:4:4, or 4:2:2, or 4:1:1, but what exactly do they mean? Recently, someone argued with me that it was better to convert every video clip from my Canon Rebel T2i DSLR camera into a 4:4:4 intermediate codec before editing; that this would make the color magically “better” and that editing natively was somehow bad. They were wrong, and I’m going to explain why.

Before you read on, make sure you’ve read my earlier articles on 32-bit floating point and on YUV color, and look at the picture from the Wikimedia Commons site of the barn in YUV breakdown.

In the picture of the barn, try to look at the fine detail in the U and V channels. Typically, without any brightness information, it’s hard to see any detail in the color channels. The naked eye just does a much better job distinguishing brightness than color. This fact holds true for moving pictures. If the video uses YUV color space, the most important data is in the Y channel. You can throw away a lot of the color information, and the average viewer can’t tell that it’s gone.

One trick that video engineers have used for years is to toss away a lot of the color information. Basically, they can discard the color values on every other pixel, and it’s not very noticeable. In some cases, they throw away even more color information. This is called Color Subsampling, and it’s a big part of a lot of modern HD video formats.

When looking at color subsampling, a ratio is used to express how the color is subsampled. Most of us are familiar with these numbers – 4:4:4, 4:2:2, 4:1:1 – and most of us are aware that bigger numbers are better. Fewer people understand what the numbers actually mean. It’s actually pretty easy.

Let’s pretend that we are looking at a small part of a frame – just a 4×4 matrix of pixels in an image:

[Image: Pixel Grid 444.jpg]

In this example, every pixel has a Y value, a Cb value, and a Cr value. If you look at a line of pixels and count the samples, there are 4 values of Y, 4 values of Cb, and 4 values of Cr. In color shorthand, we’d say that this is a 4:4:4 image.

4:4:4 color is the platinum standard for color, and it’s extremely rare to see a recording device or camera that outputs 4:4:4 color. Since the human eye doesn’t really notice when color is removed, most of the higher-end devices output something called 4:2:2. Here’s what that 4×4 matrix would look like for 4:2:2:

[Image: Pixel Grid 422.jpg]

As you can see, half of the pixels are missing the color data. Looking at that 4×4 grid, 4:2:2 color may not look that good, but it’s actually considered a very good color standard. Most computer software can average the neighboring color values to fill in the missing ones.

Let’s look at 4:1:1 color, which is used for NTSC DV video:

[Image: Pixel Grid 411.jpg]

Bleaccch. 75% of the color samples are tossed away! With bigger “gaps” between color information, it’s even harder for software to “rebuild” the missing values, but it happens. This is one of the reasons that re-compressing DV can cause color smearing from generation to generation.

Let’s look at one other color subsampling, which is called 4:2:0, and is used very frequently in MPEG encoding schemes:

[Image: Pixel Grid 420.jpg]

This diagram shows one of many ways that 4:2:0 color subsampling can be accomplished, but the general idea is the same – there are Luma samples for every pixel, one line has Cb samples for every other pixel, and the next line has Cr samples for every other pixel.

With a color subsampled image, it’s up to the program decoding the picture to estimate the missing pixel values, using the surrounding intact color values, and providing smoothing between the averaged values.
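To make that concrete, here’s a minimal sketch in Python of the simplest kind of reconstruction a decoder might do for one row of 4:2:2 chroma – averaging the two nearest stored samples. Real decoders use more sophisticated filtering; this just illustrates the idea:

def upsample_422_row(chroma):
    """Rebuild per-pixel chroma for one row of 4:2:2 video, where
    only every other pixel kept its Cb (or Cr) sample. Each missing
    pixel gets the average of its left and right neighbors."""
    full = []
    for i, value in enumerate(chroma):
        full.append(value)                          # the stored sample
        right = chroma[i + 1] if i + 1 < len(chroma) else value
        full.append((value + right) / 2)            # the rebuilt sample
    return full

# Four stored Cb samples for an 8-pixel row:
print(upsample_422_row([100, 120, 90, 110]))
# [100, 110.0, 120, 105.0, 90, 100.0, 110, 110.0]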

Okay – we’ve defined what color subsampling is. Now, how does that relate to my friend’s earlier argument?

Well, in my DSLR camera, the color information is subsampled to 4:2:0 color space in the camera. In other words, the camera is throwing away the color information – it’s the weakest link in the chain! Converting from 4:2:0 to 4:4:4 at this stage doesn’t “magically” bring back the thrown-away data – the data was lost before it ever hit the memory card. The conversion just takes the data that’s already there, and “upsamples” the missing color values by averaging the adjoining values.
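A quick way to convince yourself, reusing the neighbor-averaging idea from the sketch above (the sample values are made up):

original = [100, 113, 120, 95, 90, 101, 110, 108]  # made-up chroma row
stored = original[::2]           # subsampling keeps every other sample
rebuilt = []
for i, v in enumerate(stored):   # neighbor-averaging, as above
    nxt = stored[i + 1] if i + 1 < len(stored) else v
    rebuilt += [v, (v + nxt) / 2]
print(rebuilt)   # [100, 110.0, 120, 105.0, 90, 100.0, 110, 110.0]
# The discarded samples (113, 95, 101, 108) are gone for good; the
# averages merely paper over the holes.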

Inside Premiere Pro, the images will stay exactly as they were recorded in-camera for cuts-only edits. If there’s no color work going on, the 4:2:0 values remain untouched. If I need to do some color grading, Premiere Pro will, on-the-fly, upsample the footage to 4:4:4, and it does this very well, and in a lot of cases, in real-time.

Going to a 4:4:4 intermediate codec does have some benefits – since the transcode upsamples every frame to 4:4:4 ahead of time, your CPU doesn’t have as much work to do during editing, which may give you better performance on older systems. But there’s a huge time penalty in transcoding, and it doesn’t get you any “better color” than going native. Whether you upsample prior to editing or do it on-the-fly in Premiere Pro, the color info was already lost in the camera.

In fact, I could argue that Premiere Pro is the better solution for certain types of editing because we leave the color samples alone when possible. If the edit is re-encoded to a 4:2:0 format, Premiere Pro can use the original color samples and pass those along to the encoder in certain circumstances. Upsampling and downsampling can cause errors, since the encoder can’t tell the difference between the original color samples and the rebuilt, averaged ones.

I’m not trying to knock intermediate codecs – there are some very valid reasons why certain people need them in their pipeline. But for people just editing in the Adobe Production Premium suite, they won’t magically add more color data, and they may waste a lot of your time. Take advantage of the native editing in Premiere Pro CS5, and you’ll like what you see. :-)

June 26, 2010

What is YUV?

Another area I’m getting pelted with questions about is the little YUV logo on some Premiere Pro effects:

[Image: YUV Filters.jpg]

What exactly is YUV when talking about video? Well, it’s a way of breaking down the brightness and colors in the image into numbers, and it’s a little different from RGB, which we discussed last time. Just as a refresher: most cameras take the light coming into the lens, and convert it into 3 sets of numbers, one for Red, one for Green, and one for Blue. This is called RGB, and we discussed how each of those numbers comes in different bit depths in my last article.

There’s one big problem with RGB color – it’s tough to work with. If I need to lower the brightness uniformly on an image, I need to do it to all 3 colors. There’s also a LOT of redundancy in the data. To combat this redundancy, there’s a different way of storing the info called YCbCr, which breaks the signal down into a Y, or luminance, channel, plus 2 channels that store color info without any brightness – a Blue difference channel and a Red difference channel.

Now, the correct way to abbreviate this would be Y, Cb, Cr. However, I want you to try saying YCbCr 10 times, and compare that with saying RGB 10 times. Y, Cb, Cr, is a mouthful. Some engineer somewhere decided that saying YCbCr was just too inconvenient, and borrowed another color term, YUV, to use instead. Technically speaking, saying “YUV” to describe YCbCr is not accurate at all, but the name stuck, and so now, most people who are talking about YCbCr use the term YUV incorrectly. It’s like calling an American football team that plays in New Jersey the “New York Giants.”
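To make that a little more concrete, here’s a rough sketch of the math, using the standard-definition BT.601 weightings. (HD uses slightly different coefficients, and this is only an illustration – it is not Premiere Pro’s actual code.)

def rgb_to_ycbcr(r, g, b):
    """Convert RGB (each 0.0-1.0) to Y, Cb, Cr using BT.601 weights.
    Y carries all the brightness; Cb and Cr are color differences
    centered on 0, with no brightness in them at all."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    cb = (b - y) * 0.564                    # blue minus brightness
    cr = (r - y) * 0.713                    # red minus brightness
    return y, cb, cr

# Pure white is all brightness and zero color difference:
print(rgb_to_ycbcr(1.0, 1.0, 1.0))   # ~(1.0, 0.0, 0.0)

Notice how pure white lands at 0 in both color channels – keep that in mind when you look at the snow in the barn photos below.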

Here’s a graphic from the Wikimedia Commons site that shows RGB breakdowns of a frame:
[Image: Barn_grand_tetons_rgb_separation.jpg]

As you can see, the full color image is separated into a Red channel, a Green Channel, and a Blue channel. Pretty straightforward.

Here’s the same image, broken down into YUV channels:

[Image: Barns_grand_tetons_YCbCr_separation.jpg]

You can see that YUV essentially provides a “grayscale” image from one channel, with the color information separated out into 2 channels – one for Blue information minus the brightness, and one for Red info minus the brightness. In this particular image, the Blue channel shows purple/yellow colors, and the Red channel shows more red/cyan colors.

The Luminance channel behaves similarly to the numbers I described in my last article – it’s a number that starts at 0 and goes up to a maximum number. Easy Squeezy, lemon peasy.

However, the 2 color channels operate a little differently – 0 is a “midpoint” in the number, and there are positive and negative steps. If you look in the above example, you’ll see that the white snow in the shots has a grey color in both the color channels. That’s because it has a 0 value in both color channels. The barn has a negative value in the Blue channel (producing a yellowish color) and a positive value in the Red channel (producing, well, red colors.) Green colors are derived from a mix of negative values in both channels.

Using the “knob” graphic from my last post, here’s what a set of control knobs would look like for an 8-bit YUV signal:
[Image: 3 knobs.jpg]
The Y knob has 256 steps, from 0-255, and the U and V knobs range from -128 to +127.

Most major video formats – MPEG-2, AVCHD, DVCPROHD, H.264, XDCAM – all use YUV color space. The pixels are all stored using some variant of Y, Cb, Cr. There are 10-bit and 12-bit versions of YUV as well, and they also behave similarly to 10/12 bit per channel RGB.

When effects are used on a video frame, sometimes the effect needs to convert the values back to RGB before the effect can be applied. In 8-bit or 16-bit-per-channel color space, there can potentially be “rounding errors” when performing this calculation. There are legal YUV color values that, when converted to RGB, would produce negative numbers – and negative values simply can’t be represented in 8-bit or 16-bit pixels, so they get clipped. This can mean situations where pixels that should pass through an effect unchanged will, in fact, change in an unwanted way.
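Here’s a toy illustration in Python, using the inverse of the BT.601 sketch from earlier in this post. The values are made up, and real pipelines are more careful than this – it just shows how a perfectly legal YCbCr value can fall outside what RGB can hold:

def ycbcr_to_rgb(y, cb, cr):
    """BT.601 inverse of the rgb_to_ycbcr sketch above."""
    r = y + 1.402 * cr
    g = y - 0.344 * cb - 0.714 * cr
    b = y + 1.772 * cb
    return r, g, b

def clamp(x):
    """Integer-style clamp: anything below 0.0 or above 1.0 is lost."""
    return min(max(x, 0.0), 1.0)

# A perfectly legal YCbCr triple (bright, strongly blue)...
r, g, b = ycbcr_to_rgb(0.9, 0.45, 0.0)
print(b)                               # ~1.70 -- outside the RGB range!
print([clamp(v) for v in (r, g, b)])   # the excess is clipped away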

Effects in Premiere Pro that have the YUV logo do the processing directly on the YUV values without converting them to RGB first. At no point are the pixel values converted to RGB, and you won’t see any unwanted color shifting.

32-bit-per-channel color space has the color precision to convert cleanly from YUV to RGB, and will not cause any of these rounding errors. In Premiere Pro, all of the 32-bit effects are “safe” to use.

Let’s look at the same examples from my last article:

1. A DV file with a Gaussian Blur and a YUV color corrector, exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to that 8-bit frame to get another 8-bit frame, then write DV at 8-bit. The color corrector and Gaussian Blur are processed natively in YUV, so color accuracy is maintained (although it’s 8-bit).

2. A DV file with a blur and a YUV color corrector, exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than in the previous example, and again, the signal path is pure YUV.

3. A DV file with a blur and a color corrector, exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be higher quality still, because the final output format supports greater precision. AND, the signal path again is pure YUV.

4. A DPX file with a blur and a color corrector, exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data. YUV as well.

5. A DPX file with a blur and a color corrector, exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This retains full 32-bit YUV precision through the whole pipeline.

As you can see, Premiere Pro really tries to keep color fidelity, and by using either YUV or 32-bit effects, you can be sure that the color in your video is as accurate as possible.

May 16, 2010

Using the Fast Color Corrector

Here’s a nice tutorial for using the Fast Color Corrector in Premiere Pro from the folks at Lynda.com.

The Fast Color Corrector is one of the 30+ effects that are GPU accelerated in CS5, so with the right Nvidia video card in the system, you can use it tons of times in the timeline without ever rendering. Even if you don’t have a recommended GPU, such as in a laptop, the Fast Color Corrector is, well, fast. It works great on my laptop with DSLR footage, and in most cases, I still don’t have to render a preview file to see full frame rate.

March 29, 2010

Debunking Mercury Myths

Now that the word is out – April 12th is the big day when ALL of CS5 will be announced – I’m seeing a lot of misinformation on Twitter over what’s necessary to take advantage of Premiere Pro CS5 and the Mercury Playback Engine:

1. It won’t run on laptops. FALSE
I’m running it today on my MacBook Pro, and taking full advantage of 64-bit goodness and multicore optimization. Even though there’s no supported GPU in my MacBook, the performance gains over CS4 and CS3 are very significant. In my own, unofficial testing, I would say that Mercury is 25%-30% faster in all day-to-day activities. Your mileage may vary, of course, depending on the format you edit.

2. I can only get real-time effects with an expensive graphics card. FALSE
Your performance will vary depending on the CPU and amount of RAM in your system, but Premiere Pro will always try to play effects in real time on the timeline. Now, it IS true that the correct Nvidia graphics card will accelerate a LOT of effects, but to say that you absolutely need a graphics card to use effects in the timeline is a myth. In CPU-only, or software, mode, Premiere Pro CS5 takes much better advantage of RAM and multicore CPUs, and you’ll definitely be able to play back more effects in real time than in CS4.

3. I need to change my Operating System to run Premiere Pro CS5. POSSIBLY TRUE
Premiere Pro CS5 is a native 64-bit application, and needs a 64-bit OS in order to run. On Windows, this means running Vista or Windows 7, 64-bit edition. Mac users will need to run Leopard for most functions, and to take advantage of GPU acceleration you’ll need Snow Leopard.

4. My Mac Pro Tower can’t use an Nvidia Quadro FX 4800 card. POSSIBLY TRUE
Most Mac Pro towers can upgrade to an Nvidia GeForce GTX 285 or Quadro FX 4800. Sadly, however, there are some very early Mac Pro towers that are not compatible with the Nvidia cards. Go to About This Mac, click on More Info, and check your Model Identifier number. If you have 3,1, 4,1, or higher, you’re fine. If you have 1,1, you can’t upgrade your video card.

If you have any specific questions on Mercury, the latest hardware information is found here: www.adobe.com/go/64bitsupport and I will be answering any questions in the comments as I get them.

March 13, 2010

RED resolution and DSLR performance using the Adobe Mercury Engine

I recently presented at the SF Cutters user group 10th anniversary, and shared a few new tidbits about the Adobe Mercury Engine.

First off, Mercury has higher resolution limits than the current release of Premiere Pro CS4. Right now, Premiere can create timelines up to 4096×4096 resolution. This is great for the current RED cameras, but with 4.5k Mysterium-X sensors now being retrofitted into the RED ONE, it’s not enough for the future. In the Adobe Mercury Engine, the maximum timeline resolution is 10240 pixels by 8192 pixels, more than enough for any mastering resolution we’ll see in the near future. Clips dropped on any timeline will be limited to 256 megapixels, in any combination of width and height. So, for example, footage from a 32000 by 8000 pixel sensor could be imported and dropped onto a timeline. This is higher than the resolution announced by RED for the 28k MONSTRO sensor, so RED users shouldn’t hit any resolution limits anytime soon.
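For the arithmetic-minded, here’s a minimal sketch of those ceilings using the numbers above. The constant and function names are mine, and “megapixel” here just means an even million pixels:

MAX_TIMELINE = (10240, 8192)   # Mercury's maximum sequence size
MAX_CLIP_MP = 256              # clip import ceiling, in megapixels

def timeline_fits(width, height):
    """Check a sequence size against the maximum timeline dimensions."""
    return width <= MAX_TIMELINE[0] and height <= MAX_TIMELINE[1]

def clip_fits(width, height):
    """Check a clip's frame size against the 256-megapixel ceiling."""
    return width * height <= MAX_CLIP_MP * 1_000_000

print(clip_fits(32000, 8000))      # True: exactly 256 megapixels
print(timeline_fits(4096, 4096))   # True: today's RED frames fit easily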

As some of you may have noticed, I’ve been busy with a second blog that focuses on DSLR workflows, particularly with the new Canon Rebel T2i. I just picked up one of these cameras, and I’m having a LOT of fun with it. It’s giving me the opportunity to gear up at a reasonable price. Check out more info at www.rebelshooters.com.

Working with Mercury has already given me an advantage over other DSLR users, since Mercury has preset timelines for Canon DSLRs, and can work natively with the footage, with no transcoding or re-rendering prior to beginning the editing process. It’s still early to provide exact performance numbers, but I can very quickly pull clips right off the SD card, drop them on a timeline, and do a one-pass color grade with a 3-way color corrector (or my new favorite tool, the RGB Curves effect) without rendering a single frame. This has worked wonderfully for quick camera tests. Output is also super-fast, since the GPU-accelerated effects don’t impact the rendering process.
