
July 1, 2010

More on color – Y,Pb,Pr and Y,Cb,Cr, and my fevered brain…

*sigh*. This wasn’t a planned posting. This is a quick mea culpa for my last several blog posts. I made a mistake. It’s actually a mistake I’ve been making for about 10 years now.

You see, in the engineering world, when discussing YUV color channels properly, there are two separate terms: one for analog color signals and one for digital color signals. The term Y,Pb,Pr refers to an analog signal, and I used it in a couple of my posts to describe a digital signal. The correct term for the digital signal is Y,Cb,Cr. Graeme Nattress from RED pointed out the mistake in the repost of this blog on Pro Video Coalition.

(BTW – if you don’t read PVC right now, you should. Excellent resource!)

I’ve been switching these two terms in my head for years now. Can’t explain why. Somehow, I associate the “C” with analog color and the “P” with being printed on a screen. Makes no sense. At least I’m consistent about it. :-/

Discussing color is an alphabet soup of letters, numbers, variables, and more. I’m trying to break down some of the terms and make it easier for a beginner to understand. Making mistakes like this doesn’t help. That’s why we call Y,Cb,Cr “YUV” all the time! :-)

June 28, 2010

Color Subsampling, or What is 4:4:4 or 4:2:2??

In the video space, there’s always a lot of talk about these number ratios – 4:4:4, or 4:2:2, or 4:1:1, but what exactly do they mean? Recently, someone argued with me that it was better to convert every video clip from my Canon Rebel T2i DSLR camera into a 4:4:4 intermediate codec before editing; that this would make the color magically “better” and that editing natively was somehow bad. They were wrong, and I’m going to explain why.

Before you read on, make sure you’ve read my earlier articles on 32-bit floating point and on YUV color, and take a look at the picture from the Wikimedia Commons site of the barn broken down into YUV channels.

In the picture of the barn, try to pick out the fine detail in the U and V channels. Typically, without any brightness information, it’s hard to see much detail in the color channels. The naked eye just does a much better job distinguishing brightness than color. The same holds true for moving pictures. If the video uses YUV color space, the most important data is in the Y channel. You can throw away a lot of the color information, and the average viewer can’t tell that it’s gone.

One trick that video engineers have used for years is to toss away a lot of that color information. Basically, they can discard the color values on every other pixel, and it’s barely noticeable. In some cases, they throw away even more color information. This is called Color Subsampling, and it’s a big part of a lot of modern HD video formats.

Color subsampling is expressed as a ratio. Most of us are familiar with these numbers – 4:4:4, 4:2:2, or 4:1:1 – and most of us are aware that bigger numbers are better. Fewer people understand what the numbers actually mean. It’s actually pretty easy.

Let’s pretend that we are looking at a small part of a frame – just a 4×4 matrix of pixels in an image:

Pixel Grid 444.jpg

In this example, every pixel has a Y value, a Cb value, and a Cr value. If you look at one line of pixels and count the values, you’ll find 4 values of Y, 4 values of Cb, and 4 values of Cr. In color shorthand, we’d say that this is a 4:4:4 image.

4:4:4 is the platinum standard for color, and it’s extremely rare to see a recording device or camera that outputs 4:4:4. Since the human eye doesn’t really notice when color detail is removed, most higher-end devices output something called 4:2:2. Here’s what that 4×4 matrix would look like for 4:2:2:

Pixel Grid 422.jpg

As you can see, half of the pixels are missing their color data. Looking at that 4×4 grid, 4:2:2 may not seem like much, but it’s actually considered a very good color standard. Most software can fill in the missing values by averaging the neighboring color samples.
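
If you like seeing things in code, here’s a tiny Python sketch of the idea. The pixel values and the keep-every-other-sample scheme are made up for illustration – real codecs filter the chroma before discarding samples – but it shows what “half the color samples” means:

    # One row of Cb values from a 4:4:4 image - the numbers are invented.
    cb_row = [100, 104, 110, 111, 120, 119, 130, 131]

    # 4:2:2 keeps a chroma sample on every other pixel only.
    cb_422 = cb_row[::2]   # [100, 110, 120, 130]

    print(len(cb_row), "pixels ->", len(cb_422), "chroma samples kept")

The luma row is untouched – only the color samples get thinned out.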

Let’s look at 4:1:1 color, which is used for NTSC DV video:

Pixel Grid 411.jpg

Bleaccch. The color for 75% of the pixels is tossed away! With bigger “gaps” between color samples, it’s even harder for software to “rebuild” the missing values, but it happens. This is one of the reasons that re-compressing DV can cause color smearing from generation to generation.

Let’s look at one other color subsampling, which is called 4:2:0, and is used very frequently in MPEG encoding schemes:

Pixel Grid 420.jpg

This diagram shows one of many ways that 4:2:0 color subsampling can be accomplished, but the general idea is the same – there’s a luma sample for every pixel, one line has Cb samples on every other pixel, and the next line has Cr samples on every other pixel.

With a color-subsampled image, it’s up to the program decoding the picture to estimate the missing color values, using the surrounding intact samples and smoothing between the averaged values.
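
Here’s a rough Python sketch of that estimation step, again with invented numbers (real decoders use fancier interpolation filters than a straight average):

    # What 4:4:4 would have stored for one row of Cb values (invented numbers).
    original = [100, 104, 110, 111, 120, 119, 130, 131]

    # What a 4:2:2 stream actually keeps: every other sample.
    kept = original[::2]

    # The decoder rebuilds the missing samples by averaging the neighbors.
    rebuilt = []
    for i, value in enumerate(kept):
        rebuilt.append(value)                           # the surviving sample
        nxt = kept[i + 1] if i + 1 < len(kept) else value
        rebuilt.append((value + nxt) // 2)              # the estimated one

    print("original:", original)   # [100, 104, 110, 111, 120, 119, 130, 131]
    print("rebuilt: ", rebuilt)    # [100, 105, 110, 115, 120, 125, 130, 130]

Notice that the rebuilt row is plausible, but it isn’t the original – the discarded detail is gone for good. Keep that in mind as we get back to my friend’s argument.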

Okay – we’ve defined what color subsampling is. Now, how does that relate to my friend’s earlier argument?

Well, my DSLR subsamples the color information to 4:2:0 right in the camera. In other words, the camera itself is throwing away color information – it’s the weakest link in the chain! Converting from 4:2:0 to 4:4:4 at this stage doesn’t “magically” bring back the thrown-away data; that data was lost before it ever hit the memory card. It just takes the data that’s already there and “upsamples” the missing color values by averaging between the adjoining values.

Inside Premiere Pro, the images stay exactly as they were recorded in-camera for cuts-only edits. If there’s no color work going on, the 4:2:0 values remain untouched. If I need to do some color grading, Premiere Pro will upsample the footage to 4:4:4 on the fly, and it does this very well – in a lot of cases, in real time.

Going to a 4:4:4 intermediate codec does have some benefits – because every frame has already been upsampled during the transcode, your CPU has less work to do while editing, which may give you better performance on older systems – but there’s a huge time penalty in transcoding. And it doesn’t get you any “better color” than going native. Whether you upsample prior to editing or on the fly in Premiere Pro, the color info was already lost in the camera.

In fact, I could argue that Premiere Pro is the better solution for certain types of editing because we leave the color samples alone when possible. If the edit is re-encoded to a 4:2:0 format, Premiere Pro can use the original color samples and pass those along to the encoder in certain circumstances. Upsampling and downsampling can cause errors, since the encoder can’t tell the difference between the original color samples and the rebuilt, averaged ones.

I’m not trying to knock intermediate codecs – there are some very valid reasons why certain people need them in their pipeline. But for people just editing in the Adobe Production Premium suite, they won’t magically add more color data, and they may waste a lot of your time. Take advantage of the native editing in Premiere Pro CS5, and you’ll like what you see. :-)

June 26, 2010

What is YUV?

Another area I’m getting pelted with questions about is the little YUV logo on some Premiere Pro effects:

YUV Filters.jpg

What exactly is YUV when talking about video? Well, it’s a way of breaking the brightness and colors in an image down into numbers, and it’s a little different from RGB, which we discussed last time. Just as a refresher, most cameras take the light coming into the lens and convert it into 3 sets of numbers: one for Red, one for Green, and one for Blue. This is called RGB, and we discussed how each of those numbers comes in different bit depths in my last article.

There’s one big problem with RGB color – it’s tough to work with. If I need to lower the brightness uniformly on an image, I need to do it to all 3 colors. There’s also a LOT of redundancy in the data. To combat this redundancy, there’s a different way of storing the info called YCbCr, which breaks the signal down into a Y, or luminance, channel and 2 channels that store color info with the brightness removed – one for Blue and one for Red.
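
To make that concrete, here’s a rough Python sketch of the split, using the common BT.601 luma weights on 0.0–1.0 RGB values. Treat the exact coefficients and scaling as an illustration only – different standards use slightly different numbers, and I’m ignoring the offsets and range limits that real video formats add on top:

    # Rough RGB -> Y, Cb, Cr split using BT.601-style weights (illustrative only).
    def rgb_to_ycbcr(r, g, b):
        y  = 0.299 * r + 0.587 * g + 0.114 * b    # the brightness channel
        cb = 0.5 * (b - y) / (1.0 - 0.114)        # blue minus brightness, scaled
        cr = 0.5 * (r - y) / (1.0 - 0.299)        # red minus brightness, scaled
        return y, cb, cr

    print(rgb_to_ycbcr(1.0, 1.0, 1.0))   # white: full Y, both color channels ~0
    print(rgb_to_ycbcr(1.0, 0.0, 0.0))   # pure red: Cr is positive, Cb negative
    print(rgb_to_ycbcr(0.0, 1.0, 0.0))   # pure green: both color channels negative

Notice how the neutral white pixel ends up with both color channels at zero, and how lowering brightness would now mean touching only the Y values. That detail will matter in a moment.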

Now, the correct way to abbreviate this would be Y, Cb, Cr. However, I want you to try saying YCbCr 10 times, and compare that with saying RGB 10 times. Y, Cb, Cr, is a mouthful. Some engineer somewhere decided that saying YCbCr was just too inconvenient, and borrowed another color term, YUV, to use instead. Technically speaking, saying “YUV” to describe YCbCr is not accurate at all, but the name stuck, and so now, most people who are talking about YCbCr use the term YUV incorrectly. It’s like calling an American football team that plays in New Jersey the “New York Giants.”

Here’s a graphic from the Wikimedia Commons site that shows RGB breakdowns of a frame:
Barn_grand_tetons_rgb_separation.jpg

As you can see, the full color image is separated into a Red channel, a Green Channel, and a Blue channel. Pretty straightforward.

Here’s the same image, broken down into YUV channels:

Barns_grand_tetons_YCbCr_separation.jpg

You can see that YUV provides essentially a “grayscale” image from one channel, and the color information is separated out into 2 channels – one for Blue information minus the brightness, and one for Red info minus the brightness. In this particular image, the Blue channel shows purple/yellow tones, and the Red channel shows more red/cyan tones.

The Luminance channel behaves similarly to the numbers I described in my last article – it’s a number that starts at 0 and goes up to a maximum value. Easy Squeezy, lemon peasy.

However, the 2 color channels operate a little differently – 0 is the midpoint of the range, and there are positive and negative steps. If you look at the example above, you’ll see that the white snow in the shots is grey in both of the color channels. That’s because it has a 0 value in both. The barn has a negative value in the Blue channel (producing a yellowish color) and a positive value in the Red channel (producing, well, red colors). Green colors come from a mix of negative values in both channels.

Using the “knob” graphic from my last post, here’s what a set of control knobs would look like for an 8-bit YUV signal:
3 knobs.jpg
The Y knob has 256 steps, from 0-255, and the U and V knobs range from -128 to +127.
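
If you’re curious how those knob positions map onto actual stored numbers, here’s a quick sketch using full-range 8-bit values. Broadcast “video range” formats reserve headroom at the top and bottom, and many file formats store the color channels with a +128 offset instead of signed numbers – I’m glossing over all of that here:

    # Map 0.0-1.0 luma and -0.5..+0.5 chroma onto full-range 8-bit values.
    def to_8bit(y, cb, cr):
        clamp = lambda v, lo, hi: max(lo, min(hi, round(v)))
        return (clamp(y * 255, 0, 255),          # Y:  0..255
                clamp(cb * 255, -128, 127),      # Cb: -128..+127
                clamp(cr * 255, -128, 127))      # Cr: -128..+127

    print(to_8bit(1.0, 0.0, 0.0))     # white snow: (255, 0, 0) - no color at all
    print(to_8bit(0.3, -0.17, 0.5))   # a saturated red: low Y, Cr near the top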

Most major video formats – MPEG-2, AVCHD, DVCPROHD, H.264, XDCAM – all use YUV color space. The pixels are all stored using some variant of Y, Cb, Cr. There are 10-bit and 12-bit versions of YUV as well, and they also behave similarly to 10/12 bit per channel RGB.

When effects are applied to a video frame, sometimes the effect needs to convert the values to RGB before it can do its work. In 8-bit or 16-bit-per-channel color space, there can be “rounding errors” when performing this conversion. There are YUV color values that, when converted to RGB, would produce negative numbers – and a negative value can’t be stored in an 8-bit or 16-bit integer, so it gets clamped. This can lead to situations where pixels that should pass through an effect unchanged will, in fact, change in an unwanted way.
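
Here’s a small Python sketch of how that plays out, using the same rough BT.601-style math as above on full-range 8-bit values. Premiere’s actual internal processing is more involved – this is just to show the shape of the problem:

    # Why an 8-bit YCbCr -> RGB -> YCbCr round trip can shift values
    # (full-range, BT.601-style numbers; real pipelines differ in the details).
    def ycbcr_to_rgb(y, cb, cr):
        r = y + 1.402 * (cr - 128)
        g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
        b = y + 1.772 * (cb - 128)
        clamp = lambda v: max(0, min(255, round(v)))   # 8-bit RGB can't hold out-of-range values
        return clamp(r), clamp(g), clamp(b)

    def rgb_to_ycbcr(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b
        return round(y), round(128 + (b - y) / 1.772), round(128 + (r - y) / 1.402)

    start = (200, 128, 240)               # a very "hot" red - R works out to about 357
    rgb   = ycbcr_to_rgb(*start)          # clamped to (255, 120, 200)
    print(rgb_to_ycbcr(*rgb))             # (169, 145, 189) - not what we started with

A pixel whose color stays inside the RGB cube round-trips with at most a tiny rounding difference; it’s the strongly saturated ones that get visibly bent, which is exactly the kind of unwanted change described above.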

Effects in Premiere Pro that have the YUV logo do their processing directly on the YUV values. At no point are the pixel values converted to RGB, so you won’t see any unwanted color shifting.

32-bit-per-channel color space has the color precision to convert cleanly from YUV to RGB, and will not cause any of these rounding errors. In Premiere Pro, all of the 32-bit effects are “safe” to use.

Let’s look at the same examples from my last article:

1. A DV file with a Gaussian blur and a YUV color corrector exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write DV at 8-bit. The color corrector and the Gaussian blur are both processed natively in YUV, so color accuracy is maintained (although only at 8-bit precision).

2. A DV file with a blur and a YUV color corrector exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than in the previous example, and again, the signal path is pure YUV.

3. A DV file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be higher quality still, because the final output format supports greater precision. AND the signal path, again, is pure YUV.

4. A DPX file with a blur and a color corrector exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data. The signal path is YUV here as well.

5. A DPX file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This retains full 32-bit YUV precision through the whole pipeline.
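
To see in miniature why the max bit depth flag matters in examples 2, 3, and 5, here’s a toy Python sketch. The two “effects” are just arbitrary gain changes, not real Premiere Pro filters – the point is only the difference between snapping back to 8 bits after every step and staying in 32-bit float until the end:

    # Toy comparison: quantize after every effect vs. quantize once at output.
    def darken(v):   return v * 0.1
    def brighten(v): return v * 10.0

    pixel = 203 / 255.0                          # one 8-bit source value, as a float

    # Without max bit depth: snap back to 8 bits after every effect.
    v = round(darken(pixel) * 255) / 255.0       # rounds to 20/255 - detail is lost here
    v = round(brighten(v) * 255) / 255.0
    print(round(v * 255))                        # 200, not 203

    # With max bit depth: stay in 32-bit float, quantize once at the end.
    v = brighten(darken(pixel))
    print(round(v * 255))                        # 203 - the original value survives

Scale that up to millions of pixels and a stack of effects, and you can see why keeping frames at 32 bits between effects pays off.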

As you can see, Premiere Pro really tries to keep color fidelity, and by using either YUV or 32-bit effects, you can be sure that the color in your video is as accurate as possible.
