
June 28, 2010

Color Subsampling, or What is 4:4:4 or 4:2:2??

In the video space, there’s always a lot of talk about these number ratios – 4:4:4, or 4:2:2, or 4:1:1, but what exactly do they mean? Recently, someone argued with me that it was better to convert every video clip from my Canon Rebel T2i DSLR camera into a 4:4:4 intermediate codec before editing; that this would make the color magically “better” and that editing natively was somehow bad. They were wrong, and I’m going to explain why.

Before you read on, make sure you’ve read my earlier articles on 32-bit floating point and on YUV color, and look at the picture from the Wikimedia Commons site of the barn in YUV breakdown.

In the picture of the barn, try to look at the fine detail in the U and V channels. Typically, without any brightness information, it’s hard to see any detail in the color channels. The naked eye just does a much better job distinguishing brightness than color. This fact holds true for moving pictures. If the video uses YUV color space, the most important data is in the Y channel. You can throw away a lot of the color information, and the average viewer can’t tell that it’s gone.

One trick that video engineers have used for years is to throw away a lot of this color information. They can discard the color values on every other pixel, and it’s not very noticeable. In some cases, they throw away even more color information. This is called Color Subsampling, and it’s a big part of many modern HD video formats.

Color subsampling is expressed as a ratio. Most of us are familiar with these numbers – 4:4:4, or 4:2:2, or 4:1:1 – and most of us are aware that bigger numbers are better. Fewer people understand what the numbers actually mean. It’s actually pretty easy.

Let’s pretend that we are looking at a small part of a frame – just a 4×4 matrix of pixels in an image:

Pixel Grid 444.jpg

In this example, every pixel has a Y value, a Cb value, and a Cr value. If you look at a line of pixels and count the samples, you’d find 4 values of Y, 4 values of Cb (U), and 4 values of Cr (V). In color shorthand, we’d say that this is a 4:4:4 image.

4:4:4 color is a platinum standard for color, and it’s extremely rare to see a recording device or camera that outputs 4:4:4 color. Since the human eye doesn’t really notice when color is removed, most of the higher-end devices output something called 4:2:2. Here’s what that 4×4 matrix would look like for 4:2:2:

Pixel Grid 422.jpg

As you can see, half of the pixels are missing the color data. Looking at that 4×4 grid, 4:2:2 color may not look that good, but 4:2:2 is actually considered a very good color standard. Most computer software can take the neighboring color values and average them to fill in the missing ones.

Let’s look at 4:1:1 color, which is used for NTSC DV video:

Pixel Grid 411.jpg

Bleaccch. 75% of the color samples are tossed away! With bigger “gaps” between color information, it’s even harder for software to “rebuild” the missing values, but it happens. This is one of the reasons that re-compressing DV can cause color smearing from generation to generation.

Let’s look at one other color subsampling, which is called 4:2:0, and is used very frequently in MPEG encoding schemes:

Pixel Grid 420.jpg

This diagram shows one of many ways that 4:2:0 color subsampling can be accomplished, but the general idea is the same – there’s a luma sample for every pixel, one line has Cb samples for every other pixel, and the next line has Cr samples for every other pixel.

With a color-subsampled image, it’s up to the program decoding the picture to estimate the missing color values, using the surrounding intact samples and smoothing between the averaged values.
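
To make that concrete, here’s a tiny Python sketch – purely illustrative, not how any real codec stores or rebuilds its data – that subsamples one row of pixels to 4:2:2 and then rebuilds the missing chroma by averaging the neighbors. Notice that the rebuilt values come out close to the originals, but not identical:

```python
# A toy sketch of 4:2:2 subsampling and reconstruction on one scanline.
# This illustrates the idea only; real codecs store and rebuild data differently.

# One row of (Y, Cb, Cr) triplets -- a 4:4:4 row: every pixel has all three values.
row_444 = [(81, 90, 240), (82, 92, 238), (110, 100, 200), (112, 104, 196)]

# 4:2:2 keeps luma for every pixel but chroma only for every other pixel.
luma   = [y for (y, cb, cr) in row_444]
chroma = [(cb, cr) for i, (y, cb, cr) in enumerate(row_444) if i % 2 == 0]

# A decoder has to rebuild the missing chroma, here by averaging the neighbors.
rebuilt = []
for i, y in enumerate(luma):
    if i % 2 == 0:                      # original chroma sample survives
        cb, cr = chroma[i // 2]
    else:                               # missing sample: average the two neighbors
        left = chroma[i // 2]
        right = chroma[min(i // 2 + 1, len(chroma) - 1)]
        cb = (left[0] + right[0]) // 2
        cr = (left[1] + right[1]) // 2
    rebuilt.append((y, cb, cr))

print(row_444)   # the original 4:4:4 values
print(rebuilt)   # the reconstructed values -- close, but not identical to the originals
```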

Okay – we’ve defined what color subsampling is. Now, how does that relate to my friend’s earlier argument?

Well, in my DSLR, the color information is subsampled to 4:2:0 right in the camera. In other words, the camera is throwing away the color information. It’s the weakest link in the chain! Converting from 4:2:0 to 4:4:4 at this stage doesn’t “magically” bring back the thrown-away data – the data was lost before it ever hit the memory card. It’s just taking the data that’s already there, and “upsampling” the missing color values by averaging between the adjoining values.

Inside Premiere Pro, the images will stay exactly as they were recorded in-camera for cuts-only edits. If there’s no color work going on, the 4:2:0 values remain untouched. If I need to do some color grading, Premiere Pro will, on-the-fly, upsample the footage to 4:4:4, and it does this very well, and in a lot of cases, in real-time.

Going to a 4:4:4 intermediate codec does have some benefits – because every frame is upsampled to 4:4:4 once, during the transcode, your CPU doesn’t have as much work to do at edit time, which may give you better performance on older systems. But there’s a huge time penalty in transcoding. And it doesn’t get you any “better color” than going native. Whether you upsample prior to editing or do it on-the-fly in Premiere Pro, the color info was already lost in the camera.

In fact, I could argue that Premiere Pro is the better solution for certain types of editing because we leave the color samples alone when possible. If the edit is re-encoded to a 4:2:0 format, Premiere Pro can use the original color samples and pass those along to the encoder in certain circumstances. Upsampling and downsampling can cause errors, since the encoder can’t tell the difference between the original color samples and the rebuilt, averaged ones.

I’m not trying to knock intermediate codecs – there are some very valid reasons why certain people need them in their pipeline. But, for people just editing in the Adobe Production Premium suite, they won’t magically add more color data, and may waste a lot of your time. Take advantage of the native editing in Premiere Pro CS5, and you’ll like what you see. :-)

June 26, 2010

What is YUV?

Another area I’m getting pelted with questions about is the little YUV logo on some Premiere Pro effects:

YUV Filters.jpg

What exactly is YUV when talking about video? Well, it’s a way of breaking the brightness and colors in the image down into numbers, and it’s a little different from RGB, which we discussed last time. Just as a refresher, most cameras take the light coming into the lens and convert it into 3 sets of numbers, one for Red, one for Green, and one for Blue. This is called RGB, and in my last article we discussed how those numbers can come in different bit depths.

There’s one big problem with RGB color – it’s tough to work with. If I need to lower the brightness uniformly on an image, I need to do it to all 3 colors. There’s also a LOT of redundancy in the data. To combat this redundancy, there’s a different way of storing the info called YCbCr, which breaks the signal down into a Y, or luminance channel, and 2 channels that store color info without any brightness – a Blue channel and a Red channel, neither of which contains brightness information.

Now, the correct way to abbreviate this would be Y, Cb, Cr. However, I want you to try saying YCbCr 10 times, and compare that with saying RGB 10 times. Y, Cb, Cr, is a mouthful. Some engineer somewhere decided that saying YCbCr was just too inconvenient, and borrowed another color term, YUV, to use instead. Technically speaking, saying “YUV” to describe YCbCr is not accurate at all, but the name stuck, and so now, most people who are talking about YCbCr use the term YUV incorrectly. It’s like calling an American football team that plays in New Jersey the “New York Giants.”

Here’s a graphic from the Wikimedia Commons site that shows RGB breakdowns of a frame:
Barn_grand_tetons_rgb_separation.jpg

As you can see, the full color image is separated into a Red channel, a Green Channel, and a Blue channel. Pretty straightforward.

Here’s the same image, broken down into YUV channels:

Barns_grand_tetons_YCbCr_separation.jpg

You can see that YUV provides essentially a “grayscale” image from one channel, and the color information is separated out into 2 channels – one for Blue information minus the brightness, and one for Red info minus the brightness. In this particular image, the Blue channel shows purple/yellow tones, and the Red channel shows red/cyan tones.

The Luminance channel behaves similarly to the numbers I described in my last article – it’s a number that starts at 0 and goes up to a maximum number. Easy Squeezy, lemon peasy.

However, the 2 color channels operate a little differently – 0 is a “midpoint” in the number, and there are positive and negative steps. If you look at the above example, you’ll see that the white snow in the shots has a grey color in both the color channels. That’s because it has a 0 value in both color channels. The barn has a negative value in the Blue channel (producing a yellowish color) and a positive value in the Red channel (producing, well, red colors.) Green colors are derived from a mix of negative values in both channels.

Using the “knob” graphic from my last post, here’s what a set of control knobs would look like for an 8-bit YUV signal:
3 knobs.jpg
The Y knob has 256 steps, from 0-255, and the U and V knobs range from -128 to +127.
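
If you’re curious what those numbers actually look like, here’s a rough Python sketch using the standard BT.601 luma weights. It’s shown full-range for simplicity – real video pipelines usually use a narrower “studio” range – but the behavior matches what’s in the barn image: white snow sits at 0 on both color knobs, a barn red goes negative on U and positive on V, and a green goes negative on both:

```python
# Rough full-range BT.601-style RGB -> Y, U, V conversion, with U and V centered on 0.
# (Real video uses studio-range scaling and offsets, but the behavior is the same.)

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b         # brightness
    u = -0.168736 * r - 0.331264 * g + 0.5 * b    # "blue minus brightness"
    v = 0.5 * r - 0.418688 * g - 0.081312 * b     # "red minus brightness"
    return round(y), round(u), round(v)

print(rgb_to_yuv(255, 255, 255))  # white snow:  (255, 0, 0) -- both color knobs at their midpoint
print(rgb_to_yuv(180, 40, 30))    # barn red:    negative U, positive V
print(rgb_to_yuv(40, 160, 60))    # green grass: negative U and negative V
```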

Most major video formats – MPEG-2, AVCHD, DVCPROHD, H.264, XDCAM – all use YUV color space. The pixels are all stored using some variant of Y, Cb, Cr. There are 10-bit and 12-bit versions of YUV as well, and they also behave similarly to 10/12 bit per channel RGB.

When effects are used on a video frame, sometimes the effect needs to convert the values back to RGB before the effect can be applied. In 8-bit or 16-bit-per-channel color space, there can potentially be “rounding errors” when performing this calculation. There are YUV color values that, when converted to RGB, would produce negative values – values that simply can’t be represented in 8-bit or 16-bit integers. This can lead to situations where pixels that should pass through an effect unchanged will, in fact, change in an unwanted way.
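
Here’s a small sketch of why that happens, using the same rough full-range BT.601-style math as above – an illustration of the problem, not Premiere Pro’s actual code. A storable YUV triplet can convert to RGB values below zero, and an integer pipeline has no choice but to clamp them:

```python
# Why converting YUV back to 8-bit RGB can bite: some storable YUV triplets map to
# RGB values below 0 (or above 255), and an integer pipeline has to clamp them.
# (Rough full-range BT.601-style math again -- an illustration, not Premiere Pro's code.)

def yuv_to_rgb(y, u, v):
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return r, g, b

r, g, b = yuv_to_rgb(30, -100, 110)      # a dark, strongly colored pixel
print(r, g, b)                           # green and blue land below zero

clamped = [min(255, max(0, round(c))) for c in (r, g, b)]
print(clamped)                           # an 8-bit pipeline throws that information away for good
```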

Effects in Premiere Pro that have the YUV logo do their processing directly on the YUV values – at no point are the pixel values converted to RGB, so you won’t see any unwanted color shifting.

32-bit-per-channel color space has the color precision to convert cleanly from YUV to RGB, and will not cause any of these rounding errors. In Premiere Pro, all of the 32-bit effects are “safe” to use.

Let’s look at the same examples from my last article:

1. A DV file with a Gaussian blur and a YUV color corrector exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write DV at 8-bit. The color corrector and Gaussian blur are processed natively in YUV, so color accuracy is maintained (although it’s 8-bit).

2. A DV file with a blur and a YUV color corrector exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than in the previous example, and again, the signal path is pure YUV.

3. A DV file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be higher quality still because the final output format supports greater precision. AND, the signal path is again pure YUV.

4. A DPX file with a blur and a color corrector exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data. The signal path here is YUV as well.

5. A DPX file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will retain full 32-bit YUV precision through the whole pipeline.

As you can see, Premiere Pro really tries to keep color fidelity, and by using either YUV or 32-bit effects, you can be sure that the color in your video is as accurate as possible.

June 3, 2010

Understanding Color Processing: 8-bit, 10-bit, 32-bit, and more

Recently, I’ve been getting a lot of questions about the new icons in the Premiere Pro Effects panel, in particular, the “32-Bit” icon seen here:
32BitIcon.png

People have asked how these effects relate to the 64-bit Mercury Engine, and whether they are limited in some way. The answer is no – the icon means that these effects use 32-bit floating point color, the gold standard of color processing.

Trying to understand video color precision is, well, a confusing task. There are so many different terms floating around – 8-bit and 10-bit color are used to describe cameras, while software talks about 8 bits per channel, 16 bits per channel, and 32-bits-per-channel “floating point” color. What does it all mean?? And, for the colorist, how does Premiere Pro handle color?? If these are burning questions in your mind, then read on.

When your camera processes the light coming in the lens into data, it has to assign a number to each of the colors being recorded. Each pixel gets its own set of numbers. Typically a low number means very little of that color – a pixel with an RGB value of 0,0,0 would be completely black.

If 0,0,0 represents black, then what represents white? Well, that depends on what we call the bit depth. The higher the bit depth, the bigger each number can get.

Let’s look at one color – blue. In an 8-bit world, blue is represented by a number that can be between 0 and 255. If I had a knob to adjust the value of blue, it would look like this:

8BitBlueKnob.png

Pretend that this knob makes a “click” every time you raise or lower the value, and there are 256 distinct “clicks” on the knob. This means that there are 256 “steps” between the brightest, most saturated blue, and no blue at all. A “middle-of-the-road” value of blue would be around 128 on this scale. Adjustments have to be made in whole “clicks” – there is no value of “127.5” in 8-bit color precision.

Now, let’s look at 10-bit blue. A knob to adjust blue on a 10-bit device might look like this:

10BitBlueKnob.png

Wow! The knob now goes to 1023! This doesn’t mean that 10-bit blue is more saturated – it means that there are more steps to get to the maximum saturated shade of blue. A 10-bit value of 1023 is potentially the same color as the 8-bit value of 255. If you look at the two knobs in the pictures, you’ll see that the middle points of the knobs are 128 and 512, and these values also represent the same color. There are just a LOT more subtle shades of blue selectable on the 10-bit knob. Again, there are no intermediate steps; no decimal values. There are 1024 distinct “clicks” on the knob.

Just for giggles, here’s what a blue control knob would look like for a 12-bit device:

12BitBlueKnob.png

Starting to see the pattern? The higher the color bit depth, the higher the color precision. A higher color bit depth means more variety, more choices on how much color can be used for each pixel.
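
A quick way to see that the bigger numbers don’t mean “more blue” is to divide each value by the maximum for its bit depth. Here’s a trivial sketch – the midpoints differ by a rounding hair because each scale has an even number of steps, but they land on essentially the same shade:

```python
# The same "amount of blue" expressed at three different bit depths.  Dividing by the
# maximum code value puts them all on one 0.0-1.0 scale.

blue_8bit  = 128     # middle of the 8-bit knob  (0-255)
blue_10bit = 512     # middle of the 10-bit knob (0-1023)
blue_12bit = 2048    # middle of the 12-bit knob (0-4095)

print(blue_8bit  / 255)    # ~0.502
print(blue_10bit / 1023)   # ~0.501
print(blue_12bit / 4095)   # ~0.500 -- essentially the same shade, just finer and finer steps
```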

Each pixel has more than just one number – it usually has 3 numbers assigned to it, either RGB or something called YUV, which I’m not going to explain here. All of these values are the same bit depth – if a camera records in an 8-bit format, each value for each pixel is an 8-bit number.

Now, 8-bit, 10-bit, and 12-bit color are the industry standards for recording color in a device. The vast majority of cameras use 8 bits for color. If your camera doesn’t mention the color bit depth, it’s using 8 bits per channel. Higher-end cameras use 10-bit, and they make a big deal about using “10-bit precision” in their literature. Only a select few cameras use 12 bits, like the RED ONE digital cinema camera.

Software like After Effects and Premiere Pro processes color images using color precision of 8-bits, 16-bits, and a special color bit depth called 32-bit floating point. You’ve probably seen these color modes in After Effects, and you’ve seen the new “32” icons on some of the effects in Premiere Pro CS5.

8-bit processing actually works the same as 8-bit on the camera – each color for each pixel is stored as a value of 0-255. When adjustments to colors are made, they move in whole-number steps. So, for example, if I had a blue value of 128, and wanted to make a small adjustment, I could change the value to 127 or 129.

To enable more steps, there’s 16-bit color. 16-bit color is used by After Effects and Photoshop, but isn’t in Premiere Pro CS5. This works the same way, except each channel has 32,768 steps to choose from. Any time you drop an 8-bit source into a project using 16-bit color, the 8-bit values are remapped to their relative positions in the new color space. Zero stays zero, and 255 becomes 32768. The midpoint value of 128 in my last example would be mapped near the middle of the new scale. That’s a whole lot more steps to work with – I can make much more subtle adjustments to the amount of blue in the image. 16-bit color also requires whole “clicks” – you still can’t use a decimal value, like 16384.5, to define a color value.
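
As a quick sketch of that remapping (the exact scale factor is an implementation detail; this just maps 0–255 onto 0–32768 proportionally), black stays at zero, white lands at the top, and every old 8-bit “click” now spans roughly 128 new steps:

```python
# Proportionally remap an 8-bit code value (0-255) into a 16-bit working space (0-32768).
def to_16bit(value_8bit):
    return round(value_8bit * 32768 / 255)

print(to_16bit(0))     # 0     -- black stays black
print(to_16bit(255))   # 32768 -- white maps to the new maximum
print(to_16bit(1))     # 129   -- one old 8-bit "click" now spans roughly 128 new steps
```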

Both 8-bit and 16-bit color still suffer from limits when you hit the top or bottom of the range. If I start to play with values – brightening and darkening – I can push pixels to the maximum values. If colors are pushed too far, the values run into a “wall”: bright areas in 8-bit precision can turn into one big undefined blob. In 16-bit, there’s more latitude in the middle, but the extra precision doesn’t help if the values go over the top or bottom end of the scale.

Here’s an example image where a brighten filter has been applied to an 8- or 16-bit image, and a darken filter has then been applied to the actor’s head:

8BitWizard.png

If you look at the darkened area on the upper right part of the head, it’s just a big bright blob. All the detail is gone from the bright parts of the image, because all the values were maxed out, and the darken filter is just reducing these values to uniform shades of gray. The brighten filter brought all the pixels up to 255,255,255 (8-bit) or 32768,32768,32768 (16-bit), and the darken filter is reducing all 3 values by the same amount.

32-bit color gets around this by mapping the colors differently, leaving room for over-bright and under-dark values. Instead of pinning 0 and the old maximum to the very ends of the scale, the normal range of colors sits in the middle of the dial – there are many, many steps available below the old “minimum” and above the old “maximum” values.

Here’s the same image as shown above, but the processing was done in a 32-bit sequence:

32BitWizard.png

As you can see, the Darken filter is bringing back the detail, where the head and the light meet, because 32-bit floating point color can store the differences in the pixels, even when the values are pushed above 100% white.

When you see a 32-bit value mapped into a number, it’s expressed with a decimal. The standard range of colors is mapped to values between 0.0000 and 1.0000, so 0 in 8-bit mode is 0.0000 in 32-bit, and 255 in 8-bit is mapped to 1.0000. The middle of the regular range is 0.5000. Thanks to the decimal place, there are still many thousands of steps to make adjustments, but now, there’s also the potential to go “out of range” and create values that aren’t visible.
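
Here’s a toy Python sketch of the difference – just an illustration of the concept, not how Premiere Pro’s internal processing actually works. A brighten followed by a darken wipes out highlight detail when every step is clamped to the 0.0–1.0 range, but keeps it when over-range values are allowed to survive between the steps:

```python
# A toy comparison of 8-bit-style (clamp at every step) vs. float-style (clamp only at
# the very end) processing.  An illustration of the concept, nothing more.

pixels = [0.80, 0.90, 1.00]       # three neighboring highlight pixels, 0.0 = black, 1.0 = white

def brighten(v):
    return v + 0.3

def darken(v):
    return v - 0.4

# Integer-style pipeline: anything over 1.0 is clipped after the brighten...
clipped = [min(1.0, brighten(v)) for v in pixels]
print([round(darken(v), 3) for v in clipped])               # [0.6, 0.6, 0.6] -- the detail is gone

# Float-style pipeline: the over-bright values survive between effects...
overbright = [brighten(v) for v in pixels]
print([round(min(1.0, darken(v)), 3) for v in overbright])  # [0.7, 0.8, 0.9] -- the detail comes back
```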

Using the same knob graphic, as above, you’d have to think of 32-bit floating point as a smooth-turning knob, with an LED readout above it, like this:

FloatingPointBlue.png

With 32-bit float color, you have a near-infinite number of values, and you can store over-bright and under-dark values as you manipulate and play with color values.

(BTW – the reason it’s called “floating point” is because the position of the decimal can change as needed. The maximum number is not 9.9999. It’s 99999. The decimal values get lost at the higher ends of the scales, but the ends of the scales are almost never used.)
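You can actually see that trade-off directly. Python’s floats are 64-bit rather than 32-bit, but the behavior is the same: the gap between adjacent representable numbers is tiny in the normal 0.0–1.0 range and grows out at the extreme ends of the scale, which are almost never used:

```python
import math

# The gap to the next representable float ("one unit in the last place") grows with
# the size of the number.  Python's floats are 64-bit, but 32-bit floats behave the same way.
print(math.ulp(0.5))       # ~1.1e-16 -- incredibly fine steps inside the normal 0.0-1.0 range
print(math.ulp(100000.0))  # ~1.5e-11 -- much coarser steps out at the rarely-used extremes
```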

Chris Meyer has a great tutorial that describes 32-bit float color here:

http://www.lynda.com/home/Player.aspx?lpk4=30903

Okay, now how does this relate to Premiere Pro? Some of Premiere Pro’s effects are full 32-bit floating point effects and have the ability to work in this high color precision. There’s a little secret to making this happen, however. Since 32-bit float color is more memory intensive, you need to turn on a small check box in the Sequence settings:

MaximumBitDepth.png

This “Maximum Bit Depth” check box enables your timeline sequence to work in 32-bit floating point color if those effects are used on the timeline. Keep in mind that this does increase the RAM used by Premiere Pro, so it’s recommended for higher-end systems.

If you have existing sequences, right-click on the sequence in the bin, and choose Sequence Settings to change this value. You can change it any time.

Most file formats are 8-bit formats – rendering back to a DV file, or a QuickTime file means that the color precision needs to be crunched back to 8-bit. If a file format does support a higher color precision (DPX and AVC-Intra P2 are 2 formats that support 10-bit precision) then there will be a “Maximum Bit Depth” selection in the Export Settings dialog box as well. Here’s an example of the Max Bit Depth check box for AVC-Intra output:

AVCIntraBitDepth.png

Some formats, like DPX, have built-in presets for output that include this extra color precision:

DPXMaxBitDepthPresets.png

Steve Hoeg, one of the Premiere Pro engineers, provided some examples of how Premiere Pro will handle color precision in different scenarios:

1. A DV file with a blur and a color corrector exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write DV at 8-bit.

2. A DV file with a blur and a color corrector exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than in the previous example.

3. A DV file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be higher quality still because the final output format supports greater precision.

4. A DPX file with a blur and a color corrector exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data.

5. A DPX file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will retain full precision through the whole pipeline.

6. A title with a gradient and a blur on an 8-bit monitor. This will display in 8-bit and may show banding.

7. A title with a gradient and a blur on a 10-bit monitor (with hardware acceleration enabled.) This will render the blur in 32-bit, then display at 10-bit. The gradient should be smooth.

There are other examples, which I hope to highlight in my next blog entry.
