Thursday, November 6, 2008

What Should RED Do?


With a week to go before RED's big announcement of revised Scarlet and Epic specs, there's a temptation to speculate on what wonders Jim has in store for us. Instead, I'd like to present a roadmap that I would like to see RED follow for mitigating some of the confusion that surrounds their raw workflow. Specifically, as you might imagine, regarding color.

The RED One shoots raw. No color processing is baked into the footage, you do that yourself in post. The best aspect of this feature is also the worst—you can process it however you want. So people do—everyone a different way. This flexibility provides a lot of power, and more than enough rope to hang yourself. While the RED One, as comparatively affordable as it may be, is professional equipment that expects a professional post pipeline run by knowledgeable personnel, Scarlet is likely to be a much more accessible camera. Upon its release, thousands of new customers will be shooting raw and processing it "however they like," which will all too often mean not getting the very best images the camera was capable of capturing.

Part of the brilliance of the Panavision Genesis camera is the Panalog color space it uses. Panalog packs the substantial dynamic range of the Genesis into a 10-bit image that can be recorded to tape or disc, and dropped directly into a video post pipeline or a film DI workflow.

RED needs their own Panalog. And they almost have it, as they have created a RED Log transfer function that does a good job of safely storing the full-range raw signal in a 10-bit file. But even when using RED Log, there are still a dozen other settings that can radically affect the image. Panalog, on the other hand, is more than a predefined curve. It's a color matrix as well, and most importantly, an exposure guideline. Rather than give you a hundred ways to screw up your images, Panavision instead suggests one very reliable way to shoot with the Genesis, and only one flavor of results. Your mileage may vary and of course you can branch out from their guidelines, but the camera rolls off the truck with an exposure cheat sheet and a recommended workflow that every post house understands. This is what RED needs. This is what RED Log must become if the diverse range of Scarlet customers are to avoid shooting themselves in the feet.

Here's how Panalog works. The Genesis, like all digital cameras, records linear light energy at the sensor. That signal is then converted to Panalog, which involves both matrixing the colors (to the native white balance of the sensor, no choice) and remapping the light values to a logarithmic scale. Log, as you may remember, is where exposure changes are represented by a consistent numerical offset. This has many advantages for color correction, and has the side benefit of being perceptually uniform on most displays. Shadow detail is preserved instead of being quantized away.
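To make that concrete, here's a minimal sketch of a log encode in Python. The constants (90 codes per stop, 18% gray at code 445) are illustrative stand-ins, not Panavision's actual numbers; the point is that every doubling of light lands a fixed offset apart:

    import numpy as np

    # Generic log encode: map linear light to code values so that each
    # stop (a doubling of light) becomes a constant numeric offset.
    # These constants are made up for illustration; they are not Panalog's.
    def log_encode(linear, codes_per_stop=90, gray_code=445, gray_linear=0.18):
        stops_from_gray = np.log2(linear / gray_linear)
        return gray_code + codes_per_stop * stops_from_gray

    # Values one stop apart in linear light...
    for lin in [0.09, 0.18, 0.36, 0.72]:
        print(f"linear {lin:.2f} -> code {log_encode(lin):.0f}")
    # ...come out exactly 90 codes apart: 355, 445, 535, 625.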

In other words, it's an efficient package, and it looks nice. It throws nothing away, you can view it intuitively on a monitor, and you can drop it right into a color correction suite—set up for either video or film—and get right to work.

10-bit images have pixel values ranging from zero to 1023. Panalog maps the camera's "black" to 64 and its brightest signal to 1019.

Panavision recommends the following exposure guideline: Put 18% gray at 36% on your waveform monitor. This will create a Panalog value of 382 in a 10-bit file.
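As a sanity check, those numbers hang together if you assume the common SMPTE 10-bit legal-range mapping, where 0% on the waveform is code 64 and 100% is code 940. That mapping is my assumption, not something stated in the guideline, but it lines up well:

    # Hypothetical check: waveform percentage to 10-bit code value,
    # assuming the SMPTE legal-range mapping (0% = 64, 100% = 940).
    def waveform_to_code(percent):
        return round(64 + (percent / 100.0) * (940 - 64))

    print(waveform_to_code(36))   # 379 -- within a few codes of the quoted 382
    print(waveform_to_code(109))  # 1019 -- matches Panalog's stated maximum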

This simple recommendation, combined with the Panalog curve and the low noise floor of the Genesis's sensor, constitutes the true might of the Genesis/Panalog combo.

The rest of the values shake out like this (remember that waveforms max out at 109%):


You have five solid stops both over and under 18%. You are holding up to 600% scene illumination, or just over five stops over 18% gray.
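The arithmetic behind that 600% figure is just repeated doublings of 18% gray:

    # Scene illumination n stops over 18% gray: it doubles with each stop.
    for n in range(1, 6):
        print(f"+{n} stops over 18% gray = {18 * 2**n}% scene illumination")
    # +5 stops lands at 576%, so 600% is indeed just over five stops up.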

This, in my experience, is awesome. You can safely expose one way for your entire shoot, indoors and out, and never risk noisy shadows or egregiously clipped highlights. If you're in an uncontrolled setting, you can safely underexpose by a stop or so to try to hold onto a highlight. You can transfer these images straight to film, and all that highlight latitude will fill the print's ample shoulder, resulting in a film-like overexposure characteristic.

Is it as good as Kodak color negative scanned to Cineon log? No. But it's the best thing going in digital cinema.

Panalog is good because it works, but it's also good because it's not a moving target. None of this information has changed since the introduction of the Genesis in 2004. Meanwhile, every single RED user out there is figuring out their own workflow. Their own ISO rating, their own exposure rules of thumb, and their own favorite RED Alert settings. And then starting over from scratch with each new firmware build. It's a digital cinema Tower of Babel.

What RED needs to do is hone RED Log into a colorimetry strong and consistent enough that we can default to it 99% of the time. RED needs to publish exposure guidelines for RED Log and build tools into the cameras (such as a false color mode) to aid us in hewing to them. If I hand someone an R3D file and ask for it to be transferred RED Log, there should be only one way to do it. Don't dispense with the infinite flexibility of RED Alert, but give us a solid default setting that faithfully represents the best of what RED's cameras have to offer.

You've built an amazing camera, RED, and you're about to build two more. It's time to show us how to use them.

Sunday, June 1, 2008

On Clipping, Part 1

Everybody clips. It's like a children's book title for the digital cinema crowd—Everybody Clips. The message: It's OK to clip, young digital cinema camera—everyone has to sooner or later. Even film.

It's true. Throw enough light at a piece of color negative and eventually it stops being able to generate any more density. Clipping, i.e. exceeding the upper limit of a medium's ability to record light, happens with all image capture systems.

Everybody clips. So the thing worth talking about is how, and when.

One of the reasons this subject gets confusing is that it starts with understanding a common topic here on ProLost, that of light's linearity and how that linearity is remapped for our viewing pleasure. We all know that increasing light by one stop is the same thing as doubling the amount of light. Yet something one stop brighter doesn't look "twice as bright" to our eyes. We perceive light non-linearly.

Linear-light images, or images where the pixel values are mapped 1:1 to light intensity values, are useful for some tasks (like visual effects compositing) but inconvenient for others. They don't "look correct" on our non-linear displays. And they are inefficient when encoded into a finite number of bits. It's worth understanding that inefficiency, since every digital cinema image (and digital photograph) starts as a linear-light record.

Let's say you have a grayscale chip chart that looks like this:


Each swatch in the chart is twice as reflective as the previous one, and photographs as one stop brighter. If you chart those light values, it looks like this:


Each chip is twice as bright as the previous, so each bar is half as high as the next. This image shows off the disparity between measured light and perceived light. Does the swatch on the far right look 125 times brighter than the one on the far left? No, and yet it is.

Notice also how the orange bars representing the reflectance of the swatches are quite short at the left. This is the problem with storing linear light images—they require a lot of fidelity in the shadow areas. One of the many reasons to gamma-encode an image is to better distribute the image brightness values across the available range of pixel values. Here are those same chips, graphed with the gamma 2.2 encoding used to display them:


Note that we now have much more room for variation in dark tones. If we have a limited bit depth available for storing this image, we're far less likely to see banding or noise in the shadows with this arrangement.
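Here's a sketch of that redistribution in numbers. I'm assuming an eight-chip chart, one stop apart, brightest chip at 1.0 (a 128:1 span, close to the roughly 125:1 described above), stored in an 8-bit file:

    import numpy as np

    # Eight illustrative chips, one stop apart, brightest = 1.0.
    chips = np.array([1/128, 1/64, 1/32, 1/16, 1/8, 1/4, 1/2, 1.0])

    linear_8bit = np.round(chips * 255).astype(int)
    gamma_8bit = np.round((chips ** (1 / 2.2)) * 255).astype(int)

    print("linear:", linear_8bit)  # [  2   4   8  16  32  64 128 255]
    print("gamma :", gamma_8bit)   # [ 28  39  53  72  99 136 186 255]
    # Linear: the top stop alone spans codes 128-255, while the two darkest
    # chips land at codes 2 and 4. Gamma 2.2 spreads the chips far more evenly.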

Why is this important? As we've already discussed, it is underexposure tolerance that limits our ability to capture a scene on a digital sensor. We can set our exposure to capture whatever highlight detail we want, but at some point our shadows will become too noisy or quantized to be usable. And this danger is exponentially more apparent when you realize that the chip is capturing linear light values. The first thing we do with those values is brighten them up for display and/or storage, revealing in the process any nastiness in the lower registers.

You can think of the orange bars in the first graph as pixel values. When you shoot the chart with your digital camera, you're using half of your sensor's dynamic range to capture the values between the rightmost chip and the one next to it! By the time you start looking at the mid-gray levels (the center two chips), you're already into the bottom 1/8th of your sensor's light-capturing power. You're putting precious things like skin tones at the weakest portion of the chip's response and saving huge amounts of resolution for highlights that we have trouble resolving with our naked eyes.

This is the damning state of digital image capture. We are pushing these CCD and CMOS chips to their limits just to capture a normal image. Because in those lower registers of the digital image sensor lurk noise, static-pattern artifacting, and discoloration. If you've ever tried to salvage an underexposed shot, you've seen these artifacts. We flirt with them every time we open the shutter.

Any amount of additional exposure we can add at the time of capture creates a drastic reduction in these artifacts. This is the "expose to the right" philosophy: capture as bright an image as you dare, and make it darker in processing if you like. You'll take whatever artifacts were there and crush them into nice, silky blacks. This works perfectly—until you clip.

The amount of noise and static-pattern nastiness you can handle is a subjective thing, but clipping is not. You can debate about whether an underexposed image is salvageable or not, but no such argument can be had about overexposure. Clipping is clipping, and while software such as Lightroom and Aperture feature clever ways of milking every last bit of captured highlight detail out of a raw file, they too eventually hit a brick wall.

And that's OK. While HDR enthusiasts might disagree, artful overexposure is as much a part of photography and cinematography as anything else. Everybody clips, even film, and some great films such as Road to Perdition, Million Dollar Baby and 2001: A Space Odyssey would be crippled without their consciously overexposed whites.

The difference, of course, is how film clips, and when.

How does film clip? The answer is gracefully. Where digital sensors slam into a brick wall, film tapers off gradually.

When does film clip? The answer is long, long after even the best digital camera sensors have given up.

More on that in part 2. For now, more swatches! A digital camera sensor converts linear light to linear numbers. Further processing is required to create a viewable image. Film, on the other hand, converts linear light to logarithmic densities. Remember how I described the exponential increase in light values as doubling with each stop? If you graph that increase logarithmically, the results look like a straight line. Here are our swatches with their values converted to log:


Notice that the tops of the orange bars are now a straight line. This is no accident, of course. By converting the image to log, we've both maximized the ability of a lower-bit-depth medium to store the image and we've distributed the image values in a manner that simulates our own perception of light. Exponential light increase is now numerically linear, as it is perceptually linear to our eyes.

We've also, in a way, simulated film's response. As I said, film responds logarithmically to light. In other words, it responds to light much the way our eyes do. This sounds nice, and it is. A big reason is that both film and digital sensors have noise in their responses, noise evenly distributed across their sensitivity. Because film's noise is proportional to its logarithmic response, which matches our perception of light, the result is noise that appears evenly distributed throughout the image. Digital sensors have noise evenly distributed across their linear response, which means that when we boost the shadows into alignment (i.e. gamma encode), we boost the noise as well. This results in images with clean highlights and noisy shadows. Another way to think of it is that in a digital photo of our chip chart, each swatch will be twice as noisy as the one to its right! You can see this simulated below, where I've added 3% noise to the (simulated) linear capture space before applying the gamma of 2.2 for display:
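Here's a minimal recreation of that simulation, using the same assumed eight-chip chart as the earlier sketch: noise with a standard deviation of 3% of full scale, added in linear, then gamma 2.2 applied:

    import numpy as np

    rng = np.random.default_rng(0)

    # Eight chips one stop apart, in linear light.
    chips = np.array([1/128, 1/64, 1/32, 1/16, 1/8, 1/4, 1/2, 1.0])

    # Add noise in the linear capture space (sigma = 3% of full scale),
    # clip negatives, then gamma-encode for display.
    samples = chips[:, None] + rng.normal(0.0, 0.03, size=(8, 100_000))
    encoded = np.clip(samples, 0.0, None) ** (1 / 2.2)

    # The same linear noise is far larger in the dark chips after encoding:
    for chip, row in zip(chips, encoded):
        print(f"chip {chip:.4f}: encoded noise sigma = {row.std():.3f}")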


On film, each chip would have roughly the same amount of noise. Film is both more accommodating at the top end and more forgiving of underexposure, as it does not have a preponderance of noise lurking in its shadows.

Next time: Some concrete clipping examples from movies and TV, and more about how the top-end response of film can be simulated by a digital camera.

Friday, March 14, 2008

RED Log

It's Friday, the traditional time when we at ProLost sit back with a glass of tawny port and reverse-engineer camera transfer functions. Join us, won't you? Today's subject: RED Log.

RED Log is one of the options available in RED Alert (or the at-your-own-risk RED Cine). It's a logarithmic transfer function that appears to be designed to map the camera's linear-light image to a DI-friendly tonal range.

Like Panalog, the RED Log transfer function can be matched using the Cineon log/lin tools available in common compositing applications.
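For reference, here's the shape of the standard Cineon log-to-linear conversion those tools implement, shown with Kodak's default film points (black at code 95, white at 685, 0.6 display gamma). Matching RED Log means entering the specific values from the screenshots below, not these defaults:

    # Standard Cineon 10-bit log-to-linear conversion (default film points).
    def cineon_to_linear(code, black=95, white=685,
                         density_per_code=0.002, gamma=0.6):
        gain = density_per_code / gamma
        offset = 10.0 ** ((black - white) * gain)
        return (10.0 ** ((code - white) * gain) - offset) / (1.0 - offset)

    print(cineon_to_linear(685))  # 1.0 -- the white point maps to linear 1.0
    print(cineon_to_linear(95))   # 0.0 -- the black point maps to linear 0.0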

Here are the correct settings in After Effects (set to display pixel values in decimal):


In Shake:

And in Nuke:

Sunday, March 2, 2008

Exposing to the Left vs. Exposing to the Right

Google this topic and you will discover a war of sorts, a raging disagreement between those who say you should overexpose digital photography as much as possible, referred to as “exposing to the right” since it piles up the histogram toward the right edge, and those who recommend the opposite: underexpose and create a histogram that is left-leaning.

You must expose to the left because clipping is bad, some say. Overexposure is a non-concept in digital photography. Even when shooting RAW, the rightmost edge of your histogram is a sharp cliff, and it must be avoided at all costs. While you can sometimes recover highlight information from a RAW file, you’ll never get much, and if you miscalculate and clip some highlights, the results will be un-film-like and harsh. Expose to the left and always be safe.

Others contend that you must overexpose your digital photography, i.e. expose to the right, because of how digital sensors work. Unlike film, which has a logarithmic response to light, digital sensors have a linear response. So while film grain is evenly distributed across perceptual values, sensor noise lives predominantly in the shadows. It’s easy to see why when you imagine a linear sensor trying to hold four stops of exposure—if 100% sensor charge is four stops up, then one stop down is half that, or 50%, and one more stop down from there is 25%, and one more down is 12.5%. As Jason Rodriguez commented on my dynamic range post, fully half your chip’s sensitivity is devoted to the brightest stop you can hold. Each stop you drop from there doubles your noise relative to the signal. So to maximize your signal-to-noise ratio and get the cleanest image, you must overexpose as much as you can, to distribute your image across the chip’s least-noisy sensitivity range.
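Spelled out in code, that linear math looks like this:

    # Fraction of a linear sensor's range devoted to each stop down from
    # full charge: each stop down halves the remaining range.
    top = 1.0
    for stop in range(5):
        lower = top / 2
        print(f"stop -{stop + 1}: {lower:.4f}..{top:.4f} "
              f"({(top - lower) * 100:.1f}% of the sensor's range)")
        top = lower
    # The brightest stop alone occupies 50% of the range; each stop further
    # down gets half the codes of the one above, so a fixed noise floor
    # looms ever larger in the shadows.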

Like the raging war between the half-black, half-white aliens in Star Trek episode 70 (oh yeah, I went there), this is a non-argument. Both philosophies are 100% correct, and should be in play in the digital photographer’s mind when deciding on an exposure.

It’s so simple to state the combination of these two philosophies that renders both extremes silly: You should expose as bright an image as you can without clipping.

Man, that’s so much easier. I don’t know why people put so much effort into the debate.

Let’s look at some images. Here are some trucks at f/11, 1/500:


Here’s that same view at f/8, 1/250 (for a total of two stops brighter):

The first image is about a stop underexposed, although it does have small highlights that are just barely being held. The second image is clearly blown out, and appears clipped in the highlights, but holds nice shadow detail.

But these images are raw, so we have some flexibility. Here they are again, color corrected in Lightroom into a similar look:

They almost match, but if you look closely at the overexposed image you can see that, while Lightroom was able to recover a surprising amount of detail in the highlights, the backs of the white trailers are still a flat, featureless white, with abrupt cyan-to-white transitions in shading. Almost worse are the highlights in the clouds, which reveal Lightroom’s desperation.

Meanwhile, in the shadows, the underexposed image is a bit noisy, whereas the overexposed image is cleaner.

But the difference is not very noticeable. In this case, it seems the advantage goes to the underexposed image. Had I opened up a stop I could have reduced noise in the grays by half, but I’d be missing some highlight detail on the foreground truck. My fear of blowing out caused me to expose to the left, with happy results.

And now the counterexample: Another pair of images two stops apart:

Another attempt to make them match:

And while in the second image one could say that the clouds are a bit clipped, and the sky a bit low in saturation, this is hardly as noticeable as the noise in the shadows of the underexposed image:

So our second image would seem to indicate a victory for exposing to the right.

You’re probably way ahead of me on the conclusion: You cannot apply a blanket philosophy of underexposure nor of overexposure to digital photography. Instead, you must learn your camera’s nuances and seek the correct exposure for the scene—which will almost always be as richly exposed as possible without clipping. Sometimes the resultant histogram will be left-leaning, and sometimes it will be piled up to the right. Make the shot, not the histogram.