
Entries in Image Nerdery (53)

Wednesday, September 30, 2009

Passing the Linear Torch

I used to show you weird crap like this all the time

Back in the day I blogged a lot about how compositing and rendering computer graphics in “linear light,” a color space in which pixel values equate to light intensities, can produce more realistic results, cure some artifacts, and eliminate the need for clever hacks to emulate natural phenomena. Along with Brendan Bolles, who worked with me at The Orphanage at the time, I created eLin, a system of plug-ins that allowed linear-light compositing in Adobe After Effects 6 (at the mild expense of your sanity). I also created macros for using the eLin color model in Shake and Fusion. Along the way I evangelized an end to the use of the term linear to describe images with a baked-in gamma correction.
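
If you never saw that weird crap, here's the argument in miniature. A toy sketch (mine, assuming a plain 2.2 power gamma rather than the exact sRGB curve, which is close enough to show the effect): blend black and white 50/50 on gamma-encoded pixel values and the result comes out noticeably darker than what actual light would do.

```python
# Toy illustration: blending in gamma-encoded space vs. linear light.
# Assumes a simple 2.2 power gamma, not the exact piecewise sRGB curve.

GAMMA = 2.2

def to_linear(v):
    """Decode a gamma-encoded value (0..1) into a linear light intensity."""
    return v ** GAMMA

def to_gamma(v):
    """Encode a linear light intensity (0..1) back to gamma space."""
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Naive 50% mix on the encoded values, the way most apps did it:
naive = (black + white) / 2                                     # 0.50

# The same mix done on light intensities, the way the world does it:
physical = to_gamma((to_linear(black) + to_linear(white)) / 2)  # ~0.73

print(f"gamma-space blend:  {naive:.2f}")     # too dark
print(f"linear-light blend: {physical:.2f}")  # what light actually does
```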

Then Adobe released After Effects 7.0, which for the first time featured a 32-bit floating point mode, along with the beginnings of ICC color management, which could be used to semi-automate a linear-light workflow. The process was not exactly self-explanatory though, so I wrote a series of articles (1, 2, 3, 4, 5, 6) on how one might go about it.

Then I rambled endlessly on fxguide about this, and in the process managed to cast a geek spell on Mike Seymour and John Montgomery, who republished my articles on their fine site with my blessing.

This week Mike interviewed Håkan “MasterZap” Andersson of Mental Images about the state of linear workflows today on that same fxguide podcast.

Which is so very awesome, because I don’t want to talk about it anymore.

It’s just no fun going around telling people “Oh, so you put one layer over another in After Effects? Yeah, you’re doing it wrong.” Or “Oh, you launched your 3D application and rendered a teapot? Yeah, you’re totally doing it wrong.”

You are doing it wrong. And I spent a good few years trying to explain why. But now I don’t have to, because Mike and MasterZap had a great conversation about it, and nailed it, and despite the nice things they said about ProLost you should listen to their chat instead of reading my crusty old posts on the subject.

Because it has gotten much, much simpler since then.

For example, there’s Nuke. Nuke makes it hard to do anything but work in linear color space. Brings a tear to my eye.

And the color management stuff in After Effects has matured to the point that it's nearly usable by mortal men.

Since I’ve seen a lot of recent traffic on those crusty old posts, here’s my linear-light swan song: a super brief update on how to do it in AE CS4:

In your Project Settings, select 32 bpc mode, choose sRGB as your color space, and check the Linearize Working Space option:


When importing video footage, use the Color Management tab in Interpret Footage > Main to assign a color profile of sRGB to your footage:


Composite your brains out (not pictured).

When rendering to a video-space file, use the Color Management tab in your Output Module Settings to convert to sRGB on render:

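For the curious, that whole setup boils down to one round trip. Here's a minimal sketch (mine, not Adobe's code; working-space primaries and everything else AE-specific are omitted) using the exact piecewise sRGB transfer functions:

```python
import numpy as np

# Minimal sketch of a linearized sRGB working space:
# decode footage to linear light, composite there, re-encode on output.

def srgb_to_linear(v):
    """sRGB-encoded values (0..1) -> linear light."""
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    """Linear light (0..1) -> sRGB-encoded values."""
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

# Step 1: interpret footage -- assign sRGB, linearize into the working space.
fg = srgb_to_linear(np.array([0.8, 0.2, 0.2]))  # a reddish foreground pixel
bg = srgb_to_linear(np.array([0.1, 0.1, 0.9]))  # a bluish background pixel

# Step 2: composite your brains out, in linear, 32 bpc float.
alpha = 0.5
comp = fg * alpha + bg * (1.0 - alpha)

# Step 3: the output module converts back to sRGB for the rendered file.
print(linear_to_srgb(comp))
```
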
That’s the how. For the why, well, those crusty old articles could possibly help with that, especially this one on color correction in linear float, and this one on when not to use linear color space. Part 6 is still pretty much accurate in describing how to extract a linear HDR image from a single raw file using the Adobe Camera Raw importer in After Effects, and this article covers the basics fairly well, although you should ignore all the specifics about the Cineon Emulation mode, which never should have existed. This little bit of evangelism is still a good read.

But the ultimate why answer is, and has been for a while now, within the pages of a book you should have anyway: Adobe After Effects CS4 Visual Effects and Compositing Studio Techniques (deep breath). Brendan Bolles guest-authored a chapter on linear light workflow, and not only does he explain it well, he gives many visual examples. And unlike me, Mark Christiansen keeps his book up-to-date, so Brendan’s evergreen concepts are linked directly to the recent innovations in After Effects’s color manglement.

OK, that’s it. Let us never speak of this again.

Thursday, August 6, 2009

The Foundry Un-Rolls Your Shutter

The Foundry have released a $500 plug-in for Nuke and After Effects that attempts to remove rolling shutter artifacting, AKA “Jell-o cam,” from CMOS footage (RED One, Canon HV40, Canon 5D Mark II, etc.). It’s based on a technology demo that they showed at NAB earlier this year.

I’ve tried it, and it works—sometimes. The Foundry are well known for being leaders in the motion estimation field, and they have harnessed their unique experience in this area to attempt the impossible: a per-pixel reconstruction of the frame, with every part where it “would have been” had the shutter opened and closed globally instead of reading out a line at a time.

When it works, it’s brilliant. But the more motion you have in the frame, the more likely this plug-in, mighty though it may be, will get confused. Unfortunately, when this happens, parts of the image turn to scrambled eggs.
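
To make the idea concrete, here's a toy sketch (mine, emphatically not The Foundry's algorithm) of the underlying geometry: each row of a rolling-shutter frame was read out a little later than the one above it, so if you know how the image was moving, you can shift each row back to a common moment in time. Real footage needs dense per-pixel motion estimation, which is the hard part; this assumes a known, uniform horizontal pan.

```python
import numpy as np

# Toy rolling-shutter correction. In a rolling-shutter frame, row y is
# read out later than row 0 by y / height of the frame interval. Given
# a motion estimate, each row can be shifted back to a common moment.

def unroll(frame, velocity_px_per_frame, reference_row=0):
    """Shift each row of a (H, W) grayscale frame to where it
    'would have been' at the reference row's readout time."""
    h, w = frame.shape
    out = np.empty_like(frame)
    for y in range(h):
        # Fraction of the frame interval by which this row lags the reference:
        dt = (y - reference_row) / h
        # Undo the pan that happened during that delay:
        shift = int(round(velocity_px_per_frame * dt))
        out[y] = np.roll(frame[y], -shift)
    return out
```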

It’s a very similar problem to re-timing 30p footage to 24p—in fact, I wish The Foundry had added an option for frame rate conversion to this plug-in. Although, in fairness, I would only rarely use it.

Folks are always asking me about converting 30p to 24p. I responded in a thread on the Rebel Café:

…The more motion you have, the more likely it is that any optical flow re-timing system is going to encounter problems.

Not even getting into the combat shots, here’s the Apple Compressor method referenced in Philip’s tutorial failing on one of the more sedate shots in After The Subway:


Now I’m not saying that you won’t occasionally see results from 30-to-24p conversions that look good. The technology is amazing. But while it can work often, it will fail often. And that’s not a workflow. It’s finger-crossing.
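
The arithmetic makes the odds plain (my own illustration, not from that thread). Map 24p output frames back onto a 30p source and only one output frame in four lands on a real source frame; the other three fall between source frames and must be synthesized entirely.

```python
# Where does each 24p output frame land in a 30p source?
for out_frame in range(5):
    src = out_frame * 30 / 24  # source position, in frames
    note = "real frame" if src == int(src) else "must be interpolated"
    print(f"24p frame {out_frame} -> 30p position {src:.2f}  ({note})")

# 24p frame 0 -> 30p position 0.00  (real frame)
# 24p frame 1 -> 30p position 1.25  (must be interpolated)
# 24p frame 2 -> 30p position 2.50  (must be interpolated)
# 24p frame 3 -> 30p position 3.75  (must be interpolated)
# 24p frame 4 -> 30p position 5.00  (real frame)
```

Three out of every four frames in the result are the computer's invention, which is the finger-crossing part.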

On a more subtle note, I don’t think it’s acceptable that every frame of a film should be a computer’s best guess as to what happened between captured frames. The magic of filmmaking comes in part from capturing and revealing a narrow, selective slice of something resonant that happened in front of the lens. When you do these motion-interpolated frame rate conversions, you invite a clever computer algorithm to replace your artfully crafted sliver of reality with a best-guess. I feel (and feel free to disagree, I won’t bother arguing) that this artificiality accumulates to create a feeling of unphotographic plasticness. Screw that. Didn’t you select the 5D because you wanted emotionally resonant imagery? You’d be better off with a true 24p video camera that works with you rather than against you, even if it doesn’t give you the convenient crutch of shallow DOF.

To be crystal clear, that’s Apple Compressor failing to properly estimate the motion in one particular frame when trying to convert from 30p to 24p. Nothing to do with The Foundry or their new plug-in, which actually tackled that same shot quite well.

The Foundry know that they’ve made a tool to help us limp along while camera manufacturers sort out this CMOS issue. Like any crutch, I wouldn’t plan on leaning too hard on it—but kudos to The Foundry for attacking this problem head on, and making a product out of a technology demo in record time.

Wednesday, November 26, 2008

Adobe Can Has Science

Dan Goldman is an old friend of mine from ILM. He now works for Adobe's top-secret God Dammit Put This In A Product Now division.


Interactive Video Object Manipulation from Dan Goldman on Vimeo.

It's no Cartoon effect, but it's got promise.

Monday, November 10, 2008

Panalog Pipelines

I mentioned that Panalog images from the Panavision Genesis camera can be used easily in either video or film grading workflows. Here's what I mean by that.

Here's an image in Panalog color space:


You can treat this image as a video source, and many do. I just finished three commercials this way. You can just load in the Panalog source, and color correct it to taste. No fuss, no LUTs. As an example, here's that same Panalog image corrected with Colorista:


You can also load the Panalog footage directly into a film grading pipeline. Here's how that same uncorrected Panalog shot would appear under a standard Kodak Vision preview LUT:


It's a bit dark and crunchy, but no matter—Colorista can take care of that. Here's the image color corrected underneath the Vision preview LUT:


Here are the Colorista settings used on the video version:


And here are the settings for the film version:


In either a film or video post environment, a colorist can take Panalog footage into his or her existing workflow and start working with it immediately, color correcting to taste. That's pretty cool.

The big difference between the two is that the film version has that nice soft highlight rolloff. The white t-shirt reaches almost 600% scene illuminance in this image, and Panalog preserves that detail. The video correction crushes it out, but the film grade keeps it, albeit compacted into the soft, sloping shoulder of the print stock.
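
A sketch in numbers shows why a log encoding can hold that highlight. Panalog's exact curve is Panavision's business, so this uses the classic Cineon log parameters purely to illustrate the principle; any log-style camera encoding behaves similarly.

```python
import math

# Classic Cineon 10-bit log encoding: black at code 95, 100% white at
# code 685, 0.6 negative gamma, 0.002 density per code value.
REF_WHITE, REF_BLACK, NEG_GAMMA, DENSITY_STEP = 685, 95, 0.6, 0.002

def linear_to_log(lin):
    """Linear scene value (1.0 = 100% white) -> 10-bit log code value."""
    offset = 10 ** ((REF_BLACK - REF_WHITE) * DENSITY_STEP / NEG_GAMMA)
    lin = lin * (1 - offset) + offset  # restore the black offset
    return REF_WHITE + (NEG_GAMMA / DENSITY_STEP) * math.log10(lin)

for label, lin in [("100% white", 1.0), ("200%", 2.0),
                   ("400%", 4.0), ("600% (the t-shirt)", 6.0)]:
    print(f"{label:>20}: code {linear_to_log(lin):6.1f} of 1023")

# An 8-bit video encode clips everything above 100% white; the log
# curve keeps the 600% t-shirt comfortably inside its code range
# (about code 917), ready for the print stock's shoulder to roll off.
```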

Note that the sample image used here was not shot with a Genesis. It was shot with a DSLR and converted to Panalog (accurately) from raw. I don't have any good Genesis sample images that I'm free to use just now.