
Entries in Fusion (17)

Wednesday, Jul 05, 2006

The Orphanage is Hiring Compositors!

If you're in the Bay Area or would like to be, know AE and have feature film work on your reel, send an email to recruiting[at]theorphanage[dot]com.

And if you've read and understood all the color mumbo jumbo on this blog, tell them you're applying for my job. :)

Sunday, Jun 04, 2006

Know when to log ’em, know when to lin ’em

Bluepixel posted this comment today (quoted here in its entirety):

Hi Stu!

Having used different colorspace workflows, and after reading your stories on the use of eLin on your blog, I was wondering if you could elaborate a bit more on the topic “comping in real linear space makes tools behave in a more natural way, except for some tools”.

Let me explain my point. I do see the benefits within image processing tools that require “averaging” pixels, where the maths in a linear space behave more naturally than in a gamma-corrected space. But in my experience, color correction tools have a weird behaviour on linear images. Actually, I’ve found that only “mult” or brightness color corrections do behave more naturally on linear than they do on gamma-corrected space.

On the other hand, if you try to apply any correction involving changing the curve on the blacks side, like a contrast, or even a gamma, the results are better when working in a gamma-corrected space. The same applies to pulling a key from a green/blue screen.

I have often found myself using different colorspaces depending on my needs, which is safe if you know what you’re doing at each stage, but I was wondering if that’s what you meant when you say that linear feels more organic to the use of compositing tools with some exceptions… are you referring to any of the issues I described above? It would be great to have some in-depth talk about that.

Thanks for your attention and for the great resource your blog is…

Cheers,

blue.

Blue, you said it perfectly. Your assessment of which color operations work well under which circumstances is exactly in sync with my opinions.

Simple RGB channel gain color correction works better in lin than in vid or log. So do image resizing, motion blur, focus blur, layering, adding, multiplying, simulating fog, simulating light, and simulating a double exposure. Text rendering looks better in lin, as does rasterizing vector art. 3D lights and shading work better and more realistically in lin. All of these operations are cases of using simple math to simulate how light interacts in the real world. These are linear, physical events we’re simulating.
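
To make that concrete, here is a minimal Python sketch (not from the post, and it assumes a simple gamma 2.2 curve for vid, which real pipelines only approximate) of why a 50/50 dissolve lands differently in the two spaces:

    import numpy as np

    GAMMA = 2.2  # assumed display gamma for "vid"; real encodings vary

    def vid_to_lin(v):
        return np.power(v, GAMMA)

    def lin_to_vid(l):
        return np.power(l, 1.0 / GAMMA)

    # A 50/50 dissolve between a bright pixel and a dark pixel.
    a_vid, b_vid = 0.9, 0.1

    mixed_in_vid = (a_vid + b_vid) / 2  # 0.5
    mixed_in_lin = lin_to_vid((vid_to_lin(a_vid) + vid_to_lin(b_vid)) / 2)

    print(mixed_in_vid, mixed_in_lin)  # ~0.50 vs ~0.66: the linear mix stays brighter, like real light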

But some image processing falls outside of this category. Some image processing is perceptual, and wants to be performed in perceptually linear, AKA gamma-corrected space (vid). Examples include inverting an image, or color corrections that want to have visually equivalent effects at any brightness level (such as crushing the blacks of an image). Some image sharpening wants to be perceptual (sharpening before printing, maybe?) whereas some wants to be linear light (for example, canceling the effects of a slight defocus).
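
Here is a hedged sketch of the black-crush case (same assumed gamma 2.2, which the post never pins down): the same lift setting trims only the deepest shadows when applied to vid values, but applied to linear values it erases a large slice of the perceptual shadow range.

    import numpy as np

    def crush_blacks(x, lift=0.1):
        """Clip everything below `lift` to zero and rescale the rest to fill 0-1."""
        return np.clip((x - lift) / (1.0 - lift), 0.0, 1.0)

    vid = np.linspace(0.0, 1.0, 11)  # evenly spaced gamma-corrected values
    lin = vid ** 2.2                 # the same pixels expressed as linear light

    in_vid = crush_blacks(vid)                 # only the darkest couple of steps hit zero
    in_lin = crush_blacks(lin) ** (1.0 / 2.2)  # viewed back in vid: everything below ~0.35 is gone

    print(np.round(in_vid, 3))
    print(np.round(in_lin, 3))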

Some operations want to be in vid or log rather than lin because they are simulating events that have a native color space. Film has a logarithmic response to light, so adding grain, or simulating a cross-dissolve, or creating a fade to black all may want to be done in log, or at least vid.
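
For reference, here is a sketch of one common Cineon-style 10-bit log conversion; the constants (reference white 685, reference black 95, 0.002 density per code value, 0.6 negative gamma) are the usual published ones, not anything specified in this post:

    import numpy as np

    REF_WHITE, REF_BLACK = 685, 95      # conventional Cineon code values
    DENSITY_PER_CODE, NEG_GAMMA = 0.002, 0.6

    _black = 10.0 ** ((REF_BLACK - REF_WHITE) * DENSITY_PER_CODE / NEG_GAMMA)

    def log_to_lin(code):
        """10-bit Cineon-style code values to linear light, 1.0 at reference white."""
        return (10.0 ** ((code - REF_WHITE) * DENSITY_PER_CODE / NEG_GAMMA) - _black) / (1.0 - _black)

    def lin_to_log(lin):
        """Inverse of log_to_lin."""
        return REF_WHITE + np.log10(lin * (1.0 - _black) + _black) * NEG_GAMMA / DENSITY_PER_CODE

    print(log_to_lin(np.array([95, 470, 685])))  # ~[0.0, 0.18, 1.0]: black, 18% gray, reference white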

When you perform a telecine-style color correction you are essentially creating a new “magic film stock” with uniquely non-linear responses to light. It makes perfect sense that you’d do this in a non-linear color space. Note that colorists use the same exact controls to color correct vid material as log, reinforcing the similarities between these color spaces.
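
One way to see why those controls translate: a constant offset in log space is exactly an exposure-style gain in linear, which is the classic printer-lights model. A self-contained sketch of that identity, using a bare base-10 log rather than any particular film encoding:

    import numpy as np

    lin = np.array([0.05, 0.18, 0.5, 1.0])  # a few linear-light values

    log_vals = np.log10(lin)                # bare-bones log encoding
    offset = 0.3                            # push every log value up by a constant

    # Decoded back to linear, the offset is a pure multiply: 10**0.3 is about a 2x exposure.
    print(10.0 ** (log_vals + offset))
    print(lin * 10.0 ** offset)             # identical numbers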

So yes, I agree, there is no one color space that works perfectly for all situations. The key to effective image manipulation is to use the correct color space for the particular thing you’re doing, and that might mean bouncing back and forth between lin, log and vid within one project, regardless of where your source material came from or what format you’re outputting to.

Friday, Jan 20, 2006

Mine’s better than yours

An interesting debate sprang up in the comments on my previous posts. The question came up of "What do you do in AE that Fusion/Shake aren't good tools for?" While I feel that "my software is better than yours" discussions are silly, there is much to be gained by users and developers from a healthy discourse about the workflows that particular programs have really nailed. "After Effects suxx0rz compared to Shake" == not productive. "I can do this one thing I do a lot really fast in Combustion, whereas it takes forever in Fusion" == productive.

Last year I was putting together a teaser trailer and found myself missing a shot. The shot we desperately needed was a rack-focus from a barn to flies buzzing in the foreground over some unseen form. I had a still photo of a barn, and was able to camera-map it onto some planes in After Effects and create a convincing camera move. Next came the flies.

Creating the swarm of flies that buzzed around the camera, flitting in and out of focus, took 20 minutes.

Don't believe me? Well, I recreated the feat in AE7 for you to watch. The link below is to an unedited screen capture of the 17-minute session. I was able to shave three minutes off my time because I'd done it once before.

I considered the creation of this shot to be a triumph of AE's flexibility, 3D capabilities (including depth-of-field), and expressions. It might be a bit beyond what most people would consider appropriate for a comping app, but of course that's what I loved about it. I made a cool, spooky shot out of a still photo in only a few hours. There are many things I know I could do in Fusion or Shake that I would have a tough time with in AE, but the comment specifically asked for an example of something easier in AE than in Fusion, and I submit to you:

flies.mov (5.3mb Quicktime 7)

Feel free to comment and include links to your own examples of stuff you did in your favorite app. Better still, include a link that proves me wrong, showing you doing this in 16 minutes with some other tool!

Friday, Apr 22, 2005

sliceaholic

I remember reading through Steve Wright's book and getting to the part about the slice tool and thinking, what is this crazy slice tool thing? What punkass comping software is this cat using? And then I wanted my punkass comping software to have one.

O-slice is a Fusion macro that lets you graph two slices through your image. You can move the graphs around and control how they look. You can scale the values of the graphs. You can even see how the alpha channel graphs against RGB.
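
For anyone without Fusion handy, here is a rough Python approximation of the idea (the file name is a placeholder, and it assumes a PNG that matplotlib can read): sample one row of pixels and graph each channel, alpha included, across the frame.

    import matplotlib.pyplot as plt

    img = plt.imread("plate.png")   # floats in 0-1; shape (H, W, 4) if the PNG has alpha
    row = img.shape[0] // 2         # slice straight across the middle of the frame

    for channel, color in zip(range(img.shape[2]), ("red", "green", "blue", "gray")):
        plt.plot(img[row, :, channel], color=color, label="RGBA"[channel])

    plt.xlabel("pixel position along the slice")
    plt.ylabel("value")
    plt.legend()
    plt.show()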

If you don't know why you need this, Steve will tell you.

Download O-slice_v1.4 (5kb .rar file)

Props: I never would have figured out how to use Paint to create the slice lines if it weren't for Raf Schoenmaekers’ helpful post on the Fusion list.