It's not the same, but in most cases a matrix can be implemented through a 3D LUT.
Basically, a matrix is used to "cross pollinate" the three color channels. Each channel is allowed to have a mixture of the other two as well as its own component. So the Red channel, for instance, can contain the information from the Red channel as well as a portion of the information from the Blue and Green channels. Each of the three channels, in effect, is given three components.

All of this is useful because, in terms of how a camera works, it is almost impossible to ensure that each channel is getting pure information. Crosstalk is inherent in the capture process due to numerous factors, including but not limited to: impure color dyes in the color filters present on the sensor (in the case of a sensor with a color filter array, such as a Bayer pattern sensor), imperfections in the dichroics of a 3-chip camera design, lack of perfect focus onto the underlying pixels in a color filter array sensor, and imperfections in the optical elements.

The use of a matrix allows minimization of crosstalk by analyzing the sensor's output and applying information from all three channels to each of the three channels in proper proportions, sometimes using mathematical expressions for more specific control (that's what Peter is talking about). When done correctly (often by the manufacturer, as they have much better knowledge of the sensor design), this yields purer colors and restores "normal" saturation.

Graeme can explain this in more depth, but basically the colorimetry of Red images has improved in large part due to his updating of the color matrix, which in the case of Red is currently presented to you in the form of Redcolor, Redcolor2, and Redcolor3. In the early days of Red's cameras, the saturation was typically much too low and the colors were not as "true to life" as they are today, in large part due to the lack of a well evolved color matrix.
By carefully adjusting the proportions I'm talking about, both color purity and saturation have been refined based on feedback from users, in order to provide more pleasing results.
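To make the "cross pollination" idea concrete, here's a minimal sketch of applying a 3x3 matrix to an RGB triple. The matrix values below are purely illustrative (not from any real camera); the only deliberate property is that each row sums to 1.0, so neutral greys pass through unchanged while the off-diagonal terms subtract crosstalk from the other two channels.

```python
# Sketch: each output channel is a weighted mix of all three input
# channels -- the essence of a color matrix.
def apply_matrix(rgb, m):
    r, g, b = rgb
    return (
        m[0][0] * r + m[0][1] * g + m[0][2] * b,  # output Red
        m[1][0] * r + m[1][1] * g + m[1][2] * b,  # output Green
        m[2][0] * r + m[2][1] * g + m[2][2] * b,  # output Blue
    )

# Hypothetical crosstalk-reduction matrix (illustrative numbers only).
# Each row sums to 1.0, so R=G=B greys are left alone.
m = [
    [ 1.20, -0.10, -0.10],
    [-0.05,  1.15, -0.10],
    [-0.05, -0.10,  1.15],
]

grey = apply_matrix((0.5, 0.5, 0.5), m)   # neutral grey passes through
pure_red = apply_matrix((1.0, 0.0, 0.0), m)  # saturation of primaries increases
```

Note that an identity matrix (1.0 on the diagonal, 0.0 elsewhere) leaves the image untouched; everything the post describes lives in how far you push the off-diagonal terms.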
In a post environment, matrixing can be used exactly as Peter describes, to "normalize" a signal that is "wrong", such as an image that was shot with the wrong white balance. One fairly accessible implementation of this is DaVinci Resolve's Color Mixer, which is essentially a variable matrix. I've used that tool to alter entire scenes that were shot with improper white balance, resulting in a very blue image (the color temperature was incorrectly set at around 3200K, which is roughly equivalent to tungsten lighting, even though the scene was a day exterior). By populating the deficient Red channel with information from the Blue channel, and minimizing the blue component of the Blue channel, along with some other channel manipulations, I was able to "normalize" the image in a way that was not really achievable with the normal balance controls. Essentially, I used the matrix as a color temperature/tint control, using the blue channel component to alter temperature, and the green channel to alter tint (essentially a change on the green/magenta axis).
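The temperature/tint trick described above can be sketched as a small matrix-style function. This is not Resolve's actual math, just the idea from the post: a positive `temp` moves blue energy into red (warming the image), and a positive `tint` reduces green (pushing toward magenta). All names and coefficients here are my own assumptions for illustration.

```python
def temp_tint(rgb, temp=0.0, tint=0.0):
    """Illustrative matrix-style temperature/tint trim.

    temp > 0 warms: populates red with part of the blue channel
    and minimizes the blue component of blue.
    tint > 0 pushes toward magenta on the green/magenta axis.
    """
    r, g, b = rgb
    r2 = r + temp * b        # borrow from blue to fill the deficient red
    g2 = (1.0 - tint) * g    # green/magenta axis
    b2 = (1.0 - temp) * b    # pull down the blue cast
    return (r2, g2, b2)

# A too-blue day exterior (shot at ~3200K) gets warmed back toward neutral:
fixed = temp_tint((0.3, 0.4, 0.8), temp=0.25)
```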
I hope that's not overly confusing. A color matrix can be a very useful tool in the right situation and in the right hands. It is not a "normal" color correction tool (unless you're a video engineer), but in some situations it can definitely do things that other tools are not easily capable of.
Thanks Mike, that explains a lot.
Is there a similar tool in Baselight or would they have built a custom OFX plugin maybe? I assume the expressions they were using prevented it from being just a Truelight LUT...
You can of course express a matrix as a formula transform (where the R, G and B on the right-hand side are the input values) such as:
R = (0.412453*R + 0.357580*G + 0.180423*B) / 0.9505
G = (0.212671*R + 0.715160*G + 0.072169*B)
B = (0.019334*R + 0.119193*G + 0.950227*B) / 1.089
This can be implemented in Baselight as a Truelight layer, although Truelight in fact then converts the formula into a 3D LUT for speed of processing.
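The "convert the formula into a 3D LUT" step can be sketched simply: sample the formula on an N^3 grid of input values, then at render time look up (and interpolate) rather than evaluating the math per pixel. The helper names below are my own; only the matrix coefficients come from the formula above.

```python
# Bake any RGB transform into a 3D LUT by sampling it on a uniform grid.
def make_3d_lut(transform, size=17):
    lut = []
    step = 1.0 / (size - 1)
    for ri in range(size):
        for gi in range(size):
            for bi in range(size):
                lut.append(transform((ri * step, gi * step, bi * step)))
    return lut

# The matrix formula quoted above, with input R, G, B on the right-hand side:
def formula(rgb):
    r, g, b = rgb
    return ((0.412453 * r + 0.357580 * g + 0.180423 * b) / 0.9505,
            (0.212671 * r + 0.715160 * g + 0.072169 * b),
            (0.019334 * r + 0.119193 * g + 0.950227 * b) / 1.089)

lut = make_3d_lut(formula, size=17)  # 17^3 = 4913 sample points
```

A 17-point grid is a common LUT resolution; real systems interpolate (e.g. trilinearly or tetrahedrally) between the grid points for values that fall in between, which is why a matrix baked into a 3D LUT is "not the same" but close in most cases.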
I believe channel-swapping was also used to create the 2-strip Technicolor look of The Aviator, done a few years ago by Steven Nakamura when he was at TDI/Burbank. It's a powerful tool for certain situations, but not something that I think is necessary for many projects.
There are many, many ways to create unique looks like that, and I'd bet if you shoved five top colorists in five different rooms, told them exactly what look you wanted, they'd each come up with a different way that would probably work. Some of this is more art than science.
If you've already covered this I'll go back and break out the fine-tooth comb.
Here's the situation (theoretical as we don't have files in hand). A customer supplies DPX files for a job that was graded for a film out. We need to grade for broadcast and web as close as possible to the film grade. We're using a properly calibrated (hardware assisted) Cinemage B420 in REC 709 mode and Resolve. My first question is: what is the ideal workflow with and without having access to the preview LUT that was used for the film out grading session, specifically what type of LUTs do we need and where should we place them? Does it make sense to use the DI's preview LUT for emulation and then load a log-to-lin LUT on a node further down the pipeline, or should we be using a custom LUT that performs a color transform as well as the log-to-lin conversion?
While there are probably quite a few ways of doing this, including by eye, my personal preference would be to apply the original emulation LUT in the grade itself (I like to put it in the first node of each clip's grade), and do whatever trims are necessary on top of that to have a technically acceptable and pleasing image within the constraints of the standard you are targeting (Rec709). You'd want to make sure that the LUT they originally used to grade isn't going to introduce any unacceptable artifacts, but if it's good on that count then I believe it is the most direct path to getting the results from the original grade in the new colorspace.
Other than that, you can do a log-to-lin conversion (LUT or custom curve), then grade on top of that to match the original movie. If you have a side-by-side setup with film and digital projectors it's easier to match by eye; otherwise you have to do it from memory, which is not ideal IMHO.
Under ideal circumstances, the facility doing the DI also does the video deliverables, and in most cases, that facility has different print emulation LUTs that are targeted to different display types. So once the film version is done, the LUT is swapped out for a different LUT that incorporates the same film target, but is calibrated to produce a proper image on a Rec709 display. The picture is then trimmed to taste. There is no real substitute for this that will provide the same image as accurately or as automatically. So the answer to the original question is to either obtain or create a LUT that is specifically designed to have a Cineon image as its input, has the same film path (i.e., intermediate stock and print stock targets) emulation as the original film DI, and is targeted to display properly on a Rec709 display. That is the only really effective way of matching the original DI, assuming the DPX files you were given have the color correction from that DI session baked in. It is also why this is best done by the facility that did the original DI in the first place.
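The key structural idea above — keep the same film-target emulation and swap only the display end of the chain — can be sketched as function composition. Everything below is a placeholder (made-up transforms, not real LUT data); the point is only the shape of the pipeline.

```python
# Illustrative sketch: the same film target feeds two different
# display transforms, one for the film out and one for Rec709 video.
def compose(*stages):
    """Chain color transforms left to right, like nodes in a grade."""
    def chain(rgb):
        for stage in stages:
            rgb = stage(rgb)
        return rgb
    return chain

# Placeholder transforms -- real ones would be measured 3D LUTs.
film_target = lambda rgb: tuple(c ** 1.05 for c in rgb)  # print-stock emulation (made up)
to_projector = lambda rgb: rgb                           # display transform for the film out
to_rec709 = lambda rgb: tuple(min(1.0, max(0.0, c)) for c in rgb)  # video-legal display transform

film_out = compose(film_target, to_projector)
video_out = compose(film_target, to_rec709)  # same film path, different display target
```

The design point is that the grade and the film path stay identical between the two deliverables; only the final display transform changes, which is why swapping that one LUT is so much more accurate than re-matching by eye.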
Can you please explain how facilities create their custom print stock LUTs? Do they measure the film prior to projection or do they measure projected values? Also, are new print LUTs generally created on a job-per-job basis as variables such as printer lights are adjusted?