Thread: Newbie: Understand RC3 and the theory of LUTs.

  1. #1 Newbie: Understand RC3 and the theory of LUTs. 
    Senior Member
    Join Date
    Mar 2012
    Posts
    221
    Yes, I know this has been asked 1000 times before, and I've read a bit about it. But I need to ask in my own words, as I can't seem to get my head around it all.

    Personally, I've just set the Scarlet to the newest RC3 and RG3, and it works fine. But I just want to be more certain of WHY things are the way they are, so that I know what's happening when I start to experiment with this.

    This is what's bugging me:
    Why do we always talk about RedColor 1, 2, and 3, REDlog, S-Log, and so on? As I understand it, the sensor in the Red cameras picks up light and puts it straight into a RAW file; nothing gets baked in. Then we take the footage into REDCINE-X or some other grading application and do the color grading while viewing on a good monitor, and if necessary use a LUT on the output so that the monitor shows us how it will look on a cinema screen, PAL, Vimeo... or whatever our final output is.

    But why do we need to pick RC3 and REDgamma before grading starts? Is it for getting the sensor "into the right ballpark" before we start grading (some kind of rough color grading)? I can't understand why the best approach wouldn't be to add NO RC or RG and just grade the RAW data, as it would be as clean as possible. Or is it not possible to have no "interpretation" of the raw pixel data?

    (Maybe I've answered my own question, while writing this. But I need to be sure)

    If this is correct, then I have some more questions :)
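    For what it's worth, the output-LUT step described above is, at its simplest, just a per-value lookup table. A minimal sketch in Python (the S-curve here is made up for illustration; it is not any real monitoring LUT, and `apply_lut` is a hypothetical helper, not part of any Red tool):

    ```python
    import numpy as np

    # A made-up 33-point 1-D LUT: input positions and corresponding outputs.
    # A gentle S-curve stands in for a real monitoring/output LUT.
    lut_in = np.linspace(0.0, 1.0, 33)
    lut_out = 0.5 - 0.5 * np.cos(np.pi * lut_in)

    def apply_lut(values, lut_in, lut_out):
        """Map values through the LUT, interpolating between table entries."""
        return np.interp(values, lut_in, lut_out)

    graded = np.array([0.1, 0.5, 0.9])   # pixel values after grading
    display = apply_lut(graded, lut_in, lut_out)
    ```

    Real output LUTs for color-space conversion are usually 3-D (a lattice indexed by R, G, and B together), but the lookup-and-interpolate idea is the same.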
    Last edited by Thomas Koba; 04-20-2012 at 04:38 AM.

  2. #2  
    Banned
    Join Date
    Oct 2008
    Location
    Pittsburgh, PA
    Posts
    511
    Because you can't manipulate a RAW image in the literal sense. It needs to be debayered into an RGB format and the various RC/RG options do that for you.
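    To make the debayer step concrete, here is a deliberately naive bilinear demosaic sketch. This is not Red's algorithm (theirs is proprietary and far more sophisticated); it just shows how a single-channel Bayer mosaic becomes three-channel RGB. The RGGB layout and the `box3` helper are assumptions for the demo:

    ```python
    import numpy as np

    def box3(a):
        """Sum of each pixel's 3x3 neighborhood (zero-padded at the edges)."""
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    def debayer_bilinear(raw):
        """Naive bilinear demosaic of an RGGB Bayer mosaic (illustration only).

        raw: 2-D array of linear sensor values; an RGGB layout is assumed.
        Returns an H x W x 3 RGB image.
        """
        h, w = raw.shape
        r_mask = np.zeros((h, w), bool)
        r_mask[0::2, 0::2] = True            # red photosites
        b_mask = np.zeros((h, w), bool)
        b_mask[1::2, 1::2] = True            # blue photosites
        g_mask = ~(r_mask | b_mask)          # green photosites (the other half)
        rgb = np.zeros((h, w, 3))
        for c, mask in enumerate((r_mask, g_mask, b_mask)):
            known = np.where(mask, raw, 0.0)
            # Average the known samples of this color in each 3x3 neighborhood.
            rgb[..., c] = box3(known) / np.maximum(box3(mask.astype(float)), 1e-9)
        return rgb
    ```

    The point is that every pixel only measured one color, so the other two have to be estimated from neighbors before any grading can happen.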

  3. #3  
    Senior Member
    Join Date
    Mar 2012
    Posts
    221
    Questions:

    1. Is REDlog then most representative of what the sensor captured?

    2. Is REDlog what I see in camera, when switching to RAW view?

  4. #4  
    Senior Member
    Join Date
    Apr 2007
    Posts
    4,029
    Quote Originally Posted by Thomas Koba View Post
    Question:

    1. Is REDlog then most representative of what the sensor captured?
    No. Linear light is. But what the sensor captured is not what you want to see, because neither the display you're using nor your human eyes see the world that way. That's why you apply a gamma curve.
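    A quick way to see why the gamma curve matters: in linear light, values one stop apart pile up near black, and a power-law encoding spreads them out more evenly. This sketch uses a plain gamma of 2.4 as a stand-in; the actual REDgamma curves are proprietary and more complex than a single power function:

    ```python
    # Five linear-light values, each one stop apart.
    stops = [1.0, 0.5, 0.25, 0.125, 0.0625]

    # Plain power-law display gamma (2.4 is just a stand-in here).
    def encode(linear, gamma=2.4):
        return linear ** (1.0 / gamma)

    for v in stops:
        print(f"linear {v:.4f} -> encoded {encode(v):.3f}")
    ```

    Notice that the bottom four stops occupy only the lowest ~6% of the linear range, but roughly half of the encoded range, which is much closer to how both displays and eyes respond.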

  5. #5  
    Senior Member
    Join Date
    Mar 2012
    Posts
    221
    Thanks. It pretty much makes sense to me now. :)

  6. #6  
    Senior Member Björn Benckert's Avatar
    Join Date
    Dec 2007
    Location
    Stockholm, Sweden
    Posts
    2,778
    Quote Originally Posted by Thomas Koba View Post
    But why do we need to pick RC3 and Redgamma before grading starts? ... I can't understand, why the best approach wouldn't be to add NO RC or RG, and just grade the RAW data, as it would be as clean as possible.

    You need to apply something to the raw data to even see a picture, as the sensor is basically capturing a black-and-white pattern. Something has to be applied before you start to tweak the skin tones etc. So there you have the gamma settings and the other knobs.

    The good thing about it all is that you have actually only saved the sensor feed; the rest is math applied like an onion skin that is not part of the original capture. So no matter what decisions were made while shooting, white balance, ISO, contrast etc. are open to be altered in post.

    That's all.
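    The "onion skin" point above can be sketched in a few lines: white balance and exposure applied in post are just multiplications on the debayered linear data, leaving the original capture untouched. The frame values and the neutral-patch choice below are made up for the demo:

    ```python
    import numpy as np

    # A hypothetical 2x2 debayered linear RGB frame (values made up).
    frame = np.array([[[0.20, 0.30, 0.45], [0.10, 0.15, 0.22]],
                      [[0.40, 0.60, 0.90], [0.20, 0.30, 0.45]]])

    # White balance in post: per-channel gains derived from a patch that
    # should be neutral -- pure multiplication, nothing baked into the raw.
    neutral = frame[0, 0]                # pretend this pixel should be gray
    gains = neutral.mean() / neutral     # scale each channel toward the mean
    balanced = frame * gains

    # An exposure/ISO push of one stop is just a multiply by 2 in linear light.
    pushed = balanced * 2.0
    ```

    Since `frame` is never modified, any of these decisions can be revisited later, which is exactly why they remain open in post with RAW workflows.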
    Björn Benckert
    Creative Lead & Founder Syndicate Entertainment AB
    +46855524900 www.syndicate.se
    Flame - 3D - Epic - HAWK C 35-135mm - Milo MoCo - Droidworx Mikrokopter

  7. #7  
    Senior Member William Albertini's Avatar
    Join Date
    Nov 2011
    Location
    New York, NY
    Posts
    334
    Red's formula may differ from this, but the idea is the same: http://en.wikipedia.org/wiki/Bayer_filter
