Yes, I know this has been asked 1000 times before, and I've read a bit about it. But I need to ask in my own words, as I can't seem to get my head around all of this.
Personally, I've just set the Scarlet to the newest RedColor3 and RedGamma3, and it works fine. But I want to be more certain of WHY things are the way they are, so that I know what's happening when I start to experiment with this.
This is what's bugging me:
Why do we always talk about RedColor1, 2 and 3, REDlog, S-Log and so on? As I understand it, the sensor in the RED cameras picks up light and writes it straight into a RAW file; nothing gets baked in. Then we take the footage into REDCINE-X or some other grading application and do the color grading while viewing on a good monitor, if necessary using a LUT on the output so that the monitor shows us how it will look on a cinema screen, PAL, Vimeo... or whatever our final output is.
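Just to make sure I have the "LUT on the output" part right, here is a toy sketch of what I think a 1D LUT does. The table values are completely made up by me (real LUTs, e.g. .cube files, just have far more entries and three channels), so this is only the mechanics, not anything RED-specific:

```python
# Made-up 5-entry monitoring LUT, values are just for illustration.
LUT = [0.0, 0.1, 0.3, 0.6, 1.0]

def apply_lut(value, lut=LUT):
    """Look up a 0..1 value in a 1D LUT with linear interpolation."""
    pos = value * (len(lut) - 1)      # position in the table, e.g. 0.5 -> 2.0
    lo = int(pos)                      # entry below
    hi = min(lo + 1, len(lut) - 1)    # entry above (clamped at the top)
    frac = pos - lo                    # how far between the two entries
    return lut[lo] * (1 - frac) + lut[hi] * frac

print(apply_lut(0.5))    # lands exactly on an entry -> 0.3
print(apply_lut(0.625))  # halfway between entries -> 0.45
```

So the graded image itself stays untouched; the LUT only remaps values on the way to the monitor. Is that the right way to think about it?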
But why do we need to pick RedColor3 and a RedGamma before grading starts? Is it for getting the sensor "into the right ballpark" before we start grading (some kind of rough color grading)? I can't understand why the best approach wouldn't be to apply NO RedColor or RedGamma and just grade the RAW data, as that would be as clean as possible. Or is it not possible to have no "interpretation" of raw pixel data?
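My current guess at why "no interpretation" can't work: the sensor data is linear light, and a monitor expects gamma-encoded values, so SOME transfer curve always has to be applied just to see the picture. A toy sketch of what I mean (a plain 2.2 gamma as a stand-in; I know RED's actual curves are different):

```python
def to_display(linear, gamma=2.2):
    """Map a linear light value (0..1) to a display value (0..1)."""
    return linear ** (1.0 / gamma)

# A mid-grey card reflects roughly 18% of the light hitting it.
mid_grey_linear = 0.18
print(round(mid_grey_linear, 3))             # shown "raw": 0.18 -> looks very dark
print(round(to_display(mid_grey_linear), 3)) # through the curve: ~0.459 -> looks mid-grey
```

If that's right, then "grading the RAW data with no RedGamma" would really mean staring at a dark, flat image, and RedColor/RedGamma are just a sensible default interpretation rather than anything baked in. Am I on the right track?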
(Maybe I've answered my own question while writing this. But I need to be sure.)
If this is correct, then I have some more questions :)