[QUOTE=Bruce Allen;1001048]I suspect it is because the color filters on the M and MX sensors are a little... odd. They filter some wavelengths too much and others too little, or something like that - and there is crosstalk / missing color info. Perhaps this was a compromise to reduce noise? I don't know. Anyway, to me, natively they don't work quite right in conjunction with certain light sources and subject matter. I suspect that Graeme or someone is doing very smart compensation in the R3D development process to correct for this. So this is why RED is advocating staying in RAW so much - because they know that, more than other cameras, if you get the white balance wrong, you kill detail.[/QUOTE]
Nope, not odd. Go take a look at how Adobe do raw conversion colorimetry for DSLRs - if I remember rightly, they profile at illuminant A and D65, and then interpolate on the mirek of the white balance between those two for any other colour temperature. In other words, rather than profiling the colorimetry for every possible colour temperature, the main ones are profiled and the rest are interpolated. We work similarly, but take more profiles and do less interpolation. That is why setting white balance is important. Of course, white balance only affects colorimetry, which is the conversion of the rather ill-defined camera raw space (ill-defined because no camera is a colorimetric device) to XYZ, and from there to any defined colour space of choice.
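For illustration only, here's a minimal sketch of that kind of dual-illuminant interpolation - the matrices are made up, not real camera profiles; the point is just the mirek-weighted blend between two profiled illuminants:
[CODE]
# Minimal sketch of dual-illuminant colour matrix interpolation.
# The matrices below are made-up placeholders, NOT real camera profiles.
import numpy as np

T_A, T_D65 = 2856.0, 6504.0            # profiled colour temperatures (K)

M_A = np.array([[0.90, 0.00, 0.10],    # hypothetical camera -> XYZ matrix at illuminant A
                [0.30, 0.70, 0.00],
                [0.00, 0.20, 0.80]])
M_D65 = np.array([[0.70, 0.10, 0.20],  # hypothetical camera -> XYZ matrix at D65
                  [0.25, 0.65, 0.10],
                  [0.05, 0.15, 0.80]])

def camera_to_xyz(cct_kelvin):
    """Blend the two profiled matrices on the mirek scale (1e6 / K),
    clamping to the nearest profile outside the profiled range."""
    mirek = 1e6 / cct_kelvin
    mirek_a, mirek_d65 = 1e6 / T_A, 1e6 / T_D65
    w = np.clip((mirek - mirek_d65) / (mirek_a - mirek_d65), 0.0, 1.0)  # 1 at A, 0 at D65
    return w * M_A + (1.0 - w) * M_D65

# A 4300 K white balance lands roughly 40% of the way from D65 towards A:
print(camera_to_xyz(4300.0))
[/CODE]
In the real pipeline there are more profile points and better handling than a straight blend, but the principle is the same.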
RAW is advocated because it keeps decisions as live as possible for as long as possible. It's like the difference between the old days of Photoshop, when every levels operation you applied was burned in, compared to today, where live layer effects keep that decision alterable until you bake off a file for print. It's also why RAW has been so successful in the photographic world. As Jim and I very much enjoy the RAW workflow with stills, we worked hard to figure out how to do the same for motion.
I do actually appreciate that a lot, and now - for a short while - derailing my own discussion... :)
There are two issues, really:
1. I work in a rather largish organisation (in a Norwegian context, it's more or less the largest when it comes to moving image production...) where we strive to replace inferior solutions (read: "DSLR crap") with better, yet cost-effective, solutions.
And I happen to be in the "well, RED could be that as well as the C300" camp.
To prove that point, I have to make simple foolproof workflows.
The conform is not much of an issue these days. Happily.
So what I tell the aspiring S35 photographers is: make a beautiful image @ 800 ISO with in-camera WB and the contrast you'd like to see, and chances are pretty good that it will look good after post.
Don't fiddle with anything else, and your chances of a lovely result are pretty high...
So... These productions go out more or less "as is" (a simple transcode to edit in RCX with RG3/RC3), or quite a few of them go through some kind of color correction.
Thus: a simple preset that has to handle images from the R1 M/MX and Epic - no Scarlets yet.
And the thing is: it works.
I know the flexibility you have built into the system, but I have to make more or less foolproof solutions for a production chain delivering content to 3 national broadcast channels. And by dumbing it down like this, RED is getting some traction outside Drama/promo. The alternative is C300/DSLR, some XDCAM. Obviously the users are happy.... :)
2. At some point you'll have to lock down for consistency.
I am currently (and as I write) working on a 24-episode show. We'll be in post on that until November - give or take. Some episodes are being edited while others are finished. They have to look the same. There are several colorists, editors, VFX artists etc. involved. Thus I lock anything critical into fixed RGB formats (EXRs mostly, with some 16-bit DPXs bouncing around, too) with strict parameters as to how they are treated colorwise in VFX.
That way the graders can grade the VFX shots (mostly) long before they get them back, and the result will still be accurate and consistent (without major tweaks after VFX delivery).
As Erich pointed out, rendering is just saving the cache.... I cannot see a worse result from the EXRs/DPXs compared to what comes out of the Rocket. Actually, at times it looks better (even though it isn't really), because the Rocket cannot manage more than half debayer in realtime with Resolve, and the Macs we use don't really have slots for many more Rockets, even though we use the expansion cases. (Now, why we still use Macs for this is another discussion... :) )
There are so many assumptions about signal chains out there, and people tend to be a bit more religious than practical about the issues.
My point is: it doesn't hurt to be practical. Sometimes that means RAW all the way, sometimes not. You'll need to have an idea about what you are doing either way, because the results will be extremely similar with the same signal chain/metadata settings, whether you transcode or not.
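Just to put rough numbers on that last point - this little sketch is mine, with a completely made-up matrix and curve standing in for a fixed signal chain; the only difference between the two paths is the 16-bit quantization of the intermediate:
[CODE]
# Toy check: with the same "signal chain" applied, a round trip through a
# 16-bit intermediate differs from the direct path only by quantization.
# The matrix and curve here are made up, just stand-ins for fixed settings.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((1080, 1920, 3))            # stand-in for linear image data

matrix = np.array([[ 1.6, -0.4, -0.2],
                   [-0.3,  1.5, -0.2],
                   [-0.1, -0.3,  1.4]])        # made-up color matrix

def signal_chain(img):
    out = np.clip(img @ matrix.T, 0.0, 1.0)
    return out ** (1.0 / 2.4)                  # stand-in gamma curve

direct = signal_chain(scene)                   # "stay RAW, develop at the end"

baked = np.round(signal_chain(scene) * 65535.0).astype(np.uint16)   # "transcode"
via_intermediate = baked.astype(np.float64) / 65535.0

print(np.abs(direct - via_intermediate).max())  # on the order of 1/65535
[/CODE]
The maximum difference ends up around 1/65535 - way below anything you would ever see on screen.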
(Digressing anecdote: a guy at one of the workshops had learned through an internal study how "bad" ProRes is - which, of course, to some extent it actually is. But the practical result of his argument is: shoot DSLR/XDCAM, as those are the options... Kinda... why not just do the ProRes then? :) )
And that's why I am in favour of some baseline settings for transcodes, as that ensures a result which will never be worse than X. And that X is actually pretty cool... THEN experiment to get it better and get familiar with the format.
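If I were to write such a baseline down, it would look roughly like this - the field names and structure are mine, not an official RED/RCX preset format, and the values just restate what I said above:
[CODE]
# Hypothetical baseline transcode preset - field names are mine, not an
# official RED/RCX format; values restate the recommendations above.
BASELINE_TRANSCODE = {
    "iso": 800,                                  # shoot and develop at 800 ISO
    "white_balance": "as shot (in-camera WB)",
    "color_science": "RG3/RC3",                  # same settings across R1 M/MX and Epic
    "output": "your edit codec of choice - the point is the fixed development settings",
}

for key, value in BASELINE_TRANSCODE.items():
    print(f"{key:>15}: {value}")
[/CODE]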
Thanks for chiming in BTW!
Great thread Gunleik.
Also, how do you pronounce your first name?
Thanks and hahahahaha to the popcorn...
My name is pronounced like...
"Gun" has the same vowel sound as in "goon", but short, as if it had a double "n". Thus "goonn".
"leik" is pronounced like the "leic" in "Leica".
Back On Topic!