
  • Hey all, just changed over the backend. After 15 years I figured it was time to give it a bit of an update. It's probably going to be a bit weird for most of you, and I'm sure there are a few bugs to work out, but it should work much the same as before... hopefully :)

Properly graded...

Status
Not open for further replies.
Yes a native pipeline. Please.

I was trying to use a multicam offline edit as a base, to send it from Premiere Pro CS6 to RCX. No go.

Adobe Premiere Pro CS6 has no option to write out multicam shot sequences as XML, which was a showstopper for me with that software.

I guess many people run into problems and, missing support from the big players, go back to things that have worked for them for a long time, with all the limitations of the past.
 
A couple of notes. It's common sense to start with, and maintain, the highest quality source material through the pipeline. As a photographer and post-production guy, I feel confident preparing footage before handing it over to a client. Even in the photography world, I would never hand over my RAW files, because I'm confident they would bugger them up. I have witnessed productions that will take raw footage, not do a single colour correction, and put it online. I won't mention the company.

RED has shaken up the industry and put the power of 4K+ RAW files in the hands of independent filmmakers, hence the pushback from the establishment. But that also requires a skill set I feel many filmmakers lack, and that's the post process. Just like the average person doesn't really understand shooting RAW on still cameras, or shooting flat to retain information.

With Adobe Premiere you can edit right off the RAW files, but people don't want to edit looking at REDlogFilm, and there is no easy way to apply a LUT that I'm aware of. So... they edit on whatever the camera was set to and grade within the editing software, instead of exporting an XML file back to REDCINE-X for a proper first pass and then out to something better like DaVinci.
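For readers wondering what "applying a LUT" actually amounts to, here's a minimal sketch: build a 1D table once, then index log-encoded code values through it. The curve below is a generic Cineon-style film-log approximation, NOT RED's actual REDlogFilm curve (which isn't published in this thread); it's only meant to show the mechanics.

```python
import numpy as np

def cineon_log_to_lin(code10):
    """Decode a Cineon-style 10-bit log code value to scene linear.
    Generic film-log approximation -- NOT RED's REDlogFilm curve."""
    ref_black, ref_white = 95.0, 685.0   # classic Cineon reference code values
    gamma = 0.6 / 0.002                  # density-per-code-value -> exponent scale
    black = 10 ** ((ref_black - ref_white) / gamma)
    return (10 ** ((np.asarray(code10) - ref_white) / gamma) - black) / (1.0 - black)

# "Applying a LUT" is just building the table once and indexing through it:
lut = cineon_log_to_lin(np.arange(1024, dtype=np.float64))
frame = np.random.randint(0, 1024, size=(4, 4))   # stand-in for one log-encoded channel
linear = lut[frame]                               # per-pixel lookup, no math per frame
```

The same lookup mechanism works for any 1D curve, which is why a viewing LUT in an editor is cheap enough to apply in real time.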

I think part of the problem is that shooters still don't understand the exposure tools on RED cameras. People have so many varying theories on how the camera works and exposes. I personally don't have any problems, but when I see drastically over- or underexposed footage, it's because they're going by what they see on the LCD instead of the exposure tools.

Grading, and generally making footage look amazing, is a talent that not all possess. But I feel that with a camera like the RED, grading is a skill that needs to be learned to get the best from it. A RED camera is like driving a Ferrari: some people will just drive it from A to B, while others will make that trip the most glorious journey and squeeze every drop of goodness from those two letters. Drive a legacy... leave a legacy :)
 
They say they work digital the filmic way...

 
You should consider publishing a white paper and putting it on red.com. Something that says: "No matter what updates we make to our color science in the future, this same guideline for processing RED footage is preferred for optimal results. If you must transcode before grading, here are the steps you should take to ensure a successful exposure is baked into the transcodes."

Exactly! Too many times I've had a producer balk at shooting with a RED because "it's too complicated" or "we don't have time for the workflow". If I could just point them to an established pipeline (or at least some basic rules) it would be a huge help.

I find that a lot of people don't want to take the time to learn anything new, you have to hand it to them.
 
It amazes me that some are still converting to DPX or EXR without a white balance or FLUT adjustment and grading from there.
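For anyone unclear on the FLUT adjustment being discussed: FLUT is RED's exposure control, expressed in stops. In the simplest terms, a FLUT move on linear data is a power-of-two multiply; this sketch deliberately omits the highlight roll-off that RED's actual implementation applies, so treat it as a mental model only.

```python
import numpy as np

def flut_exposure(linear, stops):
    """Simplified exposure move: one FLUT unit ~ one stop on linear data.
    RED's real FLUT also protects highlights with a roll-off, omitted here."""
    return np.asarray(linear) * (2.0 ** stops)

mid_grey = 0.18
one_stop_up = flut_exposure(mid_grey, 1.0)    # doubles the linear value to 0.36
```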

Jim,

What you're saying is that there is an extra step involved in processing R3D footage, compared to other, very much post-1990, cameras and workflows.
(It's actually the 1990s film workflow that required money and time to adjust footage prior to the final grade or edit. Avoiding that expense and time was one of the benefits of going digital.)

I'm not sure it's beneficial to anyone to say that this extra expense and time is needed to correct R3Ds to ensure a good final grade, no matter how quick and simple that step might be.

If all the dynamic range isn't being transcoded by a simple no-adjustment operation in Redcine-X, then maybe Redcine-X is failing to provide what's needed in the real world?

Maybe shots should open in Redcine-X in LOG by default, instead?

Most Alexa users I know record LOG ProRes in camera by default. Dynamic range isn't an issue there, because it leads on to an idiot-proof workflow.


... ducking for cover...
 
The ASC manual used to be great, but it really does not cover digital cinema in a meaningful way. Using it as an example, though, what we need is a digital cinema wiki site done at the same level as the old ASC manual. In the old manual, every section was written by a total expert, so each section was definitive.

So in my digital cinema perfect world, there would be a RED wiki written by "ASC" types approved by RED (it would have to be more than ASC members to cover digital fully). A great start would be editing/moving the more definitive content on REDUSER into this "RED Digital Cinema" wiki. I'd like the content to be pro level, not introductory (a lot of the current training is aimed more at introductions). For example, one area of the wiki would contain detailed processes for 4K VFX work, with charts and exact workflows for 5K REDlogFilm to 4K VFX compositing (there would be different sub-areas for each of the major packages).
 
If I could just point them to an established pipeline (or at least some basic rules) it would be a huge help.

I am surprised that RED has not yet produced a graphical flowchart to guide those determined to remain clueless. Same message as Jim's first post, but presented graphically. Something formatted to A4 paper size, so that people worldwide can readily print it off on the nearest mono laser printer and circulate it among the colorists, graders and editors as need be.

It is evident that the written word alone is not doing the job. People respond differently depending on how information is presented to them, and for many, how it is presented takes precedence over what.

If they do not like the how, they won't bother with the what, no matter how important it is to their self-interest. Not everyone reads the fabulous manual. These people (must resist the urge to call them simpletons!) need pretty images and an 'infographics'-style presentation to make a three-step guideline digestible, as maddening as that might sound to the rest of us.

I think Red should oblige.
 

Very well presented. '-)
 
Adding a grading session prior to a transcode costs time and money. Education about that will only turn people to other cameras.

It would be safer if shots opened in Redcine-X in REDlogFilm, or required less work in Redcine-X.

Right now, I'm personally more interested in the USABLE dynamic range in the shadows that's being lost by compressing to R3D before ISO/gain (or white balance correction for, say, tungsten) is applied, creating blocky artifacts.
With ISO settings much higher than 320, usable dynamic range is being lost in the shadows compared to an HD-SDI or HDMI recording out of the camera.

...It's due to encoding being done before adjustments, Jim's concern in the opening post...
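The ordering concern raised above can be sketched numerically. The toy model below quantizes a linear shadow ramp to 10 bits and then applies gain, versus applying gain before quantizing. It uses uniform linear quantization as a deliberate simplification (real R3D encoding is log-like plus wavelet compression), so it only illustrates the ordering effect, not actual R3D behavior.

```python
import numpy as np

def quantize(x, bits=10):
    """Uniform quantization to the given bit depth (a simplification --
    real R3D encoding is log-like plus wavelet compression)."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

shadows = np.linspace(0.001, 0.02, 1000)   # deep-shadow linear values
gain = 8.0                                  # roughly a +3-stop push

pushed_after  = gain * quantize(shadows)                 # encode first, push in post
pushed_before = quantize(np.clip(gain * shadows, 0, 1))  # push before encoding

# Fewer distinct output levels means coarser, "blockier" shadow steps.
levels_after  = np.unique(pushed_after).size
levels_before = np.unique(pushed_before).size
```

Pushing before quantization leaves many more distinct shadow levels than pushing after, which is the gist of why gain applied to already-encoded shadows looks steppy.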
 
Jim
If you had a choice with your R3D footage:

A)
grade 90% in RCX (e.g. RGB curves, FLUT, ISO, Saturation, Kelvin, etc.)
grade 10% in POST HOUSE

*OR*

B)
grade only ISO, FLUT, Saturation, Kelvin in RCX, and DON'T touch the RGB curves in RCX
grade the RGB and all additional coloring/grading in POST HOUSE.


So, in your opinion, A or B gets most out of the R3D footage?
 
In reply to Jim's statement:
The reason is simple: the bigger post houses do not update their machinery and software too often. If they sit on old color grading gear, they will not be able to look at the R3Ds and grade from there. Also, all 3D packages and most compositing packages are lacking R3D import, at least with the latest color science. Most compositors use Flame, and if you do not have Flame 2013, then you do not have REDgamma3 import...

It's easy to sit around and say that everyone should work from the native files... well, the software developers do not really keep up with the number of updates... and a lot of people do not think the annual updates from Autodesk, costing about 15k USD per seat, are worth buying (if you have 15 Flames, that's quite a large sum, especially if the only new thing in them is a new color LUT from RED)...

There is today no valid VFX pipeline that can stay R3D all the way... People who fiddle around in Premiere might say there is... but then how do they texture their 3D elements? You need to get R3D into 3ds Max, Maya, Softimage and all the regular software to stop people going straight to DPX...

When working with 3D texturing you want to have both background and texture elements as native as possible and grade on top... well, how do you do that with R3D?
With Alexa and the others you might not have the latitude, but a lot of post people will find that simpler: there just aren't as many back doors left open.

Nuke is on track, and so is Autodesk with Flame, even though the updates are very expensive.

I'm not saying they are doing it right. But the above are the reasons.

My solution for you: make a Flame import spark that is easy to update with your latest color science, and then the pros will like it as well. For example, Flame 2012 does not have the color picker in the built-in R3D importer, so of course people get their white balance set to shit.

I find the most latitude using REDgamma2/REDlogFilm into Flame, then doing denoise, sharpen and log-lin conversion in the Flame LUT editor... and adding color warpers and grade nodes from there... That beats RCX Pro, Resolve and all the others... Still, it could be improved heavily...

Your statements are valid, but I think you and Jim aren't quite talking about the same thing. When he mentioned DPX and EXR, I'm pretty sure he meant that RED footage also has more than enough dynamic range to work with after you convert to DPX or EXR, provided you take the necessary basic steps first, which many people don't seem to be doing.

It amazes me that some are still converting to DPX or EXR without a white balance or FLUT adjustment* and grading from there. Even worse... ACES.

If you have the means or the occasion to actually grade with, and otherwise utilize, R3D files through the course of a project (what you're talking about), then great. Although, I believe Jim was acknowledging that people often screw that process up.

I think his other point was referring to people who can't, or prefer not to, stay with R3Ds: not even bothering to, at the very least, white balance or adjust FLUT while they're dealing with the RAW files, before they bake the DPX or EXR files.

As deep as these formats are, it doesn't hurt to give them a head start by neutralizing/normalizing before you grade. That way, the maximum amount of info is transferred and the colorist isn't wasting bits trying to back off a look that's not even close to the desired final graded output.
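The "wasting bits backing off a look" point can be made concrete with a toy model: bake a strong shadow-crushing power function (an arbitrary stand-in for a heavy look, not any real grade) into a 10-bit transcode and have the colorist invert it, versus handing off a neutral 10-bit file.

```python
import numpy as np

def quantize10(x):
    """Round to 10-bit code values and back to [0, 1]."""
    return np.round(np.clip(x, 0.0, 1.0) * 1023) / 1023

ramp = np.linspace(0.0, 1.0, 4096)      # ideal reference ramp

# Neutral hand-off: quantize once, leave all grading for later.
neutral = quantize10(ramp)

# Baked look: a shadow-crushing power function baked into the 10-bit
# file, which the colorist must invert ("back off") before grading.
look   = lambda x: x ** 2.4
unlook = lambda x: x ** (1.0 / 2.4)
backed_off = unlook(quantize10(look(ramp)))

err_neutral    = np.abs(neutral - ramp).max()     # about half a code value
err_backed_off = np.abs(backed_off - ramp).max()  # far larger in the shadows
```

After inverting the baked look, whole stretches of shadow detail collapse to the same code value, which is exactly the flexibility lost when the hand-off isn't neutralized first.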

RED provides FREE software for doing the very basic of tasks that will ensure we have as much flexibility as possible before transcode to another format.

I think Jim's main point was if you take the steps he outlined and you're not getting representative dynamic range from your grade (unless it was intentional) you're doing something wrong, regardless of whether you're using RAW, DPX or EXR.


*color emphasis is mine
 
Why doesn't RED make everything default to developing settings that don't crush highlights and shadows then?

The reason people get better results with Alexa is not because they have studied the Alexa pipeline more - but because it doesn't crush latitude by default.

The assumption with Alexa is "folks like ProRes files... let's give them the most latitude possible within what they already know. And for the geeks we give them ArriRAW."

The assumption with RED is "folks must learn our whole R3D pipeline and update their software. And DPX files suck."

One gives better results within current production pipelines.

The other tells you your current production pipeline is invalid. So of course it gets ignored.

Bruce Allen
www.boacinema.com
 
For one-off clips it's super easy... but when I'm trying to bring an edited timeline to/from RCX, I've usually run into problems. A shame too, since it's necessary to go through RCX last if you want to fine-tune the Alchemy group (and want to keep it RAW). If you have first-pass audio, video filters, and transitions, you're pretty screwed.

I know CS6 uses RC3/RG3, but does it support Alchemy Group & Grain from the RMD (or better yet, allow you to adjust those settings from within Premiere/AE)?
 

Not to beat a dead horse, as I feel this is getting to be the case at this point, but Bruce makes a very valid point here. Our projects here at the studio and with our cart systems have been almost completely Alexa from March of this year until now, minus "Chasing Mavericks", which was mostly 2nd-unit Epic.

RED holds paid classes, i.e. REDucation, to try to educate the community, which is great and all, but there's never been a formal document or workflow diagram, or any effort to assist newcomers to the workflow, that is available to download or that gives folks confidence that this is the official RED way to get the most out of the camera and post workflow. So you have a ton of owner-operators and small and large post businesses all working and processing differently instead of to a standard. Of course that's going to cause confusion, and confusion is the first thing that will drive folks to a different system.

Those of us who have been working with the cameras since the beginning have had time to figure out and discuss best practices, and we probably all visit this site enough to get a general consensus on them, but it's way past time RED had a qualified document that industry folks can refer to and be guided by, so that newcomers to the format and workflow don't do more damage than what's already been caused.
 


The problem with flexibility is it gives people more choices. Despite what people would have you believe, when it comes down to it, people who actually take real advantage of flexibility are in the minority.

Defaulting to Redlogfilm will have numerous people new to RED asking why their footage looks so washed out. They would demand that Redcine-X default to the settings they shot with. Both positions make sense, depending on your perspective.

Making it a preference setting would fix that but then you are assuming that people will read the manual to find out they can change the default color space settings. Again, not ideal.


I've never worked on a movie production before so I don't know how a lot of these things work in practice. Having said that, don't most productions plan things out before they actually show up and start shooting?

Wouldn't an 18% gray card at the head of every shot mean that whoever is doing the transcode (DIT? PA?) could do the white balance?

FLUT settings might take a little bit more grading know-how but it seems like a suitable solution could be found.

Don't DITs routinely set looks for dailies? If they're skilled enough for that they should be able to set the FLUT.

Couldn't batch white balancing and FLUT setting be done at the dailies stage for all the day's footage when it's known that the footage is going to end up converted to DPX, EXR or even ProRes 4444?
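As a sketch of the grey-card idea raised above: given a scene-linear crop of the card, the per-channel gains that neutralize it are just ratios of the channel means. The function name and the tungsten-ish cast below are made up for illustration; real dailies tools also have to handle clipping, noise, and non-linear encodings.

```python
import numpy as np

def wb_gains_from_grey(patch):
    """Per-channel gains that neutralize a grey-card patch.
    patch: (H, W, 3) scene-linear RGB crop of the card.
    Illustration only -- ignores clipping, noise, and log encodings."""
    means = patch.reshape(-1, 3).mean(axis=0)
    return means[1] / means           # normalize to green: scale R and B to match G

# Hypothetical warm (tungsten-ish) cast on an 18% grey card.
card = np.full((8, 8, 3), 0.18) * np.array([1.4, 1.0, 0.6])
gains = wb_gains_from_grey(card)
balanced = card * gains               # all three channels equal again on the card
```

Applying the same gains to every shot in the setup is the batch step; whether the DIT or a PA runs it, the math itself is this simple.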
 
I feel Alchemy and Grain are too definitive at the RCX stage, meaning the look is too baked in. That level of tuning should happen at the grading stage. Adding grain messes with later grading too much, and the grain would be at the wrong scale if the shots were reframed and scaled in editing.
 
Defaulting to Redlogfilm will have numerous people new to RED asking why their footage looks so washed out.
No doubt. They wouldn't have a problem with dynamic range, though.
Couldn't batch white balancing and FLUT setting be done at the dailies stage for all the day's footage when it's known that the footage is going to end up converted to DPX, EXR or even ProRes 4444?

Not by many. That's why Jim's been raising the issue, for about 5 years now.
 

This is a really great point, Bruce. I think the mentality of RED has always been to educate and revolutionize the industry for the better. It's not necessarily that RED is saying that DPXs suck, but that, frankly, we shouldn't be catering to any legacy that limits our full potential. But therein does lie the catch-22. Because, as you said, it essentially tells you that your current production pipeline is invalid. Whether you choose to ignore it or not is another thing. People get upset when you mess with workflow and routine; they like being comfortable. Not everyone likes it when you rattle the cages, and I think this has always been the source of the stigma and anti-RED mentality. But it's as Frederick Douglass once wrote: "If there is no struggle, there is no progress."
 
I too have seen a lot of crap on Vimeo: bad color, bad exposure, everything. They are clueless about color and are not doing the camera justice, which fuels the "Alexa is better" argument. The Epic is as excellent in every way as the Alexa, but too many people don't understand the process. "This looks good enough" is their mentality, and they end up exporting ugly green skin tones thinking it looks edgy or something. I don't really know. I could link (but I won't) hundreds of examples of lazy grades of Epic footage on Vimeo or YouTube. IF YOU CAN'T GRADE COLOR OR DON'T CARE TO, SHOOT WITH A 5D PLEASE. I feel your pain, Jim.
 

Right. But the thing is... DPX worked great with film, which had greater dynamic range and color than RED.

DPX was used for Prometheus, Spider-Man, The Girl with the Dragon Tattoo, etc. I spoke to the guy who graded Prometheus and he had NO IDEA about any of the color balance / FLUT / whatever R3D settings. He just had a good assistant who made him nice DPX files using a good preset.

This whole "work in R3D right up till the end" thing is a distraction, in my opinion. It's a nice option but should not be the primary workflow, because it falls apart the moment you add VFX or graphics.

My 2c is that RED should put less energy into pushing the R3D workflow, and more into making it easy to make DPX / EXR / DNxHD / ProRes files with great latitude by default.

Bruce Allen
www.boacinema.com
 