Name for R3D editing workflow


Is there an official line on where the R3D file extension came from?

My thought was that they wanted to use *.RED, but that was taken or not registrable, so they turned the E around and made it a 3, like in the Russian alphabet, and you get *.R3D rather than *.RED?
 
The RED ONE camera is not a true RAW sensor data recording camera, so you cannot say that the R3D workflow is a RAW workflow without qualification.

What makes the RED ONE different from, say, the Canon 5D Mk II is its so-called 12-bit data: the RED ONE has high dynamic range, not so much in the sense of stacked exposures, but perhaps more than the more heavily compressed recording systems offer.

So what makes the workflow for the RED ONE better than some other workflows? What distinguishes it is low artifacts (yes, some are there, but they are not major, and being better than H.264 and the like is notable) and high dynamic range (maybe not as high as some, but high enough to be notable over some video recording systems). You could abbreviate that as "LAHDRW", pronounced... how?

I think for the most part "R3D Workflow" is better, since people will know what you are talking about. There is no one R3D workflow for the camera: I convert the R3D into TIFF for use with my system, and others convert the R3D into something else at some point in the workflow, so there is no single R3D workflow.

If you have a specific workflow in mind, then add the names of the programs used to "R3D Workflow"?

Dan, with all due respect - Canon (and the other camera makers) also compress the RAW data. I don't know of any camera that actually records TRUE 100% uncompressed RAW data...

I agree with the term "R3D Workflow" though. It is true and to the point...

:) Peter

PS: How is everything? Hope the scans helped with your project...
 
No disrespect... but perhaps this thread is moot. In the not-too-distant future everyone will know REDCODE & CINEFORM and will not need any elaborate explanation of what they mean or what they do.
 
true lossless sensor data vs. something else

Canon (and the other camera makers) also compress the RAW data. I don't know of any camera that actually records TRUE 100% uncompressed RAW data...

It's not so much a matter of what scheme is used to save data space and disk space, such as true lossless encoding; rather, it is whether the true sensor data is available for processing in software after the data is recorded, which it is not in R3D recording.

For the data to be true RAW Bayer sensor data, it must include all the bits for each pixel (and all the bad-pixel, bad-row, and bad-column data as well), just as the light falling on the sensor set them.

Does anyone know if the Kinor-2K can record true sensor data?

ARRI says they have a "data mode", but they also say they patch the bad pixels, so what happens to the un-patched pixels in data mode seems vague?

http://www.arri.de/camera/35_format_digital/arriflex_d_21/faqs_on_shooting_with_d_21.html

Many industrial-type cameras should be able to output true RAW data from the sensor's pins unaltered. You cannot get better than the original data; whatever you do with it after that reduces accuracy in one way or another. One might say it looks better after processing (the extreme being H.264 as a recording format), but any loss of data is a loss of options in processing and burns in alterations, such as the noise reduction in R3D limiting the ability to extract more detail from the so-called noise that is really noise plus data. It would be simple and low-cost to put a 25-pin connector on Scarlet so that third-party companies could offer true RAW Bayer data recording, something like the tap used by the Andromeda conversion.

A sensor with a Bayer filter is just a monochrome CMOS sensor with a filter over it; you should be able to save the data in its original pixel order just as industrial cameras do.
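A minimal Python/NumPy sketch of that idea: read a raw Bayer dump as plain monochrome data and slice out the colour sites. The file name, the 4096x2304 dimensions, and the RGGB phase are all hypothetical, just for illustration.

```python
import numpy as np

# Hypothetical raw dump: little-endian 16-bit samples in sensor order.
frame = np.fromfile("frame0001.raw", dtype="<u2").reshape(2304, 4096)

def split_bayer_planes(raw):
    """Slice an RGGB Bayer mosaic into its four colour-site planes.
    Nothing is processed here; the data stays exactly as the sensor set it."""
    r  = raw[0::2, 0::2]   # red sites
    g1 = raw[0::2, 1::2]   # green sites sharing rows with red
    g2 = raw[1::2, 0::2]   # green sites sharing rows with blue
    b  = raw[1::2, 1::2]   # blue sites
    return r, g1, g2, b

r, g1, g2, b = split_bayer_planes(frame)
```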

Camera makers like Canon do not want you to see the burned-out pixels, fixed-pattern noise, and such, so their so-called "RAW" data is patched up?

There is a big difference between "true lossless encoding", which can be 100% reversed to get the original pixel data, and some noise-reduced wavelet recording format that cannot be used to restore the original pixel data. If the RED ONE had a true RAW data port, one could difference the recovered Bayer data against the true RAW data to see how large the bit errors are: are they limited to the bottom 4 bits, or do they exceed that and maybe go as high as 6 bits in parts of the image? On an 8-bit display you could not see the noise reduction if it did not reach above the 4th bit, so since they say you can see it, it must be significant.
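A minimal Python sketch of that differencing test, assuming two aligned frames of 12-bit values held in integer arrays (the function name is made up for illustration):

```python
import numpy as np

def max_error_bit(true_raw, recovered):
    """Difference true sensor data against codec-recovered data and
    report the highest bit position the error reaches. Errors confined
    to the bottom 4 bits mean max |diff| <= 15; errors reaching bit 6
    mean diffs up to 63. A result of 0 means bit-for-bit identical."""
    diff = np.abs(true_raw.astype(np.int32) - recovered.astype(np.int32))
    return int(diff.max()).bit_length()
```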

It gets confusing, since for marketing purposes people are calling things by names that are not clear about what is going on with the data: "lossless compression", "almost lossless", "you won't know how much was lost lossless"... when "somewhat lossy" might be more clear.

I'm more concerned with getting the best images from the actual RAW sensor data, at least what I find best for a given use, without having artifacts added that don't need to be there. I know that people like R3D files because of their small size; they are fine for what they are...

I think it is sad, though, that people who would like to get the most from their sensor are not allowed to find out what that is in this case. Does it affect ticket sales? Well, what does, and how can you tell one way or the other?

==

Back to the subject of this thread: maybe some indicator of what losses are included in an R3D workflow would be important to the discussion.

I am using 16-bit (48 bpp) TIFF files so that I can hold most of the 12-bit sensor data all the way to the film recorder (still being developed for DIY recorders, but the gamma spread seems to be the way to go so far).
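Roughly, the gamma spread could look like the sketch below; the 1/2.2 exponent is only an example, not the actual recorder curve.

```python
import numpy as np

def gamma_spread_12_to_16(linear12, gamma=1.0 / 2.2):
    """Spread 12-bit linear sensor values across the full 16-bit range
    with a gamma curve, so shadow detail survives later quantisation.
    The exponent here is illustrative only."""
    x = linear12.astype(np.float64) / 4095.0       # normalise 12-bit input
    y = np.power(x, gamma)                         # lift the shadows
    return np.round(y * 65535.0).astype(np.uint16)
```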

Other workflows perhaps introduce double-compression losses, so some rating system for how many bits are "good" after post could be used, like:

REDiting12 for 12 bits accurate
REDiting06 for 6 bits accurate, and so on?

Or rate the area of the image that is within 99% of what the sensor data would have made it (both ideas are sketched below), like:

REDiting100 for 100%
REDiting015 for 15%, and so on?
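A minimal sketch of how both ratings could be computed, assuming access to the true sensor data for comparison; the REDiting labels are just the proposal above, not any real standard.

```python
import numpy as np

def rediting_ratings(true_raw, post_raw, total_bits=12):
    """Rate post output against true sensor data two ways:
    how many top bits survive the worst-case error, and what
    percent of pixels land within 99% of the sensor value."""
    t = true_raw.astype(np.float64)
    p = post_raw.astype(np.float64)
    worst = int(np.abs(t - p).max())
    err_bits = worst.bit_length() if worst else 0
    bits_accurate = max(total_bits - err_bits, 0)
    within_99 = np.abs(t - p) <= 0.01 * np.maximum(t, 1.0)
    area_pct = round(100.0 * float(within_99.mean()))
    return f"REDiting{bits_accurate:02d}", f"REDiting{area_pct:03d}"
```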
 
For what it's worth, my argument for the status quo is that the meanings of offline and online are more than enough to tell people what sort of workflow they are using.

For example, I will call the current Adobe CS4.1 workflow ONLINE because you are editing directly with the R3D files on the timeline, with playback of those files plus effects, colour correction, etc., without ever needing to render out to some intermediate format. The only time you bake anything in is at the very end, when you render to your final output formats. That, in my definition, is an ONLINE workflow.

I call the Final Cut Pro workflow OFFLINE at this stage because it relies on the QuickTime proxy files for editing, needs transcoding to apply effects, etc., and can't work with the full-resolution originals. Most commonly the transcode/intermediate editing format of choice is ProRes 422, with a conform back to the online originals (before rendering) coming at the end of the process or when taking your timeline into Color.

The confusion/fear here in Japan seems to be one of clients wanting to pigeonhole the workflow into an easy naming box, for whatever reason, so as to be able to say they are expert at that particular type of workflow. The issue with doing that is that there are many different and equally valid ways of working with RED-originated footage. Two examples are listed above for the more common NLE programs, but there are tonnes of others.

As long as the end result can be output to the client's chosen delivery format at the best possible quality that format provides, none of the semantics should matter.

The words online and offline have pretty established meanings, IMHO, that don't need adding to in order to accommodate a particular brand of camera. Beyond saying it's an R3D workflow, or a P2 workflow, or an HDCAM-EX workflow, etc., as a descriptor of the originating file format, online and offline are quite enough to tell a client (if they care) how it will be edited and put together.

My 2 yen :)

Paul
 
Wow, you guys are all trying way too hard...

R3D is the file format. Like AVI, MOV, OGG, etc.
REDCODE is the codec. Like H.264, MPEG-2, ProRes 422, etc.

It really isn't that complicated.

RAW and all that other stuff just describes various, relatively generic processes that are not very RED-specific. RAW for RED vs. RAW for Canon? Same thing. Both basically mean that you are given the Bayer-pattern, un-color-processed image and are left to choose how that information is processed into the final, "baked" image.
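To make that last step concrete, here is a minimal bilinear demosaic sketch in Python, assuming an RGGB mosaic in a NumPy array. Real converters layer white balance, a color matrix, and a tone curve on top of this, which is exactly the processing being left to you.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Fill in the two missing colors at every photosite of an RGGB
    mosaic by averaging the nearest sampled neighbours; the simplest
    possible demosaic, shown only to make the pipeline concrete."""
    h, w = raw.shape
    raw = raw.astype(np.float64)
    # Masks marking where each color was actually sampled.
    r_m = np.zeros((h, w)); r_m[0::2, 0::2] = 1
    b_m = np.zeros((h, w)); b_m[1::2, 1::2] = 1
    g_m = 1.0 - r_m - b_m
    # Kernels whose weights sum to 1 over each pixel's sampled neighbours.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    r = convolve(raw * r_m, k_rb)
    g = convolve(raw * g_m, k_g)
    b = convolve(raw * b_m, k_rb)
    return np.dstack([r, g, b])   # linear RGB, still "unbaked"
```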
 