
"The Truth About 2K, 4K and the Future of Pixels" from Creative Cow

RGB in R3D?

I understand the RED ONE doesn't record in RGB format (correct me if I am wrong). So is the comparison accurate?

As far as I can make out, R3D files hold one large green image, made from the two green pixels in each Bayer group, and smaller red and blue images made from the single red and blue pixels in each group.

After wavelet decompression, the red and blue images are resized and used in a de-Bayer-like process to make a 12-bit RGB image that is filtered and output at full size, or resized in various ways.
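The plane-splitting described above can be sketched in a few lines. This is only an illustration of the general idea, assuming a hypothetical RGGB layout -- the actual internals of the R3D format are not public:

```python
import numpy as np

def split_bayer_planes(raw):
    """Split a 2D Bayer mosaic into per-color sample planes.

    Assumes an RGGB layout (an assumption for illustration only).
    The two green samples per 2x2 group give green twice the sample
    count of red or blue, matching the "one large green image and
    smaller red and blue images" idea.
    """
    r  = raw[0::2, 0::2]   # red photosites
    g1 = raw[0::2, 1::2]   # green photosites on red rows
    g2 = raw[1::2, 0::2]   # green photosites on blue rows
    b  = raw[1::2, 1::2]   # blue photosites
    g = np.stack([g1, g2])  # both green sub-planes together
    return r, g, b

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
r, g, b = split_bayer_planes(raw)
print(r.shape, g.shape, b.shape)  # (2, 2) (2, 2, 2) (2, 2)
```

Note that green carries twice as many samples as red or blue, which is why the green image can be sharper and show finer-grained noise than the other two.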

This is something like what is done in the Elphel cameras' JP4 encoding:

http://linuxdevices.com/articles/AT4187053130.html

Since JP4 uses JPEG-like (DCT) encoding (?) rather than wavelet encoding like JPEG 2000, the wavelet-encoded R3D files should produce fewer blocking artifacts.

If you look at dark 4K images taken from R3D to TIF, the noise blocking in the red and blue images seems twice as big as in the green image, and the green image seems twice as sharp. In better-exposed R3D files these artifacts seem less of an issue, so I would say that if you expose your RED ONE footage well, but do not overexpose, you should get images as good as or better than film of T500 speed. Film has uneven RGB resolution, so it is somewhat like a Bayer filter, except that the colored patches overlap somewhat. Also, in film the blue layer has heavier grain and can have lower resolution.
 
What is this, like the sixth thread to be started over this article?

Let me just say this -- it's a good article and John Galt is making a serious point about the problem of trying to describe the sharpness / detail of a system by the pixel count of the sensor. We throw around the term "4K" far too easily as some sort of shortcut to describing resolution.

On the other hand, John is speaking from his own point of view as a designer of the Genesis, with its whole approach of using an RGB-striped sensor and recording to 1080P (an approach I suspect will eventually go away, with cameras like the Genesis and F35 ending up recording to data and replacing the SRW-1 decks). There are strengths and weaknesses to this approach, just as there are with using a Bayer pattern instead of an RGB stripe. John obviously feels that having an equal number of filtered photosites for red, green, and blue is the better approach, but it creates other limitations.

As Graeme has said, the flip side to describing a system in terms of actual line resolution and MTF is that you have to factor the amount of aliasing that the system creates as well. You can improve the specs in one area but you may be creating more aliasing as a result, so there is no free lunch.

So given John's position at Panavision, and thus the fact that he presents a particular point of view, I think it's an excellent paper. But you have to factor in that he is basically "selling" his approach by picking apart his competitors' approaches, so it's not necessarily a "neutral" work. I don't think he is being inaccurate -- he certainly is being educational -- but he can't help being who he is, which is highly opinionated.

Maybe we need a whole new labelling system for describing resolution...
 
The Genesis and Sony F35 have a single sensor with a stripe pattern of red-, green-, and blue-filtered photosites, with an equal number of each, whereas most single-sensor cameras use a Bayer filter pattern: two green-filtered photosites for every one blue and red, arranged in a mosaic.

Either way, those patterns have to be converted into three separate red, green, and blue signals by deconstructing the pattern and filling in the missing information for the other two colors not represented by that filtered photosite.
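The "filling in the missing information" step can be illustrated with the simplest possible interpolation -- averaging the neighboring samples of the color you need. Real debayer algorithms are far more sophisticated; this is just a minimal sketch, again assuming an RGGB layout:

```python
import numpy as np

def interpolate_green_at(raw, y, x):
    """Estimate the green value at a non-green photosite by
    averaging its up/down/left/right neighbors, which in an RGGB
    mosaic are all green. A toy stand-in for real demosaicing."""
    h, w = raw.shape
    neighbors = [raw[ny, nx]
                 for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                 if 0 <= ny < h and 0 <= nx < w]
    return sum(neighbors) / len(neighbors)

# 4x4 RGGB mosaic: position (1, 1) is a blue photosite, so its green
# value must be interpolated from the four greens around it.
raw = np.array([[10, 50, 12, 52],
                [48, 30, 46, 32],
                [14, 52, 16, 54],
                [44, 28, 42, 26]], dtype=float)
print(interpolate_green_at(raw, 1, 1))  # (50 + 52 + 48 + 46) / 4 = 49.0
```

The same idea applies to the stripe pattern, except the missing colors come from horizontal neighbors in the stripe rather than a 2x2 mosaic neighborhood.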

The Genesis and Sony F35 do their conversion into RGB live in real-time for recording to HDCAM-SR tape, but the RED only does that for live monitoring -- what you record is unconverted -- RAW -- and you save the conversion for post later. This saves on the amount of data that has to be recorded -- a straight conversion from RAW to RGB triples the amount of data.
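The tripling is easy to see with back-of-the-envelope arithmetic: RAW stores one sample per photosite, while converted RGB stores three samples per pixel. The frame size, bit depth, and frame rate below are illustrative numbers, not any camera's actual spec:

```python
# Illustrative figures: 4K-wide frame, 12-bit samples, 24 fps.
width, height, bits, fps = 4096, 2160, 12, 24

raw_bits_per_frame = width * height * bits       # one sample per photosite
rgb_bits_per_frame = width * height * bits * 3   # three samples per pixel

raw_mbps = raw_bits_per_frame * fps / 1e6
rgb_mbps = rgb_bits_per_frame * fps / 1e6
print(f"RAW: {raw_mbps:.0f} Mb/s  RGB: {rgb_mbps:.0f} Mb/s "
      f"({rgb_mbps / raw_mbps:.0f}x)")
# → RAW: 2548 Mb/s  RGB: 7644 Mb/s (3x)
```

Whatever numbers you plug in, the ratio is always exactly 3x before compression, which is the saving the RED gets by deferring the conversion to post.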

Also, the live RAW conversion that the RED does, since it is only to 720P, does not have to be full-quality since it is for viewing only; this saves on the amount of real-time processing needed. I suspect the simpler RGB striped pattern that the Genesis and F35 use makes the conversion algorithm a little simpler and less processor intensive.

The ARRI-D21, which uses a Bayer sensor like the RED, offers both methods, either a live full-quality conversion to 1080P RGB or you can record 2.8K RAW and convert later.
 
The ARRI-D21, which uses a Bayer sensor like the RED, offers both methods, either a live full-quality conversion to 1080P RGB or you can record 2.8K RAW and convert later.

The D21 is an interesting case, because the sensor itself is 4x3. So if you use the video output, you're using only part of the sensor, cropping the top and bottom, although I believe you can use a specific mode called Mscope to vertically squeeze the full 4x3 image into the 16x9 HD frame. If you record the RAW output, you get the full contents of the sensor, at 2880x2160, with no processing. Because of this, it's particularly well suited to anamorphic shooting, and having just been involved in putting together the first D21 feature production done this way in the US, I can say that this works quite well.
 