

Panavision videos...

Does downsizing actually give 4:4:4?

I can see how the Bayer filter could be used as 4:4:4, but does that happen in post? I doubt it.

I was thinking the same thing about 2K 4:4:4: could the extra data be thrown away, leaving us with a lower data rate 2K 4:4:4 image that still uses the full sensor? I'm guessing it is a bit more complicated than that.
 
Yes, if you take the 4k down to 2k, the measured resolution is "full" in each of red, green and blue. You could call that 4:4:4 RGB, but it will be better than any camera which generates 2k RGB 4:4:4 from either a stripe RGB sensor with 2k of pixels in each stripe or a 3 chip and prism system because it's downsampled and you can control the downsample filter to get a much better balance between aliasing, sharpness and ringing.

Graeme
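A naive way to picture Graeme's point: each 2x2 RGGB cell in the 4k mosaic carries one red, two green and one blue sample, so collapsing every cell to a single pixel already yields a full-RGB 2k image. The Python/NumPy sketch below (function name and toy sizes are my own, purely illustrative) does that box collapse; as the post says, a real downsample would use a proper filter to balance aliasing, sharpness and ringing, not this crude averaging.

```python
import numpy as np

def bayer_to_2k_rgb(raw):
    """Collapse each 2x2 RGGB cell of a Bayer mosaic into one RGB pixel
    (the two greens averaged). Illustrative only: a real 4k-to-2k
    downsample would use a proper filter, as described above."""
    r  = raw[0::2, 0::2]          # red sits at even rows, even cols
    g1 = raw[0::2, 1::2]          # two green samples per cell...
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]          # blue at odd rows, odd cols
    return np.dstack([r, (g1 + g2) / 2.0, b])

raw = np.arange(16.0).reshape(4, 4)   # toy 4x4 mosaic standing in for 4k
rgb = bayer_to_2k_rgb(raw)
print(rgb.shape)                      # (2, 2, 3): half-size, full RGB
```

Every output pixel gets all three colour values, which is why you could call the result 4:4:4 RGB.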
 
In post only, and yes, retain DOF.

Graeme
 
Why is the Red One first and foremost a 4K camera, and not a 2K 4:4:4 camera (or however you want to interpret it) with the option to shoot 4K?
This answer at least is pretty easy. RED is a RAW camera, which means one intensity value per pixel, while a 4:4:4 image has three intensity values per pixel. Therefore, your 2k 4:4:4 image requires 75% of the space of a 4k RAW image (from an uncompressed bandwidth perspective).
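The 75% figure checks out with nominal frame sizes (4096x2160 for 4k, 2048x1080 for 2k; actual active areas vary by camera):

```python
raw_4k = 4096 * 2160 * 1   # RAW: one intensity value per photosite
rgb_2k = 2048 * 1080 * 3   # 4:4:4: three values per pixel
print(rgb_2k / raw_4k)     # 0.75
```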

A 2k 4:4:4 image from 4k requires a debayer calculation which is irreversible, and therefore destructive. Following the debayer calculation, the image again needs to be compressed to come off the camera.

In summary, producing a 2k 4:4:4 image from a 4k sensor requires both that you do more calculations in-camera, and throw away some image data in-camera. The first increases the cost and power consumption of the camera, decreasing battery life and locking in algorithms. The second decreases image quality.

Ideally what you want is for the camera to do all the in-camera analog stuff as well as possible, and get the signal into a manageable digital form that gives the most options for post-production. I'm guessing that in RED's estimation, RAW was the best way to do this.
 
RED is not cheap - it's affordable!
 
RED is not cheap - it's affordable!

Amen to that, Brother Graeme. :-) I'm at 50k and counting for a complete system.

Not cheap. For me and many others... barely affordable.

Worth it? Yes, the value of the Red is many times its cost.

-Thor Wixom
 
Yes, if you take the 4k down to 2k, the measured resolution is "full" in each of red, green and blue...

I see.

So is this done by ignoring specific pixels in the grid, and only when you go to exactly 2K? Or are pixels interpolated from the pixels next to them, so that the process is the same as when you use 4K but with much more information available?

...You could call that 4:4:4 RGB, but it will be better than any camera which generates 2k RGB 4:4:4 from either a stripe RGB sensor with 2k of pixels in each stripe or a 3 chip and prism system because it's downsampled and you can control the downsample filter to get a much better balance between aliasing, sharpness and ringing.

Graeme

You can control or we can control?
 
If you develop at 4k and downsample yourself, then you have full control. If you let us do it for you, we have control. We do a nice 4k to 2k downsample. All the pixels are interpolated to ensure they all appear totally co-sited in the 2k result, unlike on a RGB stripe sensor where the red, green and blue are obviously not co-sited and I don't think anything is really done to bring them back into alignment.

Graeme
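A 1-D sketch of the co-siting Graeme describes (illustrative only, not RED's actual filter): in a Bayer row the red and green samples sit on grids offset by one photosite, and resampling each channel onto one shared set of output centres leaves them co-sited.

```python
import numpy as np

n = 16                                # photosites in a toy "4k" row
red_pos   = np.arange(0, n, 2)        # red samples at even photosites
green_pos = np.arange(1, n, 2)        # green samples offset by one
red_val   = np.sin(red_pos / 3.0)     # made-up scene values
green_val = np.sin(green_pos / 3.0)

centres = np.arange(0.5, n, 2.0)      # one shared set of "2k" pixel centres
red_2k   = np.interp(centres, red_pos, red_val)      # linear interpolation;
green_2k = np.interp(centres, green_pos, green_val)  # a real filter is fancier

# red_2k and green_2k now sample the *same* positions: co-sited output.
print(red_2k.shape, green_2k.shape)   # (8,) (8,)
```

An RGB stripe sensor has the same kind of per-channel offset on the sensor, which is why the question of whether anything is done to bring the channels back into alignment matters.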
 
I think I see what you mean. I was thinking some of the pixels were ignored, but I guess they aren't.

So would it then be better to work (grade, etc.) in 2K and then output to 1080, rather than export from Alert/Cine to 1080?

Thanks for the info.
 
Incorrect assumption on RGB stripe sensors


If you develop at 4k and downsample yourself, then you have full control. If you let us do it for you, we have control. We do a nice 4k to 2k downsample. All the pixels are interpolated to ensure they all appear totally co-sited in the 2k result, unlike on a RGB stripe sensor where the red, green and blue are obviously not co-sited and I don't think anything is really done to bring them back into alignment.

Graeme

Hi Graeme,

If we did not co-site the full bandwidth RGB outputs from Genesis then we would have mis-registered color images. The fact that we do not speaks for itself.

Regards,

Andy

Andy Romanoff
Panavision
 
Thanks Andy. I guess the bit where you do that is missing from the white paper describing the sensor? I guess we've all seen three-chip-and-prism cameras that have a pixel offset in them, in RGB mode, and the resultant lack of registration that occurs in the images.

Graeme
 
Andy, this is the only reference I have to Genesis sensor: http://panavision.com/publish/2007/12/10/GenesisFAQs20071207.pdf

Given the non-square nature of the small pixels, the red, say is just 1/3 of a macro pixel offset from green as is blue just 1/3 of a macro pixel offset from green. Would the 1/3 offset, in the final 1920x1080 image be noticeable, and what losses are incurred from the interpolation necessary to compute what the co-sited red and blue would have looked like?

Graeme
 
Genesis sensor


Andy, this is the only reference I have to Genesis sensor: http://panavision.com/publish/2007/12/10/GenesisFAQs20071207.pdf

Given the non-square nature of the small pixels, the red, say is just 1/3 of a macro pixel offset from green as is blue just 1/3 of a macro pixel offset from green. Would the 1/3 offset, in the final 1920x1080 image be noticeable, and what losses are incurred from the interpolation necessary to compute what the co-sited red and blue would have looked like?

Graeme

Hi Graeme,

A 1/3 pixel offset might be noticeable but the Genesis output has zero offset. Also, there is no interpolation of the photosite output and therefore no losses from computational processes.

Regards,

Andy

Andy Romanoff
Panavision
 
So, there's an offset in the pixels on the sensor - they're certainly not co-sited there for an RGB stripe, and they don't appear so in the published diagrams. But by the time we see the output, they are co-sited, and no interpolation is done to them. Can you explain exactly how the pixels do get co-sited then?

Graeme
 
Are there any image samples of test patterns from the Genesis to prove or disprove this point? There must be lots of example shots that demonstrate how clean the image is.
 
It seems quite a trick to me that non co-sited pixels on the sensor become co-sited "as if by magic" on output. Andy... I don't think we would push this issue except for the very technical video Panavision produced to explain the disadvantage of Bayer pattern. It only seems fair that this RGB stripe discrepancy be clearly explained.

Jim
 
Panavision Shhmanovision

...except for their behind-the-lens filtration (not an issue in digital) and anamorphic glass...
 
If we did not co-site the full bandwidth RGB outputs from Genesis then we would have mis-registered color images.

There is no interpolation of the photosite output and therefore no losses from computational processes.

This is a complete contradiction. They DO co-site but they DON'T interpolate. :waaa:
 
"as if by magic"


It seems quite a trick to me that non co-sited pixels on the sensor become co-sited "as if by magic" on output. Andy... I don't think we would push this issue except for the very technical video Panavision produced to explain the disadvantage of Bayer pattern. It only seems fair that this RGB stripe discrepancy be clearly explained.

Jim

Jim, Graeme, while the Genesis sensor is not a mysterium, there are some things that will have to remain mysterious for reasons of competitive advantage. I assure you that the photosites are adjacent, and yet the output is properly and exactly co-sited without interpolation - and no magic is involved.

Regards,

Andy

Andy Romanoff
Panavision
 
Thanks Andy. I normally say "and that's where the magic pixies come in" :-)

I guess the difference here is, though, that there are tonnes of websites explaining how Bayer patterns work: you can find lots of white papers and scholarly works on demosaicing algorithms, and so on. But there is incredibly little information out there on RGB stripe pattern sensors, so when somebody goes and uses one, you're bound to get questions asking how the technology works.

Graeme
 