
  • Hey all, I just changed over the backend. After 15 years I figured it was time to give it a bit of an update. It's probably going to be a bit weird for most of you, and I'm sure there are a few bugs to work out, but it should work much the same as before... hopefully :)

Panavision videos...

One could say that a keying shoot-out is in order just to demonstrate that RED keys as well as Panavision. The only problem with shoot-outs, though, is that each camera will have lighting optimisations based upon its sensor characteristics. If you light for Panavision, it may produce better results than RED, and vice versa.

What we really need is a keying "fickleness" parameter for the cameras. A number that indicates how much caution you can throw to the wind when lighting, and still get good keying results.
 
Graeme,

I wish people would read the forums more before they start beating their heads against a rock :) LOL

RED have never claimed there is
 

That is why each camera rep should be present, so he can choose the best settings for his camera's test.
You can do keying in more than one way and still get perfect results.
 
Really though, aside from a terrible presentation, the information seemed pretty good. But as with anything, you can get so caught up in the mathematics that you forget to actually look at the visible difference in the images. I have not had the opportunity to see a 4K RED projection yet, and until I do I'm not going to make a judgement either way.

What I would love to see is Ted do a presentation. I think it would be much more palatable.
 
The Panavision v. Red comparison, while informative and interesting, is noteworthy for the simple fact that it even exists. Why is the senior leadership at Panavision taking time away from their core business responsibilities to "help us all make sense of the digital cinema landscape"? Why should they spend one minute away from renting cameras and designing and building lenses to host this presentation? Why even backhandedly endorse "4K Cinema" cameras by grouping them in the same league and class as their own cameras? Why do they defend their approach to digital image acquisition at the same time they are explaining it? Panavision is a market leader, right? Panavision's position in the market is unassailable, right?

Here's what I take away from all this: From talking with Graeme and seeing the videos in question, it is clear that there is more than one path to a digital cinema imaging solution. It really doesn't matter which one is "better" because there is no way to reconcile the vast array of variables to allow a true comparison. The context with which we all relate to digital cinema has changed and we really need to establish a new model for talking about it and evaluating it. What I find consistently lacking in the technical debate is any measure of how the imagery affects the viewer. The only really important criteria that we should care about in the end is "Can the audience tell the difference? Do they enjoy the content less? Are they somehow prevented from becoming engrossed in the content because of the means by which the image was acquired?"

RED and Panavision (even the digital Panavisions) are two vastly different expressions of the same toolset. What truly differentiates them is their suitability to Filmmakers. Filmmakers in a global, all-encompassing sense. Both are capable of producing stunning imagery, but the real forces at work that will determine the longevity of either platform are how efficiently they accomplish the task. This answer is obvious to anyone who has viewed the RED 4K footage projected and seen for themselves how accessible and easy the workflow is. Filmmaking has not evolved, it has mutated. I didn't fully realize just how much more efficient the entire process can be until I was shooting with the camera. I no longer group Panavision and RED into the same category. They are fundamentally different in the way they approach the moving image. In the marketplace they are competitors, but in the process of filmmaking their differences are expressed best in efficiency.

From this perspective, I now see little difference between film and tape. Both require migration to the NLE in realtime: film has to run in realtime twice to get to the point at which I can edit it on my computer. Tape has to run in realtime once to get to that point. In each case, specialized equipment is required, and in the case of film, one more layer of specialized technical staff. So, just inside a week of using the camera, I've changed my thinking regarding the underlying economics of using RED. This is not an added benefit, this is the whole point: shortening the pathway that critical creative decision-making must take to get to the DI, or the picture-lock, or wherever you step off the boat creatively.
 
...The only really important criteria that we should care about in the end is "Can the audience tell the difference? Do they enjoy the content less? Are they somehow prevented from becoming engrossed in the content because of the means by which the image was acquired?"...

...shortening the pathway that critical creative decision-making must take to get to the DI, or the picture-lock, or wherever you step off the boat creatively.

I agree completely. I think it can be easy to get lost in the science and forget about the art that we're all here to do.

What I admire about RED is their desire to create professional tools that can truly have mass reach. They're really making it possible for people with great ideas to get their work done in a way they've always wanted to get it done. Yeah, I can shoot my short film in SD, but if the audience is distracted by the look of the film, then there goes that dream. Scarlet takes that even further. There is no way I could have dreamt of being able to shoot a feature in 3K before this year's NAB. Thanks to RED, I will.

Thanks RED.
 
Theoretically, there might be a small advantage to going with a system with more than 3 primaries, both in acquisition and in display. This would avoid any metamerism (assuming perfect capture + display).

I think, given the way current spectral responses are measured, one might need a little over 30 primaries to have no metamers in practice.
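The metamer idea can be made concrete with a toy numerical sketch. Everything here is illustrative: the spectral sensitivities are made-up Gaussians, not any real camera's, and the fourth "cyan-ish" channel is just a stand-in for the kind of extra filter discussed later in the thread.

```python
import numpy as np

wl = np.linspace(400, 700, 31)                 # wavelengths in nm

def band(center, width):
    """Made-up Gaussian spectral sensitivity (illustrative only)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy 3-channel (RGB-ish) camera response matrix.
S3 = np.stack([band(600, 40), band(540, 40), band(460, 40)])

# A 4th, cyan-ish channel, and the part of it the 3-channel camera
# cannot see: c4 minus its least-squares projection onto S3's rows.
c4 = band(500, 30)
x = np.linalg.lstsq(S3.T, c4, rcond=None)[0]
d = c4 - S3.T @ x                              # S3 @ d ~ 0: a "hidden" direction

spec_a = band(550, 80) + 0.5                   # some baseline spectrum
spec_b = spec_a + 0.5 * d                      # physically different spectrum

print(np.allclose(S3 @ spec_a, S3 @ spec_b))   # True: a metameric pair
print(np.isclose(c4 @ spec_a, c4 @ spec_b))    # False: 4th channel tells them apart
```

With only three channels, any spectral difference lying in the null space of the response matrix is invisible; each extra well-chosen channel shrinks that null space, which is the intuition behind needing many primaries to eliminate metamers entirely.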
 

Jeff,

This is one of the best posts I have ever read on Reduser. It really puts things in perspective.

Thanks!

-Thor
 
So while it's alarming (and inaccurate) to label the sensor itself as 4:2:0, is the ultimate underlying message inaccurate? The underlying message being: "you can't get a full 4K from a 4096-wide Bayer-pattern sensor, and you certainly can't get 4K @ 4:4:4 from a 4096-wide Bayer sensor." Which I think we can all agree with.
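The arithmetic behind that underlying message is easy to check. A minimal sketch, using a generic GRBG Bayer layout and illustrative dimensions (not any specific sensor's):

```python
import numpy as np

# Illustrative Bayer-pattern sensor: 4096 photosites across.
h, w = 2304, 4096

# Classic 2x2 GRBG tile: G R / B G, repeated over the sensor.
cfa = np.empty((h, w), dtype='<U1')
cfa[0::2, 0::2] = 'G'
cfa[0::2, 1::2] = 'R'
cfa[1::2, 0::2] = 'B'
cfa[1::2, 1::2] = 'G'

total = h * w
for ch in 'RGB':
    n = np.count_nonzero(cfa == ch)
    print(ch, n, f"{n / total:.0%}")
# G covers 50% of the sites, R and B 25% each: every output pixel's
# missing channels must be interpolated, so full-resolution 4:4:4
# straight off a 4096-wide Bayer sensor is impossible.
```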
Well said, Barry. One of the most reasoned responses in this entire thread.

Where they really go astray is using reasonable arguments to lead the audience to an unreasonable conclusion. If all you're looking for is 1920x1080, then the Genesis approach looks pretty good -- but the Red approach looks just as good. However, if you want more than 1920x1080, the Red offers that and the Genesis doesn't, and to totally ignore that is irresponsible on their part.
I think this is the crux of the problem...what is the methodology of testing camera/image performance side by side with this new crop of imagers? How do you get a fair apples-to-apples comparison from a technical standpoint (beyond the "looks pretty damn good to me" screening test)? If this video series highlights anything, it is the need for standardized testing between companies and cameras.

What we need next is a good metric of aliasing based upon both charts and real world images.
In talking to folks at Panavision, they seem to be interested in this same point.

I think these people know a little bit about the topic of image capture & MTFs. To discount, or "blow off" as "scary", the collective knowledge of these gentlemen is foolish. I think the marketing propaganda is flowing both ways. Even the RED cannot escape the laws of physics & MTFs.
Well said.

For viewing convenience, we've syndicated the Panavision series at FreshTV. Parts 1-5 are now up, 6 and 7 go live on Monday.
 
But the RED approach does not lead to non-clean keys....

Graeme

I'm pretty sure RED makes cleaner keys than most 35mm stock or HD cameras, but you cannot deny that the Genesis has higher chroma resolution, especially when using a blue screen.

Of course all this is purely theoretical. Modern keyers are amazing and I don't think that it would make a difference in practical situations.

I bet that most failed keys are due to a soft edge from the limited depth of field of the 35mm format, or to motion blur from a wide shutter angle.
 
I think with the way current spectral responses are measured one might need a little over 30 primaries to have no metamers in practise.

I had a Sony 828 digital still camera that had a fourth color in the Bayer pattern. It was a bluish-green color. I've never seen this done again. I don't know if that was for cost reasons or because it didn't do much, but I would love to know.

One thing that would really improve the dynamic range would be a chip with a hexagonal two-photosite design like the Fuji S3 sensor. That sensor has almost two stops more latitude than a Bayer sensor. It needs more processing power, though, and there is interference on diagonal lines. But it would be closer to the dynamic range of film than anything else.
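The two-photosite trick can be caricatured in a few lines. This is an illustrative model only: the 1/4 sensitivity ratio and the hard switchover are assumptions for the sketch, not Fuji's actual pipeline.

```python
import numpy as np

full_well = 1.0
scene = np.linspace(0, 4, 9)                   # scene intensity, 0..4x the clip point

# Sensitive "S" photosite clips early; low-sensitivity "R" photosite
# (assumed 1/4 sensitivity here) keeps highlight detail up to 4x.
s_site = np.clip(scene, 0, full_well)
r_site = np.clip(scene * 0.25, 0, full_well)

# Use the sensitive site until it saturates, then switch to the
# rescaled low-sensitivity site: ~2 extra stops of highlight range,
# consistent with the "almost two stops more latitude" figure.
combined = np.where(s_site < full_well, s_site, r_site / 0.25)
print(combined)                                # tracks the scene all the way to 4x
```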
 
Sure I can deny that the Genesis has higher chroma resolution, for it must have less than 1920 blue resolution across, whereas we have 2048 and can benefit up to around 3200 if there is any amount of luma on that edge.

The problem with blue screen on 35mm film is probably related to how bad the blue channel looks...

Graeme
 
Sure I can deny that the Genesis has higher chroma resolution, for it must have less than 1920 blue resolution across, whereas we have 2048 and can benefit up to around 3200 if there is any amount of luma on that edge.

What about vertical chroma resolution? As far as I know (and I don't know that much), the Genesis has 2160 blue photosites vertically, while the Red One has 1152. The total number of blue photosites is 1920*2160 = 4.1 million for the Genesis and 2048*1152 = 2.4 million for the Red One. Based on these numbers one could conclude that the Red One has more horizontal chroma resolution and less vertical chroma resolution, but then again the color filter arrays are different (they probably cannot be compared this way) and real-world results could be different.
If the target resolution is 1920x1080 then there's probably no significant difference between the resolutions of these cameras, and I'd guess they both produce good-looking pictures (I don't have any experience with either camera; I think the Red One is a remarkable camera and this is just speculation).
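For what it's worth, the blue-photosite arithmetic quoted in the thread checks out (these are the figures as stated above; actual sensor layouts and binning may change what counts as resolution):

```python
# Per-axis and total blue-photosite counts, using the numbers quoted above.
genesis_h, genesis_v = 1920, 2160
red_h, red_v = 2048, 1152

print(genesis_h * genesis_v)                  # 4147200, i.e. ~4.1 million
print(red_h * red_v)                          # 2359296, i.e. ~2.4 million
print(red_h > genesis_h, red_v > genesis_v)   # True False: more horizontal, less vertical
```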
 
All I can say is that John Galt's presentations made me want to tear my skin off like uh, uh, uh, uh, uh, uh, uh, bad.


Damn, and I was thinking about going.

Venue Address
Overseas Passenger Terminal
Customs Hall, Level 3
West Circular Quay
Sydney NSW 2000 AUSTRALIA

Wednesday 28 May 08
11.30AM - 1.00PM Demystifying Digital Camera Specifications
Location: Conference Theatre
Speaker: John Galt (Senior Vice President of Advanced Digital Imaging, Panavision)
Details: A seminar discussing the scientific concepts that underlie the performance of modern digital cameras, featuring John Galt, Senior Vice President, Advanced Digital Imaging.
 
Genesis uses on-chip binning, we think, to give an output of 1080 vertical. The extra rows of pixels are not individually addressable and do not count towards resolution.

Graeme
 
One thing that Genesis and RED have in common:

Neither camera can output "uncompressed raw DPX files"; only Dalsa, and all of us at S.Two (a digital field uncompressed recording solution), can, as we discussed regarding the RED One camera at IDIFF in Paris on Jan 31, 2008.

Stewart
Founder and TOP
REDHKSC
 
Genesis uses on-chip binning, we think, to give an output of 1080 vertical. The extra rows of pixels are not individually addressable and do not count towards resolution.

OK, I remember reading something about that. Do you know the reason for it? Is it just to get a higher pixel count so they can advertise it as a 12-megapixel camera, or are there real benefits? Isn't on-chip binning similar to downsampling?
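Binning is not quite the same as downsampling in software: charge from neighbouring photosites is combined before readout, so signal adds directly while uncorrelated noise adds only in quadrature. A toy sketch under those assumptions, with illustrative numbers (not Genesis's actual figures):

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 2160, 1920
signal = 100.0                                    # electrons per photosite
frame = signal + rng.normal(0, 10, (rows, cols))  # add read-ish noise per site

# 2x1 vertical binning: charge from each pair of rows summed into one
# output row, halving vertical resolution.
binned = frame[0::2] + frame[1::2]
print(binned.shape)                               # (1080, 1920)

# Signal doubles while uncorrelated noise grows only by sqrt(2),
# so SNR improves by roughly 41%.
snr_single = signal / frame.std()
snr_binned = 2 * signal / binned.std()
print(snr_binned / snr_single)                    # ~1.41
```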
 
I bet that most failed keys are due to a soft edge from the limited depth of field of the 35mm format, or to motion blur from a wide shutter angle.
There are many things that can make a key difficult.

On the camera side, noise is a big issue. That noise comes from (A) the sensor and (B) any compression [the compression noise from Redcode is pretty negligible from what I've seen]. If keying is your main concern, I don't think there is enough emphasis on total noise... and too much emphasis on what the color resolution theoretically is.
If you have shadows on your key color (it happens), they tend to create some difficulty in keying. A compositor would likely erode the crap out of the key to handle that.
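To see why total noise matters so much for keying, here's a toy green-screen key. Everything is illustrative: the colors, noise level, and threshold are made up, and real keyers are far smarter than this.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 64, 64
img = np.zeros((h, w, 3))
img[:] = (0.1, 0.8, 0.1)                  # clean green screen
img[16:48, 16:48] = (0.7, 0.5, 0.4)       # foreground "subject"
img += rng.normal(0, 0.15, img.shape)     # sensor + compression noise

# Naive keyer: call a pixel "screen" where green clearly dominates.
r, g, b = img[..., 0], img[..., 1], img[..., 2]
screen = (g - np.maximum(r, b)) > 0.35    # True = transparent

# Noise pushes some screen pixels under the threshold, leaving holes
# in the matte -- the kind a compositor then erodes/blurs away.
holes = 1.0 - screen[:16].mean()          # top strip is pure screen
print(f"holes in the clean screen area: {holes:.1%}")
```

Lowering the noise (or raising it) moves that hole fraction directly, which is the point: total noise can matter more to a key than theoretical color resolution.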

I had a Sony 828 digital still camera that had a fourth color in the Bayer pattern.
The purpose of the fourth color filter is to increase color accuracy / avoid metamerism due to the camera's spectral response.

You can research metamerism on Google.

We probably don't notice small color inaccuracies / a certain degree of metamerism, since it occurs naturally, and that might be why this feature doesn't get a lot of attention. I also believe Sony said it only decreased color inaccuracy by "50%", so it's not perfect (probably a cost thing, since I believe accurate color filters are expensive).
 
Genesis uses on-chip binning, we think, to give an output of 1080 vertical. The extra rows of pixels are not individually addressable and do not count towards resolution.

Graeme

When I went to the Panavision open house a few months ago, the Genesis engineer told me straight out that they "bin" two rows together. I was like, "bin"? And he had to explain it to me (I'd never heard that term before). My point is, I definitely did not mishear him or misinterpret it. He said "bin", I said "What?", and he then proceeded to explain to me what "bin" means and why they do it. They use binning to improve the dynamic range of the sensor.
 