
Why Bayer

What do you think of Leica's decision to not use an OLPF on their M8 camera and use software anti-aliasing instead?
If you're impatient, you can do a search for "leica" (or m8, etc.) and specify only the user you're interested in.

Could this be done on a digital cinema camera as well? Or do you think it's a bad idea for any type of camera?
I believe it's yes and yes.

For capturing moving images, aliasing looks worse because the moire patterns move around noticeably. So typically on non-still cameras you'd want a stronger OLPF.

There are also some Bayer implications... Graeme would know better than I would. But with a Bayer camera with a weak OLPF, it's possible to get colour artifacts and mazing artifacts depending on how you do the debayer. Take a look at M8 images of a test chart and you should be able to see them.
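To make the mazing point concrete, here's a toy sketch of the naive end of debayering (purely illustrative, nothing to do with any real camera's algorithm): plain bilinear interpolation of the green plane on an assumed RGGB layout.

```python
# Minimal sketch: naive bilinear green interpolation on an RGGB Bayer
# mosaic. Border pixels are skipped; real debayer algorithms are far
# more sophisticated than this.
import numpy as np

def green_bilinear(mosaic):
    """Recover the green plane by averaging the 4 nearest green
    neighbours at every red/blue photosite (RGGB layout assumed)."""
    h, w = mosaic.shape
    green = np.zeros_like(mosaic, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 1:            # green photosite: keep as-is
                green[y, x] = mosaic[y, x]
            else:                           # red or blue photosite
                green[y, x] = (mosaic[y-1, x] + mosaic[y+1, x] +
                               mosaic[y, x-1] + mosaic[y, x+1]) / 4.0
    return green
```

On detail near Nyquist the horizontal and vertical neighbours disagree, and blind averaging across them is exactly what produces the maze/zipper look; edge-directed debayers pick an interpolation direction per pixel instead, which shifts where the colour and mazing artifacts show up.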
 
Glenn, I couldn't have said it better. I'm a VERY early adopter of the M8 and it has been plagued with issues. However, the aliasing due to the lack of an OLPF has been very minor. In a motion camera this would be very different; it would really look awful. You already have to blur the 4K out a bit for SD work, since the oversampling can induce some aliasing. Any more sharpness and you'd be in a world of hurt.
 
But it is a matter of economics. A single Bayer sensor lets us use SLR glass: from just a few hundred dollars for excellent manual lenses, to maybe $1,700 for a new zoom with outstanding optical quality, comparable to the $25K or more for the B4 glass required by a 3-chip prism optical system.

The Birger mount provides extremely smooth focus as well as the use of most modern lenses.

While modern television zooms are optical marvels, I highly doubt that Canon would say that they provide sharper images than their own L series SLR lenses. Yet due to economies of scale the SLR lenses are a tenth of the cost.

For low-budget work, economics come into play for sure, but you ignore that there have been options to mount 35mm glass onto 2/3 inch cameras for over 20 years.
I own 2000mm, 1000mm, and 400mm lenses, plus fisheye, ultrawide, macro, and bellows lenses that I regularly use on 2/3 inch cameras.

But for those wanting to use 22x or 42x lenses for doc work there is no option with 35mm format!

Since the argument has been expanded to include economics, perhaps we could also refer to the vastly different ergonomics of using a 10x+ zoom on 35mm compared to a 10x+ zoom on the 2/3 inch format.
A good used HD zoom is under US$15k and falling, so price is becoming less of an issue; referencing the Birger as a solution that equates to geared pro lenses is hopeful but premature.


What it gets down to is horses for courses, 35mm depth of field is useful some of the time.
In respect of the sharpness of 35mm vs 2/3 inch Canon lenses, Canon tell me that the 2/3 inch lenses are sharper; not that it is relevant in a discussion regarding format size.


Mike Brennan
 
I posted this elsewhere, but thought it warrants wider reading....

Of course, 3-chip cameras often have offsets in their chips which stop them delivering properly co-sited chroma in 4:4:4. And of course, if you try to get a triple of 1920x1080 sensors to actually let you measure 1920x1080 resolution, you will find that you've contaminated your precious image with significant levels of aliasing artifacts.
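A one-dimensional toy of that sampling argument (numbers arbitrary, nothing camera-specific): detail just above the photosite Nyquist rate doesn't disappear, it folds back as a false lower frequency.

```python
# Toy 1-D illustration: sampling detail just above the photosite
# Nyquist rate (0.5 cycles/photosite) aliases it to a lower frequency.
import numpy as np

n = 1920                         # photosites across the "sensor"
x = np.arange(n)
f_detail = 0.55                  # cycles/photosite, just above Nyquist
samples = np.sin(2 * np.pi * f_detail * x)

spectrum = np.abs(np.fft.rfft(samples))
f_apparent = np.argmax(spectrum) / n
print(f"real detail: {f_detail} cyc/px, appears as: {f_apparent:.3f} cyc/px")
# -> the 0.55 cyc/px pattern masquerades as 0.45 cyc/px: an alias.
```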

Graeme

A few separate points about this discussion.

As far as I know there are no pro Sony or Ikegami cameras with offsets. Panasonic favours the principle in a couple of their cameras, but not the Varicam or the flagship 3000.

The Genesis had pretty severe vertical streaking. Not sure if the F35 will suffer this. Those who say vertical streaking isn't a problem haven't conducted a shoot where a view into the sun or bright lights is integral to the success of the shot!

As per previous post both 2/3 inch and 35mm formats restrict choice of lenses in their own way.

In respect to HD delivery, until 1920x1080 4:4:4 low-compression 12-bit recording is available, oversampled single-chip 4K is an acceptable compromise in my view, notwithstanding lens pros and cons.

May I bring up discussion of the differences, if any, in colour transmittance of a Bayer plastic (?) filter vs a dichroic prism?

Also, RED's 12-bit A/D compared to the 14-bit A/Ds in the F35 and Genesis merits discussion... until RED introduces 14-bit A/Ds, that is :)



Mike Brennan
 
In respect of the sharpness of 35mm vs 2/3 inch Canon lenses, Canon tell me that the 2/3 inch lenses are sharper; not that it is relevant in a discussion regarding format size.

I have to agree with you, Mike.

My Canon HD lenses used with a 2/3" FIT CCD seem to outperform Canon L still-picture lenses in nearly all situations, even when those are used on a 12-megapixel DSLR. Not sure why that is, but something tells me there is a difference between $20k glass and $1k glass, even if the laws of physics are against us.
 
In respect of the sharpness of 35mm vs 2/3 inch Canon lenses, Canon tell me that the 2/3 inch lenses are sharper; not that it is relevant in a discussion regarding format size.

Well, in theory a lens for a smaller sensor has to have a higher sharpness / MTF to compensate for the greater degree of enlargement -- but that doesn't mean that all lenses made for smaller sensors or film frames necessarily are sharper than those made for larger formats, just that they should be made that way in theory.
 
Mysterium Sensor Dimensions and Pixel Pitch

David,

Thanks for that clarification. This is what I conclude:

29 µm² pixel area = 5.4 µm pixel pitch
4900 x 2580 pixels = 26.5 x 13.9 mm - ??
4520 x 2540 pixels = 24.4 x 13.7 mm - in between ANSI and DIN Super 35
4096 x 2304 pixels = 22.12 x 12.44 mm - actual sampled area = Std. 35mm

Maximum theoretical spatial resolution: 92.6 lp/mm (not possible due to OLPF - MTF = 0% at 92.6 lp/mm)

Max. practical spatial Res. (as per Graeme): = 92.6 x 0.78 = 72.2 lp/mm

This explains why the 56 lp/mm target was imaged but the 80 lp/mm target was not.

So, to get good useful response on the sensor at 72 lp/mm, the lens used should have visible response (at least 10% MTF) at 216 lp/mm on the lens projector. Most projectors have 200 lp/mm targets, so that will have to do for now. I'd love to have a reticle that would max out at 280 lp/mm.

It would also be important to look for a very crisp response with good edge micro-contrast at 70 lp/mm on the projector.
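For anyone wanting to check my arithmetic, it all falls out of the 5.4 µm pitch; the 3x lens projector factor below is the one implied by the 72 -> 216 step above.

```python
# Re-deriving the numbers above from the 5.4 um pixel pitch
# (my arithmetic, not an official spec sheet).
pitch_mm = 5.4e-3                    # 5.4 um pitch expressed in mm

for px_w, px_h in [(4900, 2580), (4520, 2540), (4096, 2304)]:
    print(f"{px_w} x {px_h} px = {px_w*pitch_mm:.2f} x {px_h*pitch_mm:.2f} mm")

nyquist = 1 / (2 * pitch_mm)         # 92.6 lp/mm theoretical ceiling
practical = nyquist * 0.78           # Graeme's ~78% factor -> 72.2 lp/mm
projector = practical * 3            # 3x factor -> the ~216 lp/mm target
print(f"Nyquist {nyquist:.1f} lp/mm, practical {practical:.1f} lp/mm, "
      f"projector target {projector:.0f} lp/mm")
```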
 
Well, in theory a lens for a smaller sensor has to have a higher sharpness / MTF to compensate for the greater degree of enlargement -- but that doesn't mean that all lenses made for smaller sensors or film frames necessarily are sharper than those made for larger formats, just that they should be made that way in theory.

Also, the Japanese lens designers have said that a prism helps them make long zooms such as 22x, 42x, or 100x lenses, because they don't have to focus the R, G, and B wavelengths at a single plane.

It would be interesting to see if they could make 22x or 42x from the ground up for a single chip.

That consumer cams are still 3-chip may be a pointer that prisms offer benefits to lens design that outweigh the single-Bayer vs 3-chip advantages?


Mike Brennan
 
I have to agree with you, Mike.

My Canon HD lenses used with a 2/3" FIT CCD seem to outperform Canon L still-picture lenses in nearly all situations, even when those are used on a 12-megapixel DSLR. Not sure why that is, but something tells me there is a difference between $20k glass and $1k glass, even if the laws of physics are against us.

Lenses are designed for the format and the sensor size. The smaller the format (film) or sensor size, the higher the resolving power they need for equivalent results. You shouldn't expect a lens made for a 35mm sensor, when used on a 2/3" sensor, to resolve the same detail in the final image as a lens designed for the 2/3" sensor (quality being equal, of course).

Hope I explained this right. It is a very misunderstood principle.
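A back-of-envelope way to see it (the sensor heights are nominal active-area figures, and the 800 lp/ph target is arbitrary): the same final-image detail demands far more lp/mm from the lens on the smaller format.

```python
# Same final-picture detail, two formats: required lens+sensor
# resolution scales inversely with sensor height.
target_lp_ph = 800                    # line pairs per picture height wanted

for name, height_mm in [("2/3-inch", 5.4), ("Super 35", 13.7)]:
    lp_mm = target_lp_ph / height_mm  # lp/mm needed on the sensor
    print(f"{name}: {lp_mm:.0f} lp/mm needed for {target_lp_ph} lp/ph")
# -> ~148 lp/mm on 2/3-inch vs ~58 lp/mm on Super 35 for the same result.
```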
 
That's funny - it's like saying that you don't get a higher-resolution, nicer image from a 12MP DSLR than you do from a Genesis video camera :)

Graeme

You don't, the DSLR is Bayer and the Genesis is RGB.
 
You don't, the DSLR is Bayer and the Genesis is RGB.

Zaphod, first, you are not correct, because our eyes tell us differently. Go take a look at an image from a Genesis (or any so-called 4:4:4 1080p camera, for that matter) and an image from a 12MP DSLR from Nikon, Canon, etc. There's a world of difference in favour of the DSLR, which is only really lacking in motion-picture workflow and frame rate.

Second, the Genesis may be called "RGB" by its makers, but it's still a colour filter array (i.e. a mosaic sensor), just using a pattern that is different from Bayer's. The red, green, and blue photosites are NOT co-sited, so in all honesty it could not be called an RGB sensor in the same way that an aligned 3-chip camera can.

There is "nothing wrong" with a Bayer pattern if you design and implement the system correctly, which I do believe we have. If you've been listening otherwise, you're listening to FUD. There's a strong reason that why for the last so many years that all the top performing stills cameras use a Bayer pattern.... And perhaps in the future that will change, but for now, it's the most useful way to get high quality images from your photosite count.

Graeme
 
Zaphod, first, you are not correct, because our eyes tell us differently. Go take a look at an image from a Genesis (or any so-called 4:4:4 1080p camera, for that matter) and an image from a 12MP DSLR from Nikon, Canon, etc. There's a world of difference in favour of the DSLR, which is only really lacking in motion-picture workflow and frame rate.

Graeme

The other issue that could cause serious problems with the Genesis RGB approach is noise. Having much smaller photosites can introduce noise, all other things being equal. Noise reduction removes image detail; therefore, in post, less noise means more image detail.
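To put rough numbers on the photosite-size point, here's an idealized shot-noise-only sketch (the electrons-per-µm² figure is made up for illustration; it assumes identical exposure, fill factor, and quantum efficiency):

```python
# Idealized photon shot noise vs photosite size. Shot-noise-limited
# SNR is sqrt(N) for N collected electrons, and N scales with area.
import math

full_well_per_um2 = 1000              # hypothetical electrons per um^2

for pitch_um in (5.0, 8.25):          # smaller vs larger photosite pitch
    electrons = full_well_per_um2 * pitch_um ** 2
    snr_db = 20 * math.log10(math.sqrt(electrons))
    print(f"{pitch_um} um pitch: {electrons:.0f} e-, SNR {snr_db:.1f} dB")
# Doubling the linear pitch quadruples the signal but only doubles the
# shot noise, so bigger photosites win, all else being equal.
```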
 
Chuck,

Thanks! Actually I didn't think it would be possible to use an autocollimator or see the pixel grid. Steve from Du-All camera had posted that he used a Richter to set up the PL mount on the RED and I had trouble believing it. And here's the really interesting part: It looks different when the camera is on! When I tried to check it with the camera off, I could not see the grid as easily.

I actually took the Richter along as a last minute "what the hell" kind of decision. I have been using a Möller-Wedel collimator (what the Chrosziel and Gecko Cam collimators are based on) for the last two years and not using my Richter as much. I wanted to see what the Richter would be like with an LED (white) for the light source in portable mode.

I'm confident that I was seeing the reflection off the image-plane surface, because once it was set up, all the prime lenses' eye focus / measured focus matched very well. And it actually looks similar to the reflection from an ARRI fiber-optic viewing screen (16SR ground glass).

BTW, the adjustment on the RED PL mount is VERY fine. The thread does not turn easily, which suggests a very fine-pitch thread with tight tolerance.

As for the optical flat, it would have to rest directly on the surface you are checking, and it would test for flatness of that surface. I could check the flatness of the PL mount with a depth gauge and a Teflon tip, but that would make the camera owners too nervous.

I do have concerns that the aluminum (I'm assuming it's anodized Al) PL mount flange will get damaged more easily and go out of flatness if a clumsy AC smacks the back of a lens into the flange. If it's treated right it will last a long time, especially if it's hard-coated. But when I did my 16SR Super 16 conversions, I used stainless for that reason, and for the improved temperature stability. A steel flange can also be lapped if it gets scratched or dented, and then you just add some shims to make up for the metal you removed.

I didn't mean to use an optical flat as one would with a monochromatic light to read fringes when in contact with another optical surface. What I meant is that you can place an optical window on the lens seating surface of the camera mount and get a bounce back from both it and the film plane. It's an optical way to determine whether the lens seat is parallel to the film plane. It's a test we used to do with film cameras all the time; we also used it to square ground glasses to lens seats.
Thanks for the input on your real-world use of the camera and an autocollimator. I'm not familiar with that other brand of collimator; got any photos of it?

thanks,
 
What do you think of Leica's decision to not use an OLPF on their M8 camera and use software anti-aliasing instead? Could this be done on a digital cinema camera as well? Or do you think it's a bad idea for any type of camera?

davide

Aliasing can be dealt with in software in a "surgical" way if you have some idea of how far it has corrupted the frequency spectrum shy of the Nyquist frequency. That requires applying a low-pass filter that keeps frequencies up to the point where the damage starts appearing; the band of frequencies from that "damage frequency" up to the Nyquist frequency has to be zeroed out.

On the other hand, the (pre-)blurring caused by a device such as an OLPF helps, since it stops unwanted frequencies from "folding back" and corrupting the source. Since most designs start the OLPF's fall-off before the Nyquist limit, some detail will be lost, but the response should retain more detail than the surgical method described above.

The bottom line: in general, the OLPF-based approach will give a cleaner response with better (high-frequency) detail.
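A minimal sketch of that surgical approach in one dimension (the 0.35 cycles/sample damage point is an assumption; a real system would have to estimate where the corruption starts):

```python
# "Surgical" software anti-aliasing: zero out everything from the
# frequency where alias damage starts up to Nyquist.
import numpy as np

def surgical_lowpass(signal, damage_freq=0.35):
    """Hard low-pass: keep the spectrum below damage_freq
    (cycles/sample), zero the corrupted band up to Nyquist (0.5)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal))      # 0 .. 0.5 cycles/sample
    spectrum[freqs >= damage_freq] = 0.0      # discard the damaged band
    return np.fft.irfft(spectrum, n=len(signal))
```

Note that everything above damage_freq is simply discarded, which is why the OLPF route keeps more detail: it attenuates before sampling rather than amputating after.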

Okay, now comes the interesting part, not necessarily related to the discussion above. Can aliasing ever be removed once it has corrupted the frequency band below the Nyquist frequency? And the answer is, surprise, surprise, *YES*: for certain systems, under some conditions, it is possible to have a design where aliasing can be exactly removed after it has happened!

-- Left as an exercise for the reader to figure out :)
 
The other issue that could cause serious problems with the Genisis RGB approach is noise. Have much smaller photosites can cause noise to be introduced, all other things being equal. Noise reduction removes image detail. Therefore, in post the less noise the more image detail.

Depends on your noise-reduction implementation, and whether or not you're using temporal sampling. In regard to the Genesis, compare it against an F23 in a normal daylit scene and yes, you'll get more noise. There are also more than a few other gotchas in regard to dead-pixel mapping on stripe cameras. It's really up in the air right now as to what to do in terms of sensor design, firmware, or post software.

That's why the idea of Epic going to 100 mb/s really intrigues me: at that data rate they run the possibility of not hitting the top of the wavelet for most scenes, and we'll finally be able to pick apart the image, as wavelet compression (on the RED ONE) at such a low rate can act as a noise reducer.
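One way to read that claim, sketched with the PyWavelets package (the soft threshold here stands in for coarse coefficient quantization, and the threshold value is arbitrary):

```python
# Wavelet "compression as denoiser": shrinking small high-frequency
# coefficients, as heavy quantization effectively does, removes noise.
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 1024))
noisy = clean + rng.normal(scale=0.2, size=clean.shape)

coeffs = pywt.wavedec(noisy, "db4", level=4)
coeffs[1:] = [pywt.threshold(c, value=0.3, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

print("noise in: ", np.std(noisy - clean))
print("noise out:", np.std(denoised - clean))
```

Zeroing or shrinking small detail coefficients is exactly what aggressive wavelet compression does, which is why a low data rate can double as noise reduction.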
 
Depends on your noise-reduction implementation, and whether or not you're using temporal sampling. In regard to the Genesis, compare it against an F23 in a normal daylit scene and yes, you'll get more noise. There are also more than a few other gotchas in regard to dead-pixel mapping on stripe cameras. It's really up in the air right now as to what to do in terms of sensor design, firmware, or post software.

That's why the idea of Epic going to 100 mb/s really intrigues me: at that data rate they run the possibility of not hitting the top of the wavelet for most scenes, and we'll finally be able to pick apart the image, as wavelet compression (on the RED ONE) at such a low rate can act as a noise reducer.

I'm totally in agreement on the data-rate issue. I hate compression. It creates very uneven results, and it's often hard to predict when it's going to ruin a shot. Compensating for noise and compression artifacts usually means you have to remove some image detail or replace it with false information. Small-sensor cameras often contain large amounts of noise under the best circumstances; even in bright daylight, the dark portions of the image are full of noise.

Using a Bayer pattern and a larger sensor has a distinct advantage for controlling noise if everything else remains equal. For keying and compositing, the reduced color detail could become an issue. That's the area where 3 chip cameras could in theory do better if more real color and image detail is preserved. Of course, this is meaningless unless a direct comparison is done between specific cameras. It's all in the details.
 
That's why the idea of Epic going to 100 mb/s really intrigues me: at that data rate they run the possibility of not hitting the top of the wavelet for most scenes, and we'll finally be able to pick apart the image, as wavelet compression (on the RED ONE) at such a low rate can act as a noise reducer.

I read this many times but could not understand it. Can you kindly elucidate?
 
Bayer

Graeme
So if Bayer is utopia, why has Kodak redesigned the Bayer filter? They claim the newer design reduces the stop loss, for example, and that they get better colour reproduction.
We have all become slaves to technology. The vast majority of the public could not tell whether Saving Private Ryan was shot on film, or that Star Wars used the Sony F900, or that Apocalypto used the Genesis, nor do they care. I want to be able to use RED, Sony, Panasonic, etc. equipment; otherwise we should all just drive Ford cars and use Samsung refrigerators.
Having seen the Hungarian tests with film, Genesis, D-20, RED, F23, and F900R, uninfluenced by manufacturers, film was quite frankly a stand-out from the HD cameras, so the HD guys need to go away and work on their algorithms and their chips, and stop trying to mimic film: be the watercolours to film's oils. Imaging is a very complex subject and becomes very dangerous when left simply to engineers who think they know better (where does creativity come in?).
 
The primary reason for the new Kodak Bayer CFA is to replace one of the greens with clear, so as to increase the sensitivity of the sensor. This would indicate that the goal is to reduce noise on cheaper, smaller-photosite cameras, where sensitivity is a big issue and which are often used in less favourable lighting conditions. Here's a good article on it: http://johncompton.pluggedin.kodak.com/default.asp?item=624876 - you can see that it's all about better sensitivity at the cost of resolution for smaller / cheaper cameras. There is no mention of any improvement in colour.

None of us have seen the Hungarian tests. We don't know how they were performed or who performed them.
 