
My thoughts on CMY vs RGB Bayer filters, including arrangement method

Karim D. Ghantous

This idea is quite old, and goes back to the '70s (if not earlier). However, I am slightly obsessed with it right now, and I thought I'd vent about it.

I also would like some pushback. I'm going to skip some logical steps in my reasoning, because I can't quite articulate them (and I might not even know I'm making logical leaps). So this will be a learning exercise more than anything. I doubt that anyone is going to make a CMY sensor.

So here's the thing: RGB filters each block two colours, but CMY filters each block only one colour. So, for example, a Cyan photosite blocks red light but allows through green and blue light.

But wait. If the C photosite allows in both green and blue light, how does it know how to separate them proportionally? Answer: via neighbouring photosites.

A neighbouring M photosite blocks all green light. Comparing its luminance value with the C photosite's allows an approximation of the proportion of green light that fell onto the C photosite. Likewise, a neighbouring Y photosite blocks all blue light, and can be employed to approximate the amount of blue light that fell on C. Perhaps you would use simultaneous equations to solve this? It sounds logical to me, anyway.
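To make the simultaneous-equations idea concrete, here is a toy sketch. It assumes perfectly ideal filters, so C = G+B, M = R+B, Y = R+G, and it treats neighbouring C/M/Y readings as if they saw the same light, which real sensors of course don't guarantee:

```python
import numpy as np

# Ideal-filter model: each photosite records the sum of the two
# primaries its filter passes, written as a matrix acting on (R, G, B).
A = np.array([
    [0, 1, 1],   # C = G + B
    [1, 0, 1],   # M = R + B
    [1, 1, 0],   # Y = R + G
])

cmy = np.array([0.9, 0.5, 0.6])   # example neighbouring C, M, Y readings
rgb = np.linalg.solve(A, cmy)     # equivalent to R=(M+Y-C)/2, G=(C+Y-M)/2, B=(C+M-Y)/2
print(rgb)                        # -> [0.1, 0.5, 0.4]
```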

An outstanding question is how the CMY Bayer array should be arranged. Should it be CY|YM? Or YC|CM? Should it be CMY|MYC, abandoning the x,n|n,y style layout of the RGB array (i.e. RG|GB)?

A secondary advantage of this system is that the sensor gets more light. I don't really think this is a critical feature, but it's an innate advantage nonetheless.

One more thing: a CMY array, as I perceive it, is a matrix-within-a-matrix-within-a-matrix. Or, let me explain it like this:

Monochrome sensor: 1st level matrix (superficial, e.g. [L]; [L]; [L]...)
RGB sensor: 2nd level matrix (chrominance values encoded, e.g. [R+G+B]; [R+G+B]; [R+G+B]...)
CMY sensor: 3rd level matrix (chrominance values encoded at a deeper level, e.g. [GB+RB+RG]; [GB+RB+RG]; [GB+RB+RG]...)

Keep in mind I'm not an expert on these things. They are very interesting to think about, though.
 
I'm having trouble reasoning at the moment, and don't know much beyond Bayer/RGB derivatives.

Complementary color filters are often considered consumer tech, and prone to errors (such as yellowing in clouds). Bayer gives an accurate sampling of one color range per pixel, but with complementary filters it is harder, as two colors are mixed and have to be artificially separated. At its simplest, Bayer estimates the two missing colors at each pixel from its neighbours, and is often heavily compromised by optical low-pass filters that can spread a point of light over many pixels in each direction. With complementary filters you start estimating a lot more.
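For what it's worth, here is a minimal sketch of that estimation step, assuming an idealised RGGB mosaic and plain bilinear averaging for one interior red site (real demosaicing algorithms are far more sophisticated):

```python
import numpy as np

# In an RGGB mosaic, red sits at even (y, x); its four orthogonal
# neighbours are green sites and its four diagonal neighbours are blue.
def demosaic_red_site(mosaic, y, x):
    """Estimate the missing G and B at an interior red photosite."""
    r = mosaic[y, x]
    g = (mosaic[y-1, x] + mosaic[y+1, x] +
         mosaic[y, x-1] + mosaic[y, x+1]) / 4.0
    b = (mosaic[y-1, x-1] + mosaic[y-1, x+1] +
         mosaic[y+1, x-1] + mosaic[y+1, x+1]) / 4.0
    return r, g, b

raw = np.random.rand(6, 6)             # stand-in for real raw sensor data
print(demosaic_red_site(raw, 2, 2))    # one estimated (R, G, B) triple
```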

However, cameras like the JVC HD1 used a complementary sensor (with a clear tile, from memory) with good results. They still suffered estimation problems, such as match flames rendering green. Skies are an issue, and I don't see as great a color gamut, as complementary filters sit off the primary response curves.

However, advanced Bayer techniques can use complementary techniques: the response curves of the human eye overlap a lot, so a sensor can be designed to use this overlap to better detect pixel color changes off the primaries.

http://photo.stackexchange.com/ques...tream-sensors-use-cym-filters-instead-of-rgb#
 
So colour accuracy is harder to achieve. Fair enough.

I had a further thought, though: if your scene is illuminated with red or blue light, such as light from police sirens, then your resolution is virtually decimated if you're shooting with an RGB array. Two out of three (actually three out of four, counting the second green) photosites will get almost no light at all. But with a CMY array, no more than one out of three will be starved of light.
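A toy count of this, assuming idealised on/off filter pass-bands (real filters leak, so the real-world numbers would be less stark):

```python
# Which primaries each idealised filter lets through.
passes = {
    "R": {"R"}, "G": {"G"}, "B": {"B"},
    "C": {"G", "B"}, "M": {"R", "B"}, "Y": {"R", "G"},
}

RGGB = ["R", "G", "G", "B"]   # one 2x2 Bayer tile
CMY  = ["C", "M", "Y"]        # one repeat of a CMY pattern

def lit_fraction(tile, light):
    """Fraction of photosites in the tile that see the given primary."""
    return sum(light in passes[f] for f in tile) / len(tile)

print(lit_fraction(RGGB, "R"))  # 0.25 -> 3 of 4 sites starved under red light
print(lit_fraction(CMY, "R"))   # ~0.67 -> only 1 of 3 sites starved
```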
 
There is a big overlap between red and green in human vision; the peaks of the medium and long wavelength responses are close together. So a good sensor has a lot of bleed-through between those two color bands, but in reality something really red is far enough off peak to get away from green-band pollution, though the eye has lower sensitivity in that region. With blue you are nearly totally stuffed, as the overlap disappears. So it depends on the sensor's response curve to each color (many sensors don't match the eye so well), on your software's ability to bring out details, and on what curve and placement the red and blue lights use (peak intensity in areas where the curves overlap and add to each other is part complementary, but generally it is undesirable). You would also want a sensor with good range, very low light pickup and very low noise, if you have software that can work the way I described, in order to take advantage of the very low levels of overlap in the green channel for a purer red or blue police light. Unlike what certain people around here seem to think, that is science; it can be messy until you work out a good solution.

Now, for cheap video, yeah, a really good complementary color sensor and software would be good in that situation. But if you want a high-end image, go Bayer or better, or do lots of careful research beyond what I am saying. You would probably want something like an RYGCBM array, or another sensor out there. With complementary filters you can think of the colors as sliding: Y and C both contain G, C and M both contain B, and M and Y both contain R, hence the primaries can be roughly estimated. Having primary and complementary tiles together, you actually get real primary samples to calculate against, and real complementary samples; so not so much for resolution, but you get to sample the response curve more widely.


I have been exploring using 5 or more bands of color in my cheap-and-nasty sensor design proposal, stacked like Foveon. This way you can more accurately match human vision. The reality is that the more layers you have, the more information you have to accurately map onto the human visual response curves. So if you sampled with very finely spaced layers (probably not practical due to noise, well capacity, crosstalk etc.), you could place each frequency band on its part of the human response curve for that amount of scene and point luminance (and even sequential visual accommodation), and add the resulting adjusted response values into the red, green and blue samples for that pixel to build up a more accurate sample. But it would be difficult, as some energy is shed as you move through the layers in these present designs.
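A rough sketch of what I mean by folding the band values into the RGB samples; the weights here are made-up placeholders, not measured human or sensor response curves:

```python
import numpy as np

# Five narrow spectral band readings for one pixel (short to long wavelength).
bands = np.array([0.2, 0.5, 0.7, 0.4, 0.1])

# Assumed contribution of each band to each output channel (rows: R, G, B).
weights = np.array([
    [0.0, 0.0, 0.1, 0.6, 0.3],   # R draws mostly on the long-wavelength bands
    [0.1, 0.5, 0.4, 0.1, 0.0],   # G peaks in the middle bands
    [0.6, 0.3, 0.1, 0.0, 0.0],   # B draws mostly on the short-wavelength bands
])

rgb = weights @ bands   # one RGB triple built up from five band samples
print(rgb)
```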

I've worked on ideas for a prism design with over 7 color ranges. This stuff gets nasty. I've even worked on ideas for hyperspectral imaging, which finely addresses color frequencies in and beyond human vision, and which has many scientific, practical and medical uses. Apart from that, I was looking at doing a game system and biological ways to increase the amount people could see, which really suits the sorts of games I wanted to do on it; really trippy.
 
Karim, I don't know where it was I was talking about something like this, but I have something important to add in reference to Bayer.

Generally, a complementary color filter might be the worst color filter scheme.

RGB stripes are separate from Bayer, which is an RGBG sort of thing, the extra green enhancing visual effect and accuracy. Red, green and blue stripes have issues, and I don't know if anyone uses a shifting RGB pattern.

Bayer is better.

Bayer derivatives, such as the Sony one with a bluey-green (emerald) tile, are allegedly better. The Kodak one, with a clear tile instead of the extra green tile, is better in low light, but I don't know if it is better in color science.

There are allegedly better arrays again, such as random filter patterns (which I really found to be just big tilted square patterns).

But a three-color-layer sensor, like Foveon, which has nearly 100% fill factor, is better again. These record at least the three primary color bands per pixel by sensing the different bands at different depths (different wavelengths penetrate the silicon to different depths). The issue is that it still is not a perfect recording: the light has to pass through the different layers, and some gets lost before it hits its target layer, polluting another layer and leaving red noisier in lower light. So the correct color is still estimated a bit.
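To see why the lower layers get starved, here is a toy Beer-Lambert calculation; the absorption lengths and layer depths are rough assumed numbers, not data from any real sensor:

```python
import numpy as np

# Assumed 1/e absorption lengths in silicon (microns); rough order-of-magnitude
# values only: blue absorbs near the surface, red penetrates deepest.
absorption_len = {"blue": 0.4, "green": 1.5, "red": 3.0}

# Assumed layer boundaries (microns below the surface), Foveon-style stacking.
layers = [("blue", 0.0, 0.5), ("green", 0.5, 2.0), ("red", 2.0, 6.0)]

def absorbed_fraction(light, top, bottom):
    """Beer-Lambert: fraction of incident light absorbed between two depths."""
    a = absorption_len[light]
    return np.exp(-top / a) - np.exp(-bottom / a)

for layer, top, bottom in layers:
    for light in absorption_len:
        f = absorbed_fraction(light, top, bottom)
        print(f"{light:5s} light absorbed in {layer:5s} layer: {f:.2f}")
# Note how red light loses a chunk of its energy in the blue and green
# layers before reaching its own layer -- the pollution described above.
```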

Three-color prisms are better again, but the prisms have issues: fringing, and bad image problems when used with very fast lenses (like f/1.6 or faster).


Now, Bayer has additional issues, putting the potential color issues aside. Like any sensor, if you use it without a low-pass filter you get aliasing, screen-dooring and ringing in details. Having a high fill factor helps to compensate, though, as Graeme tells me, you still get issues even at 100% (which I tell him is pretty much what you see with the human eye, except our eyes see at a much higher resolution, so it is less noticeable in everyday life, which he didn't realise in his comeback at me).

But the low-pass filter itself is a big problem for Bayer. Some of them spread the light from a single pixel far and wide over neighbouring pixels (hopefully with the majority of a pixel's light still coming from the area it is aimed at), muddying details and reducing apparent contrast and sharpness. However, if you shoot at 8K and finish at 4K, each of those 4K pixels has one sample of blue and red, and two of green. If you finish at 2K, you have four samples of blue and red, and eight of green. Now, each sample has a small fill factor, but the low-pass filter will be helping to counter this, hopefully spreading light across the 4K/2K pixel boundaries enough. So either down-rezzing last, or debayering to a lower resolution first, one of those is going to give better results. This way Bayer gets a bit closer to three-chip. However, I'd prefer a three-color-layer solution.
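Here is a minimal sketch of the 8K-capture, 4K-finish case, assuming an RGGB mosaic: each 2x2 tile bins down to one output pixel with one real red sample, one real blue sample, and the mean of two real greens:

```python
import numpy as np

def bin_rggb(mosaic):
    """Bin an RGGB mosaic 2x2, giving one full RGB sample per output pixel."""
    h, w = mosaic.shape
    r = mosaic[0:h:2, 0:w:2]                             # top-left of each tile
    g = (mosaic[0:h:2, 1:w:2] + mosaic[1:h:2, 0:w:2]) / 2.0  # two green sites
    b = mosaic[1:h:2, 1:w:2]                             # bottom-right of each tile
    return np.stack([r, g, b], axis=-1)                  # (h/2, w/2, 3) RGB image

mosaic = np.random.rand(8, 8)      # stand-in for raw "8K" sensor data
print(bin_rggb(mosaic).shape)      # (4, 4, 3): half resolution, no estimation
```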

An interesting thing turned up when David Newman invented the CineForm Bayer wavelet compression routine: the images, against expectation, appeared more compressible. I think what was happening there was the effect of the low-pass filter acting like a lossy compression routine, reducing the differences between surrounding pixels, which is exactly what a massive spread of light from a wide low-pass filter would do. The low-pass filters on single-chip Bayer cameras have to be more aggressive than on 3-layer or 3-chip designs, to make up for the gaps in the color mask and reduce issues. So you are not getting the amount of quality you think at full resolution on Bayer. I would be interested in seeing the measured loss in quality compared to an actually good three-colors-per-pixel setup. However, you should get sufficient quality from Bayer, even at full resolution, with proper handling anyway.
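A quick toy demonstration of that idea: blurring an image shrinks the differences between neighbouring pixels, which is exactly the kind of redundancy a wavelet coder exploits. A 3x3 box blur stands in here for a real optical low-pass filter:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((256, 256))   # worst case for a coder: pure noise

def box_blur(x):
    """Average each pixel with its 8 neighbours (wrap-around at edges)."""
    acc = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return acc / 9.0

def neighbour_energy(x):
    """Mean absolute horizontal difference: a proxy for detail coding cost."""
    return np.abs(np.diff(x, axis=1)).mean()

print(neighbour_energy(img))            # high: noise compresses poorly
print(neighbour_energy(box_blur(img)))  # much lower after low-pass filtering
```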

Another thing: it seems 4:4:4 pixels might offer some compression-ratio advantages compared to 4:2:0, which may make them not as bad as they first seem (they still don't beat Bayer though, due to the color estimation in restoration and the lossy effects of wide optical low-pass filters).
 
I guess the best solution is to shoot film and scan it... Hahahah. But seriously, I do get what you're saying, for the most part. I don't like the RG|GB grid, although in general I have no problem with the concept of a Bayer filter.

Have you figured out how to make a photosite sensitive to colour? I imagine it would be very difficult to get a photosite to return a chrominance value between 0 and 16M. I can foresee a photosite being sensitive to a primary colour - this would make the Bayer filter redundant, thus making the sensor more sensitive. In theory.
 
They did this decades ago, but did not do much with it. I don't know if a single professional video camera has ever used the original products. I mentioned it above: it is the depth-sensitive layered color sensor technology that Foveon came out with, but the red channel was too dark from the light being absorbed on the way through. I think Canon uses their own version in the 7D, but the photosites were too small, with a small fill factor, leading to a lot of aliasing issues in tests against Red; very unfortunate, very. I was trying to get an early Russian alternative layering sensor, but once they realised I was only small fry they stopped. The sensor didn't look the best, but for the time, on an ultra-low-end camera, it was somewhat OK. I was also trying to get hold of an early STARVIS-like sensor that could see into the dark, but I had talked to Jim about a director's-viewfinder pocket product, and he had started talking about doing a pocket camera, then a camcorder by reports, and announced it, which became the Scarlet fixed, so I gave up, figuring he would carry through with a good product to fill the gaping hole in the market.

Now, Sony was allegedly developing a three-color-layer sensor years ago with 100 MP (that is likely 8K with three stacked color pixels per pixel boundary). However, I imagine that the layer technique probably interferes with low-light sensitivity, so that might have something to do with it.

Now, over a decade ago I put forward a prism technique, and afterwards a company made a sensor and camera that used a version of the technique, which bypasses these issues. The pixels have the prism (I had been contemplating pixel-prism designs for a while back then).

The layer technique is the technique I am looking at with my sensor design, though I am unsure if it will work the way I envisioned. But it is cheap and nasty for the low end of the market I'm interested in, and aimed to be able to pick up many more color bands. So, sub-$100, $1,000 and $10,000 cameras. Maybe I can do hundreds or thousands of bands with the idea going through my head at the moment. But the data rates would be astounding.

However, I have a better alternative way to do it, not as cheap to produce.

However, I will say there is a quantum cinema sensor I sometimes mention, that works like film, having sub-micron-sized pixels that get flipped, and technologies that register each time a photon hits or the well fills to a certain point. There are also low-noise, extreme astronomy-grade sensor technologies that register single photons. Look at my previous posts for info on these sorts of things.

As far as using and scanning film: don't. General film is not as good; you get ugly grain interfering with sampling, and if you want to cover the range of digital you are likely going to compromise performance or end up carrying a number of film stocks, which will be expensive. It's OK for rich artsy people doing a big-budget picture to get it right with the best-grade image; otherwise you are going to be compromising something, like the difference between buying a BMD mini 4K, a mini 4.6K or a Helium. Go for the Helium, and by the look of it Red might be planning a Helium-based camera phone, which may be suitable for low-end expectations versus a full camera (likely with a better data rate).

However, it will be interesting to do a basic camera with small Sony sensors, hopefully with STARVIS.
 