Karim D. Ghantous
Well-known member
This idea is quite old, and goes back to the '70s (if not earlier). However, I am slightly obsessed with it right now, and I thought I'd vent about it.
I also would like some pushback. I'm going to skip some logical steps in my reasoning, because I can't quite articulate them (and I might not even know I'm making logical leaps). So this will be a learning exercise more than anything. I doubt that anyone is going to make a CMY sensor.
So here's the thing: each RGB filter blocks two colours, while each CMY filter blocks only one. For example, a cyan photosite blocks red light but passes both green and blue.
But wait. If the C photosite passes both green and blue light, how do you separate them proportionally? Answer: via neighbouring photosites.
A neighbouring M photosite blocks all green light, so its luminance relative to the C photosite gives an approximation of how much of the light falling on C was green. Likewise, a neighbouring Y photosite blocks all blue light, and can be used to approximate the amount of blue light that fell on C. Perhaps you would solve this with simultaneous equations? It sounds logical to me, anyway.
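To make the simultaneous-equations idea concrete, here's a rough sketch in Python. It assumes idealised filters (C = G + B, M = R + B, Y = R + G) and that the C, M and Y values come from photosites close enough together that they see roughly the same colour; real filters overlap and leak, so this is only the shape of the idea, not a demosaicing algorithm.

```python
import numpy as np

# Idealised model (an assumption, not how real dye filters behave):
#   C = G + B,  M = R + B,  Y = R + G
# Written as a linear system  A @ [R, G, B] = [C, M, Y],
# which can be inverted once and applied to any local C/M/Y triple.
A = np.array([
    [0, 1, 1],   # cyan passes green + blue
    [1, 0, 1],   # magenta passes red + blue
    [1, 1, 0],   # yellow passes red + green
], dtype=float)

A_inv = np.linalg.inv(A)

def cmy_to_rgb(c, m, y):
    """Recover R, G, B from one local C/M/Y triple (idealised filters)."""
    return A_inv @ np.array([c, m, y], dtype=float)

# Example: a patch with R=0.2, G=0.5, B=0.3 would measure
# C=0.8, M=0.5, Y=0.7 under this model.
print(cmy_to_rgb(0.8, 0.5, 0.7))   # -> [0.2, 0.5, 0.3]
```

Solving it by hand gives R = (M + Y − C)/2, G = (C + Y − M)/2, B = (C + M − Y)/2, which is really just the "compare a photosite to its neighbours" idea written down.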
An outstanding question is how a CMY Bayer array should be arranged. Should it be CY|YM? Or YC|CM? Or should it be CMY|MYC, abandoning the RGB array's 2×2 tile (i.e. RG|GB), where one colour repeats on the diagonal? See the sketch below.
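Just to make those layouts concrete, here's a small sketch (the tile shapes are my own reading of the notation above, nothing more):

```python
import numpy as np

# Hypothetical candidate tiles, written as arrays of filter labels.
# CY|YM mirrors the RGGB idea (one colour, Y, appears twice per tile),
# YC|CM doubles C instead, and CMY|MYC gives each filter equal coverage
# at the cost of a 2x3 repeat instead of 2x2.
tiles = {
    "CY|YM":   np.array([["C", "Y"],
                         ["Y", "M"]]),
    "YC|CM":   np.array([["Y", "C"],
                         ["C", "M"]]),
    "CMY|MYC": np.array([["C", "M", "Y"],
                         ["M", "Y", "C"]]),
}

# Tile each candidate out to a small mosaic to see how it repeats.
for name, tile in tiles.items():
    reps = (2, 3) if tile.shape[1] == 2 else (2, 2)
    print(name)
    print(np.tile(tile, reps), "\n")
```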
A secondary advantage of this system is that the sensor gets more light, since each filter passes two of the three primaries instead of one. I don't think that's a critical feature, but it's an innate advantage nonetheless.
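As a back-of-the-envelope figure only (assuming ideal, non-overlapping filters, which real dyes are not):

```python
import math

# An RGB filter passes roughly 1/3 of the visible band,
# a CMY filter roughly 2/3, under the idealised model above.
rgb_pass = 1 / 3
cmy_pass = 2 / 3

advantage_stops = math.log2(cmy_pass / rgb_pass)
print(f"~{advantage_stops:.1f} stop more light per photosite")  # ~1.0 stop
```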
One more thing: a CMY array, as I perceive it, is a matrix-within-a-matrix-within-a-matrix. Or, let me explain it like this:
Monochrome sensor: 1st level matrix (superficial, e.g. [L]; [L]; [L]...)
RGB sensor: 2nd level matrix (chrominance values encoded, e.g. [R+G+B]; [R+G+B]; [R+G+B]...)
CMY sensor: 3rd level matrix (chrominance values encoded at a deeper level, e.g. [GB+RB+RG]; [GB+RB+RG]; [GB+RB+RG]...)
Keep in mind I'm not an expert on these things. They are very interesting to think about, though.