From what I know about how a Bayer pattern works, a thought came to me:
If I have a RAW image, and I decide I'm going to reduce the resolution to 1/4, couldn't I have a program like REDCINE or Apple Aperture do the following:
Combine the values of each local group of pixels (R, G, G, and B respectively) with the R, G, G, and B values from its three nearest same-colored neighbors.
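To make the combining step concrete, here is a rough Python/NumPy sketch of what I'm picturing (the RGGB layout, the 12-bit input range, and the sliding 2x2 window are assumptions on my part, and the function name bin_bayer is made up for illustration; a real tool would also have to handle other mosaic layouts and black levels):

```python
import numpy as np

def bin_bayer(raw):
    """Combine each photosite with its three nearest same-color
    neighbors in an (assumed) RGGB Bayer mosaic.

    raw: 2D uint16 array of 12-bit values (0-4095), even dimensions.
    Returns an RGB array at roughly 1/4 the sensor's pixel count,
    with each channel spanning 0-16380 (14 bits).
    """
    # Split the mosaic into its four color planes; each plane is
    # half the sensor resolution in each dimension.
    r  = raw[0::2, 0::2].astype(np.uint32)
    g1 = raw[0::2, 1::2].astype(np.uint32)
    g2 = raw[1::2, 0::2].astype(np.uint32)
    b  = raw[1::2, 1::2].astype(np.uint32)

    def sum_with_neighbors(p):
        # Each sample plus its right, lower, and lower-right same-color
        # neighbors: a sliding 2x2 box sum, so four 12-bit photosites
        # combine into one 14-bit value.
        return p[:-1, :-1] + p[:-1, 1:] + p[1:, :-1] + p[1:, 1:]

    r4, g14, g24, b4 = (sum_with_neighbors(p) for p in (r, g1, g2, b))
    # Averaging the two green sums keeps G in the same 0-16380 range
    # as R and B (a choice on my part; you could also keep both).
    return np.dstack([r4, (g14 + g24) // 2, b4])
```

On a 4096x2160 mosaic this returns a 2047x1079 image, i.e. roughly 1080p, which is the 4K-to-1080p trade I mean below.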
In this way, instead of a resulting pixel with RGB values of 0-4095, each pixel would have values of 0-16380, effectively making it a 14-bit image. It won't increase your maximum white value: if the Bayer photosites are overexposed, the resulting combined pixel will be as well. However, it would (I think) significantly increase your signal-to-noise ratio, because each pixel well is effectively four times as big and therefore gathers more light.
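Here's the back-of-envelope math behind the SNR claim, assuming independent photosites that each have mean signal $S$ and noise standard deviation $\sigma$ (noise adds in quadrature, so summing four photosites quadruples the signal while only doubling the noise):

$$\mathrm{SNR}_{\text{binned}} = \frac{4S}{\sqrt{4\sigma^{2}}} = \frac{4S}{2\sigma} = 2\cdot\frac{S}{\sigma} = 2\cdot\mathrm{SNR}_{\text{single}}$$

So each combined pixel should gain about a factor of two (6 dB) in SNR, as if its well gathered four times the light.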
What this process would end up doing is sacrificing resolution for increased dynamic range, which is an AWESOME option, seeing as how on a 4K chip most of us will end up with a 1080p image at most.
Is this even possible? Or am I misunderstanding something about what's going on on the chip?