Wayne Morellini
Well-known member
- Joined
- Jan 3, 2008
- Messages
- 6,157
- Reaction score
- 0
- Points
- 0
Updated with questions on what resolution actually is, and discussion of a possible experiment.
I was going to post this to Off Topic for people not well versed in this, but this seems to be the more correct forum. I'm trying to beat my sleeping tablet, so I'll give it a quick go. What follows will have a fair bit of speculation in it, as I have never been able to lay my hands on a good text on the subject, but I'm sure some of you will know the exact answers.
Human vision is said to max out at 2400 DPI resolution in monochrome at the tested distance, and 1200 DPI in colour. Some of us can sit back and see the spaces between the pixels projected on a cinema screen, and fixed-grain issues on silvered screens. But on an emissive display like a bright CRT, and likely other bright screens too, the resolution drops by half, to 1200 dpi monochrome and 600 dpi colour, because our eyes resolve less detail at brighter levels, from a shimmering object and so on. In my years of hands-on research I found that at 150-160 dpi colour starts to integrate (blend). Indeed, on pocket TVs below these resolutions the sub-colours would start to stick out, but on an Atari Lynx handheld game system, which had 160 dpi subpixels, you could see the brightness of the subpixels but not so easily the colours; beyond this resolution the subpixels became less and less distinguishable the smaller they got. However, half of that 150 dpi equivalent on a bright monitor was about 75 dpi, and scaled for monitor viewing distance on the relatively small monitors of the day, it came out that around 1024x768 resolution started to look good. On a bigger, cinema-like field of view that goes to around 2K, and at around 150 dpi (equivalent to 300 dpi on a passive display) 4K is met. I forget all the figures I worked out years ago, but there are likely diminishing returns past 4K due to integration of neighbouring pixels in the mind. There are a few more lines past the 20/20 vision line on a test chart, but a lot of people can't even read down to the 20/20 line. For some, 8K is going to be more obvious.
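As a rough sanity check on figures like these, the standard conversion from visual acuity to an equivalent pixel density at a given viewing distance only takes a few lines. The 1-arcminute acuity (the usual 20/20 definition) and 50-degree field of view below are illustrative assumptions on my part, not the exact conditions behind the 2400 dpi figure:

```python
import math

def dpi_at_distance(acuity_arcmin: float, distance_in: float) -> float:
    """Smallest resolvable feature, expressed as dots per inch,
    for a given acuity (arcminutes) and viewing distance (inches)."""
    theta = math.radians(acuity_arcmin / 60.0)  # arcminutes -> radians
    feature_in = 2.0 * distance_in * math.tan(theta / 2.0)
    return 1.0 / feature_in

def pixels_across_fov(acuity_arcmin: float, fov_deg: float) -> float:
    """Rough pixel count needed across a field of view so that
    each pixel subtends one acuity unit."""
    return fov_deg * 60.0 / acuity_arcmin

# 20/20 vision is usually taken as resolving ~1 arcminute of detail.
print(round(dpi_at_distance(1.0, 12.0)))    # ~286 dpi at a 12 inch distance
print(round(pixels_across_fov(1.0, 50.0)))  # ~3000 px across a 50 degree view
```

Pushing the acuity or shortening the distance scales the dpi figure up linearly, which is one way the much higher laboratory-style numbers arise.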
Despite that, on a good day, after weeks of good nutrition, with decent glasses, and not using the internet that drives my vision blurry, I have been able to read past the 20/20 line. I can see branches on trees on top of the short mountain ranges around here (a thousand to two thousand feet high, from a mile or two away), and the pixels and the interspace between them on the better Vmax cinema screens. But the section of your eye that has this extra resolution, which jump-scans the details of objects in a scene something like ten times a second, is very small, and you have to go looking with that spot to see these things. Most people don't even concentrate with this highest-resolution spot and so see less resolution, as resolution declines as you move away from it. At that point the colour is integrated. So we are talking 150 dpi real, 300 dpi effective, and the pixels themselves are probably also becoming more integrated. 8K would then be 600 dpi effective, with colour subpixels at the 1200 dpi point. This is probably the 20/20 vision point, though I might be out by a factor of two in these figures from years ago.
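The falloff away from that small high-resolution spot (the fovea) is steep. A common first-order model from the vision literature has the minimum resolvable angle growing linearly with eccentricity; the e2 = 2 degrees half-resolution constant below is a typical textbook value, my assumption rather than a measured figure:

```python
def relative_acuity(ecc_deg: float, e2_deg: float = 2.0) -> float:
    """Acuity relative to the fovea, using the common linear model:
    the minimum angle of resolution grows as (1 + eccentricity/e2)."""
    return 1.0 / (1.0 + ecc_deg / e2_deg)

# Acuity drops to half just 2 degrees off-centre, and to under a
# tenth by 20 degrees, which is why you must aim the spot to see detail.
for ecc in (0, 2, 5, 10, 20):
    print(ecc, round(relative_acuity(ecc), 2))
```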
Now, even if you can see the smallest details through this small spot in your vision, what you see will not be what you would normally expect. You see extreme aliasing of detail and specks, because you are dealing with, among other things, the aliasing of the cells in your eyes. Some might remember a debate between me and Graeme a few years back on the adequacy of 100% fill-factor sensor pads, in light of Foveon's nearly 100% fill factor. I was advocating that a 100% fill factor is enough and we didn't really need much of an anti-aliasing filter; Graeme was advocating getting rid of aliasing completely with a heavier filter. This is one example: I see aliasing a lot of the time at different scales naturally, human vision works like that too, and things like those mentioned here integrate it away. The OLPF for Bayer softens the image a lot. But once we get to these 8K+ levels of detail, aliasing and pixel quality will start to matter less. You could probably show undebayered Bayer images without too many people noticing. Such a system would allow an improvement in codec compression rates for cinema.
However, this 2400 dpi resolution: what is it? Is it the ability to actually see dot detail at 2400 dpi, or just to notice that something is there? For instance, a human hair can be 50 microns thin, but we can see it even though it is 48000 dpi, and even a standard-definition camera can see it at night with a good camera light and a bright enough hair, though the camera has only something like 32 dpi of resolution relative to the position of the hair and the width of the frame. Can the standard-definition image be said to have at least 48000 dpi resolution, or just the ability to sense that something is there within a pixel's area? I wonder if we really pick up 2400 dpi, or just sense it is there as it flashes across our eye cells with aliasing as it moves around. If we look at the figures, we start to see integration at 150 dpi in colour and 300 dpi in monochrome; that is 8 times smaller in each direction than the maximum dpi figures, or 64 values of fullness. Human vision distinguishes around 127 unevenly spaced levels of intensity, which we cover with 256 even levels; that is close enough to the 64 even values to say that maybe we see a spot as somewhere between empty and full brightness, thus detecting a small detail if it is suitably bright. If we work backwards, integration of 256 dots in a 16x16 matrix would be 2400 dpi / 16 = 150 dpi, which would fit with brighter images allowing fewer dots to saturate a cell. Somehow I doubt that is right. I have been meaning to buy a 2400 dpi printer for decades to print a mesh graphic to test this.
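The working-backwards arithmetic in that paragraph can be laid out explicitly. This is just a numerical restatement of the same figures, not a claim that the eye actually integrates this way:

```python
# Working the integration idea backwards from the claimed limits.
max_dpi = 2400           # claimed monochrome maximum
integration_dpi = 150    # where colour detail starts to blend

ratio = max_dpi // integration_dpi       # linear shrink factor
cell_dots = ratio * ratio                # dots in the integration matrix
fullness_mono = (max_dpi // 300) ** 2    # "fullness" values vs 300 dpi monochrome

print(ratio, cell_dots, fullness_mono)   # 16 256 64
```

So the 16x16 matrix gives 256 dot positions per integrated cell, and against the 300 dpi monochrome integration point the same limit gives 64 fullness values, which is the number being compared with the roughly 127 distinguishable intensity levels.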
So, 8K is not that needed, and if you have got it, performance on pixel-level detail might not be such an issue, though contrast differences will likely be best preserved. Of course, this is not the case if you want forensic detail for computational photography techniques, or for photography and blowing up images; it applies more to cheaper cameras. With my own 8K camera design I was going to use a special pixel architecture to take advantage of this. So for low-end filmmakers it could be fine. For pro photographers, the advertising industry, and computational-photography blockbuster filmmakers: grab a 64K+ camera, holographic or not.