Thread: Noise paradox (with R3D files)

  1. #1 Noise paradox (with R3D files) 
    Dear all,

    My intention is not to stir up controversy, but to seek enlightenment. Here's a link to a dropbox directory with four files: three R3D files and a Davinci Resolve project that uses them:

    https://www.dropbox.com/sh/lefajhl2w...ZiTbevKua?dl=0

    The clip named "Sharp.R3D" is a single 8K frame of a DSC Chroma Du Monde (CDM) chart that's in focus (using a Tokina Cinema Vista 50mm at T2.8 if you must know). It produces the characteristic X step pattern on a waveform monitor.

    The clip named "Blurry.R3D" is also a single 8K frame of the same chart, but the camera has been dollied back a few feet without adjusting focus, leading to a blurry image, but with horizontal lines still (barely) visible when viewed on a waveform monitor.

    The clip named "MidBlur.R3D" is a point sort of midway through the dolly in which horizontal lines clearly visible, but about half the width of the normal X-step pattern.

    Alas, my test chart has endured a few facial scratches over the years...they show up in "Sharp". But otherwise the color swatches are pretty dang boring. And one can imagine that the result of blurring out the image would lead to a kind of idealized swatch where, at least at the center, we have "perfect" colors as the scratches are smoothly averaged in due to the large amount of bokeh present in the setup. And at first glance, everything we see is everything we expect.

    Loading the files into the Davinci Resolve project, we have a simple node tree: a pre-Group node that does nothing (but which could be used if we wanted to try using IPP2 LUTS for some reason). A clip node that does nothing (but which could be used if we wanted to play with primary or log tools to straighten out the measured response in a waveform monitor etc). And a post-group node which has an OpenFX Edge Detect plugin.

    When the image is Sharp.R3D, the edges detected are just what we expect: the edges of the CDM chart and the scratches caught by the light. When the image is Blurry.R3D, we see an absolute explosion of color noise (a very broad haze of points between IRE 30-60). MidBlur shows the haze of points between IRE 20 and 50.
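    For anyone who wants to poke at this outside Resolve, here's a rough numpy/scipy stand-in (a plain Sobel gradient magnitude, not the actual OpenFX Edge Detect, whose internals I don't know) showing why a defocused, nearly flat field plus a little sensor noise lights up an edge detector so dramatically:

```python
# Not Resolve's Edge Detect OpenFX -- just a Sobel gradient-magnitude sketch.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic "chart": hard steps between flat swatches, plus mild sensor noise.
swatches = np.kron(rng.uniform(0.2, 0.8, size=(4, 4)), np.ones((64, 64)))
noise = rng.normal(0.0, 0.005, swatches.shape)                # ~0.5% read noise
sharp = swatches + noise
blurry = ndimage.gaussian_filter(swatches, sigma=24) + noise  # heavy defocus

def edge_mag(img):
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

for name, img in [("sharp", sharp), ("blurry", blurry)]:
    e = edge_mag(img)
    flat = e[8:56, 8:56]    # interior of the first swatch: no real edge here
    print(f"{name:6s}  strongest edge: {e.max():.3f}   "
          f"noise floor in a flat swatch: {flat.mean():.4f}")
```

    In the sharp frame the real edges sit a couple of orders of magnitude above the noise gradients; in the blurry frame the strongest "edges" left are barely above the noise, so the noise dominates whatever the plugin draws.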

    Now, I realize that what we're analyzing here is a very synthetic condition, but is there something we can learn?

    Looking at Blurry.R3D...

    Changing the RAW Denoise parameter from None to Maximum has some effect on the noise, largely reducing it from serious RGB noise to a background of violet noise with green striations where very blurry edges are barely being detected (and giving a discernible sine-wave pattern that's still pretty hazy, with the bottoms of the valleys at IRE 10 and the tips of the peaks at IRE 45). But it is by no means as quiet (noise-wise) as the in-focus swatches of Sharp.R3D (where the noise is between IRE -5 and 0).

    Changing instead the plugin's Denoise parameter from 0.1 to the default of 0.2 changes the noise characteristic from random pixels to RGB squiggles. In the waveform monitor the noise characteristic shows the alternating pattern of swatch/blurred boundary/swatch/blurred boundary to be a smooth, thick sine wave ranging from IRE 10-40.

    Changing the timeline resolution from 8K to 4K also reduces the resulting noise considerably: with no Denoise in R3D decoding and 0.1 denoise parameter in the plugin, the broad haze of points is between IRE 15-45, and with R3D denoise set to Maximum, that becomes a sine wave ranging from IRE 5-20.

    Changing the timeline to 2K takes the broad haze of no R3D denoise down to a haze of IRE 0-15 and with Denoise set to Maximum, IRE -5 to less than 10.

    Looking at MidBlur.R3D...

    Changing R3D Denoise from none to Maximum, the wave pattern changes to a distorted sine wave where the valleys are 3x wider than the peaks. The valleys are a haze from IRE 5-20 and the peaks from IRE 20-35.

    Changing instead the plugin's Denoise parameter from 0.1 to 0.2 dramatically changes the noise character to RGB squiggles. The (wide) valleys range from IRE -2 to +7 and the (narrow) peaks range from IRE 17-28. Keeping the plugin value the same and setting R3D denoise to Maximum drops the valleys to IRE -5 to +4 and doesn't really change the peaks at all.

    With no R3D denoise and the plugin denoise parameter back to 0.1, reducing the timeline resolution from 8K to 4K drops the haze of RGB noise from the 8K range of IRE 20-50 down to IRE 10-30. At 2K the haze drops further to IRE -5 to +10.

    Obviously when looking at the actual images, not the output of the Edge Detect plugin, the noise, such as it is, is very difficult to see. But its existence is observable, especially when the image has very little high-frequency detail. Given that oftentimes shadows are not only dark but also out of focus, this could well be a secondary reason why so many people report noise in the shadows. It's not just that the shadows are dark and darkness brings out the noise in digital sensors; it's that the shadows are out of focus, and that brings out the noise, which, compared to the darkness, is observable.

    It is also obvious that down-scaling images from 8K to 4K or even 2K really cuts down the noise in the blurry parts. But how'd that noise get there in the first place?
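    As a back-of-envelope answer to the downscaling half of that question (a numpy sketch, nothing to do with how Resolve actually rescales): a clean 2x box downscale averages four mostly uncorrelated pixels, so random noise drops by roughly a factor of two per step, while the defocused image has no fine detail left to lose.

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, (2048, 2048))   # unit-variance stand-in for sensor noise

def box_down2(img):
    # Average non-overlapping 2x2 blocks (an idealized 2x downscale).
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

levels = {"8K-ish": noise}
levels["4K-ish"] = box_down2(levels["8K-ish"])
levels["2K-ish"] = box_down2(levels["4K-ish"])
for name, img in levels.items():
    print(name, "noise std:", round(float(img.std()), 3))
# Expect roughly 1.0 -> 0.5 -> 0.25, which tracks the shrinking IRE spread
# seen on the waveform as the timeline resolution comes down.
```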

    This perhaps also suggests some extra traps we have to avoid when applying sharpening to images: we really, really want to avoid indiscriminately sharpening stuff that's intentionally blurry. One would think that such blurry parts lead to nicely self-cancelling convolution kernels, but perhaps that is wrong, and one really should downscale first and then sharpen, or downscale, find edges, then upscale with blur so that final sharpening is applied only under the mask of the blurred, upscaled edges. Hmm.
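    Here's a very rough sketch of that last idea, the "only sharpen under a blurred edge mask" approach -- the function name, thresholds and radii are all invented for illustration, not anything Resolve exposes:

```python
import numpy as np
from scipy import ndimage

def masked_unsharp(img, amount=1.0, radius=1.5, mask_sigma=4.0, thresh=0.05):
    """Sharpen a float greyscale image only where a coarse edge mask allows it."""
    # Edge map from a 2x-downscaled copy, so single-pixel noise (which doesn't
    # survive the averaging) can't open the mask on its own.
    small = img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))
    gx = ndimage.sobel(small, axis=1)
    gy = ndimage.sobel(small, axis=0)
    mask = (np.hypot(gx, gy) > thresh).astype(float)
    mask = ndimage.zoom(mask, 2, order=1)                 # back up to full size
    mask = np.clip(ndimage.gaussian_filter(mask, mask_sigma), 0.0, 1.0)

    # Ordinary unsharp mask, blended in only under the (blurred) edge mask.
    lowpass = ndimage.gaussian_filter(img, radius)
    sharpened = img + amount * (img - lowpass)
    return img + mask * (sharpened - img)
```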

    Any other thoughts on this?
    Michael Tiemann, Chapel Hill NC

    "Dream so big you can share!"

  2. #2  
    Senior Member Aaron Lochert
    Join Date: Oct 2013 | Location: Tucson, AZ | Posts: 1,119
    I think the edge-detect tool might be doing some things behind the scenes. When there are no obvious edges to latch onto, it hunts for something to show you.

    Here's a 1:1 crop of the white tiles on all three with a strong curve applied to accentuate the noise in the whites. There's no difference in these to my eye as far as noise pattern goes.



    Same thing, but a different curve to accentuate noise in the black tiles.



    On this one it's harder to tell, but the actual underlying noise pattern seems very similar. It's just harder to detect because you see actual visual information punching through that noise. Because contrast is higher due to less color averaging via blur, I wonder if that's giving the Edge Detect tool an easier time discerning between what is truly captured detail and what is an imaging artifact. I'd be curious to see the test done again with a chart or black chip that doesn't have any scratches, or at least with softer light to call less attention to them. I also think there's an unintentional thing happening because you dollied back instead of racking the focus out: you are also shrinking the size of any details that might remain at full scale, thus reducing the contrast even further between true image detail and noise texture.

    But I do think there is something to be said that a lack of visual information can call attention to noise. That's just the principle of contrast. Details and textures act as camouflage to noise; without that, the noise is naked and on display. Shallow DoF leads to fewer places for noise to hide in plain sight.

    Our eyes kind of work the same way as the edge detect tool, I'd think. After all, edges, contrast, and other visual cues are what we look for when understanding what we're looking at. Without those informational cues telling us what is "detail", our eyes focus onto the noise instead. On the other hand, I actually like to have a little bit of grain to give those blurry background bits some texture and "detail". I remember Steve Yedlin saying something similar in his resolution demo when discussing grain -- when there is no grain, your eye hunts around for something to look at, but with some texture, it gives your eyes something to "latch onto". It's like looking at a painting and seeing all the brush strokes; it feels more tangible. I can see why some would want no noise, however, to help aid in the "window effect" as Phil calls it.

    There's also something I haven't even touched on but probably should since it's relevant to all of us: motion. Is this effect nearly as apparent in motion? Does the added motion blur make the grain more apparent? Or does motion aid in letting our eyes determine which parts of the image are true detail and thus we see through the grain and forget it's there?

  3. #3  
    Senior Member
    Join Date: Dec 2009 | Location: Hollywood, USA | Posts: 6,352
    Quote Originally Posted by Michael Tiemann View Post
    Changing the timeline resolution from 8K to 4K also reduces the resulting noise considerably: with no Denoise in R3D decoding and 0.1 denoise parameter in the plugin, the broad haze of points is between IRE 15-45, and with R3D denoise set to Maximum, that becomes a sine wave ranging from IRE 5-20. Changing the timeline to 2K takes the broad haze of no R3D denoise down to a haze of IRE 0-15 and with Denoise set to Maximum, IRE -5 to less than 10.
    I've noticed for almost 20 years that as you up the bandwidth, you wind up with trade-offs in terms of noise. The wider bandwidth resolves the noise characteristics better, and while the picture is sharper (sometimes a lot sharper), the noise can become more noticeable. It's even more noticeable on scopes than it is on an actual video display. (There are similar issues with wide-bandwidth microphones in audio recording.)

    I think some stuff like this will show up with charts, but I'm not sure if it's a real world problem. Trust me, it's even worse with film, and we've lived with that for more than 100 years. I think this is an academic problem. (I hesitate to say "it's good enough," but a lot would depend on lighting and exposure.)

    To make the test more interesting, you should borrow or rent an Alexa LF, a Panasonic Varicam LT, and a Sony Venice and see how those compare with the same charts under the same lighting conditions. I would bet you that they all have compromises in sharpness, color accuracy, noise, detail, dynamic range, and other factors. No single camera scores well on all these parameters.
    marc wielage, csi colorist/post consultant daVinci Resolve Certified Trainer

  4. #4  
    Moderator Phil Holland
    Join Date: Apr 2007 | Location: Los Angeles | Posts: 11,554
    Noise character is 100% a thing, and though it's not discussed much, RED has put effort into this arena. It's sort of one of the sneaky reasons many were so fond of Dragon during its initial launch. There's lots of interesting stuff here on REDuser if you dig through the catacombs. My efforts were mainly in its relationship to film's apparent grain when scanned, as that's been my benchmark from the side of things I'm on.

    There are interesting things going on for sure, and how much they affect a viewer's interpretation of noise depends to an extent on what is tonally in the frame. i.e. if you have a lot of stuff exposed at key and lower and your footage is noisy, it's way more apparent.

    Frame rate matters as well, with higher frame rates being kinder on the eye.

    As shown, yes, if you downsample you will get better results. That's part of the game plan and benefit of oversampling. Taking an 8K R3D down to 4K, you minimize whatever noise there is and you benefit from the additional captured detail being scaled down.

    When it comes to post, I haven't had to do much of any noise reduction in some time, like several years now. But in terms of post sharpening, yep, best to use a deft hand and worth exploring various methods to get what you're after. Some take a very long time to process, but that's part of the whole thing.

    There's a black and white Epic thread I made ages ago with a whole bunch of sharpening methods somewhere on RU. There's some merit to sharpening before scaling down and some to sharpening at resolution, then there's also the perspective of not sharpening at all, which is what I do the most.
    Phil Holland - Cinematographer - Los Angeles
    ________________________________
    phfx.com IMDB
    PHFX | tools

    2X RED Monstro 8K VV Bodies and a lot of things to use with them.

    Data Sheets and Notes:
    Red Weapon/DSMC2
    Red Dragon

  5. #5  
    Quote Originally Posted by Michael Tiemann View Post
    Any other thoughts on this?
    If I were writing an edge detect plugin, I would normalise the results. The aim is to detect edges, not to quantify noise. So maybe when there are edges you see little noise, but when there is only noise it gets normalised to fill the full 0 to 100% range and becomes very visible?
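    To make that normalisation hypothesis concrete, here's a toy numpy sketch (an assumption about how such a plugin might behave, not the actual Edge Detect implementation): the same noise floor reads near zero when a real edge anchors the scale, and gets stretched toward full scale when nothing else is there.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 0.005, (256, 256))      # mild sensor noise

step = np.zeros((256, 256))
step[:, 128:] = 0.6                             # one strong real edge
with_edge = step + noise
no_edge = noise.copy()                          # the defocused, featureless case

def normalized_edges(img):
    e = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
    return e / e.max()                          # stretch output to 0..1 ("0-100%")

print("flat area, real edge in frame:",
      round(float(normalized_edges(with_edge)[10:100, 10:100].mean()), 3))
print("flat area, no real edge:      ",
      round(float(normalized_edges(no_edge)[10:100, 10:100].mean()), 3))
# With an edge anchoring the scale the noise reads near zero; without one,
# the very same noise is stretched up toward full scale.
```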

    I could try the same in Nuke, but I'm not sure it wouldn't do the same thing. You could try a convolution filter as well.

    IMHO, I don't think it's an issue with the files, though.

    cheers
    Paul

  6. #6  
    Senior Member
    Join Date: Dec 2009 | Location: Hollywood, USA | Posts: 6,352
    Quote Originally Posted by Phil Holland View Post
    Noise Character is 100% a thing and though it's not discussed much, RED has put effort in this arena. It's sort of one of the sneaky reasons many were so fond of Dragon during it's initial launch. There's lots of interesting stuff here on REDuser if you dig through the catacombs. My efforts were mainly in it's relationship to film's apparent grain when scanned as that's been my benchmark from the side of things I'm on.
    Yeah, those of us who had to deal with film in the 1980s and 1990s remember very well the introduction of T-Grain negative with the Kodak Vision stocks, and the "character" of the noise got a lot better. The best way I can characterize it is that the noise pixels became smaller, which made them less noticeable. It was a huge step, but it had the drawback of being more expensive, which some studios resisted for a while.

    Quote Originally Posted by Phil Holland View Post
    When it comes to post, I haven't had to do much of any noise reduction in some time, like several years now. But in terms of post sharpening, yep, best to use a deft hand and worth exploring various methods to get what you're after. Some take a very long time to process, but that's part of the whole thing.
    One thing we tell our clients is that you can run into problems if you try to start sharpening everything, but I think there's value in doing very carefully-qualified sharpening of certain things you're trying to draw attention to, like the lead actor in a wide shot. I'm not a fan of using overall sharpening on everything all the time, because you run into the risk of aliasing and over-enhancement, which get you in trouble real fast. A little sharpening goes a long way.
    marc wielage, csi colorist/post consultant daVinci Resolve Certified Trainer

  7. #7  
    Senior Member Blair S. Paulsen
    Join Date: Dec 2006 | Location: San Diego, CA | Posts: 5,196
    My strategy is to audition scaling filters to see how much sharpening the filter (Lanczos, Mitchell, etc) can provide vs perceived noise levels. That's the closest I get to global sharpening on the full-rez footage unless it's a salvage operation. Then I do TNR on the footage at the mastering and/or contracted resolution stage, knowing what the desired contrast curve looks like. If final color is a higher-contrast crunchy look, then adding too much TNR can create nasty aliasing and may not be needed at all. With a gentler contrast characteristic, adding a dollop of TNR can add sharpness with minimal artifacting.
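    If it helps anyone, this is roughly how such an audition can be scripted outside of Resolve -- OpenCV's resamplers stand in here (there's no Mitchell among them, and none of these are Resolve's own filters), and the chart and noise are synthetic. Sharper kernels like Lanczos hold onto more edge energy but also keep more of the noise in flat areas, which is exactly the trade-off being auditioned.

```python
import numpy as np
import cv2

rng = np.random.default_rng(3)
chart = np.kron(rng.uniform(0.2, 0.8, size=(8, 8)), np.ones((128, 128))).astype(np.float32)
frame = chart + rng.normal(0.0, 0.01, chart.shape).astype(np.float32)

filters = {
    "area (box-like)": cv2.INTER_AREA,
    "bilinear":        cv2.INTER_LINEAR,
    "bicubic":         cv2.INTER_CUBIC,
    "lanczos4":        cv2.INTER_LANCZOS4,
}
h, w = frame.shape
for name, flag in filters.items():
    half = cv2.resize(frame, (w // 2, h // 2), interpolation=flag)
    flat = half[8:56, 8:56]               # interior of one flat swatch
    print(f"{name:16s} residual noise std in a flat swatch: {flat.std():.4f}")
```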

    Bottom line - RED allows you to use taste and judgement in determining just how sharp your images are, vs the more common method of boosting sharpness in camera that cannot be undone gracefully.

    Cheers - #19

  8. #8  
    Senior Member
    Join Date: Dec 2009 | Location: Hollywood, USA | Posts: 6,352
    Quote Originally Posted by Blair S. Paulsen View Post
    My strategy is to audition scaling filters to see how much sharpening the filter (Lanczos, Mitchell, etc) can provide vs perceived noise levels. That's the closest I get to global sharpening on the full-rez footage unless it's a salvage operation. Then I do TNR on the footage at the mastering and/or contracted resolution stage, knowing what the desired contrast curve looks like. If final color is a higher-contrast crunchy look, then adding too much TNR can create nasty aliasing and may not be needed at all. With a gentler contrast characteristic, adding a dollop of TNR can add sharpness with minimal artifacting.
    I would argue that it's better to do NR first and then add sharpness after that stage, because the NR is going to tend to dull the sharpening, no matter what you do. But I'd also concede that there's a lot of "it depends" to the situation.

    If the noise is mostly in the dark areas of the picture, my tactic would be to qualify just the darkest parts of the image and apply some NR there. Chances are, the high-frequency detail parts of the image are going to also be a lot brighter, and if those aren't being enhanced, they should look natural and reasonable.
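    A loose sketch of that qualify-the-darks idea, for anyone who wants to prototype it outside the color page (the pivot, softness and blur values here are arbitrary placeholders, and a Gaussian blur is only standing in for real spatial/temporal NR):

```python
import numpy as np
from scipy import ndimage

def denoise_shadows(img, pivot=0.25, softness=0.1, nr_sigma=1.2):
    """img: float RGB array in [0,1], shape (H, W, 3)."""
    luma = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
    # Key is 1.0 well below the pivot, fading to 0.0 above it.
    key = np.clip((pivot + softness - luma) / (2.0 * softness), 0.0, 1.0)
    # Crude stand-in for NR: blur each channel, then blend it in only
    # where the shadow key is open.
    denoised = np.stack(
        [ndimage.gaussian_filter(img[..., c], nr_sigma) for c in range(3)], axis=-1)
    return img + key[..., None] * (denoised - img)
```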

    You can also dive into the nitty gritty and start analyzing the noise in the Red channel, Green channel, and Blue channel separately, and you'll find there's different levels of noise in each. I find Blue can tend to be noisiest in low-light situations, so sometimes some monochrome NR just in that channel can do a world of good. (And this is not a Red-specific problem -- it affects a lot of cameras.)
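    And a quick way to put numbers on the per-channel point: export a frame, sample a patch you know is flat (a grey card, a defocused wall), and compare channel noise directly. The file name and patch coordinates below are placeholders, and a 16-bit export is assumed:

```python
import numpy as np
from scipy import ndimage
import imageio.v3 as iio

# Placeholder path; assumes a 16-bit still exported from the timeline.
frame = iio.imread("exported_frame.tif").astype(np.float64) / 65535.0
patch = frame[800:1000, 1200:1400, :3]       # pick a visually flat region

for name, c in zip("RGB", range(3)):
    chan = patch[..., c]
    # Remove slow illumination falloff with a heavy blur so only the
    # high-frequency residual (i.e. the noise) is measured.
    residual = chan - ndimage.gaussian_filter(chan, 8)
    print(name, "noise std:", round(float(residual.std()), 5))
```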
    marc wielage, csi colorist/post consultant daVinci Resolve Certified Trainer

  9. #9  
    Senior Member Blair S. Paulsen
    Join Date: Dec 2006 | Location: San Diego, CA | Posts: 5,196
    Good notes, Marc.

    Cheers - #19

  10. #10  
    Senior Member PatrickFaith
    Join Date: Nov 2011 | Location: California | Posts: 2,558
    Looking at this, I kind of have three "likes" based on the zones:
    - I like the blurred grain in the darks, prefer it a bit desaturated from where it's at
    - I like the mid grain as it is, wouldn't touch it
    - I like the grain in the highlight zones with the current crispness, perhaps a bit desaturated

    When I've done tests like this though, Graeme has shown me multiple times how my seemingly perfect lenses are giving off all sorts of chromatic aberration that "colors" the grain (in certain circumstances I've even had issues with Master Primes). If I knew what I was doing I'd play with focus-chromatics and grain more, but recently with DSMC2 I've just been softening a bit, then going with a creamy/low-grain look as it goes from RAW to QuadHD or HD.
