Thread: Komodo 12.2 stops of dynamic range in CineD test.

Results 31 to 40 of 104
  1. #31  
    Senior Member
    Join Date
    Dec 2010
    Location
    Toronto & Vancouver
    Posts
    4,176
    But does any of that actually explain why the skin-tone highlight clipped after +1 (which, in their defence, or coincidentally enough, corresponds with what their Xyla results show, faulty as those may or may not be)?

    Even if the scientific testing was completely botched, I presume they did the same botch job with the P6K charts (which, again in their defence, didn't show the missing colour channels in the two top stops), yet they claim the P6K still managed ~+4 over before it became unable to recover that skin-tone highlight accurately.

    I guess my question is: can that real-world Komodo +1 vs P6K +4 delta be fixed with better scientific testing, without underexposing 3 stops to protect the highlights just enough to match the P6K's highlight performance, since that risks a noisy (potentially green) mid-to-low end?

    Quote Originally Posted by Christoffer Glans View Post
    Essentially this is the problem and a possible solution.
    Pretty sure that's why they include the baked ProRes results: to have baked vs. baked, and then include RAW to make sure it doesn't perform substantially differently from the processed/polished baked codec. I mean, one could argue that compressed RAW shouldn't be compared to uncompressed for the same reason.
    Last edited by Mike P.; 01-20-2021 at 12:34 PM.
    Reply With Quote  
     

  2. #32  
    Member Nick Vera's Avatar
    Join Date
    Sep 2013
    Location
    Los Angeles, CA
    Posts
    73
    For anyone looking at comparisons: if you want to compare film cameras, my recommendation is to do your own under/over tests with a light meter in a controlled environment. Stop using manufacturer stats and others' opinions. It's the only way to find the truth.
    Motion Pictures and Film Professional
    Reply With Quote  
     

  3. #33  
    Senior Member Christoffer Glans's Avatar
    Join Date
    Jun 2007
    Location
    Stockholm, Sweden
    Posts
    4,760
    Quote Originally Posted by Nick Vera View Post
    For anyone looking at comparisons: if you want to compare film cameras, my recommendation is to do your own under/over tests with a light meter in a controlled environment. Stop using manufacturer stats and others' opinions. It's the only way to find the truth.
    We're not, but we're also not listening to tests that aren't conducted correctly. If someone wants to test manufacturers' claims with a test that compares systems against a standard, then they damn well need to have the standardization and procedures locked down correctly. Not only does a botched test spread misinformation, it also erodes trust in anyone else doing tests.

    If someone claims to do unbiased tests in their "lab", it becomes problematic when, again and again, they don't know how to handle the cameras and the post-processing. That's not a "lab" with any experts. I'm still waiting for the experts.
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600 Red Komodo #002397
    Reply With Quote  
     

  4. #34  
    Senior Member Nick Morrison's Avatar
    Join Date
    Nov 2011
    Location
    Brooklyn
    Posts
    8,972
    All I will say is that in *practical* terms we've been shooting Komodo, Gemini, and Dragon side by side, and here's our conclusion:

    Komodo is noticeably better than DSMC1 Dragon - more usable DR, and less mid-tone noise.

    Komodo isn't quite as good as Gemini, but it intercuts effortlessly. If you don't need off-speed, Komodo is a no-brainer.

    Komodo has a "fat negative", and is very gradable. Our colorist has been impressed.

    Komodo is, quite frankly, remarkable value for what you're paying.
    Nick Morrison
    Founder, Director & Lead Creative
    // SMALL GIANT //
    smallgiant.tv
    Reply With Quote  
     

  5. #35  
    Senior Member Karim D. Ghantous's Avatar
    Join Date
    Oct 2011
    Location
    Melbourne AU
    Posts
    2,084
    Quote Originally Posted by Nick Morrison View Post
    Komodo is noticeably better than DSMC1 Dragon - more usable DR, and less mid-tone noise.
    Maybe I'm naive, but this says it all. (Figuratively speaking!)
    Good production values may not be noticed. Bad production values will be.
    Unsplash | Pinterest | Flickr | Instagram | 1961 (blog)
    Reply With Quote  
     

  6. #36  
    Senior Member
    Join Date
    Jan 2013
    Location
    US - Phoenix, AZ
    Posts
    141
    Here's a super long post about measuring dynamic range and why it's such a hard number to pin down. I think it's helpful in this situation.


    ---


    There is actually hard math for dynamic range and it's really easy to calculate - it's simply the ratio between the clipping point of the sensor and the average noise of the sensor.

    The math is simply 20·log10 (a voltage-style ratio) of the Full Well Capacity in electrons (the clipping point) divided by the root-mean-square Read Noise in electrons (the average noise).

    So the formula is just DR(dB) = 20·log10(FWC/RN); take log2(FWC/RN) instead (or divide the dB figure by ~6.02) if you want it in stops. That's very simple, relatively speaking.
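
    To make the arithmetic concrete, here's a quick Python sketch; the full-well and read-noise figures are made-up illustration numbers, not specs from any real sensor:

    Code:
    import math

    # Hypothetical sensor values -- illustration only, not from any actual camera.
    full_well_capacity = 30000.0   # electrons at clipping
    read_noise_rms = 2.5           # electrons, RMS

    ratio = full_well_capacity / read_noise_rms

    dr_db = 20 * math.log10(ratio)   # engineering dynamic range in decibels
    dr_stops = math.log2(ratio)      # the same ratio expressed in stops

    print(f"DR = {dr_db:.1f} dB = {dr_stops:.1f} stops")
    # -> DR = 81.6 dB = 13.6 stops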

    However, there are two problems with this approach.


    ---


    One is that while the clipping point of the sensor is really easy to figure out, the average noise is absolutely not, at least when measured externally. Imatest's page on dynamic range has a lot of information I have not seen elsewhere explaining the challenges of actually measuring DR: https://www.imatest.com/solutions/dynamic-range/

    One huge takeaway is that even tiny amounts of internal flare from the lens optics affect dynamic range. That means the dynamic range of a sensor, once it's paired with a lens, has a hard ceiling (unless lens coatings get dramatically better), regardless of what the sensor itself is capable of.

    Imatest says they have never measured anything better than about 16.5 stops of DR, even though there are sensors out there specified at 20-25 stops of range. Simply adding a lens caps the measurable dynamic range well below those specs.
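
    As a toy model of why that ceiling exists (my own simplification, not Imatest's math): if the lens scatters even a tiny fixed fraction of the brightest scene light onto every pixel, that veiling-glare pedestal becomes the effective black floor no matter how clean the sensor is. The flare fractions below are assumed example values:

    Code:
    import math

    # Assumed veiling-glare fractions: the share of the clipping-level signal
    # that leaks onto the darkest part of the frame. Illustrative values only.
    for flare_fraction in (1e-4, 3e-5, 1e-5):
        # The darkest distinguishable level can't sit below the flare pedestal,
        # so the usable range is roughly clip_level / pedestal.
        dr_stops = math.log2(1.0 / flare_fraction)
        print(f"flare = {flare_fraction:.0e}  ->  ~{dr_stops:.1f} stops max")

    # flare = 1e-04  ->  ~13.3 stops max
    # flare = 3e-05  ->  ~15.0 stops max
    # flare = 1e-05  ->  ~16.6 stops max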

    They also say specifically that measured dynamic range is inflated by noise reduction. Because of that, they recommend only using raw data that is stripped of any noise reduction and minimally demosaiced.

    They say specifically that "Noise reduction can have a profound effect on DR measurements. In particular, SNR = 1, which is a criterion for the DR limit in some standards, may never be reached."

    That means noise reduction can prevent you from seeing where the noise floor is reached (SNR = 1 is where the signal and the noise floor are equal). If you can't see that bottom limit, you have no way of actually measuring the average noise of the sensor.

    So, yes, mathematically, comparing the output of a camera that applies heavy noise reduction to one that is processed as minimally as possible is never going to be a fair comparison.
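
    A quick way to convince yourself of that (a pure simulation, nothing to do with any specific camera's pipeline): take a dark patch whose true SNR is below 1, apply even crude averaging as a stand-in for noise reduction, and the measured SNR jumps well above 1, so the floor the standard looks for never appears.

    Code:
    import numpy as np

    rng = np.random.default_rng(0)

    # A very dark patch: true SNR = mean / sigma = 0.8, i.e. below the SNR = 1 floor.
    signal, sigma = 0.8, 1.0
    patch = signal + rng.normal(0.0, sigma, size=(400, 400))

    def snr(img):
        return img.mean() / img.std()

    # Crude stand-in for noise reduction / supersampling: 4x4 pixel binning.
    # Averaging 16 pixels cuts the noise roughly 4x while leaving the mean alone.
    binned = patch.reshape(100, 4, 100, 4).mean(axis=(1, 3))

    print(f"measured SNR, minimally processed: {snr(patch):.2f}")   # ~0.80
    print(f"measured SNR, after averaging:     {snr(binned):.2f}")  # ~3.2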

    On top of that, Imatest actually has their own alternative to the Xyla chart because of challenges in using that chart accurately. The much brighter initial steps were leaking into the rest of the image due to lens flare, causing an inaccurate reading. (However, this leakage inflated the measured dynamic range rather than reducing it.)


    ---


    The second problem is that what we measure as dynamic range is actually considered something completely different.

    Measured dynamic range is not Dynamic Range at all; it is a similar but separate concept, the Signal-to-Noise Ratio, and it is that SNR-limited number we actually care about, not the engineering Dynamic Range.

    The discrepancy comes from how a sensor engineer defines Dynamic Range versus what we consider dynamic range in practice.

    To a sensor engineer, Dynamic Range in sensor-land is just the mathematical ratio between Full Well Capacity and Read Noise. It is not subjective; it is hard math. You have two numbers, you take the ratio between them, and that ratio can only be one thing.

    However, that hard-number Dynamic Range does not include the impact of simply putting a lens in front of the sensor. It also does not include any other type of noise, only Read Noise. And when there is plenty of light available, the main noise limiting the Signal-to-Noise Ratio is something called Shot Noise.


    ---


    Shot Noise is inherent to light itself: photon arrivals follow a statistical pattern known as the Poisson distribution. That distribution means lower light levels produce a noisier image, regardless of anything else.

    Basically, photons do not arrive all at once, hitting the sensor evenly; they arrive in a random, staggered way, and that randomness shows up as noise.

    The only way around the inherent noise of light (you can't currently escape it) is to increase the full well capacity. If you can capture more photons, the signal grows faster than the shot noise, which lowers its relative effect.
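
    If the Poisson part sounds abstract, the practical upshot is just that shot-noise-limited SNR grows as the square root of the photon count, so a deeper well buys a better best-case SNR. A rough sketch, with arbitrary example photon counts:

    Code:
    import math

    # Shot noise for a Poisson process: sigma = sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
    for photons in (100, 1_600, 25_000, 100_000):
        snr = math.sqrt(photons)
        print(f"{photons:>7} photons -> shot-noise SNR ~ {snr:.0f} ({20 * math.log10(snr):.0f} dB)")

    #     100 photons -> shot-noise SNR ~ 10 (20 dB)
    #    1600 photons -> shot-noise SNR ~ 40 (32 dB)
    #   25000 photons -> shot-noise SNR ~ 158 (44 dB)
    #  100000 photons -> shot-noise SNR ~ 316 (50 dB)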

    This website has a good breakdown of all of this, including the chart below, which shows that SNR spans fewer stops than the engineering Dynamic Range: https://www.lumenera.com/blog/unders...paring-cameras

    [chart from the Lumenera article: measured SNR vs. engineering dynamic range]
    This quote sums it all up: "The result is that a larger dynamic range is always preferable because it allows for a higher signal-to-noise ratio, but it is not guaranteed. SNR will always be less than the dynamic range because it is limited by the noise in the image and is not always maximized due to challenging lighting conditions, exposure time limitations, and the choice of optics."
    Reply With Quote  
     

  7. #37  
    Senior Member Bastien Tribalat's Avatar
    Join Date
    Dec 2009
    Location
    Cannes area, France
    Posts
    813
    I'm late to the party and the rabbit hole but whatever.
    Another (reputable) team of testers from France has conducted those tests (here) and compared their results with what they found running the exact same tests on an Arri Alexa Mini and a Canon C300 Mark III.

    So first, the TL;DR :
    Alexa Mini : 14+ stops (up to 15 in RAW after denoising)
    RED Komodo : 14 stops (up to 14+ in RAW after denoising)
    C300 Mark III : 13+ stops (up to 14+ in RAW after denoising)
    And now, a few more details.







    VIDEO EDITING - COLOR GRADING - VFX
    APPLE FINAL CUT PRO, AVID MEDIA COMPOSER
    ADOBE CREATIVE CLOUD, DAVINCI RESOLVE, REDCINE X PRO...
    Reply With Quote  
     

  8. #38  
    Senior Member Christoffer Glans's Avatar
    Join Date
    Jun 2007
    Location
    Stockholm, Sweden
    Posts
    4,760
    So I've been in talks with Gunther at CineD as well to explain the criticism, and I think the consensus really boils down to the test having been performed correctly (after the decode mistake, which he genuinely feels bad about, was fixed). Gunther is not really at fault here; there's just a lack of information, and that has spiraled the community into negativity.

    The problem is that the article does not explain the differences between the numbers. First, it doesn't pinpoint how in-camera processing affects the Xyla test, or that comparisons between RAW systems and in-camera-processed systems aren't really valid because of that difference. Second, the interpretation of the test varies depending on what we're actually trying to conclude. It's easy to blame Red for claiming 16.5+ stops when it doesn't have that, but... it does: just count the visible bars on the Xyla chart, and it's 16.5+ stops. They're just not usable stops. Measuring unprocessed R3D gets you closer to the numbers Gunther arrived at, so as a reference test it's valid, as long as only unprocessed RAW is compared against unprocessed RAW and all in-camera-processed systems are left out of the equation.

    This is why the French test is an interesting comparison since it reaches 14 stops and is closer to what I get from the Xyla test when processing RAW with supersampling and NR.

    So basically, the problem is more about how the article is framed, which let the idea of testing misconduct spiral out of hand, along with the pitchforks against Red for "lying" about stops. It shows how important it is to communicate the variables and details of a test in order to make clear what is actually being measured and how to interpret the resulting numbers.

    For some reason, Gunther is unable to join Reduser and can't explain or defend the methodology here at this time, but until then, hopefully we can reach an understanding of what the real issues are.
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600 Red Komodo #002397
    Reply With Quote  
     

  9. #39  
    Moderator Phil Holland's Avatar
    Join Date
    Apr 2007
    Location
    Los Angeles
    Posts
    12,077
    Quote Originally Posted by Christoffer Glans View Post
    For some reason, Gunther is unable to join Reduser and can't explain or defend the methodology here at this time, but until then, hopefully we can reach an understanding of what the real issues are.
    I don't have access to new member approvals, but we'll get him registered to post.

    Here's the link he's been providing in many online replies about their testing methods:
    https://www.cined.com/the-cinema5d-c...N_cSsQAV-UKjS4
    Phil Holland - Cinematographer - Los Angeles
    ________________________________
    phfx.com IMDB
    PHFX | tools

    2X RED Monstro 8K VV Bodies, 1X RED Komodo, and a lot of things to use with them.

    Data Sheets and Notes:
    Red Weapon/DSMC2
    Red Dragon
    Reply With Quote  
     

  10. #40  
    Senior Member
    Join Date
    Apr 2007
    Location
    New York, NY
    Posts
    1,045
    Quote Originally Posted by Bastien Tribalat View Post
    I'm late to the party and the rabbit hole but whatever.
    Another (reputable) team of testers from France has conducted those tests (here) and compared their results with what they found running the exact same tests on an Arri Alexa Mini and a Canon C300 Mark III.
    I saw that test, but I do question the usefulness of judging DR entirely from a waveform... It seems they credit the Komodo with more DR than the C300 Mark III entirely because of the way the log curve is balanced, which seems somewhat meaningless when you're trying to judge what can be brought up from the shadows in a 10-bit, or even 12-16-bit, source.

    Also, there's no mention of the fact that you can clearly see in their Komodo waveform that the top two stops of DR appear to rely entirely on the built-in highlight recovery, which may only work for monochromatic detail at certain white balances, among other issues. And then I wonder whether, if one were to apply highlight recovery in post to the ARRIRAW or C300 Mark III raw, those cameras would also pick up an extra 1-2 stops of DR. Definitely seems like a problematic issue.

    From some of the other tests I've seen, it does appear those highlight-recovery stops really can't be relied on for certain kinds of tones and detail, but they may particularly flatter the apparent results of a monochromatic Xyla chart...
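
    For what it's worth, here's a toy sketch of why channel-based highlight reconstruction tends to hold up only for near-neutral detail: it guesses a clipped channel from the channels that haven't clipped yet, assuming the underlying color was neutral after white balance. This is a generic illustration I made up, not RED's actual highlight-extension algorithm:

    Code:
    # Toy model of channel-based highlight reconstruction -- not any camera's real pipeline.
    CLIP = 1.0

    def recover_highlights(r, g, b, wb=(1.0, 1.0, 1.0)):
        """Estimate a clipped channel from the unclipped ones, assuming the
        scene color was neutral once the white-balance gains are removed."""
        channels = [r, g, b]
        gains = list(wb)
        unclipped = [c / k for c, k in zip(channels, gains) if c < CLIP]
        if not unclipped or len(unclipped) == 3:
            return channels  # everything clipped, or nothing clipped: leave it alone
        estimate = sum(unclipped) / len(unclipped)  # the neutral-scene assumption
        return [c if c < CLIP else estimate * k for c, k in zip(channels, gains)]

    # Near-neutral highlight: the guess lands close to the truth.
    print(recover_highlights(1.0, 0.95, 0.93))   # clipped red rebuilt as ~0.94

    # Strongly colored highlight (say, a warm practical): the same guess is badly wrong,
    # which is why recovered stops can't be trusted for saturated tones.
    print(recover_highlights(1.0, 0.45, 0.20))   # red rebuilt as ~0.33, nowhere near reality
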
    Noah Yuan-Vogel | noahyv.com
    Reply With Quote  
     
