Thread: Komodo 12.2 stops of dynamic range in CineD test.

  1. #21  
    Senior Member | Joined Dec 2010 | Toronto & Vancouver | 4,176 posts
    Seems a little odd that the test was done poorly, yet only the RED succumbed to the problem of really poor above-mid-grey performance and false-positive recovered highlight stops (the top two stops of the P6K still have clear RGB definition/capture even with the improper testing). Also a little ironic that, after fixing the colour temp issue, the results didn't change with regard to the highlight recovery phenomenon/missing ~2 colour channels in the highest recovered stops. (Sidenote: did they re-shoot the chart with 3200K-balanced incandescent/full-spectrum lights, or just change the WB to 3200K?)

    While it's cool that everyone is focusing on the Imatest/Xyla chart results and coming to their own DR conclusions (which are apparently moot), isn't the practical/real-world (and saturated) shot, which demonstrates why those top two stops aren't all there, more important? CineD could not recover accurate skin-tone highlights after one stop of overexposure (for comparison, they mentioned getting 4 stops over from the P6K).

    While I'd like to chalk this up to bad testing, even RED's GioScope puts mid-grey at chip 11 of 16... which would put the skin tone at ~12, which aligns fairly accurately with CineD's results. But whatever, it must be that time of year... And hey, at least the massive green tint discrepancy was solved (which was more of a concern, to be honest).

    Quote Originally Posted by Noah Yuan-Vogel View Post
    I do think the issue of whether or not to include recovered highlights is a complicated one. Is Komodo the only camera currently to have highlight recovery built in and visible on the monitor during recording? I would say it is a bit complicated, since conceivably such highlight recovery is available to other raw cameras but not visible until processed in post. In your opinion, how should the Xyla chart be exposed with respect to RGB channel clipping? Should the top chip be counted as long as at least one color channel is not yet clipped?
    Admittedly, I presumed that's how IPP2 handles highlights on all RED sensors -- it takes the non-clipped channel(s) and ramps down saturation the closer it gets to clip, so that there's "detail/texture" there, but it lacks solid colour (hence it's good for improving perceived roll-off, but you can't recover those stops as a "usable" image unless it's a b&w project). I noticed as much when testing IPP2 on MX: more discernible detail, but a lack of colour. It works well for things that are monochromatic, like clouds, but not so well on, say, skin tones (it shows texture where there was none, but it's greyish when you try to bring it into usable range). Again, that's not a (b&w) chart, just anecdotal observation.
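    To make that concrete, here's a toy sketch in Python of the kind of behaviour I'm describing (my own simplification, not RED's actual IPP2 math): keep whatever luminance detail survives near clip, but fade the colour toward neutral the closer a pixel gets to it.

    Code:
import numpy as np

def rolloff_recover(rgb, clip=1.0, knee=0.8):
    """rgb: (..., 3) linear values, possibly with channels at `clip`."""
    rgb = np.asarray(rgb, dtype=np.float64)
    luma = rgb.max(axis=-1, keepdims=True)            # crude luminance proxy
    # how far into the roll-off zone this pixel sits (0 below the knee, 1 at clip)
    t = np.clip((luma - knee) / (clip - knee), 0.0, 1.0)
    neutral = np.repeat(luma, 3, axis=-1)             # fully desaturated version
    return rgb * (1.0 - t) + neutral * t              # fade colour out near clip

px = np.array([[0.95, 0.70, 0.55],    # warm highlight close to clip: texture kept, colour thinned
               [1.00, 0.80, 0.60]])   # one channel clipped: ends up neutral
print(rolloff_recover(px))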

    As for others, BRAW has a highlight recovery checkbox in Resolve (which someone above mentioned wasn't used for CineD's testing), so it'd be neat to see how that looks on the scope. If memory serves (and again, anecdotally, by eye), it's not nearly as aggressive as IPP2. Come to think of it, I'm pretty sure Canon raw has a 'Highlight Recovery' checkbox too.

    Oh, and not to muddy the waters further, but back when Weapon came out, someone noticed that if you set the raw white balance to something right near clip, then used a WB node (not the raw settings) to rebalance the image to the proper WB in post, you could get an additional stop or more of detail in the high end. It didn't work most of the time because the colours had a tendency to get wonky/thin (the correction node didn't always look good, and you'd have to selectively correct tons of other things in the frame). Pretty sure this is that, but on the scopes.
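    If anyone wants to see why the order of operations matters there, here's a toy illustration (again my own simplification, not how REDCINE-X or Resolve actually implement the R3D decode): bake big WB gains in and clamp at decode and neighbouring highlight values collapse to the same clipped number; leave the decode near clip and apply the rebalance as a float correction afterwards and the texture is still there to pull back, with the colour-thinness caveat above.

    Code:
import numpy as np

# two neighbouring highlight pixels; the blue channel carries the texture
scene = np.array([[0.90, 0.60, 0.46],
                  [0.90, 0.60, 0.52]])
# gains a daylight balance would apply to a warm/tungsten exposure (illustrative numbers)
wb_gain = np.array([1.0, 1.3, 2.2])

baked = np.clip(scene * wb_gain, 0.0, 1.0)     # WB baked in and clamped at decode
floated = np.clip(scene, 0.0, 1.0) * wb_gain   # decode left near clip, WB applied as a float node

print(baked[:, 2])    # [1.0   1.0  ] -> blue texture gone, both pixels read as clip
print(floated[:, 2])  # [1.012 1.144] -> still distinct; an exposure pull can bring it back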
    Last edited by Mike P.; 01-19-2021 at 02:54 PM.

  2. #22  
    Quote Originally Posted by AndreeMarkefors View Post
    As I wrote on another forum:

    Call it 8 stops, call it 20. As long as you compare camera tests done by the same individual/site you should be good. All is relative.

    I personally much prefer results that land around 12 stops than those that reach 16+.
    I read this as a half-hearted attempt at humor, and also a half-bitter expression of cynicism. And since we're all human, I'll say "I hear you!"

    But I'll also say that when the opportunity presents itself to use science to understand underlying systems, we should not treat objective truth as unknowable. Of course it is possible for people to do science wrong. And some people do it wrong intentionally, which is particularly insidious. But as long as people follow the cardinal rules (state your hypothesis, explain your methods, show your data, and be prepared to be corrected if others cannot reproduce your results), those who engage, and who practice well, should be trusted until proven otherwise.

    I saw data from a chart with red, green, and blue showing that the white balance for a tungsten source had not been corrected. I then saw data being dropped because of the inconsistencies that resulted from that error. I read further that there were some other mistakes made in terms of codec parameters. When a result is reported with such large asterisks, it's best to put that result aside and either make a fresh one or wait for somebody else to do so. If and when the experiment is done properly, in proper conditions, according to scientific best practices, we'll get an objective result. Which is good for what it is.

    The objective result won't tell us what's not being measured, but it will tell us what is being measured, within the standards of error of the technique. That's all we can ask for. And it's a whole lot more than "you say 8 and I say 20. Believe whatever you want!"
    Michael Tiemann, Chapel Hill NC

    "Dream so big you can share!"

  3. #23  
    Senior Member Christoffer Glans | Joined Jun 2007 | Stockholm, Sweden | 4,758 posts
    Quote Originally Posted by AndreeMarkefors View Post

    I agree with a lot of what you say, but they can't change their method around every now and then. Super-sampling and NR have not been part of their standardised test (I think they do mention it sometimes in the text though), so I wouldn't expect them to include it all of a sudden.

    The main win is that all cameras they test use a similar setup.
    But you can't standardize a test between a camera that shoots a raw format intended for post-processing and cameras that use heavy in-camera processing.

    The "in-camera processing" cameras arrive at a finished image with greatly reduced post-production possibilities, but it might look good straight out of the camera.

    A true RAW camera, however, does not look good straight out of the camera, but it greatly expands post-processing possibilities and produces better, more consistent images in the end. This even proved to be the case in the test, when he mentions that ProRes behaves better. Of course it does; it's supersampled from 6K, and these are people saying they're running a "lab test" without understanding what supersampling does to the final image. *facepalm*
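    The back-of-the-envelope version of that supersampling point, assuming uncorrelated per-pixel noise and an ideal averaging downscale (real debayer and scaler kernels won't match this exactly):

    Code:
import math

native = 6144 * 3240       # Komodo 6K photosite count
delivery = 3840 * 2160     # UHD end point
pixels_averaged = native / delivery           # ~2.4 source pixels per output pixel
noise_reduction = math.sqrt(pixels_averaged)  # ~1.55x lower noise standard deviation
extra_stops = math.log2(noise_reduction)      # ~0.63 stop gained at a fixed SNR cutoff
print(f"{pixels_averaged:.2f}x pixels -> ~{extra_stops:.2f} extra stops at the noise floor")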

    If you're going to compare true performance, the R3D needs to be processed into something closer to the processed images you get from an "in-camera processing" camera.

    There can’t be one standard for processed footage and non-processed footage.
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600 Red Komodo #002397

  4. #24  
    Senior Member | Joined Oct 2017 | 499 posts
    Quote Originally Posted by Phil Holland View Post
    Seems like the test was updated because Highest Quality was not chosen in Resolve, which of course has an impact on the IMATEST results due to how the R3D is rendered. They have now updated it and state:

    "IMATEST calculates 12.5 stops at SNR = 2 and 13.6 stops of dynamic range at SNR = 1"

    Now if we can suggest that they properly use the Xyla slides, you'll gain a bit more in the shadows. And they aren't counting two of the stops of captured information in the highlights. That essentially gets you to what everybody else is getting.

    No offense, because mistakes happen, but the damage is already done given how that kind of wildfire spreads on the net. Thankfully they corrected most of the issues.
    Sidenote: Phil, have you ever even used a BMPCC 6K? Wondering what you'd say or think.

  5. #25  
    Senior Member Karim D. Ghantous | Joined Oct 2011 | Melbourne AU | 2,083 posts
    This goes to show that good methodology is hard to come by. People still think that focal length causes 'compression', for goodness' sake. No wonder it's hard to do tests properly.

    I haven't seen this test and I don't really care to. But I get the impression that they used a tungsten source and did not use a filter in front of the lens. Is that right? If so... wow.

    A couple of years ago I saw a visual test between the Alexa and the Dragon. The Dragon 'won' by a small amount, IIRC. Of course I can't find it now.
    Good production values may not be noticed. Bad production values will be.
    Unsplash | Pinterest | Flickr | Instagram | 1961 (blog)

  6. #26  
    Senior Member Christoffer Glans | Joined Jun 2007 | Stockholm, Sweden | 4,758 posts
    Are there any unbiased, independent testers who are actually doing real tests? What about DxO? They tested the Dragon when it hit the 101-point mark in their testing history. I'd much rather see them test the Komodo.
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600 Red Komodo #002397

  7. #27  
    Senior Member | Joined May 2018 | 163 posts
    I'm not sure why so many have an issue with this test. It's my understanding that all cameras are tested using the same method; giving special consideration to each brand would skew the results. 12.5 stops is more than usable and right in line with what I expected from this camera. All manufacturers quote higher numbers and then seem to score lower in independent tests. Canon claims 16+ stops for the C300 Mk III with its DGO sensor, yet it only scores 13.1 in the CineD lab test. Sony claims 15+ for the FX9, but it only managed 11.5 stops in the same test. This puts the Komodo at essentially the same score as the Sony A7S III, though granted, the Sony will score much better at higher ISOs. It's a very workable result, especially considering the global shutter.
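    For anyone wondering what those SNR qualifiers mean in practice, here's a rough back-of-the-envelope sketch with made-up numbers (a simplified constant-noise-floor model, nothing like IMATEST's actual pipeline): the figure is counted from clip down to the darkest patch whose signal-to-noise ratio still meets the chosen threshold, which is a big part of why manufacturer headline numbers and lab numbers rarely agree.

    Code:
import math

clip = 1.0                     # normalized clip level
read_noise = clip / 2**13      # hypothetical noise floor ~13 stops below clip

def stops_at_snr(threshold):
    # darkest usable signal is the one whose ratio to the noise equals the threshold
    darkest_usable = threshold * read_noise
    return math.log2(clip / darkest_usable)

print(stops_at_snr(2))   # 12.0 -- the stricter CineD-style cutoff
print(stops_at_snr(1))   # 13.0 -- one stop more if any signal above the noise counts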

  8. #28  
    Senior Member Christoffer Glans | Joined Jun 2007 | Stockholm, Sweden | 4,758 posts
    Quote Originally Posted by Han Vogen View Post
    I'm not sure why so many have an issue with this test. It's my understanding that all cameras are tested using the same method; giving special consideration to each brand would skew the results. 12.5 stops is more than usable and right in line with what I expected from this camera. All manufacturers quote higher numbers and then seem to score lower in independent tests. Canon claims 16+ stops for the C300 Mk III with its DGO sensor, yet it only scores 13.1 in the CineD lab test. Sony claims 15+ for the FX9, but it only managed 11.5 stops in the same test. This puts the Komodo at essentially the same score as the Sony A7S III, though granted, the Sony will score much better at higher ISOs. It's a very workable result, especially considering the global shutter.
    How do you compare a RAW system, without proper post-processing, against cameras that use in-camera processing? If you don't process the RAW footage correctly before comparing and pulling the numbers, you basically half-bake the performance of that system. The biggest problem is that they standardize the test around straight-out-of-camera performance, which favours cameras that bake everything into the file, while skipping proper processing of the RAW files, so the footage from in-camera-processing cameras carries improvements you don't see on the RAW footage.

    The test needs to position each camera at a similar end point, not a similar starting point. It's not rocket science to understand why. Either put everything into ACES, apply NR, and render out at 4K, or apply each brand's LOG mode, apply NR, and render out at 4K. The end point should be an agreed standard delivery format; the most common right now is 4K. So bring every camera to a 4K end point, process the RAW footage correctly, and then compare the numbers.
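    Something along these lines is what I mean, as a rough schematic only (hypothetical placeholder functions, and scipy's zoom and gaussian blur standing in for a proper scaler and proper NR):

    Code:
import numpy as np
from scipy import ndimage

DELIVERY_HW = (2160, 3840)   # agreed 4K/UHD end point

def to_common_endpoint(linear_frame, nr_sigma=1.0):
    """linear_frame: HxWx3 scene-linear RGB, already colour-managed (e.g. into ACES)."""
    zoom = (DELIVERY_HW[0] / linear_frame.shape[0],
            DELIVERY_HW[1] / linear_frame.shape[1],
            1.0)
    scaled = ndimage.zoom(linear_frame, zoom, order=1)                       # resample to the end point
    return ndimage.gaussian_filter(scaled, sigma=(nr_sigma, nr_sigma, 0.0))  # identical NR for every camera

# Every camera's chart frame would then go through the identical call before measurement,
# e.g. measure(to_common_endpoint(decoded_r3d)) and measure(to_common_endpoint(decoded_braw)),
# where measure(), decoded_r3d and decoded_braw are placeholders for the lab's own tools.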

    If you don't do this, you aren't measuring actual performance; you're choosing a specific workflow that suits one type of system and judging everything by it. As a test, it's simply invalid and flawed.
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600 Red Komodo #002397

  9. #29  
    Senior Member Christoffer Glans | Joined Jun 2007 | Stockholm, Sweden | 4,758 posts
    Essentially this is the problem and a possible solution.

    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600 Red Komodo #002397

  10. #30  
    Moderator Phil Holland | Joined Apr 2007 | Los Angeles | 12,077 posts
    Quote Originally Posted by Han Vogen View Post
    I’m not sure why so many have an issue with this test.
    I've engaged with Gunther online about this now and am awaiting a few answers.

    My main issues with this test are:

    - When published, the data was off from what has now been revised, to the tune of ~2 stops, which produced a somewhat viral wildfire spread of misinformation. This has since been corrected, which is good.
    - Why the light and shadow slides are not being used to measure DR. They come with the Xyla and are used to negate the impact of the light side's flare lifting, contaminating, and adding noise to the noise floor. Yes, this means all of their test results are invalid, by the way, due to the unique optical pathways in each system as well as the different lenses likely used in these tests. You need to mitigate all variables when performing work like this.
    - Not fully digesting or acknowledging how highlight information is captured and measured, how it is impacted by a tapered highlight roll-off, and simply "not using" a full stop or two of captured dynamic range.

    I'm not the only one who does tests like these and has noticed this.

    The stranger notion is that he is now saying "hence my suspicion that an in-built highlight recovery algorithm is at work here" while not recognizing that as captured DR.

    "Xyla21 chart is shot off-center to avoid lens flares" is not the same as using those slides. Not at all.

    In terms of color stability, a good +/- exposure test shows where luminance and chromatic information land within the usable latitude.

    One way or another, visibly with our own eyes and compounded by the incorrect noise floor reading, we are seeing different results than what is being presented.
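    To put hypothetical numbers on that flare point (my own illustrative figures, not measured data, and treating the flare contamination simply as an addition to the noise floor, which is a simplification of how its shot noise and non-uniformity actually pollute the dark patches): the chart reading ultimately reduces to the ratio between clip and the noise floor, so anything lifting the floor eats directly into the measured figure.

    Code:
import math

clip = 1.0                 # normalized clip level
true_floor = clip / 2**14  # hypothetical clean noise floor, 14 stops below clip
flare = clip / 2**12.5     # hypothetical flare contamination on the dark patches

clean_dr = math.log2(clip / true_floor)
measured_dr = math.log2(clip / (true_floor + flare))
print(f"clean floor: {clean_dr:.1f} stops, flare-lifted floor: {measured_dr:.1f} stops")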
    Phil Holland - Cinematographer - Los Angeles
    ________________________________
    phfx.com IMDB
    PHFX | tools

    2X RED Monstro 8K VV Bodies, 1X RED Komodo, and a lot of things to use with them.

    Data Sheets and Notes:
    Red Weapon/DSMC2
    Red Dragon
