
Dragon vs Alexa

"Usable" is a fun term with no basis in reality. And since only a handful of people have defined a usable stop (a patch that is exposed clearly, measurable when read, and containing a specific amount of captured detail rather than noise), the best way to settle it is to actually MEASURE it.

So for instance, with RED Weapon Dragon 6K through REDlogFilm using the Skin Tone - Highlight OLPF (this does make a difference, especially in highlights/clipping and noise floor texture/noise/grain) here's what you get:

phfx_REDWeaponTest2015_xyla21_ISOPatches_RLF.jpg



A few core concepts to discuss from there. RED Dragon has always had a recommended ISO range of 250-2000. With the Low Light Optimized OLPF you can certainly stretch that further into the higher ISO ratings beyond that.

In a typical real-world scenario you have access to 16+ stops if you utilize the recommended base ISO of 800.

How many patches can you "see" here:

phfx_REDWeaponTest2015_xyla21_DC2RLF_ISO0800.jpg



Remember the Base ISO is typically the approximate area where you have equal stops above and below Middle Gray.

So generally speaking:

phfx_REDWeaponTest2015_highlightLatitude.jpg



Now OLPF selection does indeed play a role here. The chart above is for the Skin Tone - Highlight OLPF, which does a great job of holding onto highlights in general. Rating the other OLPFs, I'm more in the ISO 1000-1280 range for the Standard OLPF and ISO 1280-1600 for the Low Light Optimized OLPF.

Here's a rather deep look and test at Dragon's Dynamic Range:
http://www.reduser.net/forum/showthread.php?137883-RED-Weapon-Dynamic-Range-and-Latitude-Examined
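With a raw workflow, re-rating the ISO just shifts where middle gray sits in the linear range, so each doubling of ISO trades a stop of shadow latitude for a stop of highlight latitude. A minimal sketch of that relationship (the latitude figure at base ISO is a made-up placeholder, not a measured value):

```python
import math

BASE_ISO = 800
STOPS_ABOVE_AT_BASE = 6.0  # hypothetical highlight latitude at ISO 800

def stops_above_mid_gray(iso, base=BASE_ISO, at_base=STOPS_ABOVE_AT_BASE):
    """Highlight latitude at a given ISO rating.

    With raw capture the sensor exposure is fixed; rating a higher ISO
    places middle gray further below clip, buying one extra stop of
    highlight room per doubling of ISO (and costing one in the shadows).
    """
    return at_base + math.log2(iso / base)

print(stops_above_mid_gray(1600))  # one stop more highlight room than at 800
print(stops_above_mid_gray(400))   # one stop less
```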
 
The thing is though, once the camera manufacturer measures the absolute DR of the sensor they will arrive at a factual contrast/DR number: full well capacity vs noise floor. That is the fence around your playground. Let's say absolute max contrast is 20 stops.

From here, you can design an image profile where you spread the available DR "as thin as possible" in order to make every stop discernible, i.e. going for the max. Or you can target a more modest DR, say 14 stops, and create an image profile that uses the "left over" stops as roll-off or a noise guard.

I can imagine an image profile that treats the whites so gently that 3 technical stops blend into 1 discernible stop in visual tests. Yet the image going into that last stop might come across as 'creamy'.

Personally, I feel that's a little of what's going on here. Had Sony, ARRI, RED, Panasonic, Canon or BMD been given the same sensor, I wouldn't be surprised if we'd see a delta of 2-3 stops in their respective color sciences, depending on the target customer they are developing for. Some aggressive, some more conservative.

And if you want a color science that's more balanced overall, you're surely going to end up with less contrast compared to someone who is fine with letting one of the color channels peak higher and using that extra luminance to boost DR.

Obviously, these are just my own thoughts on the topic.
 
While I know what you mean, I don't buy the whole "rolling in and rolling out" when it comes to DR discussions. DR is the value between the extremes—what happens in between doesn't matter.

Yes, there's no such thing as roll-off at the sensor level, where light is encoded linearly. That's all down to the transforms that are available for you to use.

But that is essentially what I meant, Arri's color science is doing some smoothing at the extremes, while with Red you have to do that yourself.
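That distinction (a linear sensor, with any roll-off applied later as a transform) can be sketched. The knee position and shoulder shape below are arbitrary illustrations, not any camera's actual curve:

```python
import math

def hard_clip(x, white=1.0):
    # What the sensor itself does: linear up to full well, then gone.
    return min(x, white)

def soft_rolloff(x, knee=0.8, white=1.0):
    # A grading/display transform: identity below the knee, then an
    # exponential shoulder that eases values toward white instead of
    # slamming into it. The sensor data underneath is still linear.
    if x <= knee:
        return x
    return white - (white - knee) * math.exp(-(x - knee) / (white - knee))

# Half a stop and a full stop over the white point:
print(hard_clip(1.5) == hard_clip(2.0))       # True: information destroyed
print(soft_rolloff(1.5) < soft_rolloff(2.0))  # True: still distinguishable
```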



In a typical real-world scenario you have access to 16+ stops if you utilize the recommended base ISO of 800.

Is that actually the recommended Base ISO? Isn't 400-640 closer? That's what I've encountered.


Then again, Alexa doesn't have HDRx, and you CAN get good results with it. Never mind the grading (color grading such a large DR is pretty hard to squeeze into Rec.709), but here's a video where I shot 25 stops with some post work to get it to look "normal". I'm not sure any other high-end camera can get you this if you wanted it.

 
Is that actually the recommended Base ISO?

Yes. And you'll notice that RAW Check is based on ISO 800 for Dragon.

The real question is whether that will change for Helium, as it is different.
 
CMOS sensors do not have roll-off. That's something that can be mimicked in grading, or color science, or whatever you want to call it. Both cameras shoot RAW, so no, there is no such thing as roll-off in camera.

However, they have different clipping points. Weapon/RED has several different clipping points; it simply clips differently depending on which OLPF you use. With the Standard OLPF I would say it clips one stop before the Alexa.

So if they are both exposed with the same amount of light, RED has one stop less of highlight protection. By that logic, if you stick an ND.3 in front of the Weapon, they have the same clipping point.


So with an ND.3 on the Weapon with a Standard OLPF, you could say the cameras are pretty well balanced at the high end. What's interesting then is that it's quite easy to see that Weapon has at least a stop more coverage in the low end.

That tells me that RED has about two more stops than the Alexa and is about one stop more light sensitive.
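The ND arithmetic above checks out: optical density is the log10 of the attenuation, so each 0.3 of density is one stop. A quick sketch:

```python
import math

def nd_to_stops(density):
    # An ND filter of optical density d transmits 10**-d of the light;
    # converting that attenuation to stops gives d * log2(10) ≈ 3.32 * d.
    return density * math.log2(10)

print(round(nd_to_stops(0.3), 2))  # ≈ 1 stop, as used above
print(round(nd_to_stops(0.9), 2))  # ≈ 3 stops
```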

It's difficult to see in this image, but it kind of explains how the DRs line up against each other.


PastedGraphic-6 by Björn Benckert, on Flickr
 
Hey Phil!

I've always struggled a bit to understand what you have going on here. Is this entire chart from a single exposure of the Xyla, which you have pushed and pulled in RCX with the ISO slider?

If so, at which ISO does the 1st chip on the left clip while the second chip next to it does not? (in other words, at which ISO is the Xyla chart exposed for a standard test?)

Thanks for helping me understand!

 
Interesting.

From my understanding, the roll-off is not a function of the sensor per se. RAW does not necessarily mean unprocessed data from the sensor. The information from the sensor is processed in very complex ways before it even gets to the point where it becomes what we call "uncompressed RAW". A lot of the way it interprets the color and exposure info, through the CFA and the ADCs off the sensor, is cooked up with "special sauce": colorimetry. So your test does not prove that the Alexa isn't allocating extra stops towards a gradual roll into the clipping point. But I am not an engineer at Arri, so I cannot speak with any certainty about how they arrive at their final image.
 
Hey Phil!

I've always struggled a bit to understand what you have going on here. Is this entire chart from a single exposure of the Xyla, which you have pushed and pulled in RCX with the ISO slider?

If so, at which ISO does the 1st chip on the left clip while the second chip next to it does not? (in other words, at which ISO is the Xyla chart exposed for a standard test?)

Thanks for helping me understand!

In the graphic with all of the patches, the purpose is to show how the ISO rating affects what is seen.

At any given point RED cameras capture the total possible Captured Dynamic Range for a given scene. The ISO mapping is just metadata.

In the combination chart each patch is actually exposed, meaning not clipping. The highlight patch of 1 is fully exposed without R, G, or B clipping.

The reason I do this is so that there's a no-questions-asked, fully exposed patch without any possible clipping that can be counted as a stop. As you move past this point, individual channels will clip based on the light source the Xyla uses and which OLPF you're using.

Digging further it has been interesting to see which, how, and where each OLPF begins to clip under different lighting conditions, but that's a whole other conversation.

When I use the Xyla, or when I do my patch test, I do it both ways. The Xyla I work with twice: once with a full clip on the first patch and once without a clipping patch. For the patch test I fully clip and measure everything individually, as it removes any chance of contamination, and I measure up to 36 stops of light (overkill). It also eliminates certain things that the Xyla isn't good at "controlling". If those numbers match up, then usually the data is correct. Next is a measured exposure test, which takes a hell of a lot of time. If all three of these things match, then life is good. If something is off, I have to start over from scratch. The Xyla is the one-shot solution (though it's best to do it in 2-5 shots with the shades). Patches are individual exposures: takes a long time, but uses a better light source. Measured is a reflected (i.e. not back-illuminated) method and takes forever.
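The "ISO mapping is just metadata" point above can be sketched: with a raw workflow the recorded linear values are fixed, and the ISO is applied as a reversible gain at decode time (the function name here is illustrative, not a RED SDK call):

```python
def decode_at_iso(raw_linear, iso, native_iso=800):
    # The file stores one fixed set of linear sensor values; the ISO
    # slider in the decoder just scales them. Nothing captured changes.
    gain = iso / native_iso
    return [v * gain for v in raw_linear]

raw = [0.01, 0.18, 0.72]          # the same recorded values every time
print(decode_at_iso(raw, 1600))   # one stop brighter rendering
print(decode_at_iso(raw, 400))    # one stop darker rendering
```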
 
Interesting.

From my understanding, the roll-off is not a function of the sensor, per se. ... So your test does not mean that the Alexa is not allocating extra stops towards a gradual roll into the clipping point.

Yes, but no matter how you try to roll off the highlights, if you hit the clipping point of the sensor, those pixels cannot be held down or brought back no matter what post processing goes on. Again, the sensors are very much linear. For example, with RED, if you set the color space to Camera RGB and the gamma curve to Linear, then no pixels will be processed as white before they actually clip, and once they clip they cannot be retrieved. You can do the same with Alexa.
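The irreversibility described above is easy to demonstrate; a toy sketch with made-up linear scene values:

```python
def expose(scene_linear, clip=1.0):
    # A linear sensor records light up to full well; anything beyond is
    # stored as the clip value and the original level is gone for good.
    return [min(v, clip) for v in scene_linear]

captured = expose([0.2, 0.9, 1.7, 3.0])
print(captured)  # [0.2, 0.9, 1.0, 1.0]
# 1.7 and 3.0 both read back as 1.0: no later transform can separate them.
```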
 
The thing with the Alexa Alev III sensor is that it has a dual-gain read-out that has an uncanny ability to pull in a scene's top and bottom exposure extremes. Christoffer's 25-stop test is sorta what an Alexa looks like all the time. DR is elastic when shooting with an Alexa. The Red sensors behave more linearly. At least to me. Controlled lighting tests of DR have limited value. The best tests are with uncontrolled exposure and mixed color temps.
 
Marc Hutchings said:
I understand the Alexa has better colour reproduction but I've been told how the Alexa holds more range in the highlights several times now would like to know what the deal is.
There is no industry standard that every company uses to report dynamic range. You can't simply compare specs on paper. We don't know what method one company uses vs. another or how they measure the dynamic range at all.

At the end of the day, you need to test the cameras yourself or read the results of a third party you trust. Both cameras make great images. The camera is also only going to be as good as the person shooting it. If you don't know the ins and outs of the camera, the results won't be as good as someone's who does. You really want to test the cameras and stress them. Find their breaking point so you know how to work around it. These are not point-and-shoot like an iPhone. I say that because with an iPhone you can pretty much point and shoot and get a nice image.

This is a Red forum, so you will get pro Red responses. Go to an Arri forum and you will get pro Arri responses. Go to a 3rd party forum and get a lot of people bickering. :) test yourself, test yourself, test yourself. Oh and before I forget, test yourself! :)

Marc Hutchings said:
How people discuss the cameras with no accurate or accepted test seems crazy.
This is normal. The same thing occurs in every industry, e.g. TV display manufacturers make outrageous claims about the contrast ratio of their displays. Just like the dynamic range of a camera, no agreed-upon standard is used by anyone. Years ago a display manufacturer made some claims; it turned out they turned the display off when measuring black.

A pro will rent the cameras and run their own tests. Check out Shane Hurlbut's blog. He is a working DP with a blog that he charges for, and he publishes a lot of the camera tests he does.

One of my favorite review sites is Cook's Illustrated. They don't accept advertising; they survive on subscriptions. They are about as unbiased as you can get, and they are also great at their jobs. I have always wanted to apply their approach to reviewing consumer electronics.
 
The thing with the Alexa Alev III sensor is that it has a dual-gain read-out that has an uncanny ability to pull in a scene's top and bottom exposure extremes. ... DR is elastic when shooting with an Alexa.

No, it's not elastic. It might be wider than it would be without the dual-gain readout, but it's still linear, and when it clips, it clips. The rest is color science.
 
In yet another "camera vs. camera" thread it is beneficial to point out that cameras do not tend to battle each other.
That is a human tendency.

If put next to one another in most cases they just sit and watch, sometimes make some sounds.
 
IMHO, after doing actual comparisons of the cameras, the DR of a 6K DSMC2 is very similar to the Alexa's. So close that it's very hard to call, as a somewhat subjective judgment needs to be made on the noise floor. DSMC1 Dragon-equipped cameras are a half to two-thirds of a stop behind Alexa, and the MX chip multiple stops. Also note that these results would be slightly different if we were shooting 5K or 4K on the Red, as the noise floor would become apparent more quickly the smaller the area of sensor used.
BUT DR is only one metric in evaluating a camera, and the emphasis on DR is a hangover from the days when electronic cameras struggled with 8 stops. Now that we are cracking 14 stops, any further improvement in DR offers diminishing returns.
Red has taken a while to catch up with Arri's wonderful understanding of colour science and chip design. While I believe Red has caught up with Arri with the Dragon sensor, the Alexa has built a well-earned reputation.
I am very interested to see if the Helium chip will start to outperform the Arri chip in terms of DR and colour, although in terms of resolution we know which chip will win. But no one should be getting complacent, as Arri must have a new chip sitting in the wings. I suspect this next generation of top-end digital cameras will be the first to substantially surpass film acquisition.
 
There is no industry standard that every company uses to report dynamic range. ... At the end of the day, you need to test the cameras yourself or read the results of a 3rd party you trust. ... One of my favorite sites that reviews stuff is Cook's Illustrated.


I get what you're saying, but I'm not about to rent an Alexa to test the DR, which is why I thought I'd ask here and see what people's responses are. I've been reading this forum daily for a while now; I know certain people have a heavy bias towards RED, and I also know some guys who are pretty open and unbiased about the shortcomings of RED cameras, so I think I can get a good idea by asking here. I've already expanded my knowledge just reading these posts, so it has been useful to me. I'm already a member of the 'inner circle' and I do find some of his videos useful. I'll definitely check out Cook's Illustrated. Maybe I should have just stuck with my iPhone! :wink5:
 
The thing with the Alexa Alev III sensor is that it has a dual-gain read-out... DR is elastic when shooting with an Alexa. The Red sensors behave more linearly.

Alexa is definitely easier to grade. I'm not sure if it's just because of the color science, but when I'm pushing an image with Red you see clear clipping in the highlights, while on Alexa it rolls off without much further adjustment. People look at it and say they are both linear, but Alexa does something to the captured image, be it at the sensor level or the color science level, and it's quite notable.

I also found the HDR test feeling much more like grading Alexa footage, even though Alexa supposedly has fewer stops than non-HDRx Dragon. I always use roll-off in grading, softening the highlights and shadows. Red looks way too linear straight out of the camera, which is also why I rarely use any pre-gamma and grade from LOG all the time. So even though Alexa and Dragon "should" look the same in terms of DR, the truth is they don't, especially when you start grading. Why that is, I don't know, but real-world tests of real things beat the math and chart tests every time. There's a clear difference in grading between the two, and Alexa is the easier one when comparing. If I do HDRx with only 3-4 stops extra, so there isn't such an extreme difference between the tracks' exposure, I find it closer to how easy Alexa feels while grading.
 
The "highlight roll-off" mentioned here has not only to do with the sensor but also with the debayer and the color science used on it... which Arri has the cleanest of, resulting in a pretty neutral starting point... when it comes to highlight rendition, some say it reminds them of film (of the available digital sensors)...

Dragon is great, but you can get (selective) contamination and fringing if you look closely and start bending the image quite a bit... this is more apparent under artificial, non-high-quality lighting...

Helium is advertising better, cleaner color... we shall see now! ALL HAIL RED !!!! ;-)))))
 
Alexa is definitely easier to grade... Red is looking way too linear directly out of the camera. ... There's a clear difference working with grading between the two, and Alexa is the easier one when comparing.

Log is not linear. Go to advanced settings and export as Linear; then the picture is linear.

Alexa log and RED log are different from each other and bend in different ways.
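The log/linear distinction can be sketched with a toy curve. This is NOT RedLogFilm or Arri LogC, just a generic log mapping that puts linear middle gray (0.18) at code 0.5 and spreads the stops evenly:

```python
import math

SPAN = 16.0  # total stops the toy curve encodes (8 above, 8 below mid gray)

def lin_to_log(x):
    # Equal code-value steps per stop: doubling x adds 1/SPAN to the code.
    return 0.5 + math.log2(x / 0.18) / SPAN

def log_to_lin(c):
    # Exact inverse of the curve above: no information is lost either way.
    return 0.18 * 2 ** ((c - 0.5) * SPAN)

print(lin_to_log(0.18))   # 0.5: middle gray
print(lin_to_log(0.36))   # one stop up adds one equal code step (1/16)
print(log_to_lin(lin_to_log(0.01)))  # round trip recovers the linear value
```

The point of a curve like this is purely distributional: it spends code values evenly per stop so shadows aren't starved, but the data it encodes is still the sensor's linear capture underneath.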
 
Log is not linear. Go to advance settings and export as Linear then the picture is linear.

Alexa log and red log is different from each other and does bend in different ways.

I didn't say that, though. It's linear at the sensor level, and by Red looking "too linear" I meant that it clips abruptly rather than smoothly.

The "highlight roll-off" mentioned here has not only to do with the sensor but also with the debayer and the color science used on it...

Since the debayer process is done after shooting on Red, couldn't Red improve the R3D processing algorithms in the SDK?
 