Dragon misconceptions and clarification...

He wasn't using one for this test.

What low pass filter did you use? Thx.

Footage looks great, by the way - especially considering how much the sensor is being stressed in some circumstances. This will certainly reignite interest in the Epic for DPs.

+1 to the folks at RED for an optional low-pass filter on the Epic, please.
 
Just wanted to clear that up for you so you might avoid getting banned... not by RED, but by productions who use RED cameras.

My God, the cult is growing strong here. LOL. Didn't Rick Darge own multiple RED cameras totaling in the tens of thousands of dollars? He still got banned for posting his polite opinion.
 
Andrae, I think you are referring to someone who took issue with what he said. I'm not good at remembering names, so I'll just refer to him as the gentleman who owned a bunch of cameras and took issue with something stated negatively about the performance of RED cameras in some way or another. Let me just say that gentleman really had my attention. Someone with that big of a buy certainly saw enough of the benefits of being RED-centric to put his money where his testing led him.

You had the right conversation... just attributed the weighted end of the scale to the wrong person.

But I'm also not sure if that was a rebuttal to Rick Darge or to someone else. There are so many people throwing digs at RED after the Toia Dragon Test that it is hard to keep up with all the different ones who seem to choose to be fault finders.

OBTW Andrae, hope things are going well for you. Haven't seen you post here since you said you were selling your cameras. The Dragon, by all appearances, looks like a good reason to bet on RED again. '-)
 

His name is Greg Strause (Respect to you sir for owning 14 RED Epic cameras)

http://www.reduser.net/forum/showthread.php?103853-Dragon-misconceptions-and-clarification/page16

Post 158.

"Flame away on Redspace, even Redgamma2. But there is no baked in look. You could always make Raw look great once we had RedColor - back in the day you just had to know how to move it around. Once you had 16flt EXR with Redcolor2, the complaints should have ended on the look front. I have 14 epics - I earn my money with them too. i deal with all the BS of people preferring Alexa - selling me on why a big bulky 3k camera with less high speed capabilities is better. But stock out of the box the images line up well chroma wise - just that the Alexa is 3k and has a 'bloomy' baked in look that you can't remove."
 
Alexa records 2K ProRes 444 12-bit.
In a way, scanned film has that "baked in" look as well. There is no traditional metadata there. :)

You're right about the 444, I was just typing away. My point still stands, though... As for scanned film, it's a different animal, but it follows the RAW paradigm more than anything else. With film, there are several processes and techniques that can be applied while developing the negative. There is also the freedom to explore different and altered looks or processing during the scanning process, before any decisions are made to bake in a particular setting at the RGB level.
 
Yes, you can push and pull film, as well as do silver-retention processes like bleach bypass, but that is pretty much it. Once the film is developed (i.e., "debayered") and scanned with the D-min and D-max set (i.e., the "metadata" adjusted), the look is baked in. There is no way to manipulate scanned film as you could with metadata in RAW files, such as altering the color temperature, tint, or exposure. So I don't see much difference between Alexa ProRes and scanned film.
 
In my opinion, it would be wise for all of us to be patient, wait for the official rollout of footage (in R3D form), and start to form our opinions when there is software to process it. I mentioned this on another forum, and I think it is appropriate to mention here as well. When making an informed analysis of image quality, four things are required.

1. Camera originals
2. Proper tools for the playback and manipulation of said footage (in this case, the new version of Redcine-X and subsequently 3rd party implementation of new color science)
3. Calibrated viewing environment capable of displaying the full potential of the footage (uncalibrated computer monitors, or even poorly calibrated broadcast monitors don't qualify)
4. Analysis tools such as scopes and image data tools

A compressed MP4, although a glimpse into what Dragon may offer, really does not tell the full story. Therefore, some of the negative comments I have seen in various places around the internet are, in my opinion, missing the mark. When RED is comfortable and ready to deliver production-ready cameras and software, that is when these conversations really begin in earnest. Until then, we are just left doing a lot of guesswork, which means very little in the real world. Mark's test footage achieved what he was going after: finding where the sensor breaks. One series of tests, not the end of the story. I wish people could understand that. Then again, we are all eager to know what it's capable of, so I understand people jumping the gun in their analysis. Human nature, I guess.

This is good food for thought when you consider 6K and software! I am hoping you'll keep your hand in, Jim, if only to see what it is you have really accomplished in entertainment as it plays out... Just saw "Stand Up Guys" and it was a masterpiece of writing and filmmaking! Wow!

I asked you once if your intent with Scarlet wasn't really to revolutionize the potential of television production and you said, "No!" and yet... Haven't you?

Now I have Scarlet 0123 and it is in good working order, although it does (some number of times) scare the hell out of me when I hit record and it doesn't. Outside of that, I am thrilled with the picture and the opportunity (regardless of many computer and actor issues) it provides. Hoping you will eventually reconsider the idea of battle-tested Epics as an offer for Scarlet owners who do not wish to trade in or upgrade at this time. Considering that Adobe CS6 doesn't do 6K, and I have yet to hear about interchange with Scarlet Dragon, I will certainly be keeping my ear to the ground on post! Thank you for everything you do!
 
Yes, you can push and pull film, as well as do silver-retention processes like bleach bypass, but that is pretty much it. Once the film is developed (i.e., "debayered") and scanned with the D-min and D-max set (i.e., the "metadata" adjusted), the look is baked in. There is no way to manipulate scanned film as you could with metadata in RAW files, such as altering the color temperature, tint, or exposure. So I don't see much difference between Alexa ProRes and scanned film.
There's some truth in this. And I've encountered situations with film D.I.'s where I didn't like the scans on certain scenes and made them go back and redo them, because I felt they were skewed too far one way or the other in terms of DMin and DMax.

I would still rather work with Alexa raw (like with Codex), and too often I see DPs with Alexas relying too much on on-set monitors with questionable calibration and not using the histograms correctly. Having said that, it's a pretty bullet-proof camera. I don't doubt that Dragon will give DPs more range, but I think the post options of Alexa continue to push things in their direction, at least on the projects I work on these days. If and when Red can match this capability, then it'll be an even playing field.
 
The only attraction of in-camera "baking" filters, such as low-pass or low-con filters, is for those who want immediate results straight off the camera, those who don't want to do the legwork or don't know how to, those who don't want to pay someone else to do it, those who don't have the time to spend on it, or those who just don't care to.

That's not true. One of the best parts of a lowcon filter is that it takes light outside the dynamic range of the sensor and moves it down into the captured range.

Imagine a lens flare, because that's effectively what a lowcon filter is: a big flaring surface. If you only have a 0.0-1.0 range image, then you don't know how bright that highlight is. It could be 1.1, it could be 100.0. A flare or glint from a 100.0 will be about 100 times brighter than one from a 1.1, but you can't tell anymore, since both are just "1.0". HDRx could fix some of this, but it's not a perfect solution, since a glint on a car might get "stretched" across the frame and drop below 1.0, or the glint might be gone halfway through the exposure, etc. Once we get 30 stops of dynamic range, baking in filters will just be for the sake of "immediate" results. But there are a lot of optical effects that depend on HDR imagery, and by HDR I don't mean 14 stops of info.

A great example would be a grad filter. If your clouds are blown out, applying a gradient in post won't restore the cloud detail; it'll just make a constant gray gradient in your burnt-out sky.
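
To put some rough numbers on that, here's a quick NumPy sketch. The scene values and filter strengths are invented, purely to illustrate the clipping argument:

```python
import numpy as np

# Scene-referred sky values: a cloud edge next to two glints, one just
# over the sensor's range (1.1) and one far over it (100.0).
scene = np.array([0.6, 1.1, 100.0])

# Capture WITHOUT an optical grad: everything clips to 1.0 and the
# ~100x brightness difference between the two glints is gone for good.
captured = np.clip(scene, 0.0, 1.0)
print(captured)                        # [0.6 1.  1. ]

# "Fixing it" with a gradient in post just scales the clipped values;
# both highlights stay identical: a flat gray sky, no cloud detail.
print(captured * 0.5)                  # [0.3 0.5 0.5]

# Capture WITH a 1-stop optical grad over the sky: the attenuation
# happens BEFORE the clip, so the near-range highlight survives.
print(np.clip(scene * 0.5, 0.0, 1.0))  # [0.3  0.55 1.  ]
```

The optical version still clips the extreme glint, but the cloud edge and the moderate highlight keep their relative levels, which is exactly the information the post-applied gradient can never get back.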
 
Once we get 30 stops of dynamic range, baking in filters will just be for the sake of "immediate" results. But there are a lot of optical effects that depend on HDR imagery, and by HDR I don't mean 14 stops of info.
I'm not convinced that display devices can reproduce 30 stops, or even 15 stops. We constantly have to compress the dynamic range in color-correction anyway, usually for dramatic purposes. HDR still has problems with certain kinds of motion, so I don't think that's quite the answer yet, either.
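
For what it's worth, the display-side squeeze is easy to see in numbers. A toy sketch, using Reinhard's simple x/(1+x) curve purely as an example operator, not anyone's actual pipeline:

```python
import numpy as np

# 16 scene values spaced one stop apart, mid-exposure around stop 7.
stops = np.arange(16)
linear = 2.0 ** (stops - 7.0)

# A simple global tone curve mapping everything into display range.
display = linear / (1.0 + linear)

for s, d in zip(stops, display):
    print(f"stop {s:2d} -> display {d:.3f}")
# The bottom stops stay nicely separated, but the top five or six
# stops all pile up within a few percent of 1.0.
```

That pile-up at the top is the compression we end up doing by hand in the grade, choosing which of those stops actually get differentiated on screen.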
 
I'm not convinced that display devices can reproduce 30 stops, or even 15 stops. We constantly have to compress the dynamic range in color-correction anyway, usually for dramatic purposes.
Yes, but if you have a diffusion filter you need to maintain energy conservation. Both of these white boxes are compressed for display at 1.0. However once you start processing them, they behave very differently.
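
Here's the same idea as a rough NumPy sketch, with invented intensity values, just to show the behavior:

```python
import numpy as np

def blur(row, k=5):
    # Energy-conserving box blur, standing in for a diffusion filter.
    return np.convolve(row, np.ones(k) / k, mode="same")

box = np.zeros(15)
box[6:9] = 1.0              # a "white" box already clipped at 1.0
hdr_box = box * 50.0        # same box, but with 50.0 of real energy

# Both boxes are identical once compressed for display...
assert np.array_equal(np.clip(box, 0, 1), np.clip(hdr_box, 0, 1))

# ...but diffuse them and they behave very differently: the clipped
# box just fades at the edges, while the HDR box blooms to full
# white well beyond its borders.
print(np.clip(blur(box), 0, 1).round(2))
print(np.clip(blur(hdr_box), 0, 1).round(2))
```

The clipped box never exceeds 0.6 after the blur; the 50.0 box is still solid 1.0 several pixels outside where it started.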


HDR still has problems with certain kinds of motion, so I don't think that's quite the answer yet, either.
HDR has no inherent problem with motion. HDRx, RED's multi-exposure trick, has trouble with motion because it blends multiple exposure intervals. And if you had a sensor with more than three colors, like the speculated Panavision camera, where you have full green/ND green, full red/ND red, etc., you could have spatial artifacts and problems. However, if you just had an HDR sensor with 30 stops of native dynamic range, you wouldn't have any more motion artifacts than you get going from 8 stops in video to 16 stops in Dragon.
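
A crude way to picture the difference, with made-up numbers. This models a generic two-exposure blend, not RED's actual HDRx algorithm:

```python
import numpy as np

# A 1D "frame" with a specular glint that moves between the two
# exposure intervals of a multi-exposure capture.
frame_a = np.zeros(12); frame_a[3] = 40.0   # normal exposure interval
frame_x = np.zeros(12); frame_x[7] = 40.0   # short exposure interval

clip_a = np.clip(frame_a, 0, 1)
clip_x = np.clip(frame_x / 16.0, 0, 1)      # 4 stops less exposure

# Naive blend: wherever the normal frame clips, trust the short frame
# (scaled back up). But the glint has moved, so the short frame has
# nothing to offer at index 3, and the blend loses the glint entirely.
blend = np.where(clip_a >= 1.0, clip_x * 16.0, clip_a)
print(blend)            # all zeros: a motion artifact, not a real image

# A single sensor with enough native range just integrates the whole
# interval: the glint shows up as ordinary motion blur, values intact.
print(frame_a + frame_x)
```

The blend step is where the motion trouble lives; with enough native dynamic range there's no blend step at all.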
 
Yes, you can push and pull film, as well as do silver-retention processes like bleach bypass, but that is pretty much it. Once the film is developed (i.e., "debayered") and scanned with the D-min and D-max set (i.e., the "metadata" adjusted), the look is baked in. There is no way to manipulate scanned film as you could with metadata in RAW files, such as altering the color temperature, tint, or exposure. So I don't see much difference between Alexa ProRes and scanned film.

You can adjust exposure and color temperature, etc.; it just won't be quite as mathematically accurate without a really good LUT to get you back to true linear. In fact, I would wager that, thanks to FLUT, white balance is done in RGB space *after* the debayer, at which point RED isn't performing white balance or exposure until it's a standard floating-point RGB image anyway. RED has simply characterized a reference debayer point very accurately. And that wasn't magic from having RAW files; it was, I'm sure, from spending a loooong time with lots of very precise sensors, calibrating their color science. You could also do it optically with film: if you wanted to adjust your color temperature, you could always scan your film through an 80D filter.
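
None of this is a claim about RED's actual internals, but the basic math of why "get back to true linear first" matters is easy to show. White balance in linear RGB is just a per-channel gain, and the same multiply on log-encoded data gives a different answer:

```python
import numpy as np

def white_balance(rgb, gray_reading):
    # Per-channel gains that make a gray-card reading neutral.
    return rgb * (gray_reading.mean() / gray_reading)

# A gray card under warm light: red reads high, blue reads low.
gray_card = np.array([0.28, 0.20, 0.12])
pixel     = np.array([0.50, 0.40, 0.30])

print(white_balance(pixel, gray_card))      # the linear-domain answer

# A made-up log encoding, standing in for any non-linear camera curve.
enc = lambda x: np.log2(1 + 1000 * x) / np.log2(1001)
dec = lambda y: (2 ** (y * np.log2(1001)) - 1) / 1000

# Balancing the log-encoded values and decoding does NOT match the
# linear result: the channel ratios come out skewed.
print(dec(white_balance(enc(pixel), enc(gray_card))))
```

Hence the point about a really good LUT: the operation itself is trivial, but only once the data is genuinely linear.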

This is what ACES should theoretically deliver. If you have an ACES file, it should have all of that hard-earned characterization work applied in reverse, moving whatever capture format you have into a universal standard. Theoretically, if every sensor were identical and everyone wrote a perfect reverse ACES profile, you could shoot on film stock or on a RED and end up with an identical image. Then, instead of round-tripping by re-applying the RED -> ACES or FILM -> ACES inverse transforms, you could flip it on its head and apply the FILM look to the RED ACES file and the RED look to the FILM ACES file (since theoretically they're identical). Now, there are a lot of "theoretical" statements in that paragraph, but since apparently suggesting that in practice it might be impossible to perfectly match two sensor formats is a bannable offense, I'll leave it to others to speculate on the likelihood that the theoretical potential of the workflow comes to pass.
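
In the simplest possible model, that round trip is just matrix algebra. The two "IDT" matrices below are invented placeholders; real ACES input transforms are built from exactly the kind of painstaking characterization described above:

```python
import numpy as np

# Hypothetical input transforms: capture RGB -> shared ACES-like space.
IDT_CAM = np.array([[ 1.10, -0.05, -0.05],
                    [-0.02,  1.05, -0.03],
                    [ 0.00, -0.08,  1.08]])
IDT_FILM = np.array([[ 0.95,  0.03,  0.02],
                     [ 0.01,  0.98,  0.01],
                     [ 0.02,  0.05,  0.93]])

scene = np.array([0.18, 0.35, 0.62])      # "true" scene color, in ACES

# Each medium "sees" the scene through the inverse of its transform...
raw_cam  = np.linalg.inv(IDT_CAM)  @ scene
raw_film = np.linalg.inv(IDT_FILM) @ scene

# ...and applying the IDTs lands both on identical ACES values, which
# is what would let you swap the looks between them afterwards.
print(IDT_CAM  @ raw_cam)    # ~[0.18 0.35 0.62]
print(IDT_FILM @ raw_film)   # ~[0.18 0.35 0.62]
```

Every "theoretically" above is load-bearing: this only works to the degree the transforms (far more complex than 3x3 matrices in reality) perfectly model each sensor, which is exactly what's in doubt in practice.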

RAW does highlight the benefits of having a well-characterized image that gives you not just arbitrary data but photometric information as well. So instead of being "50% blue", you know that "50% blue = approximately this empirical real-world color". But all of the fanciness that we associate with RAW (color temperature, exposure, black point, etc.) isn't an inherent feature of RAW itself, nor done in RAW space. The important part of RAW in the equation is that it's a *consistent* starting point for software to perform those operations. But even creating a consistent starting point is next to impossible, since you have to characterize the entire imaging chain (assuming that's even possible): you have to know what color shift the lens is going to imbue, what color shifts will happen from IR or UV light hitting your sensor, what effects the color spikes in your lights will create, and what the effect of your IR/UV filtration will be if it's not built into the camera (will it give a green cast, etc.). Even NDs aren't perfectly "neutral". And how your sensor/film reacts to all of this is much akin to drug interactions.
 
Really sorry if I am asking a previously answered question and didn't see it. But with the camera being 6K native, will it be downsampling when shooting 4K? Or will it be cropping into the image, as we currently do when switching to 2K on the MX sensor? I can see myself still shooting 4K or 5K for most jobs.
 
Yes, Bob, it works like it currently does. At the core of what RED's cameras are about is windowing and supersampling, and that's not changing. However, because Dragon has smaller pixels, you will notice that 5K on Dragon is now very close to the Super 35mm 3-perf format, whereas about 5.5K is going to be closer to the sensor size of Mysterium-X at 5K today. Basically, you're getting more resolution if you consider the "format size".

For a 4K finish I think we'll see a lot of folks shooting 6K, 5.5K, 5K, and 4K. Just depends on their wants and needs really.
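
The arithmetic behind those format sizes is easy to sanity-check, assuming roughly 5.0 µm photosites for Dragon and 5.4 µm for Mysterium-X (approximate figures, not official specs):

```python
# Active sensor width = horizontal photosite count x photosite pitch.
DRAGON_PITCH_MM = 0.0050   # ~5.0 um, assumed
MX_PITCH_MM     = 0.0054   # ~5.4 um, assumed

for res in (6144, 5632, 5120, 4096):
    print(f"Dragon {res / 1024:.1f}K window: {res * DRAGON_PITCH_MM:.1f} mm wide")

print(f"MX 5K window: {5120 * MX_PITCH_MM:.1f} mm wide")
print("Super 35mm 3-perf aperture: ~24.9 mm wide")
```

That puts Dragon at 5K around 25.6 mm (right next to Super 35 3-perf) and Dragon at about 5.5K around 28.2 mm (right next to MX at 5K's ~27.6 mm), which is the "more resolution per format size" point.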
 
Thanks so much, Phil, that makes it much clearer to me. Looks like a great upgrade from the MX.
 
This is fun...

When digital entered audio, it was thought to be "pure", and all kinds of recording techniques to compress dynamics were immediately ditched, while people insisted that they wanted "the full" dynamics and would not compress the digital signal.

That led to some silly usages.

Digital is not "pure" (and neither is analogue). It is a way of representing a signal.

If you record an orchestra (with very diverse dynamic and spectral characteristics), you do not put a single mic on every instrument (even though that would give the most dynamic experience). Quite often, two mics at quite a distance can be a very good way to start, because of the diffusion, compression, and limiting that air and distance give. And the funny thing is that it will sound like an orchestra and not like a ton of peaks...

Now, this parallels image acquisition very easily.

The diffusion filters didn't come with digital, and some things need to be done prior to recording to get more of the signal you want and less of the signal you do not want.
As all digital editing is inherently destructive, and no signal is really "neutral", it can often be a good idea to start closer to the signal you want and use the bits to record THAT, instead of everything you do not need.

Controlling the optical path is still part of image acquisition. Luckily, "digital" does not change that... :)

Going back to an audio analogy.

I still have heard no one claim that Bonham's drums on Zeppelin I would have sounded any better with a ton of close mics, rather than the way they were recorded:
with a U87 two meters over the drums, and him adjusting the balance of the instruments in his headphones while playing...

It's about capturing a Good Signal For The Purpose, and nothing that helps you do that is "wrong".
Making rules like that just limits your creative options and craft as a photographer...
 
That's not true. One of the best parts of a lowcon filter is that it takes light outside the dynamic range of the sensor and moves it down into the captured range.

Imagine a lens flare, because that's effectively what a lowcon filter is: a big flaring surface. If you only have a 0.0-1.0 range image, then you don't know how bright that highlight is. It could be 1.1, it could be 100.0. A flare or glint from a 100.0 will be about 100 times brighter than one from a 1.1, but you can't tell anymore, since both are just "1.0". HDRx could fix some of this, but it's not a perfect solution, since a glint on a car might get "stretched" across the frame and drop below 1.0, or the glint might be gone halfway through the exposure, etc. Once we get 30 stops of dynamic range, baking in filters will just be for the sake of "immediate" results. But there are a lot of optical effects that depend on HDR imagery, and by HDR I don't mean 14 stops of info.

A great example would be a grad filter. If your clouds are blown out, applying a gradient in post won't restore the cloud detail; it'll just make a constant gray gradient in your burnt-out sky.


Gavin,

I am very familiar with how it works, yet I maintain my strong opinion on this, as I can achieve great results in post (never the same for every shot, but great results nonetheless) and still have my original RAW file to do with as I please.


Mind you, I used to have every filter known to man at one point for my photography; it is just the way I feel now: no filters to bake an image in camera, as far as I can avoid it.

The only exception is a polarizer, which I try to use full time, as I have yet to find a situation where I would have been better off without it, and this is what makes the RED Motion Mount more attractive to me now than ever.
 
And the pola gives you an extra stop at the top, which I dig.

After carrying MB14-kitted R1s/Epics with ND 2.4, the Motion Mount seems like a dream come true.
 