
Hey all, I just changed over the backend. After 15 years I figured it was time to give it a bit of an update. It's probably going to be a bit weird for most of you, and I'm sure there are a few bugs to work out, but it should work pretty much the same as before... hopefully :)

Dragon....

I love RED. I have spent tens of thousands on you. I am not a big dude. I am just a believer. But you guys keep breaking your word. And that's not cool. I bought into M. I was told an upgrade would cost 6k. Then you devalued my investment by 10k, increased Dragon by 4k, and failed to deliver. I am only one voice.

Brett, this is what I believe. Red is doing the VERY best they can. They are giving 200% of their blood, sweat and tears, working crazy hours under immense pressure. Someone who "breaks their word" is someone who is casual about their commitments and casual about backing away from those commitments. That is NOT Red. But they ARE breaking their BACKS to bring you the most technologically advanced digital cinema camera in the world. They are making history, which, the last time I checked, is NOT very easy. Even what you claim is a $14K loss is NOTHING compared to the value of what your Epic-X and your soon-to-be Epic-Dragon can do. As others have stated before... these cameras are UNDER priced. So show some gratitude. Cheer them on in this 9th inning. They are rounding third and heading for home (American baseball) and the last thing they need is someone booing and throwing tomatoes at them. This is Red: not greedy corporate liars and swindlers, but dreamers who are swinging for the fences (American baseball, again) every time they step up to the plate. Do they ever strike out? Sometimes. Rarely. But it isn't for a lack of effort or care. And when they hit the home run, which they usually do, the stadium will roar and you will get to keep that home run ball and say, "Yeah, I've been a part of history and I shoot Red."
 
Ok, I have a last-minute question: Will the word "Dragon" etched on the side of the brain only appear on newly purchased Epic-M Dragons? Or will upgrades have it too? The reason I ask is I just ordered a new Epic with the intention of upgrading to Dragon as soon as I get my turn, starting in September. But marketing-wise it might be important to have the word on the camera for those behind-the-scenes photos.
 
Upgrades will get it too. Confirmed multiple times.
 
When you send in the Brain for the upgrade, it will need to be sent in with the Side SSD Module, which they'll engrave for you... ;)

Additionally, if you bought the MX Fan Kit upgrade, they'll also change the front icon to the Dragon.
 
Ordered. I hope it gets here on time for the historic GP. Last year, I have to admit, I struggled with the Epic at 7:00 during some very important practice sessions, where I get most of the high-quality "glamour" high-speed shots. I was pushing 1250 ISO and was underexposing at 100 fps with a 2.8 lens. I "need" the Dragon's extra sensitivity for these shots. If it gets here on time, I will make sure not to disappoint the camera. Cannot wait!
 
@Kyle,

With a 16-bit (linear) binary number, you can express values from 1 up to a value that has been doubled 16 times (2^16 = 65,536).
To encode 16 stops of brightness (luma), you therefore need at least 16 linear bits for each RGB channel.

(With clever use of dithering, it is possible to create shades that go beyond this limitation.)

As the Dragon sensor is capable of 16+ stops of dynamic range, those least significant bits can actually be fully utilised.

Many camera sensors may be used to store 14 or 16 bits... but with a mediocre signal-to-noise ratio, what ends up recorded in those lowest bits is the 'digital grain' of the noisiest analogue component of the camera.

When RED announced the power supply was being redesigned for the DRAGON upgrade, my take on that was that they never expected their (expensive) sensor technology to overtake the S/N ratio of the (much simpler to design) power supply.

I believe the metric you would be most interested in is the signal-to-noise ratio of the Dragon sensor (closely followed by the DR), rather than worrying about how many bits are used in the encoding.

AJ

As far as color goes, this is still 16-bit, yes?
 
Ah I see. This along with a little bit of research makes sense. Beautiful.

heh heh heh. Baby steps :)

Why baby steps? All of your competitors already have GPU debayer, and you've been in the RAW game longer than they have!
 
With a 16-bit (linear) binary number, you can express values from 1 up to a value that has been doubled 16 times (2^16 = 65,536).
To encode 16 stops of brightness (luma), you therefore need at least 16 linear bits for each RGB channel.

I'd have to say you're oversimplifying here, and in somewhat the wrong way. It's commonly, and incorrectly, perceived that the bit depth of your pixel container dictates the number of stops which can be represented. Bit precision is just that: it's all about the precision, or the number of identifiable values or steps, along your luma scale. We can further complicate things by trying to guesstimate or analyze what other information may be contained in a range of values beyond just luma info. At face value, a camera could encode 20 stops into a 16-bit container and still have 3,276 gradations from one EV to the next, for a total of 65,536 discrete luma values from the darkest value to the brightest. But that is an oversimplification as well. To make full use of each discrete value as a specific luma quantity, the signal would have to be 100% noise free.

It all comes down to signal to noise in this situation, and if it's as clean as they claim, a 16-bit container should do quite well here, even if the camera "sees" a range of 20 to 22 stops. In reality, I'm expecting the camera to see 18-19 stops, with a good 17.5 useable in most real-world controlled lighting scenarios. Probably still within a 16-bit container, but I wouldn't mind if they bumped to 18, because I love extra precision where I can get it, even if it is overkill. 20 stops at 18 bits on a linear scale is 262,144 discrete luma values, or 13,107 discrete luma values from one EV to the next -- assuming zero noise. Of course, there will be measurable noise, even if it's as good as they claim, so this becomes an entirely academic speculation of a discussion. ;)
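The arithmetic above can be reproduced in a short Python sketch. Note these per-EV figures are averages, as in the post: a linear scale actually spreads its values very unevenly, giving the top stop half of them.

```python
# Average number of distinct code values per stop (EV), assuming a
# noise-free signal spread over the stated dynamic range -- the same
# back-of-the-envelope arithmetic as in the post above.
def avg_values_per_stop(bits, stops):
    return 2 ** bits // stops

print(avg_values_per_stop(16, 20))  # 3276  (65,536 total values)
print(avg_values_per_stop(18, 20))  # 13107 (262,144 total values)
```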


Why baby steps? All of your competitors already have GPU debayer, and you've been in the RAW game longer than they have!

Semantics, but debayer processing is not what slows down R3D processing. It's the wavelet compression. Wavelets are not something that has been all that compatible with GPUs in the past; it takes highly specialized custom ASICs to accelerate wavelet computations. That's a big reason why we have to use the Rocket card, and soon the Rocket X, to get wavelet acceleration, and it comes at a steep price. Newer GPUs are becoming more capable, and I still think that the Intel MIC platform (Xeon Phi, FKA Knights Corner) is a real promising solution here. I've seen it demoed doing wavelet reconstruction / MJPEG processing. It's hard to quantify that in terms of R3D acceleration, but a pre-release Xeon Phi compute card was pushing 4 simultaneous 1080p MJPEG streams at 60Hz. So I think there's something to be looked at there.
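For a feel of why wavelet codecs behave differently from simple per-pixel work, here is a minimal one-level 1D Haar transform in Python. This is purely illustrative (RED's actual R3D wavelet filter is not public): it shows the recursive low-pass/high-pass split that creates the serial data dependencies which historically mapped poorly onto GPUs.

```python
# One level of a 1D Haar wavelet transform: the signal splits into a
# half-resolution average (low-pass) and differences (high-pass).
# Real codecs recurse on the low-pass band level after level, which is
# what makes the computation hard to parallelize naively.
def haar_forward(signal):
    avg  = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, diff

def haar_inverse(avg, diff):
    out = []
    for a, d in zip(avg, diff):
        out.extend([a + d, a - d])
    return out

data = [9, 7, 3, 5, 6, 10, 2, 6]
lo, hi = haar_forward(data)
assert haar_inverse(lo, hi) == data  # perfect reconstruction
```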

When it comes to actual debayer or de-mosaic algorithms, GPUs love that sort of stuff. However, it's not particularly taxing to begin with. ARRI RAW is fast and easy to process compared to R3D for two reasons: 1) no decompression is needed, just proper de-mosaic and color transformations / LUT application; and 2) it's relatively small. Alexa 2.8K is essentially 1/4 the number of pixels per frame vs. the 5K FF Epic frame.
 
Red also didn't like having their codec reverse engineered in the early days, so there is some decryption overhead that takes place as well with R3D files, as I understand it.
 
But then, in all honesty, you also haven't seen DRAGON footage compared to 65/70mm in order to voice an honest and informed opinion... Just saying... ;)

We content ourselves that digital surpassed 35mm... Whatever Dragon does is icing on the cake, but I have no doubt it will indeed be closer to 65/70mm film than 35mm!


Let's just wait and see what kind of fire Dragon spits before saying anything else about comparing one to the other.


What we really need is a 65/70mm digital sensor... there is a certain look to sensors of that size that cannot be replicated (at least to my knowledge) on a 35mm chip.
 
Red also didn't like having their codec reverse engineered in the early days, so there is some decryption overhead that takes place as well with R3D files, as I understand it.

Decryption time should be negligible. *grumbles the person who reverse engineered it...*

As to 16-bit... I "doubt" that RED is storing the data as 16-bit linear. There are far more efficient ways to store data than straight 16-bit values. Saving straight RGB channels is also inefficient for compression. EXR, for instance, uses 10 bits plus an exponent, and as a result its 16-bit container can hold way more than 16 stops without any notable degradation.

EXR for instance is:

1 bit for sign (+/-),
5 bits for exponent (stop),
and
10 bits for color (mantissa).

So that's 32 stops (2^5), each with a full 10 bits of precision across the values in that stop. And it handles negative values.
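That 1 + 5 + 10 layout is the IEEE 754 half-float used as EXR's 16-bit pixel type, and Python's `struct` module can unpack it directly. A small sketch to verify the field split:

```python
import struct

# Split an IEEE 754 half-float (EXR's 16-bit pixel type) into its
# sign bit, 5-bit exponent ("stop"), and 10-bit mantissa
# (precision within the stop).
def half_float_fields(x):
    (bits,) = struct.unpack('<H', struct.pack('<e', x))
    sign     = bits >> 15            # 1 bit
    exponent = (bits >> 10) & 0x1F   # 5 bits
    mantissa = bits & 0x3FF          # 10 bits
    return sign, exponent, mantissa

# 1.5 = 1.1 (binary) * 2**0 -> biased exponent 15, mantissa 0b1000000000
print(half_float_fields(1.5))   # (0, 15, 512)
print(half_float_fields(-1.5))  # (1, 15, 512)
```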
 
What we really need is a 65/70mm digital sensor... there is a certain look to sensors of that size that cannot be replicated (at least to my knowledge) on a 35mm chip.


There is a name for that: EPIC 645... 9K, 65 megapixels, as it was projected some time ago... ;)
 
What we need is a post workflow that isn't tied to the RedRocket. IMO, this is the biggest bunch of BS that has come from a camera manufacturer: that it requires a PCI-e card to play back and render its material, i.e. to make the camera useful in post. I'll say this until the RedRocket cards are extinct, and RED stops blaming their customers with generic emails accusing them of wrongful abuse of the cards' heatsink.
This makes Dragon, or any offering from RED now and in the future, completely unattractive to me. Seriously, $6K is not a bargain, nor is it a solution that we should be stuck with as the only method to deal with this footage now or in the future. I've screamed this before and I'll do it again, because I actually really like what RED has done for our industry and feel this is a huge reason they get so much criticism and certain producers refuse to use their cameras. More K's, more problems.
 
What we need is a post workflow that isn't tied to the RedRocket. IMO, this is the biggest bunch of BS that has come from a camera manufacturer: that it requires a PCI-e card to play back and render its material, i.e. to make the camera useful in post.

There is a non-Rocket solution: Adobe Premiere. I edit 5K .R3D footage on my aging Intel i7 930 and nVidia GTX 470 all the time. Sure, I can't play back at full resolution, but my monitor is only 2.6K, so I don't need full-res playback. A RR would undoubtedly speed up the final render-out times, but it certainly isn't a necessary component for the post workflow.
 
The offline/online solution is still widely used; it's still used a lot more than native editing. You can work on a slower computer, and drive space for the dailies is smaller, so you can fit everything on a LaCie Rugged and work on a laptop if you wanted. And the output render factor is important as well, especially when you are getting notes on revisions every half hour and you need to send a recut out as fast as possible. That 2-3 minute render from a DNxHD 36 timeline turning into a 15-minute render adds up immensely over the course of a day, as well as over the lifespan of the project in post.

Transcoding will always be part of the business. It doesn't matter how fast computers get; that just means the offline will be even faster. The offline will always be faster than working with master source footage if you are working with very low-bitrate dailies.
 
There is a non-Rocket solution: Adobe Premiere. I edit 5K .R3D footage on my aging Intel i7 930 and nVidia GTX 470 all the time. Sure, I can't play back at full resolution, but my monitor is only 2.6K, so I don't need full-res playback. A RR would undoubtedly speed up the final render-out times, but it certainly isn't a necessary component for the post workflow.

Kyle, with all due respect, I'm not referring to the individual owner-operator here who is making it work in their bedroom or little home office. I'm talking about facilities, DITs, and the like that may not be editing at all and don't care what Premiere can and can't do. Professionals need to work in real time, and at speeds faster than real time, to accomplish dailies turnarounds the same day or the next day at the latest. On big shows that requires multiple machines with multiple RRs just to do the same thing that they can do with other codecs at a much lower cost. I'm fully aware of how I can make a short-form TVC or industrial / corporate piece quickly on a non-calibrated, simple MacBook Pro system, but that's not what is appropriate for larger-infrastructure post facilities and DITs. With the rate of failure on these cards it's a terrible investment, yet one that strong-arms these companies, big and small, into owning them due to the lack of a simpler solution, which I'm convinced could be done. But where's the money in that? The money is in the treatment, not the cure.
 