I'd bet almost everyone with an Epic would happily slap down $5000.
I'm talking module sales.
Plenty pay $4750 for a Red Rocket.
I'm going to throw this out there, but I say if the client wants you to shoot on Alexa, shoot on Alexa. Give them what they want, take the money and smile. At the end of the day it's just a camera, the workflow on Alexa is nice and simple (heck, shoot to DNxHD and just hand over the files at the end of the day - if your producer is using Avid, that'll be super simple and straightforward).
No. You're saying "sell your Epics"???
I'd like to shoot REDCODE RAW AND ProRes/DNxHD, and have the perfect camera.
And there's no sensible reason that I shouldn't be able to.
If Red's happy enough to sell a Rocket solution, they should also see the sense in a module solution.
Jim acknowledged this years ago, and announced H.264 and possible ProRes modules.
Then it was left to AJA, and it wasn't an adequate fix. Unreliable, chunky... And a separate hire.
With a Dragon sensor and a ProRes/DNxHD module recording alongside the RAW SSD, there'd be no producer arguments unless...
I'm guessing that the camera's architecture precludes the use of a module for varispeed shots, except via playback on camera.
So it would not really cut it when you're shooting lots of high speed.
Many users have been asking for a ProRes module solution for years. I would start using mine immediately.
We mostly find this a pretty satisfactory and comfortable range for a viewing experience. The question becomes how and when we compress the much broader dynamic range and gamut that modern sensors can deliver into the limits of our viewing formats. There's nothing new about this; Ansel Adams faced the same issue getting the most out of his negatives when making prints. It's about preserving the most relevant ranges of detail and value to meet the creative intent of the image.
The difference between recording Bayer raw sensor data and encoded video is the difference between capturing the full gamut and dynamic range the sensor is capable of, and capturing a compressed representation of that range according to some engineer's or manufacturer's idea of what that representation should look like.
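As a rough illustration of why that compression costs you something: quantizing linear sensor values straight to 8 bits collapses the shadow stops into a handful of codes, while a log-style encoding spreads codes across the stops. This is a minimal Python sketch using an invented log curve and a hypothetical 14-stop sensor; it is not any camera manufacturer's actual transfer function.

```python
import numpy as np

# Hypothetical sensor: one sample per stop across 14 stops of dynamic range.
linear = 2.0 ** np.arange(-13, 1)  # 2^-13 ... 2^0 (normalized linear light)

# Naive 8-bit encoding of linear light: the deep shadow stops all
# round down to the same code value and are lost.
linear_codes = np.round(linear * 255).astype(int)

# An invented log encoding that maps those 14 stops onto 0..1 before
# quantizing, so each stop lands on its own distinct code.
def log_encode(x, stops=14):
    return (np.log2(np.maximum(x, 2.0 ** -stops)) + stops) / stops

log_codes = np.round(log_encode(linear) * 255).astype(int)

print(len(set(linear_codes.tolist())))  # several shadow stops share a code
print(len(set(log_codes.tolist())))     # every stop gets its own code
```

The same trade-off is what a camera's internal encoder makes for you when it writes ProRes or DNxHD instead of raw: a curve someone else chose decides which parts of the sensor's range survive.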
Raw capture for motion is still in its infancy as a technology. A serious still photographer might spend hours tweaking the raw data of a single image to arrive at what he considers the best representation of his artistic intent. There doesn't seem to be much time devoted to that part of the raw workflow yet for most motion work. I know there are colorists and effects people who are great artists in their own right, and you do see some of this happening at the high end with the best work, but the idea hasn't penetrated very far into more mundane, video-centric production communities.