POWER OF RED

Jarred Land

Most of you that are here already know all this... but we still get calls every day asking what the benefits of shooting on our cameras are, so we put together a little video and a quick webpage on RED.COM explaining some of the benefits of shooting RED.

http://www.red.com/power-of-red


It's pretty much the same info that Mark Toia's video from a few years ago tells so well, just in a more condensed, short-attention-span form :)

enjoy.
 
Hey Jarred, any chance stabilization can be done in the future using gyro metadata captured in camera? Is that even possible? A la SteadXP but as a plugin for the major NLEs?
 
Not a doubt in my mind. Everything you mentioned in the spot is something I've taken full advantage of. The ability to recrop from a 4K to a 1080 image was an eye-opener with my RED One. Stabilization, check. All of these are wonderful features along with a fantastic codec.

Keep it up!
 
Hey Jarred, any chance stabilization can be done in the future using gyro metadata captured in camera? Is that even possible? A la SteadXP but as a plugin for the major NLEs?

The data from the accelerometers inside the camera is already in the R3D metadata. People have used it in VFX, Adobe has made it work, and of course all the other NLEs can as well.
 
A simple extraction of raw gyro data from an R3D into a plain text file in RCXP would be much appreciated by many, I'm sure. The current way of getting the data, via the REDline command line in a shell, is likely used by only a few.
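For what it's worth, the text-file export can be scripted today. The sketch below assumes REDline can dump per-frame metadata as CSV (the exact invocation and the gyro column names are assumptions; check `REDline --help` and the headers your SDK version actually emits):

```python
import csv
import io

# Hypothetical workflow: dump per-frame metadata with REDline first, e.g.
#   REDline --i CLIP.R3D --printMeta 3 > clip_meta.csv
# (flag levels vary by SDK version -- verify against your install),
# then parse the gyro columns out of the CSV.

# Assumed column headers; match them to what your REDline actually writes.
GYRO_COLUMNS = ["Gyro X", "Gyro Y", "Gyro Z"]

def extract_gyro(csv_text: str) -> list[tuple[float, float, float]]:
    """Pull the three gyro columns out of a per-frame metadata CSV dump."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [tuple(float(row[c]) for c in GYRO_COLUMNS) for row in reader]

def write_gyro_txt(samples, path):
    """One line per frame, 'x y z' -- trivially importable anywhere."""
    with open(path, "w") as f:
        for x, y, z in samples:
            f.write(f"{x:.6f} {y:.6f} {z:.6f}\n")
```

Wrapping those two calls in a loop over a folder of R3Ds would give the "click the folder, get a txt per clip" behavior.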
 
The data from the accelerometers inside the camera is already in the R3D metadata. People have used it in VFX, Adobe has made it work, and of course all the other NLEs can as well.

For better/quicker IS, though? Cause that'd be dope!

Also, it'd be even more rad if you guys could enable it in RCXP/SDK as a "stabilization" slider (for all DSMC1+ footage, no less), like the C500 Mk II's or FX9's "IS" (as that essentially sounds like the same thing; neither is physical IS, but it's presumably done using metadata).
 
Something we were talking about over on the FB group for DSMC2 owners (which might be another useful point to put on the website) was about multiple aspect ratio deliveries.

Timur brought up how 8K on Helium was perfect for him when delivering projects with social media and mobile phone screens in mind: he's able to use the real estate of the sensor for a multi-aspect-ratio composition, with a 6K 16:9 extraction and a 9:16 extraction at the full 8K frame height. That essentially lets you open up the matte on top and bottom to get more vertical height, without having to crop in so far on the sides for the mobile phone version. If you don't have time to compose two different versions of the shot, the high resolution seems like a lifesaver.

Kinda reminds me of how the old GH2 worked, although that was more for maintaining image circle coverage. Would love to see RED explore that idea someday. Perhaps take Helium's pixel pitch, make a 36 x 36mm, 10,000 x 10,000 pixel sensor out of it. Extract just about any image you want, FF35 width in both horizontal and vertical dimensions of the sensor, or use the 10K to extract an 8K stabilized image which can then be downscaled to a true 4K RGB delivery. I think I might be on to something...
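The framing trade-off being described is just crop geometry. A quick sketch (Helium 8K full format assumed to be 8192 × 4320) shows how much the full sensor height buys the 9:16 delivery compared to cropping a 9:16 window out of a 16:9 master:

```python
def extraction(sensor_w, sensor_h, target_ar):
    """Largest centered crop of the sensor with the given aspect ratio.

    target_ar is width/height, e.g. 16/9 or 9/16.
    """
    if sensor_w / sensor_h >= target_ar:
        # Sensor is wider than the target: use full height, crop the width.
        h = sensor_h
        w = int(round(h * target_ar))
    else:
        # Sensor is narrower than the target: use full width, crop the height.
        w = sensor_w
        h = int(round(w / target_ar))
    return w, h

# Helium 8K full format, assumed 8192 x 4320.
print(extraction(8192, 4320, 16 / 9))   # widescreen extraction
print(extraction(8192, 4320, 9 / 16))   # vertical extraction at full height
```

A 9:16 crop taken from a 16:9 4K UHD master would be only 2160 pixels tall and about 1215 wide; the full-height extraction above yields 2430 × 4320 instead.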
 

One of the things announced at IBC was that Adobe was demonstrating a "Smart Aspect Ratio" filter that would keep the subject matter centre-frame independent of the delivery AR (it'd basically automatically keep the subject centred whether it was 16:9 or 9:16)... very cool.
 

The demo I saw of that didn't look too promising to me. Things that were integral to the shot were cropped out. The one example out of that Adobe demo that made me literally laugh out loud was the girl dancing: the 9:16 version decided she was no longer relevant to the shot and instead framed a shot of a bystander. Same with the guy with his kid in his arms. The 9:16 version totally crops out the kid -- but the kid was the whole point of the shot.

Part of that is because they were using material with that locked-in vertical height. But the other part was because the DP of those shots clearly wasn't ever intending to have a 9:16 composition be made from the material they shot. If the DP framed the shots with the 16:9 center extraction like Timur is doing, they could open the matte vertically and not have to crop in so far left/right. Not to mention, Timur is also framing for the vertical aspect ratio too. I don't think AI is going to be able to save you unless you had the vertical aspect ratio in mind from the beginning.
 
The data from the accelerometers inside the camera is already in the R3D metadata. People have used it in VFX, Adobe has made it work, and of course all the other NLEs can as well.

Jarred, did you use the in-camera gyro information to stabilise the shot of the girl in the "Power of Red" video? Sony is doing a similar process with the FX9: they take the gyro information in their metadata, and apparently the Sony Catalyst Browser will then interpret this data to stabilise the footage. It is a shame that REDCINE-X cannot do the same, and the other workflows seem so opaque. Is there a white paper or video that shows how people go about stabilising footage from RED cameras using the in-camera gyro information? The DSMC2's gyro data seems never to have been properly exploited, a mistake Sony doesn't seem to be making. It does remain to be seen how well this stabilisation feature works on the FX9, as Sony's software is rarely as good as their hardware. I have been promised early access to the FX9, so I will hopefully test this next month.
 
The gyro data comes into Nuke at the moment, but only for the first frame. I have them looking at that, and I'll also hopefully supply some sample footage to see whether the data can be used for the tracker. We don't really know how accurate it is at the moment, so a bit of R&D is needed.

cheers
Paul
 
The data from the accelerometers inside the camera is already in the R3D metadata. People have used it in VFX, Adobe has made it work, and of course all the other NLEs can as well.

Now that Sony is finally catching up with the FX9's approach, which is identical to this, and gyro and accelerometer metadata will consequently see more use,
do you think we will finally get an easy way to take advantage of these datasets, like a direct implementation inside the Resolve stabilizer?
Is the RED SDK already passing this added metadata stream to Resolve internally?
 
The data from the accelerometers inside the camera is already in the R3D metadata. People have used it in VFX, Adobe has made it work, and of course all the other NLEs can as well.

I still think that a stabilization option in the R3D RAW settings is what we need. It's good that workarounds and different methods exist for those tech-savvy people who are able to utilize them, but that is miles away from having a simple slider in the R3D RAW settings. I can just imagine color grading a project in Resolve and sliding a setting from 0% to 100% post-stabilization, without ever having to extract metadata and push it through traditional post-stabilization methods. It's actually easier right now to post-stabilize RED footage with a SteadXP than by using the already existing internal gyro.

I still think that this is something you should look closer into. Having another powerful RAW setting that makes life easier, especially such a crucial feature like post-stabilization, would be a powerful positive for all those people with fast project turnarounds.

Another powerful thing would be to expand the capabilities of recording the physical position of the camera. Just imagine the possibility of just pulling a virtual camera from the metadata directly for VFX compositing. Imagine the time saved not having to rotoscope out features to get a clean camera track.
 
While resolution (and what we can do with it: reframing) was a strong argument four years ago, RED should find a new argument for its tools now that every camera company makes 6K+ cameras.

Size, unique picture quality, RAW compression, and support for non-proprietary accessories to customize your camera are big pluses of shooting RED.

Hoping for sensor stabilisation "à la" A7S II and accurate autofocus with eye-tracking support in the next gen.

Pat
 
I still think that a stabilization option in the R3D RAW settings is what we need. It's good that workarounds and different methods exist for those tech-savvy people who are able to utilize them, but that is miles away from having a simple slider in the R3D RAW settings.

I agree. This would be a complete game changer for my line of work. Although I think a slider is too much of a simplified approach if you compare that to what SteadXP is doing. That being said, a slider would be a great start!
 
Yes, the power of RED is resolution, RAW, and HDRx. Gyro-based stabilisation is something that can benefit from all of those. RED could very much set themselves in a category of their own and be truly disruptive in the cinema market if they just implemented the same stuff GoPro is tapping into. It does not need to happen on board, and it should be quite easy, at least for a few standard lenses like the Canon EF line or other known lenses.

Jarred should just buy the SteadXP guys and start implementing it in RCXP.

Look at GoPro: they are at least running towards the ball, even if that is not even close to where it will land. They still miss out on bandwidth, resolution, HDRx, and lens choices.

https://www.youtube.com/watch?v=Mh-x8kbJT5k&feature=youtu.be
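For reference, the core of the GoPro-style approach is not exotic: integrate the gyro into a camera path, low-pass that path, and counter-rotate each frame by the difference (spending some crop margin in the process). A minimal single-axis sketch, with the mapping from rotation to pixel shift (which is where the lens model comes in) deliberately left out:

```python
def smooth_path(angles, radius):
    """Moving-average smooth of a single-axis orientation track.

    radius is the half-width of the averaging window in frames; edges
    use a shrunken window so the output has the same length as the input.
    """
    out = []
    n = len(angles)
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n, i + radius + 1)
        out.append(sum(angles[lo:hi]) / (hi - lo))
    return out

def correction(angles, radius):
    """Per-frame corrective rotation: smoothed path minus actual path.

    Applying this rotation to each frame (via the lens model) steers
    the footage onto the smoothed path; larger corrections need a
    larger crop margin to avoid showing the frame edge.
    """
    sm = smooth_path(angles, radius)
    return [s - a for s, a in zip(sm, angles)]
```

A "stabilization slider" in the RAW settings would essentially be choosing `radius` (and the crop budget) per clip.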
 
Can someone confirm: I've now extracted R3D gyro metadata, and this is what I found. Basically, it's not very accurate. What you see is one keyframe for each axis of a 300-frame handheld clip, shot from a small boat in waves. Clearly no two frames in a row should have the same values... but as it looks, it's both a bit erratic and holds still for a couple of frames at a time.

Now, is this as it should be; is the gyro simply not better? Or do I need to calibrate it or something? Good gyros today cost pennies. I would be happy to pay extra for a more accurate one.

I've now put my programmer on building a RED camera-data extractor for R3Ds: click on your R3D folder and get an FBX, Nuke, or Flame camera in each R3D folder. Basically, it creates a CG camera with all the info that was there: backplate size, focal length, focus distance, iris, and pan/tilt/roll. But I'm afraid the pan/tilt/roll data is really poor in quality, or is it?

https://flic.kr/p/2hpjC7f

My GoPro has a great gyro; so does my phone, DJI Mavic, Spark, etc. I find it difficult to believe that this is as good as it gets.
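One way to quantify the "holds still for a couple of frames" behavior is to count runs of identical consecutive samples: if the runs are long and regular, the gyro is probably being sampled (or written to metadata) at a lower rate than the frame rate, rather than being inherently inaccurate. A small sketch:

```python
def held_runs(samples, min_len=2):
    """Find runs of consecutive identical gyro samples.

    Returns (start_index, run_length) for each run of at least min_len
    identical samples -- a sign the logger is repeating values rather
    than the camera actually holding perfectly still.
    """
    runs = []
    i = 0
    while i < len(samples):
        j = i
        while j + 1 < len(samples) and samples[j + 1] == samples[i]:
            j += 1
        if j - i + 1 >= min_len:
            runs.append((i, j - i + 1))
        i = j + 1
    return runs
```

Feeding in the 300-frame extraction and checking whether the run lengths cluster around a constant (say, every 2-3 frames) would distinguish a sample-rate limit from genuine noise.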
 