Dragon Update

It's not a complete free-for-all; remember, the frame rate is determined not by the raw volume of pixels but by the number of rows. To get a higher frame rate, you need to reduce the number of rows.

So if you have a 5120x2700 frame size and a 4800x2700 frame size (for example), the maximum frame rate is the same, since the number of rows is the same... but you will get better compression numbers with fewer columns. And the inverse is true: if you change that to 5120x2700 vs. 5120x2160, then your maximum frame rate goes up on the second one.

Make sense?
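In code form, the row math looks something like this - a minimal sketch where the per-row readout time is a made-up illustrative number, not a real Dragon spec:

```python
# Illustrative only: the per-row readout time below is a made-up number,
# not a real Dragon spec. The point is that max fps depends on rows, not columns.
ROW_READOUT_US = 3.4  # hypothetical microseconds to read one sensor row

def max_fps(width, height):
    """Readout time is rows * time-per-row; width never enters the formula."""
    return 1e6 / (height * ROW_READOUT_US)

for w, h in [(5120, 2700), (4800, 2700), (5120, 2160)]:
    print(f"{w}x{h}: max ~{max_fps(w, h):.0f} fps")
# 5120x2700 and 4800x2700 share the same cap (same row count);
# 5120x2160 comes out higher because it has fewer rows.
```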

Thank you, Jarred. A very succinct and cogent explanation. I would love to parse REDuser -- would you guys mind? Searching for info buried in almost ten years of threads is a bit onerous. I glean most of my information from Recon, but have started digging deeper into other threads as an owner. First time I've said that here -- he he he. :)
 
Good to hear! Even hearing that from you, Jarred, is better than complete silence!

Quiet here... but the very opposite behind the scenes :)
 
So anamorphic should have some nice frame rate options due to the narrower width that's used on the sensor.
 

I read that differently: the max FPS is related to the height of the image (and anamorphic needs height for max pixels), and the compression rate gets better when the image has less width. So anamorphic should get better compression rates while the max FPS stays the same, since height determines that, right?
 
Jason, Mick got it exactly right.

 
Really happy to read that. I was working on a VFX project recently where we were shooting HFR elements on green screen. We didn't need the full width of the frame and would have loved the option to shoot, e.g., 2700x2700 to get REDCODE 5:1 (or better?) at 60 fps instead of REDCODE 8:1.
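To see why the narrower window would buy a lower REDCODE ratio at the same frame rate: the media write speed is fixed, so the minimum achievable compression ratio scales with raw pixels per second. A rough sketch - the 16-bit sample depth and write-speed figure are illustrative assumptions, not RED specs:

```python
# Illustrative only: assumed raw sample depth and media write speed,
# not RED specs. Shows why fewer columns permit a lower REDCODE ratio.
BITS_PER_PIXEL = 16            # assumed raw bits per photosite
WRITE_MEGABITS_PER_S = 1120    # assumed sustained media write speed

def min_ratio(width, height, fps):
    """Lowest compression ratio that still fits the write speed."""
    raw_megabits_per_s = width * height * BITS_PER_PIXEL * fps / 1e6
    return raw_megabits_per_s / WRITE_MEGABITS_PER_S

for w, h in [(5120, 2700), (2700, 2700)]:
    print(f"{w}x{h} @ 60 fps needs at least {min_ratio(w, h, 60):.1f}:1")
# The 2700-wide window needs roughly half the compression of the full-width
# frame, which is the 8:1 -> 5:1 (or better) jump hoped for above.
```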
 

Thanks Jarred. This feature will be available in the Scarlet Dragon too, right?
 
They could do something similar to the new time-lapse modes (or even HDRX): read the sensor out several times (each readout fully exposed) and then combine the images as if the shutter had been open for one longer exposure.

Yeah, that's what I did on a timelapse recently. I just recorded with a 360° shutter @ 12 fps and then did frame averaging across 24 frames to generate one new "long exposure" without needing any ND. It has the nice side effect of eliminating noise too, versus one frame.

If Dragon can handle 100 fps, then you should be able to get 2 stops of ND by simply reading off the sensor 4 times and averaging the results in the DSP. Presto: magic ND without motion artifacts.

Or I guess it's possible Dragon can just adjust its sensitivity somehow in the chip with a low-gain mode.
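A minimal numpy sketch of that averaging idea, using synthetic noise-plus-signal frames; it just demonstrates that averaging N short sub-frames approximates one longer exposure while cutting read noise by roughly the square root of N:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4  # sub-frames averaged; 4x less light per sub-frame = 2 stops of "ND"

# A synthetic scene stands in for what the sensor sees (illustrative only).
scene = rng.uniform(0.2, 0.8, size=(1080, 1920)).astype(np.float32)

def expose(scene, exposure, read_noise=0.01):
    """One sub-frame: scene scaled by its exposure time, plus sensor read noise."""
    return scene * exposure + rng.normal(0.0, read_noise, scene.shape)

# N short sub-frames spanning the same interval as one long exposure:
subs = [expose(scene, exposure=1.0 / N) for _ in range(N)]
nd_like = np.mean(subs, axis=0)  # 2 stops darker than full exposure, like ND

single = expose(scene, exposure=1.0 / N)
print("noise of one sub-frame:", np.std(single - scene / N))
print("noise after averaging: ", np.std(nd_like - scene / N))  # ~1/sqrt(N) lower
```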
 

Hi Gavin,
Curious as to how, and with what software, you do this averaging out of 24 frames into one.
Thanks,
Damien
 
I guess he did it in camera. ;)

It's an interesting thought to expand the averaging/combining functionality to normal motion. You can do this right now, but currently you would always end up with results that have a 360-degree shutter. What would be needed is a way to set up something like "sub-frames" per frame or "sub-frames" per exposure slot, so that we can set the exposure time to 1/48th and then have the camera record (for example) 4 frames within that 1/48th sec (without any gaps between them) and then combine or average those frames.

Am I overlooking something? I haven't tried that processing stuff yet; I only had a quick look and thought, once again, "I should get an EPIC." Hehe. ;)
 
He shot 12 fps at a 360°/open shutter in the camera, then averaged the frames in post. You can average/blend frames in post in Photoshop, Nuke, Fusion, AE, RegiStax, etc. Gavin didn't say which software he used, but if my goal were to average the frames into the equivalent of a long-exposure still, I would start with Photoshop; AE or Nuke if I needed to create a sequence of such frames for motion. RegiStax is an odd one to mention, but it's made specifically for averaging multiple exposures into one image and is popular with astrophotographers.
 
I used a scripted Nuke node. None of the normal time-blending nodes wanted to do a straight linear blend of X frames, so I did a 'frameblend' of frame A to B where A = time*24 and B = (time*24)+23. Or something like that. So frame 0 would be start: 0, end: 23; frame 1 would be [24, 47], etc.
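For anyone wanting to reproduce that windowing outside Nuke, here is the same A/B mapping in plain Python; `load_frame` is a hypothetical stub for whatever reads your frames:

```python
# The same windowing described above: output frame t averages source frames
# [t*24, t*24 + 23]. load_frame is a hypothetical stub for your frame reader.
WINDOW = 24

def blend_window(t):
    a = t * WINDOW
    return a, a + WINDOW - 1

def average_window(load_frame, t):
    """Mean of the t-th group of WINDOW source frames."""
    a, b = blend_window(t)
    frames = [load_frame(i) for i in range(a, b + 1)]
    return sum(frames) / len(frames)

print(blend_window(0))  # (0, 23)
print(blend_window(1))  # (24, 47)
```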

Can Epic do that in camera yet? My trade-in hasn't come yet, although it should be in the mail. I would still prefer to shoot it @ 12 or 24 fps, since you can always pull out stills or regular motion in addition to the time-lapse. I prefer to bake as little as possible in camera. Not to mention, if you're averaging 24 frames you get an essentially impossibly noise-free image (which I imagine would also be true in camera).

It's certainly a good argument for use instead of ND. There would be no IR contamination, since it's averaged from shorter frames. You would get roughly 1/√n of the noise and several other benefits as well.

Dragon can do 200 frames per second at 4K. If you wanted a 1/48th-of-a-second frame and needed ND, you could theoretically shoot 8 frames at a 1/384th shutter, blend them in camera, pause for 1/48th of a second, and then fire off another burst. Rinse and repeat, and you would have the cleanest, most noise-free footage imaginable. It would effectively boost your dynamic range as a side benefit, since it would extend your usable noise floor down a couple of stops. It's why I don't really complain about all the slow-mo features of RED cameras. High-speed sampling has way more applications than just silly John Woo slo-mos, even for regular 24p stuff. Multi-sample imagery opens up lots of computational photography opportunities, like deep sampling. It's also another good argument for taking high native sensitivity to the extreme, since high-speed footage needs lots of light or very fast sensors.
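The arithmetic behind that burst idea, as a minimal sketch; nothing here is an actual RED feature, just the N = 2^stops sub-frame count and the usual square-root-of-N averaging gain:

```python
import math

def burst_plan(base_shutter_s, nd_stops):
    """Sub-frame count and shutter for nd_stops of synthetic ND on one frame."""
    n = 2 ** nd_stops                      # light reduction factor
    sub_shutter = base_shutter_s / n       # e.g. 1/48 -> 1/384 for 3 stops
    noise_gain_stops = 0.5 * math.log2(n)  # averaging lowers noise ~sqrt(n)
    return n, sub_shutter, noise_gain_stops

n, sub, gain = burst_plan(1 / 48, 3)
print(f"{n} sub-frames at 1/{round(1 / sub)}s, ~{gain:.1f} stops lower noise floor")
# -> 8 sub-frames at 1/384s, ~1.5 stops lower noise floor
```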
 

Anyone have any idea of a workflow to sync the averaged-down-to-24fps footage to double-system sound? Not being snarky; I'm just trying to see how to work this into a workflow that may not have the usual SMPTE timecode metadata on picture that we see with 23.98/24/30 fps dialog shoots.
 

Just use an old-school clapboard - though that's not as useful as real timecode. For that, you would need in-camera processing of frames.
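Short of real timecode, you could at least reconstruct where each averaged frame sits against the sound roll. A hedged sketch, assuming a real-time averaging setup (e.g., 96 fps capture averaged in groups of 4 down to 24 fps); the fps and window values are placeholders for an actual shoot:

```python
# Placeholder values: a hypothetical real-time averaging setup where 96 fps
# capture is averaged in groups of 4 down to 24 fps delivery.
SRC_FPS = 96
WINDOW = 4
OUT_FPS = SRC_FPS / WINDOW  # 24.0

def src_span_seconds(out_frame):
    """Start and end of the source interval behind one delivered frame."""
    start = out_frame * WINDOW / SRC_FPS
    return start, start + WINDOW / SRC_FPS

print(src_span_seconds(0))   # (0.0, ~0.0417): the first 1/24 s of the take
print(src_span_seconds(24))  # exactly one second in, so sound lines up 1:1
```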
 