  • Hey all, just changed over the backend. After 15 years I figured it was time to give it a bit of an update. It's probably gonna be a bit weird for most of you, and I'm sure there are a few bugs to work out, but it should kinda work the same as before... hopefully :)

RED + NVIDIA Announce Realtime 8K R3D GPU Decoding

OOOooooooooo snap. For the longest time RED said this wasn't possible (hence RED Rockets), but I'm guessing ProRes RAW and new R3D-shooting consumer products (*cough*Hydrogen*cough*) made this a necessity.

So will this put the "which playback is faster in RCX: OpenCL vs CUDA" debate to rest? Any word on the low end? For example, can the 2 GB 750M GPU in MacBook Pros play back 4K–8K at 1/2?

I understand that NLEs won't start incorporating it until December, but when can we expect to see this in RCXp/CUDA in general?
 
OOOooooooooo snap. For the longest time RED said this wasn't possible (hence RED Rockets),

heh heh I think you missed my very public disdain for the RR. I said 3 years ago that it was one of my missions to make the Rockets extinct. I never liked needing the Rocket for real time... because it was so expensive and so single-use. Actually "never liked" is probably not the right descriptor. "Hated with a passion" is probably more suitable.

PC-only thing? Or will options be available for Mac? Sorry, I didn't watch the keynote because of time constraints...

OpenCL may come in a year or two... but not till CUDA is completely fleshed out. AMD does have a pretty fast solution with their high-end workstation cards that can get you to real time the old way, as long as you have a ninja Xeon or two or four.
 
Have a few Rocket-X’s for sale. PM me
 
heh heh I think you missed my very public disdain for the RR. I said 3 years ago that it was one of my missions to make the Rockets extinct. I never liked needing the Rocket for real time... because it was so expensive and so single-use. Actually "never liked" is probably not the right descriptor. "Hated with a passion" is probably more suitable.

heh, I definitely recall your disdain (when GPU decoding assist was incorporated a few years ago), but didn't realize that meant you guys were actively trying to get an alternative solution going.

So when is it going to be in RCXp (same timeframe/December)? And how much of a playback boost are you guys seeing in the low-end/mobile solutions? (I'd be amped for 4K full-res/real-time playback and 8K 1/2-res/real-time playback on MBPs with discrete GPUs.)
 
OOOooooooooo snap. For the longest time RED said this wasn't possible (hence RED Rockets), but I'm guessing ProRes RAW and new R3D-shooting consumer products (*cough*Hydrogen*cough*) made this a necessity.

So will this put the "which playback is faster in RCX: OpenCL vs CUDA" debate to rest? Any word on the low end? For example, can the 2 GB 750M GPU in MacBook Pros play back 4K–8K at 1/2?

Maybe Cineform 8K is the codec for you; the debayer on the GPU still needs a lot of juice, no matter what codec.
This codec is freaking low on resources and only around 20% larger than R3D at the same quality.

A very good reason not to use....
 
Maybe Cineform 8K is the codec for you; the debayer on the GPU still needs a lot of juice, no matter what codec.
This codec is freaking low on resources and only around 20% larger than R3D at the same quality.

A very good reason not to use....

Yeah, except my cameras shoot .r3d, and transcoding is extraordinarily dumb/time-consuming/storage-wasting when/if there's a real-time alternative available.

In either case, I found this in the other thread (re: older hardware performance):

Jarred Land said:
The most important thing looking at older generation cards, other than the CUDA cores, is memory bandwidth and memory capacity. As much as current-gen CUDA cards will benefit from the new software, there is a cutoff where, if the card memory is too low, it will become the bottleneck and erase all the gains. We don't know what that exact cutoff is... but I just don't want people to think that you can use a 2GB card from 5 years ago and expect anything great to happen.

1080 Tis are probably at that cutoff, but what replaces it likely will not be. Of course, the RTX cards are not.

I'd think the memory-capacity bottleneck is a resolution-dependent one, but obviously I'm in no position to know for sure. Similarly, I wouldn't expect *great* things to happen on older hardware, but as I said, I'd be more than content with real-time playback of 4K full-res and 1/2-res 8K on my current mobile hardware.
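For a rough sense of why the cutoff might scale with resolution, here's a back-of-envelope sketch. The assumptions (debayered RGB output at 16 bits per channel, one full frame resident at a time) are mine, not anything RED has published:

```python
# Rough, illustrative estimate of per-frame memory for a debayered
# frame. Assumes RGB output at 16 bits (2 bytes) per channel -- a
# guess for illustration, not RED's actual pipeline.

def frame_mb(width, height, channels=3, bytes_per_channel=2):
    return width * height * channels * bytes_per_channel / (1024 ** 2)

for label, w, h in [("4K", 4096, 2160), ("6K", 6144, 3240), ("8K FF", 8192, 4320)]:
    print(f"{label}: ~{frame_mb(w, h):.0f} MB per debayered frame")
```

Even before decode buffers and the wavelet working set, a single 8K frame on those assumptions is ~200 MB while a 4K frame is ~50 MB, so a 2 GB card has far less headroom at 8K than at 4K.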

Currently, I can play back 6K 12:1 at 1/4... but at 5:1 I gotta drop it down to 1/8 (which FCPX doesn't have, as Apple refuses to incorporate something as simple as user-selectable playback resolution).

With that in mind, I'd prefer it if RED let the user decide whether or not the performance boost (or lack thereof) is acceptable. Currently, RCXp doesn't let you enable GPU support with cards that have 1 GB of VRAM, even though that was all that was required to get a healthy boost in playback performance in RCXp versions prior to v50 (when a 2 GB limit was spontaneously enabled/enforced).
 
So December is when NLEs (Adobe, etc.?) get the new SDK (library code, etc.) for the improved processing? Does that mean we have the usual wait on Adobe's end?

Is there a way to ask the NLEs to engineer a better-structured, more generic interface to your decode libraries, so that you can decouple your development timelines from theirs? Have them just call into a shared library that can be updated at your leisure? So exhausting waiting on Adobe's timelines.
 
So December is when NLEs (Adobe, etc.?) get the new SDK (library code, etc.) for the improved processing? Does that mean we have the usual wait on Adobe's end?

Is there a way to ask the NLEs to engineer a better-structured, more generic interface to your decode libraries, so that you can decouple your development timelines from theirs? Have them just call into a shared library that can be updated at your leisure? So exhausting waiting on Adobe's timelines.

Well I am at my desk using the alpha in a very normal build of Premiere :)
 
Does this mean users like me who are using the Titan X will see better playback and such for editing in Premiere, or does it have to be a newer card?
 
This is great news. Looking forward to seeing how this all plays out this year. Thanks for continuing to push the technological envelope!
 
As an Epic Dragon owner I have to say it's so awesome to get these backward compatible upgrades like IPP2 and this GPU decoding for our existing cards. Very exciting news. Thanks to the team that made it happen.
 
Hey! Are you guys thinking the 1080 Ti will be able to do 1/2 playback in Resolve via an eGPU? Just wondering if I will need to sell that and buy a new card. Thanks!
 
Exciting news!
Thanks Jarred :)
 
To put things in perspective:

In Davinci Resolve 14.3, a basic correction using the color wheels plus 4 Power Window nodes that include motion tracking.

RED 8192x4320 (9:1) 25 fps WEAPON 8K S35 B001_C096_0902AP_001.

tested with the RAW decode quality set to "Full Res."

Runs in real time on a TR1950x (overclocked to 4 GHz) with a single GTX 1080 Ti/P6000 or Vega 64/FE.

TR1950x ~ $800
GTX 1080 Ti ~ $650
P6000 ~ $5,000
Vega 64 ~ $550
Vega FE ~ $1,000
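Totaling those (approximate, user-quoted) prices, the cheapest CPU + GPU combination above that was said to hit real time on this 8K clip works out to:

```python
# Sum of the rough prices quoted above for the least expensive
# combination reported to reach real time on this clip.
cheapest = {"TR1950x": 800, "Vega 64": 550}
print(f"${sum(cheapest.values())} for CPU + GPU")  # $1350 for CPU + GPU
```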

Did you have performance mode on? What is your project resolution?
 
Did you have performance mode on? What is your project resolution?

I don't know the exact settings anymore (it must have been during the Easter weekend, because we didn't go live at that time); we just tried to mimic the Puget test and see how our system would compare to theirs.
And that went pretty well, as I remember it; we measured the fps in the color tab.
 
Best news I've heard in a long time! It's even better that this is on the software side, as the improvements trickle down even to some of the current cards! Well done, RED and NVIDIA!
 
This is more a code thing than a hardware thing. As a result of our incredible software engineers working with NVIDIA's incredible software engineers, we now have wavelet decode AND debayer working on the GPU. The new hardware helps, of course, but we are breaking 24 fps on current cards with CUDA. The release of this is tied to the new hardware, but everyone on NVIDIA will eventually win...
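For anyone curious what "wavelet decode" means conceptually: R3D's actual wavelet codec and RED's debayer are proprietary, but the textbook toy version of wavelet decoding is a 1-level inverse Haar transform, purely for illustration:

```python
# Toy 1-level inverse Haar transform: rebuild a signal from stored
# averages and differences. Illustrative only -- the real R3D wavelet
# codec is proprietary and far more sophisticated.
def inverse_haar(averages, differences):
    out = []
    for a, d in zip(averages, differences):
        out.extend((a + d, a - d))  # recover each original pair
    return out

# forward pass of [9, 7, 3, 5] stores averages [8, 4] and diffs [1, -1]
print(inverse_haar([8, 4], [1, -1]))  # [9, 7, 3, 5]
```

The point above is that this whole reconstruction, plus the debayer, now runs as GPU code instead of requiring dedicated hardware like the Rocket.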

Will we see these benefits with 1080 Ti cards running eGPU in OS X? Or is this a Windows-only thing?
 