
Need some Test 5K and 6K media please

Eric Bowen
Well-known member · Joined Oct 11, 2013 · Messages: 877 · Location: www.adkvideoediting.com
I am testing the Z97 with the 4790K at 4.7 GHz right now with Premiere CC 7.2.2 (33). I have run into something completely unexpected and, at the moment, unexplainable. I am not aware of Adobe's RED update coming down yet, but I am able to play back 4K R3D at full resolution in Premiere, which hasn't been possible before even on a dual Xeon. Unfortunately I only have 4K R3D files, not 5K or 6K files, to test with. Any chance I can get links to some R3D test footage in those resolutions?
 
Well, the 6K media won't play back at full-res preview without dropping frames on the 4790K, but it will at half res. The new interesting part is that the 780Ti load was pushing upwards of 17 to 20% on the 4K media but only 4% on the 6K media. Obviously some caching/buffering optimizations that have been done for 4K in Premiere have not yet been done for 6K.
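For a rough sense of why 6K drops frames at full res where 4K doesn't, the pixel counts alone tell most of the story. A quick sketch; the frame sizes below are my assumption (DRAGON 6K FF at 6144x3160, 4K R3D at 4096x2160), not numbers from this thread:

```python
# Back-of-the-envelope pixel-rate math for the playback results above.
# Assumed frame sizes (not stated in the thread).
PIX_6K = 6144 * 3160   # ~19.4 Mpixels per frame
PIX_4K = 4096 * 2160   # ~8.8 Mpixels per frame

print(f"6K/4K pixel ratio: {PIX_6K / PIX_4K:.2f}x")       # ~2.19x the decode work per frame
print(f"6K at half res: {PIX_6K / 4 / 1e6:.1f} Mpixels")  # quarter the pixels, less than full-res 4K
```

Half-res decode is a quarter of the pixels, which lands below even full-res 4K, so half-res 6K playing smoothly while full-res drops frames is about what you'd expect.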
 
First get it working, then optimize

I think Eric is right on about current tools often having fairly rudimentary protocols for things like distributing compute tasks efficiently, optimizing scaling math, caching, legit multi-threading support, etc. Thanks to all those gamers, GPU development has been intense, and once properly exploited, that power is just what visual artists need.

The primary reason I am pursuing an RR-X based suite in June 2014, rather than building a shredder workstation, is not because I think a workstation is fundamentally overmatched by 6K R3Ds. It's because the ecosystem is too immature at the moment, in several ways. Some folks (like our old friend Ted over at Devils & Demons) have managed to whip up monster workstations that can rip through 6K FF R3Ds with brute force. If I had enough booked work to be fairly confident I could ROI such a beast, I'd get one tomorrow. As essentially a freelancer, that approach is too risky for me, and I'm guessing a number of RedUsers are nodding their heads at that.

The good news is that Rob and the Red Team are busily improving how the SDK utilizes available resources. OS developers are much more willing to assign tasks to GPUs. Thunderbolt is finally getting more engineering attention as the marketplace for it widens, not to mention the more robust solutions version 2 can support. My forecast is that by sometime in 2015, a $6,000 workstation with two high-end GPUs will support real-time full-decode workflows with hardware specs not much greater than today's, thanks to dramatically better resource utilization.

Cheers - #19
 
I did an export test with 20 seconds of the 6K media to 6K DPX, and that took 5 minutes 14 seconds, so it was basically the same amount of time it took to export 30 seconds of 4K to 4K DPX.
 
Creative Cloud 2014 radically improved RED performance with the GPU debayer. The same 4-layer R3D 4K timeline that took 5:20 to export to 4K DPX took 1:48 on the CC 2014 version with the 4790K at 4.7 GHz and a 780Ti card. The 780Ti load was pushing 53% on the updated version, whereas previously it was 17 to 20%. 6K still would not play at full-res preview without dropping some frames, but 1/2 res is silk. 4K at 4 layers plays back easily at full res. The 6K export with 1 layer to 6K DPX was 1:28.
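Running the quoted times through some quick arithmetic makes the CC 2014 gain concrete. A minimal sketch (the helper function is mine; the times are from the posts above):

```python
# Slowdown factor: seconds of export time per second of footage.
def slowdown(clip_seconds, export_seconds):
    return export_seconds / clip_seconds

# CC 7.2.2: 30 s of 4-layer 4K -> 5:20 (320 s); CC 2014: same timeline -> 1:48 (108 s)
old = slowdown(30, 5 * 60 + 20)   # ~10.7x slower than real time
new = slowdown(30, 1 * 60 + 48)   # 3.6x slower than real time
print(f"CC 2014 speedup on the 4-layer 4K timeline: {old / new:.1f}x")  # ~3.0x
```

That works out to roughly a 3x export speedup on the same hardware, which lines up with the 780Ti load jumping from ~17-20% to 53%.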
 
Thanks for the "Intel" Eric ;-)

Seriously, there are so many CPUs on the market that it can be hard to determine which one makes the most sense for a particular build/application. Just buying the top of the line is not a particularly attractive strategy, as flagship parts tend to carry a significant price premium.

I'm also starting to consider where a lot of this is heading in terms of greater utilization of GPU resources, specifically at what point higher clock speeds on the CPU, even with fewer cores, support higher overall system performance by feeding the GPUs faster. As Jeff, Eric, and other folks who know this space well have noted: determine your most critical use cases and supply the right resources for the most demanding tasks. With the speed at which this space is moving these days, it seems like more and more software coders move on to the next project before they ever get a chance to optimize recent releases. If I'm misreading this, please clue me in, but my supposition is that more and more key software packages will lack meaningful multi-threading support to efficiently leverage higher CPU core counts.
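The clocks-vs-cores question above can be sketched with Amdahl's law: if only part of the code path scales across cores, raw clock speed carries the serial remainder. Purely illustrative numbers of my own; real NLE workloads are messier than a single parallel fraction:

```python
# Toy Amdahl's-law comparison (illustrative only, not a real benchmark).
def relative_throughput(parallel_fraction, cores, clock_ghz):
    """Throughput relative to a hypothetical 1-core chip at 1 GHz."""
    serial = 1.0 - parallel_fraction
    return clock_ghz / (serial + parallel_fraction / cores)

# If only 40% of the work actually scales across cores:
quad_fast = relative_throughput(0.4, cores=4,  clock_ghz=4.7)   # 4 fast cores
many_slow = relative_throughput(0.4, cores=12, clock_ghz=2.7)   # 12 slower cores
print(quad_fast > many_slow)  # the high-clock quad wins in this scenario
```

Flip the parallel fraction up toward 90% and the many-core part pulls ahead, which is exactly the "know your most demanding tasks" point.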

To use a car analogy it seems like top speed vs low end torque. Is your world about pulling big trailers, or getting there in a hurry?

Cheers - #19
 