Thread: Is an Nvidia A5000 enough for 8K?

  1. #1 Is an Nvidia A5000 enough for 8K? 
    Senior Member Michael Lindsay's Avatar
    Join Date
    Apr 2007
    Location
    London UK
    Posts
    2,758
    Quick question:

    I have a Resolve machine based on an HP Z8: 2x 14 real cores @ 2.6 GHz (lots of RAM, balanced correctly).
    I use a small Quadro for the GUI and a Titan Xp for compute.
    Drives are never the bottleneck.

    95% of my footage is RED, and I often see 8K Monstro footage as I own one ;-)

    The above machine is 'perfectly balanced' in Resolve up to about 6.5K, running the wavelet decompression on the CPU and the debayering on the GPU.

    Now, I have been frustrated by my machine's inability to redline every CPU core while de-waveleting since a much earlier version of Resolve, but I figure that is never getting fixed now (it may be a RED issue or a BM issue, since there is a conflict of interest that perhaps makes solving it less attractive, or simply a resource issue?)...

    If it could redline all the cores rather than sitting at about 50-65% usage, then 7K and 8K would also be fine...

    MY SOLUTION

    If I switch to GPU de-waveleting, I can peg my Titan Xp at 99%+ by feeding it 7K+ footage...

    So I figure I need a new GPU with at least 60% more grunt than the Titan Xp...

    Does anyone have an Nvidia A6000 in an HP Z8?
    How fast is it with RED material?

    Do we feel the new A5000 (much better price, and an easier power and heat proposition) is enough to be about 60% more powerful than a Titan Xp in the above workflow?
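
    For reference, comparing published FP32 throughput (only a rough proxy for R3D decode, and these are approximate figures from memory, so treat them as assumptions rather than gospel), all of these cards clear the 60% bar on paper; the question is whether that holds inside the RED SDK:

    ```python
    # Paper-only comparison: approximate published FP32 TFLOPS vs. the Titan Xp.
    # Real R3D decode gains will be smaller and depend on the RED SDK, drivers
    # and Resolve version - this is just a sanity check on the "60% more grunt" target.
    tflops = {
        "Titan Xp":    12.1,
        "RTX A5000":   27.8,
        "RTX 3080 Ti": 34.1,
        "RTX 3090":    35.6,
        "RTX A6000":   38.7,
    }

    baseline = tflops["Titan Xp"]
    for card, t in tflops.items():
        print(f"{card:12s} {t:5.1f} TFLOPS  ~{(t / baseline - 1) * 100:+.0f}% vs Titan Xp")
    ```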

    Thanks for any useful thoughts... (please don't tell me to dump my HP and go AMD, etc. ;-)

    Michael

    PS: I always turn on Max Premium debayer at 16-bit!!! IPP2 decode inside a Resolve colour-managed workflow...

    PPS: Sorry for the double post with liftgammagain.

  2. #2  
    Senior Member Alex Lubensky's Avatar
    Join Date
    Nov 2016
    Location
    Kiev, Ukraine
    Posts
    316
    Yup, same here. I own a RED Scarlet MX and my PC is a rather old one: Asus Sabertooth X79, Intel Core i7-3930K (6-core Sandy Bridge), 32 GB RAM, SATA SSD, and a GTX 1080 8 GB (I used to use a GTX 570 1.2 GB for display, but its recent drivers are not supported by Resolve from v16). The CPU is the weakest point and the bottleneck of the whole system.

    I was never getting 25 fps playback on a 4K timeline with full debayer at 16-bit before v17. It was about 14-16 fps on v15, up to 19 fps on v16, and right now on v17.0 I get smooth 25 fps playback on the same old PC. They've done something to the program core so it now relies more on the GPU instead of the CPU: on v15 the cores were running at 100% all the time while the GPU was loaded at 26-32% max; on v16 there was a little less load on the CPU, about 90-100%, while the GPU was at about 40-45%. Now I get 17-18% load on the CPU and 60-70% on the GPU. As far as I remember, the GPU decompression and debayer option was introduced in v16, so I suppose they've progressed on it vastly.
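
    If anyone wants to log these numbers properly rather than eyeballing Task Manager, a rough sketch like this is what I'd use (my own quick script, assuming psutil is installed and nvidia-smi is on the PATH; nothing to do with Resolve itself):

    ```python
    # Minimal sketch: log CPU vs GPU load once a second while Resolve plays back,
    # so version-to-version comparisons aren't guesswork.
    # Assumes psutil is installed (pip install psutil) and nvidia-smi is on PATH.
    import subprocess

    import psutil

    def gpu_load(index=0):
        """Utilisation (%) of one GPU, read via nvidia-smi."""
        out = subprocess.run(
            ["nvidia-smi", "-i", str(index), "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return int(out.strip())

    if __name__ == "__main__":
        while True:
            cpu = psutil.cpu_percent(interval=1.0)  # averaged over the last second
            print(f"CPU {cpu:5.1f}%   GPU {gpu_load():3d}%")
    ```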

    I have to say that v17.1 was a major step backwards in terms of playback speed for me, so I downgraded to 17.0 a day later, after messing about with drivers trying to work out what was wrong with the playback speed.

    Of course this doesn't answer your question, but I hope it provides some perspective.

  3. #3  
    Senior Member Michael Lindsay's Avatar
    Join Date
    Apr 2007
    Location
    London UK
    Posts
    2,758
    Thanks Alex, that's great info...

    I have to go all the way back to v14 to get full CPU usage... I do realise that BM and RED are concentrating on GPU over CPU, so all I am really looking to do is quantify a few things...

    Another question I have is:

    On machines where multiple identical Nvidia cards are used, how does the GPU de-waveleting work in the RED SDK?

    If it works well across two cards, maybe accommodating 2x A5000 cards is the way forward?

  4. #4  
    Senior Member Alex Lubensky's Avatar
    Join Date
    Nov 2016
    Location
    Kiev, Ukraine
    Posts
    316
    It does seem that on Windows it's better to have one better GPU than two good GPUs in Resolve, because of the "GPU memory full" issue. On Mac and Linux there's no such problem, and using multiple GPUs won't result in less usable memory per card.
    I'd think of a 3090 instead of an A5000: same amount of VRAM for less money, at least at list price - miners have got GPU prices crazy at the moment.

  5. #5  
    Senior Member Michael Lindsay's Avatar
    Join Date
    Apr 2007
    Location
    London UK
    Posts
    2,758
    I have struggled to find anyone who can share two small pieces of data with me...

    Any help much appreciated!

    Basically, as we know, RED's SDK changed a while back to allow full decoding of R3D files on Nvidia GPUs. This means my well-balanced DaVinci machine is no longer balanced if I engage this functionality.

    The Puget tests are useless for my specific questions (but very good for others):

    https://www.pugetsystems.com/labs/ar...formance-2051/

    They don't turn on GPU de-waveleting, so the AMD processors they use are doing all the hard work, and it is impossible to infer anything useful about GPU scaling (in power or number) as the pipeline is choked by the CPU.

    So I want to work out how to meaningfully improve my DaVinci machine (I need about 65%+ more GPU grunt if I turn on full GPU decode...).
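
    (Where the 65% figure comes from, back of an envelope: it's roughly the jump in pixels per frame from the ~6.5K point where the machine is balanced up to 8K Monstro. The 6.5K dimensions below are my own assumption for illustration, not an official RED format.)

    ```python
    # Back-of-envelope check on the "65%+ more GPU grunt" figure: ratio of pixels
    # per frame going from ~6.5K (where the current machine is balanced) to 8K.
    # The 6.5K dimensions are an assumption for illustration, not official RED specs.
    w8k, h8k = 8192, 4320          # 8K Monstro full frame
    w65, h65 = 6560, 3456          # assumed ~6.5K frame size

    ratio = (w8k * h8k) / (w65 * h65)
    print(f"8K has ~{(ratio - 1) * 100:.0f}% more pixels per frame than 6.5K")
    # -> roughly +56%, i.e. in the same ballpark as the 60-65% target
    ```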

    The question I have is: how much better will an A5000 (or 3080 Ti), an A6000 (or 3090), or 2x A5000 be than a Titan Xp in this specific regard?

    AND does R3D decoding scale across multiple GPUs like many Resolve tasks, or is it like temporal NR? The SDK is all RED/Nvidia, so maybe it doesn't scale for other reasons?

    If it scales with the number of cards, then I can buy one A5000 (or 3080 Ti) and, if that isn't quite good enough, get a second later... If it doesn't scale, then I must find out whether the A6000 (or 3090 Ti) is in real terms nearly 2x as fast (for this specific task) and stump up the money (crazy prices at the minute).

    Would anyone have this knowledge in concrete terms? (That isn't just a boy in his bedroom reading marketing gibberish and spitting it out with confused adolescent confidence ;-)

    Even if someone has, say, 2x identical RTX Nvidia cards and can see that the de-waveleting on GPU is working across both cards and is nearly 2x one card - that their usage is equal and maxing out when all they are doing is producing high-quality dailies (no grading), for example - that would be really useful info (a quick way to check is sketched at the end of this post).

    But you must turn on [attached screenshot: the R3D GPU decode option enabled] and not leave it as [attached screenshot: the default setting].

    Now, if you have a top-of-the-range AMD chip it may be better to leave the de-waveleting to the CPU (like Puget do), as it means the GPU is left with much more under the hood for grading... But for me, with 2x Intel Xeon Gold 6132 processors, this only works up to about 7K. If I were to build a new system (which I am not), then AMD with Nvidia would be the better option.
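
    And for anyone with two identical cards who fancies checking: something like this rough sketch (plain nvidia-smi sampling, nothing Resolve-specific, and the 30-second window is arbitrary), run during a dailies render, would show whether the load splits evenly across both GPUs or lands on one:

    ```python
    # Quick check for a dual-GPU box: sample both cards for ~30 s during a dailies
    # render and see whether the R3D decode load is split evenly or lands on one GPU.
    import subprocess
    import time

    def per_gpu_load():
        """Return utilisation (%) for every GPU nvidia-smi can see."""
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [int(x) for x in out.split()]

    samples = []
    for _ in range(30):                 # one sample per second for ~30 seconds
        samples.append(per_gpu_load())
        time.sleep(1)

    for i, loads in enumerate(zip(*samples)):
        print(f"GPU{i}: average {sum(loads) / len(loads):.0f}% over {len(loads)} samples")
    ```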

    Thank you

    Michael

  6. #6  
    Senior Member Christoffer Glans's Avatar
    Join Date
    Jun 2007
    Location
    Stockholm, Sweden
    Posts
    4,754
    I use an RTX 2080 Ti and I can work with 8K no problem. However, there's a difference between being able to play the footage and being able to work and edit in real time. Editing in any creative capacity requires instant playback and feedback from the footage, so until we get that level of speed I will always use proxies for longer editing processes. But for grading, I have no problem using my 2080 Ti as a single card. I usually skip generations, so I'll probably buy the 4090 Ti when it comes out (if that is its name). With that card we might finally get fully instant handling of R3D if we work from a RAID SSD.
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600 Red Komodo #002397

  7. #7  
    Senior Member Michael Lindsay's Avatar
    Join Date
    Apr 2007
    Location
    London UK
    Posts
    2,758
    Whatever works for you is cool... Maybe I am just spoiled, but I never, ever want to look at RED footage at anything less than full-res Premium (downsampled nicely) to my finishing resolution. This includes editorial (which is in Avid)... (I hate it when I see RED footage badly processed in other people's films - it happens a lot.)

    So basically I need a faster machine even to produce editorial files... Less than real time is a step back for me. At 6K it is fine: the CPU does half the decode and the Titan Xp has loads of headroom for nearly everything I do. I just want the same for 8K! And these new cards are two generations beyond what I have, so it fits your approach ;-)

    Question... So if you run 8K files from a fast enough drive, with your RTX 2080 Ti doing both the de-wavelet and debayer (with no grade), do you get less than real time? How many frames a second do you get? And if you look at GPU usage, is it pegged just under 100%?

    Thanks



    Quote Originally Posted by Christoffer Glans View Post
    I use an RTX 2080 Ti and I can work with 8K no problem. However, there's a difference between being able to play the footage and being able to work and edit in real time. Editing in any creative capacity requires instant playback and feedback from the footage, so until we get that level of speed I will always use proxies for longer editing processes. But for grading, I have no problem using my 2080 Ti as a single card. I usually skip generations, so I'll probably buy the 4090 Ti when it comes out (if that is its name). With that card we might finally get fully instant handling of R3D if we work from a RAID SSD.

  8. #8  
    Senior Member Blair S. Paulsen's Avatar
    Join Date
    Dec 2006
    Location
    San Diego, CA
    Posts
    5,369
    If you're willing to mess with it, you could find an in-stock vendor with a rock-solid return policy and see for yourself whether the 3080 Ti can do real-time 8K R3D full/full at 24/25/30 fps. If it can't, then you'll have to go 3090. It is a beast, but that 24 GB of VRAM (properly utilized) should increase frame-buffering capacity for better responsiveness and smoother playback.

    Cheers - #19

    If it helps, none of the Nvidia Ampere cards seem to be in stock anyway... ;-)

  9. #9  
    Senior Member Michael Lindsay's Avatar
    Join Date
    Apr 2007
    Location
    London UK
    Posts
    2,758
    Thanks Blair, good idea!.. Would you know if buying a second GPU works well with de-waveleting, or is it a bit like TNR and doesn't parallelise across GPUs efficiently?

    Quote Originally Posted by Blair S. Paulsen View Post
    If you're willing to mess with it, you could find an in-stock vendor with a rock-solid return policy and see for yourself whether the 3080 Ti can do real-time 8K R3D full/full at 24/25/30 fps. If it can't, then you'll have to go 3090. It is a beast, but that 24 GB of VRAM (properly utilized) should increase frame-buffering capacity for better responsiveness and smoother playback.

    Cheers - #19

    If it helps, none of the Nvidia Ampere cards seem to be in stock anyway... ;-)

  10. #10  
    Senior Member Blair S. Paulsen's Avatar
    Join Date
    Dec 2006
    Location
    San Diego, CA
    Posts
    5,369
    Assuming your config can handle a single bomber card, I think you're hedging your bets versus any multi-GPU topology. Why? Because counting on the usual suspects to optimize performance over more than one card has been hit, and mostly miss, for years.

    In any case, unless you have a great hookup, by the time you can actually get a 3080 Ti/A5000 or a 3090/A6000 in hand, I'd expect reviews and tests that address your use case to provide the kind of info you'll need to make an informed decision.

    Cheers - #19
