
Nvidia Quadro K5000 for Mac

Wouldn't expect to see wide availability and decent drivers for OSX for at least a couple more months (if ever ;-). While I share your dream that some hot GPU will make the aging Mac Pro a burner for CS6, I'll believe it when I see it. OTOH, the next Mac workstation might make excellent use of the K5000 if/when it appears. Jeff could probably explain in more detail, but a GPU like the K5000 is much more likely to perform well when paired with newer CPUs, especially if it can have direct access via 16 PCIe3 lanes like on the Sandy Bridge chips (40 total). YMMV.
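To put rough numbers on the lane math (back-of-the-envelope only, using the published PCIe per-lane signaling rates; real-world throughput always lands lower):

```cpp
// pcie_bw.cpp -- back-of-the-envelope PCIe throughput.
// PCIe 2.0: 5 GT/s per lane, 8b/10b encoding    -> 500 MB/s usable per lane
// PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~985 MB/s usable per lane
#include <cstdio>

int main() {
    const double v2_lane = 5.0e9 * (8.0 / 10.0) / 8.0;    // bytes/s per lane
    const double v3_lane = 8.0e9 * (128.0 / 130.0) / 8.0; // bytes/s per lane
    const int widths[] = {4, 8, 16};
    for (int lanes : widths)
        std::printf("x%-2d  PCIe2: %5.2f GB/s   PCIe3: %5.2f GB/s\n",
                    lanes, lanes * v2_lane / 1e9, lanes * v3_lane / 1e9);
    return 0;
}
// x16: 8 GB/s on PCIe2 vs ~15.75 GB/s on PCIe3 -- roughly double.
```

So a PCIe3 card sitting in a PCIe2 slot gives up about half its theoretical bus bandwidth before drivers even enter into it.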

Would love to be wrong...

Cheers
 
K5000 in a current Mac Pro doesn't make a lot of sense... I'm curious as to why it was announced at IBC. There is a possibility that nVidia jumped the gun and it will be coming to the next Mac Pro, which should arrive with the next Xeon CPU iteration. And that looks to be about 3~5 months away at this point.

If we install the K5000 in a current Mac Pro, the card can't run at full speed or capacity due to the PCIe v2.0 slots in the current Mac Pro. So there's a major bottleneck there...
 
The bottleneck, I'm almost positive, is in the drivers. nVidia is in a tough situation: they made their 5xx series way too freakin' awesome. It outgunned all their pro-line cards by light years. Then they purposely nerfed their 6xx series so it won't do that to all the new workstation cards coming out. From what I've heard from companies testing the K5000 with their software right now, it behaves almost exactly like an underclocked GTX 680. Which is very strange indeed, but the power requirements are MUCH, MUCH lower, which can explain the underclocking. Theoretically, with the number of CUDA cores it has, it should just tear through everything, but at the moment it's not even performing as well as a Quadro 6000...

Could be a case of protecting their much more expensive line of cards...
 
Actually, it's a change in the architectural design of the cards. The "Kepler" architecture, which the GTX 6xx and Quadro K series are based on, is radically different from the previous generation of GPUs and their "Fermi" architecture. Fermi accomplished a great deal with not so many cores. Kepler increases the number of cores significantly, but they are designed to work differently. Moving forward, it is a more advanced design. However, I think nVidia took a step backward in performance to make it happen. Think of the current Kepler architecture as a "building year" for your favorite GPU sports team. ;)

nVidia hasn't crippled anything or nerfed anything via drivers. I know conspiracy theories abound, but it's more a case of two steps forward, one step back in the name of progress.

The Quadro K5000 is the Quadro version of the GTX 680. It has 4GB RAM and is clocked lower than most GTX 680 cards so that it fits into a more reasonable power profile, can use one PCIe power connector, will last longer and maintain more reliability on precision operations, blah, blah... This is the first entry into the Quadro line with Kepler. There are more Kepler-based Quadro and Tesla cards coming; there's an 8GB Tesla with dual K10 GPUs -- it's the Tesla version of the GTX 690. There is also a K20 Kepler GPU that has yet to hit full production; it increases the amount of compute power per core by quite a bit and improves precision for double-precision and 64-bit calculations in CUDA / OpenCL, an area where the Fermi cards outperform Kepler by a large margin at this time.
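If you want to see the "underclocked GTX 680" relationship on your own machine, a minimal device query against the standard CUDA runtime API shows it; the cores-per-SM lookup below only covers Fermi and Kepler:

```cpp
// devquery.cu -- list each CUDA device's SM count, core estimate and clocks.
#include <cstdio>
#include <cuda_runtime.h>

// CUDA cores per multiprocessor for the two architectures in question:
// Fermi sm_20 = 32, Fermi sm_21 = 48, Kepler sm_3x = 192.
static int coresPerSM(int major, int minor) {
    if (major == 2) return (minor == 0) ? 32 : 48;
    if (major == 3) return 192;
    return 0; // anything else: look it up separately
}

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        std::printf("%s: sm_%d%d, %d SMs (~%d cores), core %d MHz, mem %d MHz\n",
                    p.name, p.major, p.minor, p.multiProcessorCount,
                    coresPerSM(p.major, p.minor) * p.multiProcessorCount,
                    p.clockRate / 1000, p.memoryClockRate / 1000);
    }
    return 0;
}
```

A GTX 680 and a K5000 should both report 1536 Kepler cores; the number that stands out as different is the core clock.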
 
But it seems kind of a moot point taking a step backwards, because people are disappointed with the performance of the K5000. There's no point in even making this card if it's not gonna outperform some of their top-end cards. All it provides is a slightly better solution than the Quadro 4000 at the moment, where people have moved to something like a GTX 580 for the horsepower. Granted, it's a double-width card and draws more power, but anybody looking for pure number crunching is willing to deal with that. It's just one of those things: if what you are saying is true, great, nice job guys. But why did you even release this card if it's not absolutely kick-ass?

I'd prefer not to have to use a gaming card if I don't have to. Vendors don't officially support them; they just happen to work, but they are hard to optimize for, and even harder to troubleshoot because of that. So again, I'm just hoping that better drivers down the line, and hopefully Ivy Bridge towers on the Mac side of things, change how the K5000 performs. Maybe it will be better utilized then. I was pretty excited for the card and was ready to put down my money, but knowing that I'm getting the same or nearly the same performance from the stuff I'm running now compared to the K5000 makes it pointless for me to get another card. And this isn't limited to just Resolve or Adobe either. There is plenty of on-set dailies software that takes advantage of CUDA, and all it needs is the right kind of supercharged card to function even better.

I guess the more disappointing thing is that it's so rare to see a workstation card from nVidia supported on the Mac side. The PC side has the Quadro 6000 and so on, which are screaming fast. It would be nice not to have to be so "bootleg" on the Mac side for a change with nVidia. But I guess we can blame Apple for that one.
 
The K5000 is essentially the replacement for the Quadro 4000. It's the first of the Kepler Quadros; it's not this mythical super high-end monster Quadro that everyone seems to want it to be. We have yet to see the K6000, K7000, Tesla K10, Tesla K20, etc.

I agree that the step backward is a little puzzling, but that's what happened. Compare the performance of the GTX580 with the GTX680. The 680 is slower in nearly every operation except high-bandwidth operations and massive texture processing / shader application. So it's faster in some games, but actually slower in most when you get down to real benchmarks.

Most good GTX580 cards on the market will outperform the Quadro 6000 for CUDA too... Quadro cards have their place and their purpose. There are some things the K10 Quadro/Tesla cards will do better than their previous-generation Fermi counterparts, but all things considered, the performance is mostly similar and even a bit better at times on the Fermi cards. Sad, but true. As the CUDA software and the drivers get better tweaked, we'll see performance gains with Kepler. We have already seen this in the GeForce line to some extent. But no, the cards are not going to be a significant upgrade over the Fermi cards in terms of raw compute power. They're going to be better for bus transactions and memory throughput, mostly due to the PCIe v3 interface. And Kepler is massively better at pushing data back onto the system bus, something the Fermi cards really suck at. For single-precision CUDA apps and general OpenGL use (like in Maya, Lightwave, etc.), Kepler is going to be no better than Fermi, and even noticeably slower in many situations until the drivers and software get some tweaking.
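The bus-transfer claim is easy to check yourself. The CUDA toolkit ships a bandwidthTest sample, and the core of it boils down to timed copies over pinned memory, something like this sketch (simplified, no error checking):

```cpp
// bw.cu -- rough host<->device transfer bandwidth via pinned memory + events.
#include <cstdio>
#include <cuda_runtime.h>

static float timedCopy(void* dst, const void* src, size_t bytes,
                       cudaMemcpyKind kind) {
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);
    for (int i = 0; i < 10; ++i)                 // average over 10 copies
        cudaMemcpy(dst, src, bytes, kind);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0.f;
    cudaEventElapsedTime(&ms, t0, t1);
    cudaEventDestroy(t0);
    cudaEventDestroy(t1);
    return (10.0f * bytes / 1e9f) / (ms / 1e3f); // GB/s
}

int main() {
    const size_t bytes = 256u << 20;             // 256 MB per copy
    void *host, *dev;
    cudaMallocHost(&host, bytes);                // pinned, so DMA runs full speed
    cudaMalloc(&dev, bytes);
    std::printf("H->D: %.2f GB/s\n",
                timedCopy(dev, host, bytes, cudaMemcpyHostToDevice));
    std::printf("D->H: %.2f GB/s\n",
                timedCopy(host, dev, bytes, cudaMemcpyDeviceToHost));
    cudaFreeHost(host);
    cudaFree(dev);
    return 0;
}
```

The device-to-host direction is the "pushing data back onto the system bus" case.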

I'm not trying to defend nVidia here, just stating what's happening. IMO, Kepler is half-baked and was released too early. Intel has a huge opportunity here to get their MIC tech (Xeon Phi, AKA Knights Corner) to market and computing like a boss... Unfortunately, they're late to the party and can't seem to wrangle enough developer support. It supports OpenCL and some x86 functionality, which is awesome, but most of the mainstream commercial apps out there with GPU or compute-card acceleration are entrenched in CUDA.


Actually, the Kepler architecture does have quite a few advantages for software being newly written or optimized for the new architecture. Something written from the ground up to take advantage of Kepler's strengths within the CUDA API can indeed outperform what we see from Fermi-optimized CUDA applications. Just one more thing that's going to take time...
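For a concrete taste of what "written to Kepler's strengths" tends to mean: kernels that expose lots of independent work and don't hard-code their launch shape to one generation's SM count. The grid-stride loop below is the standard CUDA pattern for that (a minimal sketch, not tuned for any particular card):

```cpp
// saxpy.cu -- grid-stride SAXPY (y = a*x + y); scales to any SM count.
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    // Each thread strides through the array, so the same kernel saturates
    // a few fat Kepler SMXes or many small Fermi SMs without retuning.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float)); // real code would upload real data
    cudaMemset(y, 0, n * sizeof(float));
    saxpy<<<1024, 256>>>(n, 2.0f, x, y); // plenty of blocks to fill the GPU
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```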
 
On the safe side, you can install a Quadro 4000 Mac edition card. They sell used for about $600-ish. New, they're about $625 right now from most online vendors after PNY's $100 mail-in rebate that runs through the end of the year.

If you're more adventurous and want something that gives a bit more kick -- about a 15% gain in CUDA performance in Premiere CS6, even more in Resolve -- you can go for the GTX570. The catch there is that it only works properly under the current OSX 10.8.1 or 10.8.2 release and with the current CUDA drivers. You don't see a boot screen, and you can't access functions outside the OS, like option-boot to choose a different drive to start from, that sort of thing. There's also the chance that future versions of OSX may break the driver compatibility or drop support.

When the Quadro K5000 Mac edition does start shipping, it's going to be a card with a $2300 street price and not a whole lot to offer over the above two options, especially when installed in a current Mac Pro that doesn't have PCIe v3 support.

The downside to all this is that the ATI 5770 is a decent card as it is, very close in performance to a GTX 560. For apps that use OpenCL or OpenGL, you're not gaining anything by going to the nVidia card...
 
You can install two Quadro 4000s, as they are a single slot wide and each uses one of the PCIe power connectors. The only thing you would gain by doing so is the ability to connect two monitors without using an adapter for one of them. Not a great solution; I ran a Mac Pro with two Quadro 4000s for a bit, as I needed the dual GPUs for Resolve 8. Resolve 9 doesn't have this requirement anymore. Unfortunately, the second Quadro card will be crippled, as you'll most likely put it into an X4 slot. If you put both Quadros into the two X16 slots, then you will have to cripple your Rocket...

If you go with the Quadro, you probably should just get one and then a DVI-DL to DisplayPort converter (about $200); the one from Atlona is the best I've used. Or consider trading off one of those Apple displays for one that has multiple inputs.
 
Jeff, thank you for your advice. I am a luddite when it comes to the inner workings of Macs and cards. Are you saying the Atlona will send video to both Cinema Displays? And now the second Quadro is basically only sending video to the 2nd monitor, so I basically have a very expensive card doing nothing? We installed the Quadros as you said, to keep the Rocket running -- the Rocket is now between the 2 Quadros: one X16 has a Quadro, one X16 has the Rocket, and the X4 has the second Quadro. I can return the 2nd Quadro (Amazon Prime). Does a GPU card lose performance if it is sending video to 2 monitors? If the card does lose performance, is there a single-width card that has 2 DisplayPort outputs that can go in the X4 slot?
Why don't they just give us a new Mac Pro??? :)
 
The Atlona device I linked takes a DVI Dual-Link signal and converts it to a DisplayPort signal (which is what the Apple 27" displays use for input). Each Quadro card has two connectors, one DVI dual-link and one DisplayPort. To connect two monitors is not a problem, but it's a bit awkward with the two different connectors on there, and in the case of using displays where you only have one input -- like the Apple 27" -- you need a converter for the one port that doesn't connect directly. So in your situation, using two Apple 27" monitors, you would connect the primary display to the DisplayPort connector on the card. The second would connect to the DVI-DL connector by way of the Atlona converter.

If you install two Quadros, you're basically spending extra money on nothing. The only two apps that we discuss here on these forums which can make use of the two GPUs are After Effects CS6 and DaVinci Resolve. If you spend a lot of time in Resolve, then having a secondary Quadro in an X4 slot will accelerate Resolve. With Resolve v9, your primary GUI card can be your CUDA card, so there is no need for the secondary GPU unless you want a little extra boost. After Effects only uses the cards for OpenGL acceleration for its raytrace engine, for doing rendered title graphics and whatnot. Not a whole lot of acceleration going on there for much of anything.

For everything else, the card is just sitting there idle, taking up a slot. If you connect a monitor to it, it's doing nothing other than running the secondary display. And it will run a bit sluggish in that X4 slot. Better to keep both displays on your GPU in slot-1. There is no performance hit worth noting. The frame-buffer on the card is large enough to support both displays just fine. There is no other single-width card that works in a Mac Pro and provides dual displayport connectors. In fact, if you're running Windows on a PC, I don't think there's a card in existence that offers that configuration. Stupid, but true.

As for the new Mac Pro, it's coming. Apple has said it's coming. We can speculate why they skipped this current iteration of Xeons, but I can think of a number of reasons. And, for what it's worth, we're not missing out on a whole lot, IMO. Just because PC makers keep releasing new systems that are supposedly better, doesn't mean they're actually new or really better. The current Xeon platform, which Apple has ignored, only started shipping in mid April, didn't see broad availability until late June and is already due to be replaced in the next 3~5 months.
 
"the performance is mostly similar and even a bit better at times on the Fermi cards"

You have numbers that are different from barefeats.com? The 570/580s that are sold by macvidcards and others are MUCH faster, not just a bit better, than the 6000 in any CUDA operations not involving double precision. Those 570/580s also have an EFI ROM, so one does get the boot screen. I have a 2.5GB 570 coming and am looking forward to being able to run Resolve 9 at high speed.

Y'all be cool,
Robert
 
Jeff,

Thanks for all the advice you've given on graphics cards...

I use Resolve 9.0, and I just ordered a new Mac Pro this week, so I'm deciding on graphics cards now. I'm thinking that I should just stick to the Resolve Config Guide of an Nvidia GT120 for GUI and a GTX 570 for GPU. What do you think of that? I can certainly afford more, but will more expensive cards really give me a noticeable/significant increase in speed?

Also, the Mac Pro has a maximum 300-watt power supply, which isn't enough for the GTX570 and GT 120. Thus, how should I bring it up to the requisite wattage?


dezzy
 
"the performance is mostly similar and even a bit better at times on the Fermi cards"

You have numbers that are different from barefeats.com? The 570/580s that are sold by macvidcards and others are MUCH faster, not just a bit better, than the 6000 in any CUDA operations not involving double precision. Those 570/580s also have an EFI ROM, so one does get the boot screen. I have a 2.5GB 570 coming and am looking forward to being able to run Resolve 9 at high speed.

Y'all be cool,
Robert


Not sure what you're asking / saying. The portion of the statement you quoted was me commenting on CUDA performance between Kepler and Fermi GPUs for single-precision 32-bit computations.

The 570 / 580 are great for performance on the Mac. Mac support for Kepler cards is, uh... mostly non-existent. And the same goes for proper support for the Quadro 6000. The GeForce cards are faster than the Quadro 6000 in most regards, at least the GTX580 is. On OSX, both would be, as the Quadro 6000 doesn't run properly on OSX, hacked EFI or not.

Not sure what numbers at Barefeats you're looking at. I haven't looked at their benchmarks for cards in some time. Last I looked, they were somewhat outdated, at least for Kepler cards, and didn't take into account current support for GTX cards on OSX... But that may not be the case now, I'm too lazy to look.

Jeff,

Thanks for all the advice you've given on graphics cards...

I use Resolve 9.0, and I just ordered a new Mac Pro this week, so I'm deciding on graphics cards now. I'm thinking that I should just stick to the Resolve Config Guide of an Nvidia GT120 for GUI and a GTX 570 for GPU. What do you think of that? I can certainly afford more, but will more expensive cards really give me a noticeable/significant increase in speed?

Also, the Mac Pro has a maximum 300-watt power supply, which isn't enough for the GTX570 and GT 120. Thus, how should I bring it up to the requisite wattage?


dezzy


With Resolve 9, you no longer need a separate GUI card. So I would forget about the GT120, as running a separate, lower-spec'd GUI card will be a hassle for most everything else (like Premiere) and it wastes a valuable slot.

The Mac Pro's power supply is larger than 300W. You have roughly a 300W envelope to fit your graphics card -- namely slot-1 plus the two PCIe power connectors. A GTX570 fits into the power profile just fine. A GTX580 does not, and while people are doing it successfully, I would recommend powering a GTX580 via a supplementary power supply.
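The envelope math, for the curious, using the standard 75W slot / 75W 6-pin ratings and nVidia's spec-sheet TDPs (rough numbers; board-partner cards vary):

```cpp
// power_budget.cpp -- does a card fit the Mac Pro's PCIe power envelope?
#include <cstdio>

int main() {
    // What one card can draw: 75W from the slot + 75W from each of the
    // two 6-pin PCIe connectors = 225W (the ~300W figure is the total
    // envelope across all slots).
    const int per_card_budget = 75 + 2 * 75;

    // nVidia spec-sheet TDPs for the cards discussed in this thread.
    const struct { const char* name; int tdp; } cards[] = {
        {"GT 120",      50},
        {"Quadro 4000", 142},
        {"GTX 570",     219},
        {"GTX 580",     244},
    };
    for (const auto& c : cards)
        std::printf("%-12s %3dW  %s\n", c.name, c.tdp,
                    c.tdp <= per_card_budget ? "fits" : "needs external power");
    return 0;
}
```

The GTX570 at 219W just squeaks under the 225W a dual-connector card can draw; the GTX580 at 244W does not, which is why it wants the supplementary supply.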

Adding another GPU will increase your node capacity in Resolve, but this may or may not be beneficial, depending on how complex your grading setups tend to be. You're only going to get one card like a GTX570 into a Mac Pro. If you want to add a second, you'll have to place it in a PCIe expander box like the Cubix or Cyclone. That gets expensive and cumbersome.
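For what it's worth, "adding another GPU" at the CUDA level just means giving each device its own slice of the work, roughly as sketched below; how Resolve actually schedules its nodes across cards is its own business:

```cpp
// multi_gpu.cu -- run a kernel on every CUDA device present, one at a time.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void work(float* buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = buf[i] * 0.5f + 1.0f;   // stand-in for a grading node
}

int main() {
    int devs = 0;
    cudaGetDeviceCount(&devs);
    const int n = 1 << 20;                      // elements per device
    for (int d = 0; d < devs; ++d) {
        cudaSetDevice(d);                       // each card gets its own slice
        float* buf;
        cudaMalloc(&buf, n * sizeof(float));
        cudaMemset(buf, 0, n * sizeof(float));
        work<<<(n + 255) / 256, 256>>>(buf, n);
        cudaDeviceSynchronize();
        cudaFree(buf);
        std::printf("device %d done\n", d);
    }
    return 0;
}
// A real app would use streams to run the devices concurrently instead of
// looping over them sequentially like this sketch does.
```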
 
Isn't there a 6xx card that would work for card 2 in a single slot with no extra power?

:)
 
It seems that in a Mac Pro, it comes down to a choice between one great GPU (GTX570) and two mediocre ones (e.g. a Radeon 6850 slim + GTX 560), because of the scant slots and power. Jeff, which option do you think would be better, both for Resolve and otherwise?
Also, I'm surprised to hear that Kepler doesn't perform well on Mac. I've read elsewhere of people getting awesome results in Resolve with a GTX 680, the only downside being the lack of EFI.
 
With Resolve 9, you no longer need a separate GUI card. So I would forget about the GT120, as running a separate, lower-spec'd GUI card will be a hassle for most everything else (like Premiere) and it wastes a valuable slot.

The Resolve 9 Mac config guide clearly states: "While a single GPU for both GUI and image processing is supported, Resolve 9 is optimized for using a dedicated GPU for the GUI and one or more dedicated GPUs for image processing." Seems like it's still a really good idea to run dual GPUs with one dedicated for GUI. Thoughts?
 