Desktop versus Workstation GPUs?

CustomModAddict

OK, someone school me on the difference between a typical workstation video card and a typical desktop video card. I am designing my next custom build, and since I am thinking about getting into 3D CGI and rendering alongside the video games I play, I was wondering whether a workstation card is better for both CGI and gaming. All my photography work is done on my Mac, but I would be using the PC for programs like SketchUp, Blender, 3ds Max, Maya, etc.

So should I just stick with 570s or 580s in SLI, or a 5970, or should I look at workstation cards?

Thanks.
 
Workstation cards have a lot more RAM; I don't know of any other differences.
 
Price and drivers are the typical big differences. Workstation cards have drivers better optimized for 3D/CGI work, whereas desktop cards have drivers better optimized for 3D gaming. As such, there are situations where a workstation card whose specs are significantly higher/faster than a desktop card's still delivers worse 3D gaming performance than that desktop card because of those drivers.
 
from my limited knowledge of this: get an nvidia card for the CUDA?
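To put "the CUDA" in concrete terms: it is just general-purpose compute on the GPU, which is what GPU-accelerated renderers lean on. Here is a minimal sketch of the kind of kernel such a renderer might launch across millions of threads (a hypothetical example, not taken from any of the apps above; assumes the CUDA toolkit and nvcc):

Code:
// Hypothetical minimal CUDA sketch: a brute-force brightness pass over a
// float image buffer, the sort of embarrassingly parallel work that gets
// offloaded to the card. Not taken from any real renderer.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void brighten(float *pixels, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per component
    if (i < n)
        pixels[i] *= gain;
}

int main()
{
    const int n = 1920 * 1080 * 3;              // one RGB float frame
    const size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);          // host-side frame
    for (int i = 0; i < n; ++i) h[i] = 0.5f;

    float *d;
    cudaMalloc(&d, bytes);                      // device-side copy
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    brighten<<<blocks, threads>>>(d, n, 1.2f);  // ~6 million threads in one launch
    cudaDeviceSynchronize();

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    printf("first component after pass: %f\n", h[0]);  // expect 0.6

    cudaFree(d);
    free(h);
    return 0;
}

A GeForce and a Quadro built on the same chip run this identically; as noted above, the Quadro premium is mostly about drivers, RAM and support.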
 
Drivers and a few hardware changes.
Gaming drivers have to be designed and redesigned every few months because of new game releases and bugs with older games.

Workstation drivers only have to drive maybe 10 programs. Those programs change maybe once every couple of years, so the driver programmers can really hammer out rock-solid drivers.
 
I heard the new Sandy Bridges are even faster than CUDA at floating point. Hmmm.
 
So basically I should wait until I get really serious with 3D CGI applications before I invest in a workstation GPU. Thanks for all the feedback, guys.
 
So basically I should wait until I get really serious with 3D CGI applications before I invest in a workstation GPU. Thanks for all the feedback, guys.
Absolutely, but I have also yet to come across an instance where a workstation GPU has been bad for games. That's not what they are made and optimized for, but they should work just about as well for games as a comparable consumer card. If not, it should be possible to run consumer drivers on them with some modding.
 
Save a lot of cash, get the latest Nvidia graphics card (560-580 GTX), and use almost any 3D app with no major issues. There is a difference between the two (pro and gaming cards), but IMO not enough to justify a 10x cost increase. Unless you have tight deadlines and a pipeline requiring certain special features, get the gaming cards and rock out.
 
What about virtualization? Is that restricted to workstation cards, or could one do it with consumer cards as well?
 
Save a lot of cash, get the latest Nvidia graphics card (560-580 GTX), and use almost any 3D app with no major issues. There is a difference between the two (pro and gaming cards), but IMO not enough to justify a 10x cost increase. Unless you have tight deadlines and a pipeline requiring certain special features, get the gaming cards and rock out.

You would be surprised at the difference between a GeForce and a Quadro based on identical hardware. It could be the difference between modeling with a few hundred thousand polys and a few million at a time.

You can find GeForce/Radeon cards that can be flashed to Quadro/FireGL for the increased accuracy.
 
Yeah, but it also depends on what apps you use. This older thread from the Autodesk forums pretty much sums up what I have seen so far.
 
So basically I should wait until I get really serious with 3D CGI applications before I invest in a workstation GPU. Thanks for all the feedback, guys.

Pretty much. If you do this for a living then it might be worth it, but if you are just "thinking about it" I would not. And from what I have been told, soft-modding recent cards is pretty hard or impossible, as they actually do change the hardware.
 
I heard the new Sandy Bridges are even faster than CUDA at floating point. Hmmm.

Only if you are converting videos using the built-in conversion acceleration hardware (Quick Sync, new to Sandy Bridge). This performance difference does not translate to any other realm of GPU-accelerated computing (or professional workstation applications).

Otherwise, Sandy Bridge is just a bit faster than Nehalem.
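For rough scale (peak theoretical single-precision throughput, back-of-envelope numbers assuming an i7-2600K against a GTX 580): the CPU tops out around 4 cores × 3.4 GHz × 16 FLOPs/cycle (AVX) ≈ 220 GFLOPS, while the GPU is rated at about 512 shaders × 1.544 GHz × 2 FLOPs/cycle ≈ 1580 GFLOPS, so raw floating point is still roughly 7x in the GPU's favor.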
 
Prior to the 400 and 500 series cards from Nvidia, the consumer cards played nicely with higher-end 3D programs like Blender and 3ds Max. With the latest offerings, however, Nvidia has severely crippled the cards' performance, rendering them very slow in these apps whether you use OpenGL or Direct3D. In my case, my GTX 470 is roughly half the speed of my 8800 GTX, and right around the same speed as my old 9800 Pro.

If you check through the Autodesk forums you will see a lengthy thread with other users having the same problem. Nvidia has confirmed they are aware of the slow performance with these cards, and insist that users wanting the same performance as their last-gen cards now buy a more expensive Quadro-based card.

Absolute bollocks that they would neuter a card so much that a card three generations removed is twice as fast.

My previous thread with benchmarks:
http://www.hardforum.com/showthread.php?t=1541555

Autodesk thread:
http://area.autodesk.com/forum/auto...ce-gtx-480-problems-in-3ds-max-2011/page-last
 