NVIDIA Unleashes Graphics Monster With New Quadro M6000

HardOCP News

The NVIDIA Quadro M6000 is professional graphics in beast mode. It provides everything you need in one powerful card. 12GB of GPU memory. 7.0 teraflops of peak single-precision performance. Support for up to four 4K displays. And the incredible power efficiency of our Maxwell architecture. Quadro is all about professional design and visualization. We design, build, test and support Quadro specifically for professional users. And it’s certified on more than 100 popular professional applications. The M6000 ups the graphics game with 30 percent faster performance, on average, across a range of professional applications. This includes 2.4x faster ray tracing and 36-bit color support for high-dynamic range displays.
 
only 8 times faster in ray-tracing? I thought it would be so much faster.
 
Why are double-precision figures absent from the marketing material?
DP is often used for accurate model simulation and even regular MATLAB work.
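To make the DP point concrete: accumulation error is the classic reason simulation-style workloads want FP64. A small plain-Python sketch (Python floats are IEEE doubles, so FP32 is emulated here by round-tripping through `struct`; the loop count and values are arbitrary choices for the demo):

```python
import struct

def f32(x: float) -> float:
    """Round a Python float (IEEE double) to the nearest IEEE single (FP32)."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Accumulate 0.1 one hundred thousand times: once in emulated single
# precision, once in native double precision. The exact answer is 10000.
sp, dp = 0.0, 0.0
for _ in range(100_000):
    sp = f32(sp + 0.1)
    dp += 0.1

err_sp = abs(sp - 10_000.0)  # single precision drifts visibly
err_dp = abs(dp - 10_000.0)  # double stays tight
```

The FP32 accumulator drifts by a clearly visible amount, while the FP64 one stays many orders of magnitude closer to the true sum — the kind of gap that matters over millions of timesteps in a solver.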
 
Hmm, odd product. Double precision still seems absent, as on the other Maxwell cards, VRAM stays at 12GB, and only four 4K screens are possible. It seems NVIDIA sacrificed a lot to get single precision up to 7 TFLOPS, which is good, but I was expecting much more from a newer-gen chip with (apparently) no double precision.
 
only 8 times faster in ray-tracing? I thought it would be so much faster.

That seemed odd to me, too. When doing ray tracing in After Effects, where a CPU would take 2 hours, my GTX 470 would take about 10 minutes. If that.
 
That seemed odd to me, too. When doing ray tracing in After Effects, where a CPU would take 2 hours, my GTX 470 would take about 10 minutes. If that.

Perhaps the fastest CPU currently out is pretty damn good at it?

Anand had a couple of things to say about double precision and GM200, which I thought made it pretty clear that, at least for the moment, compute capability takes a back seat to just about everything else in Nvidia's upcoming GPU offerings.

It may be a gesture to all those who invested in the first Titan, letting them get a bit more work done with their money before having a viable option to upgrade to?
 
Hmm, odd product. Double precision still seems absent, as on the other Maxwell cards, VRAM stays at 12GB, and only four 4K screens are possible. It seems NVIDIA sacrificed a lot to get single precision up to 7 TFLOPS, which is good, but I was expecting much more from a newer-gen chip with (apparently) no double precision.

Well it wasn't on the Maxwell die. They removed it completely. This is an odd step backwards...
 
Maxwell sacrificed DP performance to dedicate all die space to SP; it's actually one of the major reasons it's such an efficient design.

If you need DP you'll have to wait for Pascal. This is what happens when we get stuck on the same process node for so long...
 
Maxwell sacrificed DP performance to dedicate all die space to SP; it's actually one of the major reasons it's such an efficient design. If you need DP you'll have to wait for Pascal. This is what happens when we get stuck on the same process node for so long...

It is a different story: NVidia does not give a damn about DP anymore. Their target is now machine learning, which is potentially a grizzillion times bigger market than DP (think, e.g., about self-driving cars). Pascal in fact will be optimized not for DP but for sub-SP, since those new applications do not even need full SP. DP will be served better by Intel's Knights Landing chips, which may act as a coprocessor and as a general processor with up to 72 cores.
 
It is a different story: NVidia does not give a damn about DP anymore. Their target is now machine learning, which is potentially a grizzillion times bigger market than DP (think, e.g., about self-driving cars). Pascal in fact will be optimized not for DP but for sub-SP, since those new applications do not even need full SP. DP will be served better by Intel's Knights Landing chips, which may act as a coprocessor and as a general processor with up to 72 cores.

Yeah, that makes sense. Let's drop the market that Nvidia pretty much dominates (compared to Intel) and that generates millions.

Nvidia had to make do with SP, as it just didn't have enough die space for proper DP.

Pascal is mixed precision. Rumor says it will have FP64 units that can double as FP32. It could also mean that it only uses FP32 doubling as FP16, but you can do that already, so it's less likely.
 
Yeah, that makes sense. Let's drop the market that Nvidia pretty much dominates (compared to Intel) and that generates millions.

Nvidia had to make do with SP, as it just didn't have enough die space for proper DP.

Pascal is mixed precision. Rumor says it will have FP64 units that can double as FP32. It could also mean that it only uses FP32 doubling as FP16, but you can do that already, so it's less likely.

If it makes you feel better, nothing before the GTX400 series even had FP64.
Everything from the GTX200 series on back only had FP32/FP16.

I get what you are saying, but I believe NVIDIA's hand was forced to do this in order to get more performance out of the GPU/die-size ratio.
If NVIDIA had included more FP64 units into Maxwell, we would probably be looking at a 300-400 watt TDP, or at a minimum, a much lower clock-speed, which negates the whole point and purpose of this generation and architecture.

Kepler still provides a lot of FP64 performance (non-GeForce), and I'm sure NVIDIA will ride the coattails of that for some time.
 
Well it wasn't on the Maxwell die. They removed it completely. This is an odd step backwards...

Depends on their business plan, and their reaction to perhaps unexpected delays, I guess. Either way, it seems NVIDIA has handed a golden opportunity to AMD and their double-precision beast, the W9100, to gain market share in the professional market.
 
Yeah, that makes sense. Let's drop the market that Nvidia pretty much dominates (compared to Intel) and that generates millions. Nvidia had to make do with SP, as it just didn't have enough die space for proper DP. Pascal is mixed precision. Rumor says it will have FP64 units that can double as FP32. It could also mean that it only uses FP32 doubling as FP16, but you can do that already, so it's less likely.
If it makes you feel better, nothing before the GTX400 series even had FP64. Everything from the GTX200 series on back only had FP32/FP16.
I get what you are saying, but I believe NVIDIA's hand was forced to do this in order to get more performance out of the GPU/die-size ratio. If NVIDIA had included more FP64 units into Maxwell, we would probably be looking at a 300-400 watt TDP, or at a minimum, a much lower clock-speed, which negates the whole point and purpose of this generation and architecture. Kepler still provides a lot of FP64 performance (non-GeForce), and I'm sure NVIDIA will ride the coattails of that for some time.

You guys have to reboot your brains, and this is best done by watching the whole series of what NV's boss had to say at the GPU Tech Conference 2015. You will then notice he had only one sentence about DP: "If you need DP, go with the Titan Z." The rest was about the new world of applications relying on machine learning and vision, how the Pascal architecture will be optimized for it, and how colossal the new market is. Note the part in which Elon Musk joined him on stage.

Machine learning and vision need very flexible floating point, not only FP32/FP16 but also in-between formats, e.g. FP24. DP is a miniature market by comparison. Moreover, Intel's Knights Landing chips will change the landscape for DP. With Xeon compatibility and up to 72 cores, it will be possible to make cards and/or build standalone workstations with these chips, so imagine dual-/quad-Knights Landing workstations with 6/12 DP teraflops and a fully flexible processing architecture.

DP is thus becoming marginal for NVidia; they may build FP64 from combined FP32 units in Pascal chips if it fits the model of very flexible FP units and adds ultra-low overhead. This is far from certain, since the main thrust will be sub-32 FP.
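The point about machine learning tolerating reduced precision is easy to see in plain Python (`struct` has supported the IEEE half-precision "e" format since 3.6); the specific values below are just illustrative:

```python
import struct

def f16(x: float) -> float:
    """Round a Python float (IEEE double) to the nearest IEEE half (FP16)."""
    return struct.unpack("e", struct.pack("e", x))[0]

# FP16 has a 10-bit mantissa, so above 2048 the spacing between
# representable values is 2.0 -- odd integers simply round away.
big = f16(2049.0)    # rounds to 2048.0
# Small constants are also stored only approximately.
tenth = f16(0.1)     # ~0.0999755859375, not 0.1
```

Errors like these are fatal for a long-running physics solver but are lost in the noise of a neural network's weights, which is why ML workloads can trade mantissa bits for throughput while HPC cannot.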
 
You guys have to reboot your brains, and this is best done by watching the whole series of what NV's boss had to say at the GPU Tech Conference 2015.

Did I say something that was incorrect?
You and I already discussed this and I agreed with you the last time.

You might want to re-read what I posted; I never said anything that countered what you have stated time and time again.
 
Well, we know it won't play Doom and I certainly don't have money to burn, but.....

Out of curiosity.....

How well would it do with current DX9-to-DX11 applications/games?

It has been some time since I looked at any materials on the professional GPUs; the last I read, they were kinda 'meh' with DX and 'uberish' with OpenGL. Is that still the case?
 