NVIDIA GTC 2017 Live Stream

That graph pretty much said that Volta uses less than a quarter of the power of Pascal - holy fucking efficiency, Batman. Kind of makes you wonder what's coming for consumer-end Volta now!
 
That new chip is a monster - with gaming SKUs launching this year, wow.

Poor AMD just can't catch a break.
 
Any specs? Couldn't/can't watch the stream.

815mm² on TSMC's new 12nm FFN process. 5,120 CUDA cores. Peak FP32 performance of 15 TFLOPS, 7.5 TFLOPS FP64, and 120 TFLOPS for deep learning via the new "Tensor Cores" - looks like Nvidia's direct answer to Google's TPU. 300W TDP (same as GP100). 16GB HBM2. NVLink 2.
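If anyone wants to sanity-check those figures, they fall out of the core count and clock pretty directly. Rough back-of-the-envelope below - the ~1455 MHz boost clock and the 640 Tensor Cores (8 per SM x 80 SMs) are my own assumptions, not something quoted on stage, so treat this as a sketch:

Code:
// Back-of-the-envelope V100 peak throughput (assumed ~1455 MHz boost clock).
#include <stdio.h>

int main(void) {
    const double clock_ghz    = 1.455; // assumption, not an official figure
    const double cuda_cores   = 5120;  // FP32 lanes
    const double tensor_cores = 640;   // assuming 8 per SM x 80 SMs

    double fp32   = cuda_cores * clock_ghz * 2 / 1000;        // FMA = 2 FLOPs/clk
    double fp64   = fp32 / 2;                                  // half-rate FP64
    double fp16   = fp32 * 2;                                  // packed 2x FP16
    double tensor = tensor_cores * 64 * 2 * clock_ghz / 1000;  // 4x4x4 matrix FMA = 64 MACs/clk

    printf("FP32 %.1f | FP64 %.1f | FP16 %.1f | Tensor %.1f TFLOPS\n",
           fp32, fp64, fp16, tensor);
    return 0;
}

Comes out to roughly 14.9 / 7.5 / 29.8 / 119 TFLOPS, which lines up with the keynote numbers.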
 

If they release a gaming version of a GPU that big they will be testing the $2000 price point, and no doubt some around here will buy it.
 
I just finally got my hands on the P100. It feels obsolete now, looking at the jump to 7.5 TFLOPS and a reduction to 150W.

Anyone want to do a GoFundMe for a DGX or VGX? He said if I preorder here now (I'm at GTC), at $149k, I get a free Volta upgrade...
 
If they release a gaming version of a GPU that big they will be testing the $2000 price point, and no doubt some around here will buy it.
They won't. It's clear that Nvidia is separating its consumer line from HPC, like they did with GP100.

But surely the consumer version will have the same architecture, minus the things you don't need on a gaming card like FP64 and the tensor cores.
 
I don't imagine Tensor Cores being as much use on the gaming front in the same sense that they're important to machine learning. The fused A x B + C matrix operation is fairly common in TensorFlow-style matrix math, but not necessarily in games.
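For anyone curious what that operation looks like from the programming side: each Tensor Core does a small fused matrix multiply-accumulate, D = A x B + C, with FP16 inputs and FP32 accumulation. A rough single-tile sketch using CUDA's warp-level wmma interface (mma.h) is below - the API works on 16x16 tiles even though the hardware chews on 4x4 blocks internally, and this is illustrative only, not a real GEMM kernel:

Code:
#include <mma.h>
using namespace nvcuda;

// Illustrative only: one warp computes a single 16x16 tile of D = A*B + C.
// FP16 inputs, FP32 accumulator; pointers are assumed to hold 16x16 tiles.
__global__ void tensor_tile(const half *a, const half *b, const float *c, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::load_matrix_sync(a_frag, a, 16);                         // load A tile
    wmma::load_matrix_sync(b_frag, b, 16);                         // load B tile
    wmma::load_matrix_sync(acc_frag, c, 16, wmma::mem_row_major);  // load C tile

    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);            // acc = A*B + acc

    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major); // write D tile
}

(Launched with a single warp, e.g. tensor_tile<<<1, 32>>>(a, b, c, d).)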
 

There have been studies looking at using tensor products for rendering - more of a professional-segment focus, I agree - but depending upon the flexibility of the Tensor Core/function unit it may have some applications for consumers, although like you I think it will be limited to, say, upper Tesla/Quadro/maybe a Volta Titan (now more of a prosumer card) and never plain consumer.
That said, even without it there is a high chance we will now see FP16-accelerated mixed precision on the FP32 cores in consumer parts, without Tensor.
Remember V100 still does 2x-rate FP16 (so now 30 TFLOPS), and on top of that around a massive 4x-accelerated FP16 performance using the Tensor functions (depending upon task, I agree) relative to P100.
Tensor also accelerates FP32 accumulation in a similar way, but that is around a 2x performance gain in the same task-dependent functions relative to P100.
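To illustrate the "2x FP16" part: that's just the packed-half2 path, where each FP32 lane issues one half2 FMA per clock, i.e. two FP16 multiply-adds at once. A minimal sketch (kernel name made up; the intrinsic is __hfma2 from cuda_fp16.h):

Code:
#include <cuda_fp16.h>

// Each thread does one half2 fused multiply-add: two FP16 results per FP32 lane
// per clock, which is where the 2x FP16 rate comes from. Illustrative sketch only.
__global__ void fma_fp16x2(const half2 *a, const half2 *b, half2 *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = __hfma2(a[i], b[i], c[i]);  // c = a * b + c on a pair of halves
}

The Tensor path gets its further speed-up by doing the same kind of FP16 multiply / FP32 accumulate, but on whole matrix blocks per clock instead of scalar pairs.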

Nvidia has made a great multi-function architecture with Volta that is easily modularised, seriously putting the HPC competition under pressure while also offering a lot for other segments.
Cheers
 
I like how Nvidia is so confident in their market position that they're totally chill announcing $70K and $150K hardware, and it's a good deal :eek:
 
lol, Intel got crushed with this too.

Jensen Huang is a kick-ass CEO. Imagine if, instead of AMD/ATI merging, AMD/Nvidia had merged (with Huang as CEO, of course).

Intel would probably be 2nd place in desktop CPUs by now. Maybe they could have picked up ATI and had more resources to improve it.
 
Maybe, but Nvidia has never shown a strong interest in the CPU market outside its ARM offerings.
 
Well, it almost happened back in the mid-2000s, but Hector Ruiz didn't want Huang as the CEO.
 
Jensen was great to work for when I worked at Nvidia years ago. I've always wondered what life would have been like if I'd stuck around. Our section had this dancing Caddyshack doll that he'd trigger to dance to disrupt our trains of thought when he passed by.

I think this keynote served notice to Intel and AMD that they have some serious tech in the works. My company sprang for 2x P100s for me to play with and they were "hard" to get. Seeing the conference, I can see all the GPU allocations going to everyone wanting to sell a machine learning box.
 
This should light a fire under Intel's bum; hopefully we see something serious from them.
 
Maybe, but Nvidia has never shown a strong interest in the CPU market outside its ARM offerings.

Well, he might have more interest if he had a starting point. Trying to get into the x86 patent thicket is nearly impossible without owning AMD or Intel.

I just lament what actually happened: the merger of two 2nd-place companies that leaves both its CPU and GPU development starved for resources.
 
It would be neat to see Nvidia try to do something with IBM's OpenPOWER to challenge x86. Everybody always talks about ARM, but IBM's ridiculous specs are way more interesting to dream about.
 

To run Linux on?
 
Well getting MS to build Windows on anything other than x86 is a big hurdle, but they've done it before. This is all just wild fantasy of course.
 
To run Linux on?

Well, if we talk about purely hypothetical scenarios - the only way Linux could get popular would be if Nvidia and Valve joined forces and started a major push for it. But it looks like the Windows 10 threat wasn't big enough.

I still wonder why Nvidia didn't - Microsoft's long-term goal is obviously bringing everything into UWP, and shared code with the GCN-based Xbox is going to be a problem for them.
 

You can only keep making consoles with an ever more inefficient uarch for so long before you have to change. And if you don't, the other console maker will, and win it all. Once a CPU company, always a CPU company.
 