NVIDIA "Volta" GPUs at May 2017?

TPU sources WCCFTech, which sources Fudzilla. WCCFTech is assuming that if Volta is also on 16nm FinFET instead of 10nm, that would move its timeline up by a whole year...

It's just clickbait, move along. The post by drakken about electric sine waves in a GPU was more entertaining.
 
The first Volta will come at the next GTC and will be a Tesla. So expect Volta GeForce cards a couple of months after that. And forget any silly HBM2 dreams outside of the Tesla GV100.
 
Nvidia said in their recent chip inspection tour that they are already sampling 10nm test chips. This was April.

So next year is possible but not sure if we'll see it in dGPUs then.
 
I wonder how long it'll take for 10nm GPUs.

High volume 10nm could be really far away. I mean, the 14nm and 16nm FinFET processes today are essentially 20nm with smaller gates through FinFET process changes.
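A quick back-of-the-envelope on why the node names are mostly marketing. This is a minimal sketch with illustrative numbers only, since the marketing names no longer map to a single physical dimension:

```python
# Ideal area shrink scales with the square of the linear feature ratio.
def ideal_area_scale(old_nm: float, new_nm: float) -> float:
    """Area of the new node relative to the old one, assuming a full shrink."""
    return (new_nm / old_nm) ** 2

# If "16nm" were a true shrink of 20nm, die area would drop to ~64%:
print(f"20nm -> 16nm ideal: {ideal_area_scale(20, 16):.0%} of original area")

# But 16FF/14nm largely reuse a 20nm-class metal stack, so wiring density
# barely moves; the FinFET gate is what changed, not the interconnect:
print(f"20nm back-end reused: {ideal_area_scale(20, 20):.0%} (no real shrink)")
```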
 

Yeah, I was wondering because TSMC was saying next year. What that means for GPUs is another story... since they don't seem to be the top priority. I've also read that TSMC's 10nm is roughly equal to Intel's 14nm. Marketing nonsense.

I like that guy in the video.
 
If this turns out to be true, who is doing the catching up? Nvidia with HBM2, or AMD, with Vega being positioned as Pascal's rival just as Volta is released?
 
Well, if this is true, then Pascal will be a very short-lived architecture. A year after the 1080 we will be getting 1180s with HBM2: faster, cooler, and probably with properly working async. Guess I'll hold on to my 980 Ti until 2017.
 
We won't see any HBM2 (besides GV100). And async already works correctly.
 

Because HBM/HBM2 is expensive and doesn't actually offer a major improvement over GDDR5X for gaming. It will be useful someday, but that day isn't coming soon, and only if high-end GPUs at some point start to be genuinely bandwidth starved and bandwidth bottlenecked.
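For what it's worth, the raw numbers back this up. A minimal sketch of the peak-bandwidth arithmetic, using published bus widths and per-pin rates:

```python
# Peak memory bandwidth in GB/s: bus width (bits) / 8 * per-pin rate (Gbps).
def peak_bw(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# GTX 1080-class GDDR5X: 256-bit bus at 10 Gbps per pin
print(peak_bw(256, 10.0))       # 320.0 GB/s

# One HBM2 stack: 1024-bit bus at up to 2 Gbps per pin
print(peak_bw(1024, 2.0))       # 256.0 GB/s

# Four stacks (GP100/GV100-class packages): ~1 TB/s ceiling
print(4 * peak_bw(1024, 2.0))   # 1024.0 GB/s
```

So HBM2's headroom is real, but a wide GDDR5X bus already covers what current games demand.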
 
But why does the Fury X overtake the 980 Ti at 4K while being weaker at other resolutions? Not to mention that the Fury X has 4 GB and should be memory starved and offer lower performance at 4K, but it's the total opposite. I'm gaming at 4K, and that is important to me.
 
/Waits for a post touting Nvidia's superior memory compression, 16nm, Godfather Jen, and 2.0 GHz at 60 degrees overclocking.

'These are not the droids you are looking for.'

The way I see it, regardless of gaming, HBM is used on the top Nvidia chip for a reason, cost aside: because it's the best. Otherwise they'd have used GDDR5X.
 
That's almost entirely down to poor AMD driver management. As I always say, AMD has great hardware that's poorly optimized and poorly managed via drivers. The same story has repeated since the HD 7900 series: AMD cards "tend" to perform better at higher resolutions (7970 versus GTX 680, R9 290X versus 780 Ti/Titan). AMD's problem is drivers, and that's evidenced in DX12, where the Fury X performs as intended. The Fury X has the hardware to beat a 980 Ti at any resolution, but the 980 Ti's driver optimization is simply much better. AMD's CPU overhead doesn't help at lower resolutions either, while that same overhead is greatly reduced at 4K, where in practically every scenario you are completely GPU limited and CPU overhead plays a far smaller role.
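To make the CPU-overhead point concrete, here's a toy model, with all numbers made up for illustration: delivered frame rate is roughly the minimum of a CPU/driver-limited rate and a GPU-limited rate that falls with pixel count.

```python
# Toy model: fps = min(CPU/driver-limited fps, GPU-limited fps).
# GPU-limited fps is assumed to scale inversely with pixel count.
def delivered_fps(cpu_fps: float, gpu_fps_1080p: float, pixel_ratio: float) -> float:
    return min(cpu_fps, gpu_fps_1080p / pixel_ratio)

# Hypothetical card whose driver caps CPU-side submission at 90 fps:
for res, ratio in [("1080p", 1.0), ("1440p", 1.78), ("4K", 4.0)]:
    print(res, round(delivered_fps(cpu_fps=90, gpu_fps_1080p=200, pixel_ratio=ratio)))
# 1080p and 1440p land on the 90 fps driver ceiling; at 4K the GPU limit
# (200 / 4 = 50 fps) binds instead, so driver overhead stops mattering.
```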

That's before even mentioning that Nvidia has much more advanced texture/memory/delta color compression than AMD, which greatly reduces bandwidth needs. AMD cards have always been known as bandwidth-starved GPUs that gain real performance from vRAM overclocking, whereas that isn't the case with Nvidia cards: on Maxwell you can overclock the memory by a full gigahertz and see little performance increase even at 4K. That alone speaks to the advanced bandwidth management. AMD cards need all the help HBM gives them; Nvidia cards don't.
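A rough sketch of the effective-bandwidth argument. The compression ratios here are assumptions for illustration, not measured values; only the raw bandwidth figures are spec-sheet numbers:

```python
# Effective bandwidth = raw bandwidth * average compression ratio.
def effective_bw(raw_gb_s: float, compression_ratio: float) -> float:
    return raw_gb_s * compression_ratio

# 980 Ti-style: 336 GB/s GDDR5, assuming ~1.3x from delta color compression
print(effective_bw(336, 1.3))   # ~437 GB/s effective

# Fury X-style: 512 GB/s HBM, assuming little gain from compression
print(effective_bw(512, 1.0))   # 512 GB/s effective
```

Under those assumed ratios, the raw-bandwidth gap narrows considerably.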

Oh, and yes, the Fury X is easily bottlenecked by its vRAM amount, but AMD does some tricks via drivers to create a "dynamic" vRAM pool that spills into system RAM to help the Fury X. It doesn't always help, since, as everyone knows, system RAM is way slower than vRAM...
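A toy model of that spill-over behavior, with the vRAM and PCIe figures as illustrative assumptions: once the working set exceeds 4 GB, the overflow is served over PCIe, and the time-weighted blended bandwidth collapses quickly.

```python
# Blended bandwidth once a working set spills past vRAM into system RAM.
# Assumed: 512 GB/s vRAM (Fury X-class HBM), ~16 GB/s PCIe 3.0 x16 path.
def blended_bw(working_set_gb: float, vram_gb: float = 4.0,
               vram_bw: float = 512.0, pcie_bw: float = 16.0) -> float:
    if working_set_gb <= vram_gb:
        return vram_bw
    spill = (working_set_gb - vram_gb) / working_set_gb
    # Harmonic (time-weighted) mix: the slow PCIe traffic dominates quickly.
    return 1.0 / ((1.0 - spill) / vram_bw + spill / pcie_bw)

for ws in (3.5, 4.0, 4.5, 5.0, 6.0):
    print(f"{ws} GB working set -> ~{blended_bw(ws):.0f} GB/s")
# Spilling just 0.5 GB past the 4 GB pool drops the blend to ~115 GB/s.
```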
 
The 980 Ti isn't a GDDR5X part...
 
Dang. As much as I love technology and growth, I kinda want it to slow down, lol. Nah, keep going! Ready for the Matrix!
 