Volta GDDR6 release in May?

DK is not the US, to put it bluntly:
DK law is nightmare stuff for US lawyers... you would never get rich in DK suing anyone for anything.


Keep that in mind.


Yeah, the cowboy law system in the US has almost no merit in most of the rest of the world, where common sense tends to rule.

You don't get any money directly from being run over by a car or killed in traffic. That may be up to insurance and welfare, if any.
 
Automotive is certainly also the future.

Cloud, automation/automotive and IoT are the growth areas for the long term.
Yeah.
One aspect of the scarily high revenue potential in automotive is that it goes way beyond consumer cars; consider trains, aspects of planes, ships, buses, heavy goods vehicles, etc. Some of these would be about advanced safety features rather than just autopilot, but further in the future.
It has massive potential, and then there is the military as well.
Shorter term for me is, like you say, Cloud/GRID-type solutions: virtual GPU environments, and Deep Learning being applied to more segments, with training that can be reused in many diverse ways.

Cheers
 
Yeah, let's say $50-100 per car. It's certainly not $1000. But then software etc. comes on top.

It's not going to be some crazy revenue flow. I think $1B a quarter would be the absolute top, while $500-750M is more realistic.

Also, the car companies will either need to use Nvidia services/infrastructure for the training side of AI (and, to a lesser extent, for inferencing beyond the vehicle itself), or buy and build these themselves, along with the HW and SW that still involve Nvidia.
Cheers
 
What some need to appreciate, and why there will not be too large a gap after the launch of the GV104, is that Nvidia will also need to support the GV100 with a GV102.
This was one reason the Pascal Titan launched quite quickly (August 2016) as a prosumer card, 2 months after the GP104 (which in reality arrived at the beginning of June 2016), followed by the P40 (Tesla GP102, Oct 2016) and the P6000 (Quadro GP102, Oct 2016).

They will not launch the GV104 and GV102 at the same time, and there is a need to support the GV100 quite promptly with another card in the enterprise/HPC/scientific/academic space (especially for Deep Learning).
Therefore the GV104 will again launch roughly 2 months before the GV102 (which will be the new prosumer Titan, then followed by the actual Tesla/Quadro models).
One should expect the GV102 card by Feb '18 at the latest, and the more I think about this, the higher the chance it will be quite a bit earlier, maybe even before the end of this year or early Jan '18, if Nvidia manages a good roll-out of the V100 beginning Q3.

This is another reason why the GV104 will arrive early-to-mid Q4, or possibly even late Q3 if the rumours about an accelerated rollout are correct.
Cheers
 
I do not see any indication of increased clocks. It seems to me they are the same clock speeds as Pascal; the performance increase seems to be coming only from increased CUDA core counts here. Clocks are nearly identical or even a little lower. I am expecting around 50% at best from the added cores, as I really don't expect clock speeds to increase over Pascal.


Ah, found it:

https://www.hpcwire.com/2017/05/10/nvidias-mammoth-volta-gpu-aims-high-ai-hpc/

+ Higher clocks and higher power efficiency.

“It has a completely different instruction set than Pascal,” remarked Bryan Catanzaro, vice president, Applied Deep Learning Research at Nvidia. “It’s fundamentally extremely different. Volta is not Pascal with Tensor Core thrown onto it – it’s a completely different processor.”
 
DK is not the US, to put it bluntly:
DK law is nightmare stuff for US lawyers... you would never get rich in DK suing anyone for anything.


Keep that in mind.

We don't even have consistent laws across the US yet... every state is going a different route from what I've seen, and that's never going to work until a federal standard is adopted.
 
Completely different instruction set? I thought Nvidia intended to guarantee binary compatibility all the way back to G80.
 
Completely different instruction set? I thought Nvidia intended to guarantee binary compatibility all the way back to G80.
It will have compatibility, but you will not be getting the most out of the GPU.
The Caffe2 blog has a training comparison between FP32 with cuDNN 6 on P100 and FP16 with cuDNN 7 on V100, and the difference is 2.5x (remember one is FP32 and the other FP16 training, which improves it a lot, but note also that the 2x16 vector dot product is not a 100% performance gain over FP32; it is more like 60-80%).
With true optimisation, making full use of the new instruction set/arch, that gap would be much larger.

Edit:
Here is the link: https://caffe2.ai/blog/2017/05/10/caffe2-adds-FP16-training-support.html
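For a flavour of how different it is on the software side: Nvidia says CUDA 9 will expose the Tensor Cores through a warp-level matrix multiply-accumulate (WMMA) API. A rough sketch of one warp computing a 16x16x16 FP16 tile with FP32 accumulation; take the details as provisional until CUDA 9 actually ships:

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes D = A*B + C for a single 16x16x16 tile on the
// Tensor Cores: FP16 inputs, FP32 accumulation (requires sm_70).
__global__ void tensor_tile(const half *a, const half *b, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);          // start from C = 0
    wmma::load_matrix_sync(a_frag, a, 16);   // leading dimension = 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(acc, a_frag, b_frag, acc);
    wmma::store_matrix_sync(d, acc, 16, wmma::mem_row_major);
}
// Launch with a single warp: tensor_tile<<<1, 32>>>(dA, dB, dD);
```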

Cheers
 
"The news is circulating..."

Among whom, besides the WCCFTech ouroboros that has infected the tech news world with clickbait nonsense that even "reputable" sites like Tom's and Guru3D are now reposting?

"...but with the newly launched AMD high-speed graphic chips..."

Please. AMD isn't even a blip on NVIDIA's radar anymore. AMD is going on 3 generations behind in the graphics technology department; by the time Vega launches, they might be back to only 1 generation behind.


Then you woke up. Vega has been spotted with 16GB of HBM2 and a 1600 MHz boost clock. Simple math extrapolating from Fury X performance says it will compete with, or probably beat, the Nvidia 1080 Ti and Titan Xp (with more RAM as well). Not surprising, since AMD has gained on Nvidia in performance over the last several gens; for example, Fury X was within 5% or so of the 980 Ti last gen (whereas AMD used to be nowhere close to Nvidia's best). Since Nvidia has no other cards on the horizon and Vega will come out soon, that would make AMD faster than Nvidia, without doubt, for the first time in ages.
 
Then you woke up. Vega has been spotted with 16GB of HBM2 and a 1600 MHz boost clock. Simple math extrapolating from Fury X performance says it will compete with, or probably beat, the Nvidia 1080 Ti and Titan Xp (with more RAM as well). Not surprising, since AMD has gained on Nvidia in performance over the last several gens; for example, Fury X was within 5% or so of the 980 Ti last gen (whereas AMD used to be nowhere close to Nvidia's best). Since Nvidia has no other cards on the horizon and Vega will come out soon, that would make AMD faster than Nvidia, without doubt, for the first time in ages.
Of course it will compete with the comparably priced Nvidia cards; if it doesn't after this long, it's going to be the blunder of the century.
But being on par with, or slightly (~5%) faster than, the competition after 13 months is still really underwhelming, especially with Volta now on the horizon.
On top of that, there are rumors that availability is going to be pretty bad at launch because of HBM. So Vega launches in June, the next two months are a game drought, the stock situation is uncertain, and there's a possibility that Volta comes out Q3/Q4. Not exactly a rosy proposition IMO.
 
To clarify what I meant about the 1080 being a small die and that they should release the larger die first: that is exactly what AMD is doing with its 7nm roadmap.
Heaps of cores/HEDT/pay-the-premium comes first, then the mainstream stuff after that.
Instead of the mainstream stuff (1080) first, then the cut-down premium part down the line.
E.g. I'd rather pay the ever-encroaching $2k mark for a Titan if it lasted 2-3 years as top dog.
Same BS with Polaris and no Vega.

The node change is not there; 12nm is an optimization of 16nm, so yeah, that is not where the majority of the performance increase is coming from (linked in the other Volta thread). nV's architectural improvements gen to gen have been something AMD hasn't been able to compete with since the GCN launch.

This is obvious when looking at the expected die sizes, as I pointed out earlier.
The current Ti is 480ish mm²; the next Ti will be ~600mm².
Nvidia's architecture has improved, but really they have mostly relied on clock bumps the last gen or two. The free ride has mostly ended, I'd say, just like we saw with dCPUs, hence they're now running out of room on '12nm', which has led them to a larger die to regain the performance metric. That will make their next process shrink even more interesting to watch; I wonder if it'll take longer. They ditched 10nm for '12nm', so things are obviously not easy. Just like Pascal not being in the 2014 roadmaps, I'd say they sniffed trouble a while off.

You and others always mention they can't compete, but looking at Polaris vs an undervolted 1060, they are practically at parity. The tech is nearly there already.
 
This is obvious when looking at the expected die sizes, as I pointed out earlier.
The current Ti is 480ish mm²; the next Ti will be ~600mm².
Nvidia's architecture has improved, but really they have mostly relied on clock bumps the last gen or two. The free ride has mostly ended, I'd say, just like we saw with dCPUs, hence they're now running out of room on '12nm', which has led them to a larger die to regain the performance metric. That will make their next process shrink even more interesting to watch; I wonder if it'll take longer. They ditched 10nm for '12nm', so things are obviously not easy. Just like Pascal not being in the 2014 roadmaps, I'd say they sniffed trouble a while off.

You and others always mention they can't compete, but looking at Polaris vs an undervolted 1060, they are practically at parity. The tech is nearly there already.


Volta is different; it's 2x the perf/watt over Pascal.

AMD can't even compete with Pascal; they are 40% behind in perf/watt. And we can always undervolt a 1060 too. Also, the 1060 isn't the most power-efficient chip in the Pascal lineup; the 1080 is, so if we put that against Polaris, it's 100%, or 2x, the power efficiency. AMD has no chance of competing with Volta on that front with Vega. Every generation since the introduction of GCN it's been getting worse for AMD; they really need to change their design philosophies to start catching up, and with the increased costs associated with new nodes, they need cash to do it.
 
Yes, what podunk backwards European country are you from? One of those Euro countries that lets Muslims in to firebomb you daily? Or the crappy teeth and bad health care typical of Europeans? Post it, should be fun to compare your country's wealth and military power with that of the USA. (It's always the arrogant meddling leftist Euros talking like this.)

I'm a veteran from Denmark (where Shintai also resides)... you got something you want to say to me, Pinkie?
 
Yes, what podunk backwards European country are you from? One of those Euro countries that lets Muslims in to firebomb you daily? Or the crappy teeth and bad health care typical of Europeans? Post it, should be fun to compare your country's wealth and military power with that of the USA. (It's always the arrogant meddling leftist Euros talking like this.)

Seriously, how fucking old are you? You sound like a child!

Did talking about graphics cards and memory really get you into an emotional state where it made sense to post the above?

Grow the fuck up, or at least refrain from posting shite, because it would save us all time and effort.
 
To clarify what I meant about the 1080 being a small die and that they should release the larger die first: that is exactly what AMD is doing with its 7nm roadmap.
Heaps of cores/HEDT/pay-the-premium comes first, then the mainstream stuff after that.
Instead of the mainstream stuff (1080) first, then the cut-down premium part down the line.
E.g. I'd rather pay the ever-encroaching $2k mark for a Titan if it lasted 2-3 years as top dog.
Same BS with Polaris and no Vega.



This is obvious when looking at the expected die sizes, as I pointed out earlier.
The current Ti is 480ish mm²; the next Ti will be ~600mm².
Nvidia's architecture has improved, but really they have mostly relied on clock bumps the last gen or two. The free ride has mostly ended, I'd say, just like we saw with dCPUs, hence they're now running out of room on '12nm', which has led them to a larger die to regain the performance metric. That will make their next process shrink even more interesting to watch; I wonder if it'll take longer. They ditched 10nm for '12nm', so things are obviously not easy. Just like Pascal not being in the 2014 roadmaps, I'd say they sniffed trouble a while off.

You and others always mention they can't compete, but looking at Polaris vs an undervolted 1060, they are practically at parity. The tech is nearly there already.
The performance is similar between the 1060 and the 480, but power usage isn't, so not quite parity. Granted, most of us here will not give a shit about power usage, but to say they are at parity isn't quite true.
 
To clarify what I meant about the 1080 being a small die and that they should release the larger die first: that is exactly what AMD is doing with its 7nm roadmap.
Heaps of cores/HEDT/pay-the-premium comes first, then the mainstream stuff after that.
Instead of the mainstream stuff (1080) first, then the cut-down premium part down the line.
E.g. I'd rather pay the ever-encroaching $2k mark for a Titan if it lasted 2-3 years as top dog.
Same BS with Polaris and no Vega.



This is obvious when looking at the expected die sizes, as I pointed out earlier.
The current Ti is 480ish mm²; the next Ti will be ~600mm².
Nvidia's architecture has improved, but really they have mostly relied on clock bumps the last gen or two. The free ride has mostly ended, I'd say, just like we saw with dCPUs, hence they're now running out of room on '12nm', which has led them to a larger die to regain the performance metric. That will make their next process shrink even more interesting to watch; I wonder if it'll take longer. They ditched 10nm for '12nm', so things are obviously not easy. Just like Pascal not being in the 2014 roadmaps, I'd say they sniffed trouble a while off.

You and others always mention they can't compete, but looking at Polaris vs an undervolted 1060, they are practically at parity. The tech is nearly there already.
Pascal WAS on the roadmap in early 2014, I even have a Q1 presentation with it; it was in 2013 that it did not exist.

Why do you keep insisting Pascal was just a clock bump?

It introduced mixed-precision cores with true FP16 on GP100, and the DP4A/DP2A (INT8) instructions on GP102 and below, plus an improved PolyMorph Engine, Simultaneous Multi-Projection and Single Pass Stereo, improvements for VR, improved efficiency while also increasing performance and functionality, and Unified Memory (more for HPC).
And much more, but plenty was said about Pascal when it was announced, so I cannot be bothered to reiterate everything that came with it.
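For anyone wanting to see what that GP102-class INT8 path looks like in code, CUDA 8 exposes it as the __dp4a intrinsic (a 4-way 8-bit dot product accumulated into 32 bits); the kernel name and the crude reduction below are just mine for illustration:

```cuda
#include <cuda_runtime.h>

// Each 32-bit int packs four signed 8-bit values; __dp4a computes a
// 4-way int8 dot product and adds it to a 32-bit accumulator (sm_61+).
__global__ void dot_int8(const int *a, const int *b, int *out, int n) {
    int acc = 0;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        acc = __dp4a(a[i], b[i], acc);
    atomicAdd(out, acc);  // reduce the per-thread partial sums
}
```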
Cheers
 
So, what would it really take to saturate PCIe 3.0? Will Volta saturate it or how close does it come?
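For reference, the raw ceiling: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so an x16 link works out to

$$8\ \text{GT/s} \times \tfrac{128}{130} \times 16\ \text{lanes} \times \tfrac{1\ \text{byte}}{8\ \text{bits}} \approx 15.75\ \text{GB/s}$$

per direction, before protocol overhead.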
 
So, what would it really take to saturate PCIe 3.0? Will Volta saturate it or how close does it come?

Man, I highly doubt PCIe 3.0 will be saturated by Volta. Cards take years to catch up to PCIe specs, and then new ones get released. Usually PCIe specs are designed to last a good while.
 
Man, I highly doubt PCIe 3.0 will be saturated by Volta. Cards take years to catch up to PCIe specs, and then new ones get released. Usually PCIe specs are designed to last a good while.
It depends on the application. It has been shown that PCIe 3.0 x16 is saturated in some games; TechPowerUp shows this in their reviews.

It's not a major issue in most games, but some take tangible hits.
 
Man, I highly doubt PCIe 3.0 will be saturated by Volta. Cards take years to catch up to PCIe specs, and then new ones get released. Usually PCIe specs are designed to last a good while.

Everything is relative; it depends on the game, some being more sensitive to PCI-E bandwidth than others. So, cherry-picking a bit from some 1080p scenarios with a GTX 1080:

[Benchmark charts: Total War: Warhammer, Just Cause 3, Far Cry Primal and AC Syndicate at 1920x1080]


OK, that's just a bit of cherry-picking, but that's with JUST a GTX 1080. What could that mean for the 1080 Ti and Titan X Pascal with their huge bandwidth? What could it mean for Volta?

Technically we can say PCI-E 3.0 is saturated.
 
It depends on the application. It has been shown that PCIe 3.0 x16 is saturated in some games; TechPowerUp shows this in their reviews.

It's not a major issue in most games, but some take tangible hits.

lol you beat me to it.
 
Do you enjoy paying through the roof for graphics cards?
A look at the 1080 Ti should have explained to you by now that Nvidia milks customers in a mathematically optimal way, dropping prices halfway into a card's lifecycle (together with the release of the big-chip consumer SKU).
 
Everything is relative; it depends on the game, some being more sensitive to PCI-E bandwidth than others. So, cherry-picking a bit from some 1080p scenarios with a GTX 1080:

[Benchmark charts: Total War: Warhammer, Just Cause 3, Far Cry Primal and AC Syndicate at 1920x1080]


OK, that's just a bit of cherry-picking, but that's with JUST a GTX 1080. What could that mean for the 1080 Ti and Titan X Pascal with their huge bandwidth? What could it mean for Volta?

Technically we can say PCI-E 3.0 is saturated.
Those graphs are quite misleading without more info. Likely the PCIe bandwidth affects max FPS more than min FPS. You'd probably have to find one of those games where the framerate is limited to 120/144/160 or so to get any real idea.
 
Those graphs are quite misleading without more info. Likely the PCIe bandwidth affects max FPS more than min FPS. You'd probably have to find one of those games where the framerate is limited to 120/144/160 or so to get any real idea.


What kind of more info? A couple of higher peaks doesn't have a big impact on the average chart, just as a couple of drops doesn't. However, I'm always curious about frame times on those kinds of charts.

I see pretty clear average FPS there; a lower average FPS has a direct meaning, but in what sense? I can read it as "pretty much meh, the difference isn't so big anyway; 4-5 fps isn't much to be worried about", but at the same time there could be micro-stuttering, jittery and unstable frame times, and lower minimums (max FPS is always irrelevant, at least to me), so in the end it can mean a poor overall gaming experience that isn't strictly reflected in the average FPS chart. It may also be why some people complain about a "not so smooth" gaming experience on older platforms like Sandy Bridge, which is PCI-E gen 2. The fact that there's a difference of about 5 fps related to PCI-E bandwidth means saturation to me, and possibly a directly worse gaming experience, which is what it's all about.
 
Technically we can say PCI-E 3.0 is saturated.

We need a nicely done and well-explained article/blog/video explaining how PCIe 3.0 is being saturated and why it's time for PCIe 4.0. I'd like to see whether the 1080 Ti saturates PCIe 3.0, and I wonder what Volta and RX Vega would do to it. Is there any way to inspire the good folks here at HF to tackle this issue head-on, including info on PCIe lanes for NVMe SSDs etc.? What would it really take to saturate PCIe 3.0? List the games and other workloads people could run into that would hit the PCIe 3.0 limits.

; )
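One sketch of where such a test could start, assuming CUDA: time a large pinned-memory host-to-device copy with CUDA events and compare it against that ~15.75 GB/s x16 gen-3 ceiling. The transfer size is arbitrary:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;  // 256 MiB per transfer (arbitrary)
    void *host, *dev;
    cudaMallocHost(&host, bytes);       // pinned memory, needed for full PCIe speed
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);          // elapsed milliseconds
    printf("Host->Device: %.2f GB/s\n", bytes / ms / 1e6);

    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}
```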
 
What kind of more info?.. couple of higher peaks doesn't have a big impact on average chart as couple of drops doesn't do it. However I'm always curious of frame times on those kind of charts.

I see pretty clear average FPS there; a lower average FPS has a direct meaning, but in what sense? I can read it as "pretty much meh, the difference isn't so big anyway; 4-5 fps isn't much to be worried about", but at the same time there could be micro-stuttering, jittery and unstable frame times, and lower minimums (max FPS is always irrelevant, at least to me), so in the end it can mean a poor overall gaming experience that isn't strictly reflected in the average FPS chart. It may also be why some people complain about a "not so smooth" gaming experience on older platforms like Sandy Bridge, which is PCI-E gen 2. The fact that there's a difference of about 5 fps related to PCI-E bandwidth means saturation to me, and possibly a directly worse gaming experience, which is what it's all about.
For me, I wish everything included minimum frame rates and 99th-percentile numbers, or whatever.

We need a nicely done and well-explained article/blog/video explaining how PCIe 3.0 is being saturated and why it's time for PCIe 4.0. I'd like to see whether the 1080 Ti saturates PCIe 3.0, and I wonder what Volta and RX Vega would do to it. Is there any way to inspire the good folks here at HF to tackle this issue head-on, including info on PCIe lanes for NVMe SSDs etc.? What would it really take to saturate PCIe 3.0? List the games and other workloads people could run into that would hit the PCIe 3.0 limits.

; )

It's well known we need more bandwidth; people being stuck on x16 lanes sucks.
 
In the benches listed, this is what you get from double the bandwidth:

Warhammer: 5.1% (Tested with the broken DX12 mode for NVidia)
Just Cause 3: 3.9%
Far Cry Primal: 3.8%
AC Syndicate: 1.1%

But let's add the rest:
Anno 2205: 0.4%
BF1: 1.7%
Batman: -4.0%
Civ6: 0.1%
COD3: -2.7%
Deus Ex: -0.6%
Doom: 1.3%
F1: -0.1%
FO4: 0.0%
GTAV: 0.3%
Hitman: 0.3%
Mafia3: 1.6%
NMS: 0.1%
Rainbow6: 0.4%
Tomb Raider: -0.9%
SW2: -0.3%
Witcher: 0.2%

That gives an average of around 0.6%.
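For the record, the arithmetic: the first four deltas sum to 13.9 and the remaining seventeen to -2.2, so

$$\frac{13.9 - 2.2}{21} = \frac{11.7}{21} \approx 0.56\%$$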

Funnily enough, the overall result on the site is also close to nothing.
[Chart: overall relative performance at 1920x1080]
 
Why does this make you so happy? Do you enjoy paying through the roof for graphics cards? You must have money burning a hole in your pocket.
I never understood this fanatic fanboyism, the "I don't mind bleeding money out my ass as long as 'my team' is winning" attitude...

I hope Vega comes out and trashes Volta with 4 times the speed at 1/20th the cost. Not because it's AMD, but because it would mean a better offer, and AMD needs a win to bring back some competitiveness.
 
I never understood this fanatic fanboyism, the "I don't mind bleeding money out my ass as long as 'my team' is winning" attitude...

I hope Vega comes out and trashes Volta with 4 times the speed at 1/20th the cost. Not because it's AMD, but because it would mean a better offer, and AMD needs a win to bring back some competitiveness.

That sounds really objective, doesn't it? Now try saying the opposite and you'll get hellfire.
 